
SLI not utilizing "Shared GPU Memory" (System RAM)

bungert
Solved by Tristerin.

Good evening everybody, hope you're all doing well in the run-up to Christmas. 🙂

I decided to treat myself and get another ASUS GeForce GTX 970 Strix (a matching pair) to run in my system this holiday, to keep up with performance - especially during this chip shortage, I thought it'd be better than paying scalper prices for a new card. As a side note, it was used, though in very good condition, and I got it at a decent price too.

After doing some research, and considering the 'woes' of SLI, I concluded that getting another card would be a relatively inexpensive way of buying some more time with this system before I would be required to buy a whole new platform. (Quite expensive... and not really a possibility in 2021!)

 

System Specifications

I've been running the existing (single) 970 Strix at 2160p (4K) 60 Hz (G-Sync enabled) [Acer XB281HK] for just over two years. For those who are unaware, these cards have a 3.5 GB + 500 MB VRAM configuration. Some people claim that a frame buffer this small could never push a 4K display; however, from my experience the single card + "Shared GPU Memory" has pulled its weight in a lot of demanding titles like Grand Theft Auto V, Rust and Rainbow Six: Siege (at good settings too!).

For my system memory, I have a 4x8GB kit of Kingston HyperX (DDR3) [Z97 platform (ASUS Maximus VII)] clocked at 2176 MHz. (Strange "AI Tweaker" configuration?)
My CPU is an Intel Core i5 4690K clocked at 4.48 GHz (4 cores/4 threads).

 

What's my problem then?

I'd just like to stamp out any assumptions that some readers may make about me choosing to go with an SLI setup. I was fully aware that not all titles support SLI (natively at least), and that SLI is a technology that Nvidia and developers are moving away from. However, for the games I play and the engines they run on, it is widely supported, so I considered it a bonus. (Plus, having another card may help me with OBS encoding, though I've not experimented with that yet!)
I'm also fully aware that SLI does not double the frame buffer - I know that I'm still supposed to be getting '4 GB' of dedicated video memory, due to the nature of how SLI works.

 

The problem is that, when I ran the single Strix 970, it would tap into what Task Manager and 'Display Adapter Properties' describe as "Shared GPU Memory" and "Shared System Memory" respectively. Since I have 32 GB of system memory, some research concluded that 32 GB / 2 = 16 GB ["Shared GPU Memory"], plus '4 GB' of dedicated GPU memory, = 20 GB of total GPU memory. That is the configuration the single 970 ran in before, occasionally tapping into the shared pool if it needed to, likely to keep pushing 4K.
However, with SLI enabled, the GPUs are not taking advantage of Shared GPU Memory. They both report (correctly?) in Task Manager as having 20 GB of total GPU memory, but they don't budge past 4 GB. Naturally, huge framerate drops occur when the buffer becomes full.
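
(For illustration only: here's a tiny C++ sketch - not something from my setup - that reproduces that 20 GB figure, assuming the common rule of thumb that Windows reports roughly half of installed physical RAM as "Shared GPU Memory". The 4 GB dedicated value is hard-coded as an assumption; the real numbers come from the driver/DXGI, as shown further down the thread.)

#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
#include <cstdio>

int main()
{
    // Total installed physical RAM, as reported by Windows.
    MEMORYSTATUSEX status = {};
    status.dwLength = sizeof(status);
    if (!GlobalMemoryStatusEx(&status))
        return 1;

    // Assumption: "Shared GPU Memory" is roughly half of physical RAM;
    // the 4096 MB dedicated figure is the GTX 970's advertised VRAM.
    const unsigned long long dedicatedMB = 4096;
    const unsigned long long sharedMB = status.ullTotalPhys / 2 / (1024 * 1024);

    printf("Dedicated: %llu MB  Shared: %llu MB  Total reported: %llu MB\n",
           dedicatedMB, sharedMB, dedicatedMB + sharedMB);
    return 0;
}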

 

Has anybody had this problem before? Does anyone else's SLI setup take advantage of Shared GPU Memory? I can't find any information anywhere on the web, as it's all flooded with people who thought that "Shared GPU Memory" [in SLI] meant combining the two frame buffers of their cards.

 

Thank you very much, LTT community; hopefully my first post here goes well and your shining minds can help a fellow out.

Cheers, bungert.

 

See attached for configuration screenshots.
Yes - the GPUs idle at 57°C because they're in 0 dB fan mode.

 

adapter.png

taskmgr gpu0.png

taskmgr gpu1.png


I'm not an expert, but...

 

Discrete GPUs don't actually use your system memory.

 

 

Edit - to clarify, it's a Windows thing to display that, as far as I am aware, but only because iGPUs will use your actual system memory as VRAM.

 

Workstation Laptop: Dell Precision 7540, Xeon E-2276M, 32gb DDR4, Quadro T2000 GPU, 4k display

Wife's Rig: ASRock B550m Riptide, Ryzen 5 5600X, Sapphire Nitro+ RX 6700 XT, 16gb (2x8) 3600mhz V-Color Skywalker RAM, ARESGAME AGS 850w PSU, 1tb WD Black SN750, 500gb Crucial m.2, DIYPC MA01-G case

My Rig: ASRock B450m Pro4, Ryzen 5 3600, ARESGAME River 5 CPU cooler, EVGA RTX 2060 KO, 16gb (2x8) 3600mhz TeamGroup T-Force RAM, ARESGAME AGV750w PSU, 1tb WD Black SN750 NVMe Win 10 boot drive, 3tb Hitachi 7200 RPM HDD, Fractal Design Focus G Mini custom painted.  

NVIDIA GeForce RTX 2060 video card benchmark result - AMD Ryzen 5 3600,ASRock B450M Pro4 (3dmark.com)

Daughter 1 Rig: ASrock B450 Pro4, Ryzen 7 1700 @ 4.2ghz all core 1.4vCore, AMD R9 Fury X w/ Swiftech KOMODO waterblock, Custom Loop 2x240mm + 1x120mm radiators in push/pull 16gb (2x8) Patriot Viper CL14 2666mhz RAM, Corsair HX850 PSU, 250gb Samsung 960 EVO NVMe Win 10 boot drive, 500gb Samsung 840 EVO SSD, 512GB TeamGroup MP30 M.2 SATA III SSD, SuperTalent 512gb SATA III SSD, CoolerMaster HAF XM Case. 

https://www.3dmark.com/3dm/37004594?

Daughter 2 Rig: ASUS B350-PRIME ATX, Ryzen 7 1700, Sapphire Nitro+ R9 Fury Tri-X, 16gb (2x8) 3200mhz V-Color Skywalker, ANTEC Earthwatts 750w PSU, MasterLiquid Lite 120 AIO cooler in Push/Pull config as rear exhaust, 250gb Samsung 850 Evo SSD, Patriot Burst 240gb SSD, Cougar MX330-X Case

 


6 minutes ago, Tristerin said:

I'm not an expert, but...

 

Discrete GPUs don't actually use your system memory.

 

 

Edit - to clarify, it's a Windows thing to display that, as far as I am aware, but only because iGPUs will use your actual system memory as VRAM.

 

Hi Tristerin, thanks for the reply.

As far as I am aware, the single GPU on its own did take advantage of the Shared System Memory - perhaps I ought to disable SLI, do some further investigating, and gather some screenshots to share.


5 minutes ago, bungert said:

Hi Tristerin, thanks for the reply.

As far as I am aware, the single GPU on its own did take advantage of the Shared System Memory - perhaps I ought to disable SLI, do some further investigating, and gather some screenshots to share.

It didn't, it just looks hopeful. I am unaware of any discrete GPU that can actually utilize system RAM (in terms of gaming GPUs; not sure about super cool AI stuff).

 

The latency between your system RAM and the GPU would be atrocious at best.

 

For an iGPU, not so much. (EDIT - why? PCIe lanes between the card and the CPU/RAM.)


1 minute ago, Tristerin said:

It didn't, it just looks hopeful. I am unaware of any discrete GPU that can actually utilize system RAM (in terms of gaming GPUs; not sure about super cool AI stuff).

 

The latency between your system RAM and the GPU would be atrocious at best.

 

For an iGPU, not so much.

I've actually just done some looking into it myself and, unfortunately for my case, you do seem to be correct. Sadly, they won't take advantage of "Shared GPU Memory" (even though I could have sworn one did!) - all sources online say otherwise.

Heyho - not to worry, I suppose I'm now limited by a VRAM bottleneck. Yet another reason why you shouldn't buy an SLI rig new! I should still find a way to take advantage of the other card though. Cheers.


1 minute ago, bungert said:

I've actually just done some looking into it myself and, unfortunately for my case, you do seem to be correct. Sadly, they won't take advantage of "Shared GPU Memory" (even though I could have sworn one did!) - all sources online say otherwise.

Heyho - not to worry, I suppose I'm now limited by a VRAM bottleneck. Yet another reason why you shouldn't buy an SLI rig new! I should still find a way to take advantage of the other card though. Cheers.

FYI - in SLI, only one card's worth of memory gets used.

 

So you do not have a shared pool of 2x 4 GB (or whatever the 3.5 + 0.5 is); you just have the top card's VRAM.


1 minute ago, Tristerin said:

FYI - in SLI, only one card's worth of memory gets used.

 

So you do not have a shared pool of 2x 4 GB (or whatever the 3.5 + 0.5 is); you just have the top card's VRAM.

Ouch! I knew they wouldn't combine to make 8 GB, but I didn't realise they'd only use one card's worth of dedicated memory...!

I just assumed they duplicated the buffer as they leapfrog to make frames.


35 minutes ago, Tristerin said:

I'm not an expert, but...

 

Discrete GPUs don't actually use your system memory.

 

 

Edit - to clarify, it's a Windows thing to display that, as far as I am aware, but only because iGPUs will use your actual system memory as VRAM.

 

Tristerin is completely correct. Dedicated GPUs don't take advantage of the "Shared GPU Memory" - a shame really, as (a) I mistakenly thought they did and (b) I have a surplus of system memory.
My solution will be to keep settings down below 4 GB of VRAM usage to maintain framerates - which seems really obvious now I think about it.


Thank you all for the clarification.


4 minutes ago, bungert said:

Tristerin is completely correct. Dedicated GPUs don't take advantage of the "Shared GPU Memory" - a shame really, as (a) I mistakenly thought they did and (b) I have a surplus of system memory.
My solution will be to keep settings down below 4 GB of VRAM usage to maintain framerates - which seems really obvious now I think about it.


Thank you all for the clarification.

Appreciate the mark on the post - FYI I still Xfire (don't have duplicate NVIDIA cards) and it's worth it. It's fun, you learn, and sometimes you get tons more performance.

 

Enjoy the hunt!  Just know, there are multiple profiles for each game (at least for Xfire) that you should try when using SLI.

 

I.e. if a game says it doesn't SLI, go to Reddit. There are profiles made that can leverage some, if not all, of SLI in a lot of games.

EDIT - there are certain game engines, however, that CANNOT utilize SLI

 

Sometimes it doesn't work and performance dips. Sometimes it changes nothing. Sometimes you get that Xfire badge (for me) and go woooohoooo at your old hardware chugging away 🙂


14 minutes ago, Tristerin said:

Appreciate the mark on the post - FYI I still Xfire (don't have duplicate NVIDIA cards) and it's worth it. It's fun, you learn, and sometimes you get tons more performance.

 

Enjoy the hunt!  Just know, there are multiple profiles for each game (at least for Xfire) that you should try when using SLI.

 

I.e. if a game says it doesn't SLI, go to Reddit. There are profiles made that can leverage some, if not all, of SLI in a lot of games.

EDIT - there are certain game engines, however, that CANNOT utilize SLI

 

Sometimes it doesn't work and performance dips. Sometimes it changes nothing. Sometimes you get that Xfire badge (for me) and go woooohoooo at your old hardware chugging away 🙂

Yup - going to have some fun now tuning my OC again (woo!), and I'll certainly have to find ways to get the games that don't natively support SLI (*cough* BeamNG.drive) to play ball.

Once again - thanks for your help, fingers crossed I can dial in the settings *just* right again for each game.


On 12/16/2021 at 9:54 AM, Tristerin said:

I'm not an expert, but...

 

Discrete GPUs don't actually use your system memory.

 

 

Edit - to clarify, it's a Windows thing to display that, as far as I am aware, but only because iGPUs will use your actual system memory as VRAM.

 

That's actually not correct. The graphics driver has a page file of its own (except it's based in RAM instead of on disk), much like the regular system page file. It's used when the contents of the video memory become too full: any new memory allocations would be put into that page file instead of video memory.

 

Here's the output from a small C++ program I cobbled together that uses the DXGI (DirectX Graphics Infrastructure) libraries to enumerate and query graphics adapters (cards) for their memory specifications.

[Screenshot: program output listing DedicatedVideoMemory, DedicatedSystemMemory and SharedSystemMemory for each adapter]

 

Here are the definitions of those memory areas, taken from MSDN: https://docs.microsoft.com/en-us/windows/win32/api/dxgi/ns-dxgi-dxgi_adapter_desc


DedicatedVideoMemory

Type: SIZE_T

The number of bytes of dedicated video memory that are not shared with the CPU.

DedicatedSystemMemory

Type: SIZE_T

The number of bytes of dedicated system memory that are not shared with the CPU. This memory is allocated from available system memory at boot time.

SharedSystemMemory

Type: SIZE_T

The number of bytes of shared system memory. This is the maximum value of system memory that may be consumed by the adapter during operation. Any incidental memory consumed by the driver as it manages and uses video memory is additional.

 

Source for the program. (Note I didn't actually make any of this, since it's simply faster to copy-pasta from MSDN. I just threw it into my IDE, added the necessary project settings and compiled it.)


#define WIN32_LEAN_AND_MEAN   // must be defined before Windows.h to have any effect

#include <Windows.h>
#include <dxgi.h>
#include <cstdio>    // wprintf
#include <cassert>

#pragma comment(lib, "dxgi.lib")



template <class T> void SafeRelease(T** ppT)
{
    if (*ppT)
    {
        (*ppT)->Release();
        *ppT = NULL;
    }
}

void EnumerateUsingDXGI(IDXGIFactory* pDXGIFactory)
{
    assert(pDXGIFactory != 0);

    for (UINT index = 0; ; ++index)
    {
        IDXGIAdapter* pAdapter = nullptr;
        HRESULT hr = pDXGIFactory->EnumAdapters(index, &pAdapter);
        if (FAILED(hr)) // DXGIERR_NOT_FOUND is expected when the end of the list is hit
            break;

        DXGI_ADAPTER_DESC desc;
        memset(&desc, 0, sizeof(DXGI_ADAPTER_DESC));
        if (SUCCEEDED(pAdapter->GetDesc(&desc)))
        {
            wprintf(L"\nDXGI Adapter: %u\nDescription: %s\n", index, desc.Description);

            for (UINT iOutput = 0; ; ++iOutput)
            {
                IDXGIOutput* pOutput = nullptr;
                hr = pAdapter->EnumOutputs(iOutput, &pOutput);
                if (FAILED(hr)) // DXGIERR_NOT_FOUND is expected when the end of the list is hit
                    break;

                DXGI_OUTPUT_DESC outputDesc;
                memset(&outputDesc, 0, sizeof(DXGI_OUTPUT_DESC));
                if (SUCCEEDED(pOutput->GetDesc(&outputDesc)))
                {
                    wprintf(L"hMonitor: 0x%0.8Ix\n", (DWORD_PTR)outputDesc.Monitor);
                    wprintf(L"hMonitor Device Name: %s\n", outputDesc.DeviceName);
                }

                SafeRelease(&pOutput);
            }

            wprintf(
                L"\tGetVideoMemoryViaDXGI\n\t\tDedicatedVideoMemory: %Iu MB (%Iu)\n\t\tDedicatedSystemMemory: %Iu MB (%Iu)\n\t\tSharedSystemMemory: %Iu MB (%Iu)\n",
                desc.DedicatedVideoMemory / 1024 / 1024, desc.DedicatedVideoMemory,
                desc.DedicatedSystemMemory / 1024 / 1024, desc.DedicatedSystemMemory,
                desc.SharedSystemMemory / 1024 / 1024, desc.SharedSystemMemory);
        }

        SafeRelease(&pAdapter);
    }
}

int main()
{
    IDXGIFactory* pFactory = nullptr;
    HRESULT hr = CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)(&pFactory));
    if (FAILED(hr))
        return 1;

    EnumerateUsingDXGI(pFactory);

    pFactory->Release();
    pFactory = nullptr;

    return 0;
}

 

EDIT: I should clarify that if you had an iGPU you would see a non-zero positive number for Dedicated System Memory. Currently my GTX 1080 doesn't need any system memory, so it doesn't have any "dedicated", but it does have 16 GB of memory available to it. That second device, I believe, would be a software renderer provided by the system... not too sure on that one.

 

EDIT 2: I would have to go through my textbooks, but I believe an application may be able to exert some control over shared memory. An application may well want to avoid using system memory anyway, since the performance hit would be massive: it would involve memory-mapping the GPU and waiting for the GPU and CPU to synchronize so that they can communicate, causing a massive stall from the access times and overall slow speed of system memory.
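
For anyone curious, here's a minimal sketch of how an application can at least watch those budgets (assuming Windows 10, where DXGI 1.4's IDXGIAdapter3 is available - this is separate from the program above). It prints the budget and current usage for the local segment (on-card VRAM) and the non-local segment (shared system memory) of each adapter. The only "control" I'm aware of is IDXGIAdapter3::SetVideoMemoryReservation, and that is just a residency hint, not a way to force spill-over.

#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
#include <dxgi1_4.h>
#include <cstdio>

#pragma comment(lib, "dxgi.lib")

int main()
{
    IDXGIFactory4* pFactory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory4), (void**)&pFactory)))
        return 1;

    IDXGIAdapter* pAdapter = nullptr;
    for (UINT i = 0; SUCCEEDED(pFactory->EnumAdapters(i, &pAdapter)); ++i)
    {
        IDXGIAdapter3* pAdapter3 = nullptr;
        if (SUCCEEDED(pAdapter->QueryInterface(__uuidof(IDXGIAdapter3), (void**)&pAdapter3)))
        {
            // Local = dedicated VRAM segment, non-local = shared system memory segment.
            DXGI_QUERY_VIDEO_MEMORY_INFO local = {}, nonLocal = {};
            pAdapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);
            pAdapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &nonLocal);

            printf("Adapter %u: local budget %llu MB (in use %llu MB), "
                   "non-local budget %llu MB (in use %llu MB)\n",
                   i,
                   local.Budget / (1024ull * 1024ull), local.CurrentUsage / (1024ull * 1024ull),
                   nonLocal.Budget / (1024ull * 1024ull), nonLocal.CurrentUsage / (1024ull * 1024ull));

            pAdapter3->Release();
        }
        pAdapter->Release();
    }

    pFactory->Release();
    return 0;
}

Note these numbers are per-process (the budget the OS gives, and the usage of, the calling application), so a game would have to run this query itself to see whether it is spilling into system RAM.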

 

EDIT 3: As far as SLI not being able to utilize shared memory goes, I would guess the driver doesn't allow it, since it's probably not feasible to keep two cards mirrored in addition to system memory.

 

CPU: Intel i7 - 5820k @ 4.5GHz, Cooler: Corsair H80i, Motherboard: MSI X99S Gaming 7, RAM: Corsair Vengeance LPX 32GB DDR4 2666MHz CL16,

GPU: ASUS GTX 980 Strix, Case: Corsair 900D, PSU: Corsair AX860i 860W, Keyboard: Logitech G19, Mouse: Corsair M95, Storage: Intel 730 Series 480GB SSD, WD 1.5TB Black

Display: BenQ XL2730Z 2560x1440 144Hz


Hi everyone,

I just *knew* I recalled my GPU using Shared Memory outside of SLI. In some titles I was seeing worse performance with SLI enabled because of a clear frame buffer bottleneck, so I flicked back into single-GPU rendering mode and here are the results:

[Screenshot: GPU memory usage during the Rainbow Six: Siege benchmark, single-GPU mode]

I'll admit, for the sake of demonstration this example is from the Rainbow Six: Siege benchmark; however, it's still an accurate representation of the hardware demands the game makes. You can clearly see that it is taking 3.2 GB of system RAM and combining it with the frame buffer. Average FPS (according to the benchmark breakdown) was 50.

To be clear, since I run at 4K I choose not to tank any of my games with MSAA or any sort of render-scaling options as aliasing isn't really a problem for me on this display.

 

I'll run the benchmark again in SLI mode if you'd like to see it capping at 4 GB of "GPU Memory" usage.

Thoughts @Tristerin?

 

@trag1c your third edit seems to be the plausible answer to this problem - a driver issue? Such a shame.

Thank you all for the help so far, massively appreciated.


1 hour ago, bungert said:

 

 

@trag1c your third edit seems to be the plausible answer to this problem - a driver issue? Such a shame.

Thank you all for the help so far, massively appreciated.

It's not so much a driver issue as a massive technical challenge that, tbh, I don't think is solvable in a way you could actually benefit from. Under normal circumstances, using shared memory is a massive hit to frame times, but trying to do that for two or more GPUs is probably damn near impossible, since both cards would have to be memory-mapped at different times, sequentially, causing stalls in the execution of both the CPU and GPU. You would have to map one card, do work, unmap, and then do the same for the other.

CPU: Intel i7 - 5820k @ 4.5GHz, Cooler: Corsair H80i, Motherboard: MSI X99S Gaming 7, RAM: Corsair Vengeance LPX 32GB DDR4 2666MHz CL16,

GPU: ASUS GTX 980 Strix, Case: Corsair 900D, PSU: Corsair AX860i 860W, Keyboard: Logitech G19, Mouse: Corsair M95, Storage: Intel 730 Series 480GB SSD, WD 1.5TB Black

Display: BenQ XL2730Z 2560x1440 144Hz

Link to comment
Share on other sites

Link to post
Share on other sites
