
Dual GPU cards may finally be able to combine their VRAM.

Assassin

Mantle only + needs to be optimized by game devs.

 

So there will probably be one or two games besides Battlefield 5 that support the technology, and that's it. Not so excited...

If DirectX 12 could do this, I would be soooo happy.

They're talking about Mantle's influence on the industry. Basically, they're saying that because they can make Mantle achieve this, they want the entire industry to be influenced and able to achieve it across multiple APIs. It would also be a feature of the API, inherent to any application that uses it.

i7 6700K - ASUS Maximus VIII Ranger - Corsair H110i GT CPU Cooler - EVGA GTX 980 Ti ACX2.0+ SC+ - 16GB Corsair Vengeance LPX 3000MHz - Samsung 850 EVO 500GB - AX760i - Corsair 450D - XB270HU G-Sync Monitor

i7 3770K - H110 Corsair CPU Cooler - ASUS P8Z77 V-PRO - GTX 980 Reference - 16GB HyperX Beast 1600MHz - Intel 240GB SSD - HX750i - Corsair 750D - XB270HU G-Sync Monitor

My guess is this has something to do with how Mantle CrossFire works.

 

In Beyond Earth, Mantle CrossFire uses a different rendering technique than AFR (I believe it's similar to the SuperTiling described here: http://techreport.com/review/8826/ati-crossfire-dual-graphics-solution/3), and I suspect that if you do this properly, it may be possible to not require both cards to have the same framebuffer contents. The reason it typically isn't used is that it's more complicated to implement, but maybe Mantle/DX12 remove enough draw-call overhead to make it feasible instead of AFR.
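For the curious, here's a minimal C++ sketch of the SuperTiling idea (purely illustrative, not Mantle or DX12 code; the 32-pixel tiles and the checkerboard rule are assumptions). Each GPU owns alternating screen tiles, so neither one renders, or needs the working data for, the whole frame:

```cpp
// Conceptual sketch of SuperTiling-style ownership: the frame is split into a
// checkerboard of tiles and each GPU renders only the tiles it owns.
#include <cstdio>

constexpr int kTileSize = 32;   // assumed tile edge in pixels
constexpr int kGpuCount = 2;

// Checkerboard assignment: adjacent tiles alternate between GPUs, which
// roughly balances load without either GPU touching the whole framebuffer.
int OwnerOfTile(int tileX, int tileY) {
    return (tileX + tileY) % kGpuCount;
}

int main() {
    const int width = 256, height = 128;            // toy frame
    int tilesPerGpu[kGpuCount] = {0, 0};
    for (int ty = 0; ty < height / kTileSize; ++ty)
        for (int tx = 0; tx < width / kTileSize; ++tx)
            ++tilesPerGpu[OwnerOfTile(tx, ty)];
    for (int g = 0; g < kGpuCount; ++g)
        std::printf("GPU %d renders %d tiles\n", g, tilesPerGpu[g]);
}
```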

 

It might be that they're using the HSA method of communication, passing pointers to memory segments instead of copying data, so GPU 1 could access data in GPU 2's memory and the other way around.
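For illustration, here's what that zero-copy handoff looks like in plain host C++ (ordinary memory standing in for VRAM; this is not the actual HSA or Mantle API): sharing becomes handing over a pointer instead of a memcpy.

```cpp
// Illustrative sketch of the zero-copy idea behind HSA-style sharing: two
// devices in one virtual address space can hand work over as a pointer.
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <vector>

struct Resource {
    float* data;
    std::size_t count;
};

// Traditional path: each GPU has a private pool, so sharing means copying.
void ShareByCopy(const Resource& src, Resource& dst) {
    std::memcpy(dst.data, src.data, src.count * sizeof(float));
}

// HSA-style path: both devices see the same addresses, so "sharing" is just
// handing over the pointer (plus whatever synchronization a real API needs).
Resource ShareByPointer(const Resource& src) {
    return src;  // no copy; the consumer reads the producer's memory directly
}

int main() {
    std::vector<float> vramOfGpu1(1024, 1.0f);  // pretend this lives on GPU 1
    Resource onGpu1{vramOfGpu1.data(), vramOfGpu1.size()};
    Resource seenByGpu2 = ShareByPointer(onGpu1);
    std::printf("GPU 2 reads %.1f without any copy\n", seenByGpu2.data[0]);
}
```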


I find it amusing how many people think this applies to SLI/CrossFire configurations. This is for dual-GPU cards such as the R9 295X2 and HD 7990.

 

Instead of consuming both memory spaces for the same resources, the developer can optimize it as a single memory pool.
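A toy model of the difference (the 4 GiB-per-GPU figure and the two-heap bookkeeping are assumptions, not the real driver): mirrored allocation charges every resource to both memories, while pooled allocation charges it to only one.

```cpp
// Rough sketch of mirrored vs. pooled allocation on a dual-GPU card.
#include <cstddef>
#include <cstdio>

struct DualGpuCard {
    std::size_t freePerGpu[2];
    explicit DualGpuCard(std::size_t vramPerGpu)
        : freePerGpu{vramPerGpu, vramPerGpu} {}

    // Classic AFR: the resource must live in BOTH memories (mirrored).
    bool AllocMirrored(std::size_t bytes) {
        if (freePerGpu[0] < bytes || freePerGpu[1] < bytes) return false;
        freePerGpu[0] -= bytes;
        freePerGpu[1] -= bytes;
        return true;
    }

    // Pooled: place the resource in whichever half has room; nothing is
    // duplicated, so usable capacity approaches the sum of both memories.
    bool AllocPooled(std::size_t bytes) {
        int gpu = (freePerGpu[0] >= freePerGpu[1]) ? 0 : 1;
        if (freePerGpu[gpu] < bytes) return false;
        freePerGpu[gpu] -= bytes;
        return true;
    }
};

int main() {
    const std::size_t GiB = 1ull << 30;
    DualGpuCard mirrored(4 * GiB), pooled(4 * GiB);  // a 295X2-like card
    int a = 0, b = 0;
    while (mirrored.AllocMirrored(GiB)) ++a;         // stops after 4 GiB
    while (pooled.AllocPooled(GiB))    ++b;          // fits 8 GiB
    std::printf("mirrored: %d GiB usable, pooled: %d GiB usable\n", a, b);
}
```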

I was being hopeful more than anything, but if it only applies to dual-GPU cards, then those just got a lot more appealing if this gets adopted in DX12 games.

AMD Ryzen 5900x, Nvidia RTX 3080 (MSI Gaming X-trio), ASrock X570 Extreme4, 32GB Corsair Vengeance RGB @ 3200mhz CL16, Corsair MP600 1TB, Intel 660P 1TB, Corsair HX1000, Corsair 680x, Corsair H100i Platinum

 

 

 


Why is it AMD that always pushes for more innovation while Nvidia just looks for ways to be efficient? :/ Like really, come on. They got sound through HDMI, and now they want to put FreeSync into DisplayPort. Come on, Nvidia, do some new stuff besides GameWorks.

Nvidia gave us G-Sync. If it weren't for that, how much interest would AMD have in FreeSync?

Oh, and Nvidia cards ship with HDMI 2.0.


Sounds awesome! This is the kind of technology we need, with people pushing higher resolutions and game engines becoming more and more advanced. I can imagine Star Citizen at 4K Ultra settings will need at least 8GB of VRAM.

CPU: i7 6700k @ 4.6ghz | CASE: Corsair 780T White Edition | MB: Asus Z170 Deluxe | CPU Cooling: EK Predator 360 | GPU: NVIDIA Titan X Pascal w/ EKWB nickel waterblock | PSU: EVGA 850w P2 | RAM: 16GB DDR4 Corsair Domintator Platinum 2800mhz | Storage: Samsung 850 EVO 500GB | OS: Win 10 Pro x64 | Monitor: Acer Predator X34/HTC VIVE Keyboard: CM Storm Trigger-Z | Mouse: Razer Taipan | Sound: Audio Technica ATH-M50x / Klipsch Promedia 2.1 Sound System 

 


Nvidia gave us G-Sync. If it weren't for that, how much interest would AMD have in FreeSync?

Oh, and Nvidia cards ship with HDMI 2.0.

Oh, you mean that thing that doesn't really work and brings the price of a monitor up to 2x normal? Also, AMD cards are old; we're barely getting HDMI 2.0 on them in the next release.


Errrr, dumb question: how do they get around the fact that the VRAM pool of one GPU isn't connected to the other GPU? I mean, sure, the second GPU can access the first one's memory, but how can it do that without the inherent latency being a problem?

i7 not perfectly stable at 4.4.. #firstworldproblems


So my 2GB 770 can SLI without being pointless? :D

- i7-2600k @ 4.7GHz - MSI 1070 8GB Gaming X - ASUS Maximus V Formula AC3 Edition - 16GB G.SKILL Ripjaws @ 1600Mhz - Corsair RM1000 - 1TB 7200RPM Seagate HDD + 2TB 7200 HDD + 2x240GB M500 RAID 0 - Corsair 750D - Samsung PX2370 & ASUS ROG SWIFT -


I hope AMD makes this proprietary, maybe locking it to Mantle.

 

Have a kick at Nvidia's proprietary butt.

 

No thanks, open standards please. Even if it's one of those things that ends up performing better on AMD cards (like Mantle would presumably run better on an AMD GPU vs. an Nvidia one, when/if we get to that point).

No locked-down bullshit.

CPU: Intel i5 4690k W/Noctua nh-d15 GPU: Gigabyte G1 980 TI MOBO: MSI Z97 Gaming 5 RAM: 16Gig Corsair Vengance Boot-Drive: 500gb Samsung Evo Storage: 2x 500g WD Blue, 1x 2tb WD Black 1x4tb WD Red

 

 

 

 

"Whatever AMD is losing in suddenly becomes the most important thing ever." - Glenwing, 1/13/2015

 


Aww, it'd be great for once to give Nvidia a taste of its own medicine.

 

Actually, if they make Mantle completely open and then bake the tech into it... that would force Nvidia to take a closer look at building something on Mantle. Or they'll just build their own system in response. Honestly, the tech will come to both sides of the fence eventually; I'd just prefer not to segment the market more than it already is over arbitrary in-game features. I just think if they baked it into something of their own, even if 100% open, Nvidia would go and create their own version just because pure ridiculous pride wouldn't let them use anything built by AMD.

CPU: Intel i5 4690k W/Noctua nh-d15 GPU: Gigabyte G1 980 TI MOBO: MSI Z97 Gaming 5 RAM: 16Gig Corsair Vengance Boot-Drive: 500gb Samsung Evo Storage: 2x 500g WD Blue, 1x 2tb WD Black 1x4tb WD Red

 

 

 

 

"Whatever AMD is losing in suddenly becomes the most important thing ever." - Glenwing, 1/13/2015

 


Aww, it'd be great for once to give Nvidia a taste of its own medicine.

It would help Nvidia more than hurt them. AMD is not in a good financial spot and needs more adoption, so they need new tech to be open source so they can get free R&D. Also, it's an API thing, so it can be in Mantle and Microsoft can put it in DX12; Mantle is open for other companies to use too.

if you want to annoy me, then join my teamspeak server ts.benja.cc


It would help Nvidia more than hurt them. AMD is not in a good financial spot and needs more adoption, so they need new tech to be open source so they can get free R&D. Also, it's an API thing, so it can be in Mantle and Microsoft can put it in DX12; Mantle is open for other companies to use too.

 

Even if it's open, though, or as mentioned before baked into Mantle (which is open), I highly doubt Nvidia would touch it at all, purely out of pride.

CPU: Intel i5 4690k W/Noctua nh-d15 GPU: Gigabyte G1 980 TI MOBO: MSI Z97 Gaming 5 RAM: 16Gig Corsair Vengance Boot-Drive: 500gb Samsung Evo Storage: 2x 500g WD Blue, 1x 2tb WD Black 1x4tb WD Red

 

 

 

 

"Whatever AMD is losing in suddenly becomes the most important thing ever." - Glenwing, 1/13/2015

 


Even if it's open, though, or as mentioned before baked into Mantle (which is open), I highly doubt Nvidia would touch it at all, purely out of pride.

If it's added to DX12, Nvidia has no say in whether devs use it or not.

if you want to annoy me, then join my teamspeak server ts.benja.cc


I think this should be pointed out from the article:

"just because you are running Windows 10 and DX12 and/or Mantle API does not mean that your multiple GPU configuration is now stacking memory. The capability is present in these low overhead APIs but they will not come into affect until devs specifically optimize the game as such."

So there will be two games a year that stack the VRAM. Yaaaay. Many people got excited after hearing the news, but when it depends on publishers it's just a disappointment, because they don't care.

Connection200mbps / 12mbps 5Ghz wifi

My baby: CPU - i7-4790, MB - Z97-A, RAM - Corsair Veng. LP 16gb, GPU - MSI GTX 1060, PSU - CXM 600, Storage - Evo 840 120gb, MX100 256gb, WD Blue 1TB, Cooler - Hyper Evo 212, Case - Corsair Carbide 200R, Monitor - Benq  XL2430T 144Hz, Mouse - FinalMouse, Keyboard -K70 RGB, OS - Win 10, Audio - DT990 Pro, Phone - iPhone SE


Who ever said that the 970 doesn't have 4GB of usable VRAM?

I was joking, man, just trying to have a little fun with the misconceptions happening recently haha.

 

Where do you live so I can punch you in the face?

 

I'm joking but still, not funny

Southern California, come fight me! I'll defend my right to joke around!

 

I'm joking but still, learn to laugh :P


Didn't they already try this before, back when AFR and SFR were alternative dual-GPU technologies, and ultimately SFR performed worse than a single GPU, hence it being abandoned?

 

If this works and is efficient, it can't be a bad thing. And if it ends up giving value to dual mid-range GPUs like 760s and 270Xs, high-end gaming could become an awful lot cheaper, so that'd be welcome too.

 

 

I was joking, man, just trying to have a little fun with the misconceptions happening recently haha.

 

Southern California, come fight me! I'll defend my right to joke around!

 

I'm joking but still, learn to laugh  :P

 
A lot of people on this forum genuinely believe what you said is true. I think that's the issue that people are having.

So you could have the same amount of VRAM as system RAM, or even more, and it wouldn't be that ridiculously unbalanced.

I make Rainmeter things and other art :D


Now, if they also allowed people to offload tasks other than PhysX to a second GPU, I'd be happy. Without using SLI, I see enormous performance boosts in PhysX games just by having my GTX 650 Ti (2GB OC version) set as the CUDA card.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Now, how good is that extra RAM?

What?

CPU: AMD FX-6300 4GHz @ 1.3 volts | CPU Cooler: Cooler Master Hyper 212 EVO | RAM: 8GB DDR3

Motherboard: Gigabyte 970A-DS3P | GPU: EVGA GTX 960 SSC | SSD: 250GB Samsung 850 EVO

HDD: 1TB WD Caviar Green | Case: Fractal Design Core 2500 | OS: Windows 10 Home


I was joking, man, just trying to have a little fun with the misconceptions happening recently haha.

 

Southern California, come fight me! I'll defend my right to joke around!

 

I'm joking but still, learn to laugh :P

Joke's been overused to hell.


I always wondered why they haven't done this before. I mean, it's just turning it into a shared memory pool rather than two duplicated sets. There are obviously a lot of harder things to overcome, but it's strange it's never been done before.

I think it's at least partially to do with the bandwidth available via the SLI/CFX bridge. Now that AMD pushes the data over the PCIe lanes, that's no longer an issue.
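Some back-of-envelope math on that (the PCIe figures follow from the spec; the bridge number is a rough assumed ballpark, so treat this as a sketch):

```cpp
// Rough numbers for why moving CrossFire traffic onto PCIe (XDMA) matters.
#include <cstdio>

int main() {
    // PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
    const double perLaneGBs = 8.0 * (128.0 / 130.0) / 8.0;  // ~0.985 GB/s
    const double x16GBs = 16.0 * perLaneGBs;                 // ~15.75 GB/s
    std::printf("PCIe 3.0 x16: ~%.2f GB/s per direction\n", x16GBs);

    // The old external CrossFire bridge carried on the order of ~1 GB/s
    // (assumed ballpark), roughly an order of magnitude less.
    const double bridgeGBs = 1.0;
    std::printf("PCIe x16 is roughly %.0fx the bridge bandwidth\n",
                x16GBs / bridgeGBs);
}
```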

Mantle only + needs to be optimized by game devs.

So there will probably be one or two games besides Battlefield 5 that support the technology, and that's it. Not so excited...

If DirectX 12 could do this, I would be soooo happy.

It is something that is more likely to be leveraged by workstation applications such as Photoshop or CAD.

Intel i7 5820K (4.5 GHz) | MSI X99A MPower | 32 GB Kingston HyperX Fury 2666MHz | Asus RoG STRIX GTX 1080ti OC | Samsung 951 m.2 nVME 512GB | Crucial MX200 1000GB | Western Digital Caviar Black 2000GB | Noctua NH-D15 | Fractal Define R5 | Seasonic 860 Platinum | Logitech G910 | Sennheiser 599 | Blue Yeti | Logitech G502

 

Nikon D500 | Nikon 300mm f/4 PF  | Nikon 200-500 f/5.6 | Nikon 50mm f/1.8 | Tamron 70-210 f/4 VCII | Sigma 10-20 f/3.5 | Nikon 17-55 f/2.8 | Tamron 90mm F2.8 SP Di VC USD Macro | Neewer 750II


A lot of people on this forum genuinely believe what you said is true. I think that's the issue that people are having.

Well, only 2 people have called me out on it, but nearly 35 people have "liked" it, so I'm assuming those 35 got the joke.

 

Jokes been overused to hell.

I haven't seen anyone joke about it? :huh: If there have been, then I'm sorry; I don't mean to overuse a joke. At least I didn't say Keep on Diggin' :lol:


Non-mirrored VRAM isn't the only cool thing you can do with mGPU in Mantle. You can also create an asymmetric CrossFire configuration between GPUs of substantially different performance envelopes, then load-balance the workloads between them so each card runs at peak performance. For example, you could have the dGPU do geometry and lighting, while an APU does physics and AI work.
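To make that concrete, here's a toy C++ sketch of greedy load balancing (illustrative only, with invented per-device costs; this is not Mantle code): hand each frame task to whichever device would finish it sooner.

```cpp
// Toy asymmetric load balancer: a fast dGPU and a slow APU split frame work.
#include <cstdio>
#include <map>
#include <string>

int main() {
    // Hypothetical cost of each task (ms) on a big dGPU vs. an APU.
    struct Cost { double dGpu, apu; };
    std::map<std::string, Cost> tasks = {
        {"geometry", {3.0, 12.0}}, {"lighting", {4.0, 16.0}},
        {"physics",  {2.0,  5.0}}, {"ai",       {1.0,  2.5}},
    };
    double dGpuMs = 0, apuMs = 0;
    for (const auto& [name, c] : tasks) {
        // Greedy: give the task to the device that would finish it earlier.
        if (dGpuMs + c.dGpu <= apuMs + c.apu) {
            dGpuMs += c.dGpu;
            std::printf("%-8s -> dGPU\n", name.c_str());
        } else {
            apuMs += c.apu;
            std::printf("%-8s -> APU\n", name.c_str());
        }
    }
    std::printf("frame ready in %.1f ms\n", dGpuMs > apuMs ? dGpuMs : apuMs);
}
```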

 

You can also asynchronously schedule compute and graphics workloads at very fine granularity, which is tremendously helpful to the latency of a VR solution. 

 

You could do split-frame rendering, which treats mGPU as a single faster GPU. No AFR latency penalty, no queue depth. In every way it will behave like one GPU with the resources of two.

 

Mantle can do its own command buffer prediction, where the command buffer is the stream of work sent to the CPU by the game, which the CPU reorders for the GPU. Typically the CPU would have to calculate scenarios for the command buffer, working ahead of the game to predict what will happen for peak performance. All of this ahead-of-time calculation has to be shared with the GPU, with the GPU replying back with what it's currently doing. You can eliminate all of that round-trip performance junk if the GPU is able to make its own estimations based on the knowledge it has of its current rendering state. This applies to any sort of graphics config if the developer chooses to use it.

 

You can do per-eye rendering with mGPU, with each GPU devoting its full performance to one eye in a VR or stereo 3D configuration.
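As a minimal illustration of that per-eye split (conceptual C++ with threads standing in for independent GPU queues; no real VR or graphics API involved):

```cpp
// Per-eye mGPU sketch: each "GPU" produces one eye's image in parallel,
// instead of one GPU rendering the scene twice per frame.
#include <cstdio>
#include <thread>

void RenderEye(int gpu, const char* eye) {
    // Stand-in for a real per-GPU command submission targeting one eye's view.
    std::printf("GPU %d rendering %s eye\n", gpu, eye);
}

int main() {
    // One thread per GPU stands in for two independent hardware queues.
    std::thread left(RenderEye, 0, "left");
    std::thread right(RenderEye, 1, "right");
    left.join();
    right.join();
    // A real implementation would then composite/submit both eyes to the HMD.
}
```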

 

There's so much more lurking in that API that goes far beyond what we think of as the "rules" in the PC industry about how CPUs and GPUs should work together, or how mGPU must be done.

 

I'm pleased to see people are interested in this kind of stuff in Mantle.

Robert Hallock

Global Head of Technical Marketing

Advanced Micro Devices, Inc.


Non-mirrored VRAM isn't the only cool thing you can do with mGPU in Mantle. You can also create an asymmetric CrossFire configuration between GPUs of substantially different performance envelopes, then load-balance the workloads between them so each card runs at peak performance. For example, you could have the dGPU do geometry and lighting, while an APU does physics and AI work.

Would all of these features work on the R9 290X? Because I might be replacing my faulty GTX 970 with one after the RMA is sorted.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

