Posted June 4, 2015 · Original Poster (OP)

PC gaming is about to see a drastic change and improvement in the way multi-GPU setups are implemented. No longer will you be forced to pair dGPUs from the same architecture/family, or even the same brand. With DX12 it will be possible to combine any two or more GPUs and have their resources (GPU cores, VRAM, etc.) pooled. Whether you combine an Intel or AMD iGPU with any dGPU, or an Nvidia dGPU with an AMD one, it won't matter: their resources can be shared for additional gaming performance. No longer will the iGPU in your system go to waste. This is awesome news for all of us.

I know this has been known as a possibility with DX12 for some time, but this is the first time I've seen a demo of it and a game dev talking about how it can be implemented. The demo in the video uses one of AMD's latest APUs paired with an R9 290X - an unlikely combination that would formerly have been considered "unbalanced." But now with DX12, we not only get reduced CPU overhead - allowing this CPU/dGPU combo to actually perform optimally - but the APU's iGPU is also added to the total GPU capability of this particular system for even more performance. As mentioned, this is not exclusive to AMD hardware; they just happened to be running it on an AMD system. With the latest APUs having such strong iGPU performance, it was a good platform to demonstrate how effective this can be.

Currently, multi-GPU setups work by having each GPU (in a dual-card config) take turns rendering each frame. Some of the disadvantages of this are that both GPUs have to be nearly identical, run at the same clock speeds, and hold the same data in each card's VRAM, resulting in redundancies and limitations on which cards can be used/combined.

The new approach made possible with DX12 gives devs control over which aspects of the scene are rendered by which GPU. So you can have the high-end dGPU still doing most of the heavy lifting, rendering the majority of the scene, while the iGPU takes on the remaining tasks/aspects of rendering the scene. This means the dGPU doesn't have to render the entire scene by itself, only part of it, which, as you can guess, will increase frame rates. As stressed in the video, it will be up to the devs of each game to optimize and sort out which tasks can be offloaded to an iGPU and how the work is divided.

One key thing to note is that this will allow for much better CrossFire/SLI or hybrid dGPU combos, since the GPUs will not be taking turns rendering each frame; instead, each frame will be rendered in concert, with each GPU taking on specific elements of the scene. This will also allow for VRAM stacking, so two cards with 4GB of VRAM each will amount to a total of 8GB of VRAM available. One card can handle all the textures while the other handles lighting, etc. The potential is there, but again, it will be strongly dependent on the devs to implement it.

Here's the demo video: https://www.youtube.com/watch?v=9cvmDjVYSNk
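For the technically curious, here's roughly what the first step of this looks like from the application side. To be clear, this is just my own minimal sketch, not code from the demo: under DX12's explicit multi-adapter model, the game itself enumerates every adapter in the system - iGPU and dGPU alike, vendors mixed freely - and creates an independent device on each one.

```cpp
// Minimal sketch: enumerate every GPU in the system and create a D3D12 device
// on each one - the starting point for DX12 explicit multi-adapter.
// Link with dxgi.lib and d3d12.lib. Error handling omitted for brevity.
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDeviceOnEveryAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software rasterizer (WARP)

        // Unlike SLI/CrossFire, vendor and model don't have to match:
        // each adapter simply gets its own independent device.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```

From there the engine records separate command lists per device and decides which parts of the frame each GPU renders - which is exactly the "devs control the split" part stressed in the video.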
Posted June 4, 2015

What about 'crossfiring' Intel onboard with a dedicated GPU?
Posted June 4, 2015

"GOOD NEWS EVERYONE!"
Posted June 4, 2015 · Original Poster (OP)

What about 'crossfiring' Intel onboard with a dedicated GPU?

In the video they say it should be possible with any iGPU (integrated/onboard GPU) and dGPU (discrete GPU) combination. It's not brand-specific, and the only limitation will be how powerful each GPU is.
Posted June 4, 2015

AHhhhhhhhh I wish they'd release it for hardcore people who want to test the APIs! They could have a disclaimer saying this could brick your card and everything, it'd be fine! Crowdsource your testing, companies!
Posted June 4, 2015

We can finally mix and match different GPUs??? I really hope devs take advantage of this. This is f'n awesome!
Posted June 4, 2015

@MEC-777 This is the second time it's been demoed: https://youtu.be/X5YXWqhL9ik?t=552
Posted June 4, 2015

And here I was thinking someone SLIFired something.
Posted June 4, 2015

In the video they say it should be possible with any iGPU (integrated/onboard GPU) and dGPU (discrete GPU) combination. It's not brand-specific, and the only limitation will be how powerful each GPU is.

And if the developer can be bothered developing with this in mind. The potential for extremely bad implementations of this is high.
Posted June 4, 2015

I wonder if this will improve dual-card frame latency, because as it is now, at 50fps with 2 cards, you're actually getting 25fps-equivalent input latency, if not more.
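The back-of-envelope math I'm basing that on (ignoring driver queueing and display lag, so illustrative numbers only): with 2-way alternate frame rendering, frames arrive every 20 ms at 50fps, but each GPU only starts a new frame every other frame, so it has up to twice that long to render its frame. Input sampled when a frame starts rendering isn't shown until that frame completes:

```latex
\[
t_{\text{frame}} = \frac{1}{50\,\text{fps}} = 20\,\text{ms}, \qquad
t_{\text{render}} \le 2 \times 20\,\text{ms} = 40\,\text{ms} = \frac{1}{25\,\text{fps}}
\]
```

So even though the frame rate reads 50fps, each individual frame can be as stale as one from a single GPU running at 25fps.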
Posted June 4, 2015

I've been dreaming of something like this ever since I learned what SLI/CrossFire is, and I'm still skeptical about it. Companies are just stubborn, man; instead of advancing and creating insane new experiences using hardware we already have, they block it, like Nvidia PhysX. Just imagine every game having GPU PhysX, a shared memory pool, and the ability to offload other work to an iGPU or a second GPU. I have a GTX 460 laying around and an HD 4600 iGPU; I can only imagine the three combined.

Anyway, I think the iGPU co-processing is still nonsense; I still see iGPUs as useless and toxic, and they need to go away from dedicated desktop PCs. Think about it for a second: there's no way an iGPU that uses slow DDR3/DDR4 RAM can keep up with 7GHz GDDR5 or HBM; it's not even close in bus width to a dGPU, and with HBM's 1024-bit bus it's going to get even crazier. Just let the iGPUs go, we don't need 'em; all we need is to be able to use cross-vendor and cross-gen GPUs. Just the thought of buying a future HBM AMD card and offloading some work to my old GTX 670 makes me "wet" - not having to throw away a good card.
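For scale, the rough bandwidth gap I'm talking about (typical 2015 parts, illustrative numbers: dual-channel DDR3-1600 feeding an iGPU vs. a 256-bit GDDR5 card at 7 GT/s):

```latex
\[
\text{DDR3-1600, dual channel: } 1600\,\text{MT/s} \times 16\,\text{B/transfer} = 25.6\,\text{GB/s}
\]
\[
\text{GDDR5 at 7\,GT/s, 256-bit: } 7000\,\text{MT/s} \times 32\,\text{B/transfer} = 224\,\text{GB/s}
\]
```

That's nearly a 9x gap before HBM even enters the picture.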
Posted June 4, 2015

Fuck hype over SWBF, BAK or any other game. This is real hype; I can't wait to see games scale across multiple CPU cores and multiple GPUs no matter what brand!!!!! It's only been 8 years since the first x86 dual-core CPU launched. HYPE!!!!!
Posted June 4, 2015

So, going to have a 770, 640, and 4600 powering through games.
Posted June 4, 2015

Inb4 PSU makers (and motherboard makers, to some degree) earning a lot of money.
Posted June 4, 2015

Great stuff. Something we can all cheer about regardless of fanboy allegiances. These Oxide guys always seem to be on the cutting edge of graphics APIs: first with Mantle, then DX12, then Vulkan. I hope their game will be as good as their tech, LOL.
Posted June 4, 2015

This explains why Nvidia is hyping NVLink so much. They need to make a version of SLI that's so much better than DX12 shared resources that it encourages people to just buy Nvidia cards.
Posted June 4, 2015

They need to make a version of SLI that's so much better than DX12 shared resources that it encourages people to just buy Nvidia cards.

What Oxide Games showed will work on Nvidia too. They just demoed it on AMD because that's what they have working at the moment; it's not a vendor-exclusive thing. It's not really like SLI or CrossFire either - it's about having a discrete GPU share the load with your onboard GPU.
Posted June 4, 2015

What Oxide Games showed will work on Nvidia too. They just demoed it on AMD because that's what they have working at the moment; it's not a vendor-exclusive thing. It's not really like SLI or CrossFire either - it's about having a discrete GPU share the load with your onboard GPU.

The point I'm making is that this makes SLI effectively redundant. NVLink is supposed to be SLI on steroids, and I'm guessing it will be better than this implementation on Nvidia hardware. If Nvidia want people to keep buying two new matched cards for SLI, they need to offer tangible benefits over this DX12 implementation. NVLink is their plan.
Posted June 4, 2015

The point I'm making is that this makes SLI effectively redundant. NVLink is supposed to be SLI on steroids, and I'm guessing it will be better than this implementation on Nvidia hardware. If Nvidia want people to keep buying two new matched cards for SLI, they need to offer tangible benefits over this DX12 implementation. NVLink is their plan.

I don't see NVLink coming to consumer boards. The NVLink interconnect is made to replace PCIe, but Intel probably won't make it work with their chipsets.
Posted June 4, 2015

This explains why Nvidia is hyping NVLink so much. They need to make a version of SLI that's so much better than DX12 shared resources that it encourages people to just buy Nvidia cards.

NVLink is more geared towards HPC, not consumer use.
Posted June 4, 2015 · Original Poster (OP)

I wonder if this will improve dual-card frame latency, because as it is now, at 50fps with 2 cards, you're actually getting 25fps-equivalent input latency, if not more.

Not really. Both cards are working in unison at 50fps, so your input latency is still that of 50fps. With this new implementation, your input latency would improve by however much your frame rate improves.

Anyway, I think the iGPU co-processing is still nonsense; I still see iGPUs as useless and toxic, and they need to go away from dedicated desktop PCs. Think about it for a second: there's no way an iGPU that uses slow DDR3/DDR4 RAM can keep up with 7GHz GDDR5 or HBM.

iGPUs will never go away, because the majority of users don't need or use discrete graphics. Also, this new way of combining multiple GPUs doesn't work the way you're describing. The VRAM on the dGPU and the system RAM used by the iGPU don't have to be the same speed or type. How it works is that whatever data is needed for the task the iGPU is assigned gets loaded into system RAM only. This cuts down on redundant data copying/transfers, unlike current multi-GPU methods. The issues you describe would not be a problem.
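To illustrate the "data is loaded only where it's needed" point: DX12 exposes cross-adapter shared heaps, which let two devices hand work to each other through a single buffer in system memory instead of mirroring every resource in both memory pools. Again, this is just a rough sketch under my own assumptions (hypothetical helper, devices already created on different adapters, size already 64KB-aligned, no error handling), not the demo's actual code:

```cpp
// Minimal sketch: share one buffer between two D3D12 devices (e.g. dGPU and
// iGPU) via a cross-adapter heap, so intermediate results can be handed over
// without keeping a full copy on both GPUs. Hypothetical helper; no error handling.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void ShareBufferAcrossAdapters(ID3D12Device* deviceA, ID3D12Device* deviceB,
                               UINT64 sizeInBytes) // assumed 64KB-aligned
{
    // 1. A heap that is visible to other adapters.
    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes = sizeInBytes;
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    heapDesc.Flags = D3D12_HEAP_FLAG_SHARED |
                     D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER;

    ComPtr<ID3D12Heap> heapA;
    deviceA->CreateHeap(&heapDesc, IID_PPV_ARGS(&heapA));

    // 2. Export the heap as an NT handle and import it on the other device.
    HANDLE sharedHandle = nullptr;
    deviceA->CreateSharedHandle(heapA.Get(), nullptr, GENERIC_ALL, nullptr,
                                &sharedHandle);
    ComPtr<ID3D12Heap> heapB;
    deviceB->OpenSharedHandle(sharedHandle, IID_PPV_ARGS(&heapB));
    CloseHandle(sharedHandle);

    // 3. Place the same buffer in the heap on both devices; each GPU now sees
    //    the one copy instead of mirroring it in both VRAM pools.
    D3D12_RESOURCE_DESC bufDesc = {};
    bufDesc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    bufDesc.Width = sizeInBytes;
    bufDesc.Height = 1;
    bufDesc.DepthOrArraySize = 1;
    bufDesc.MipLevels = 1;
    bufDesc.SampleDesc.Count = 1;
    bufDesc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
    bufDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER;

    ComPtr<ID3D12Resource> bufferA, bufferB;
    deviceA->CreatePlacedResource(heapA.Get(), 0, &bufDesc,
                                  D3D12_RESOURCE_STATE_COMMON, nullptr,
                                  IID_PPV_ARGS(&bufferA));
    deviceB->CreatePlacedResource(heapB.Get(), 0, &bufDesc,
                                  D3D12_RESOURCE_STATE_COMMON, nullptr,
                                  IID_PPV_ARGS(&bufferB));
}
```

The point is that only the hand-off data lives in this shared system-memory heap; everything else stays local to whichever GPU actually uses it.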
Posted June 4, 2015

Every time I see this subject followed by "it depends only on the developers," I get frustrated... I'm serious (this time). In recent years we've seen dozens of examples of not-so-committed developers, and you think they will support this? I don't think so... And AMD and Nvidia supporting developers to make these multi-GPU combos work? I wouldn't count on it... Life can't be this good; the world is not fair...
Posted June 4, 2015

"Up to developers" means "up to publishers." *sigh*
Posted June 4, 2015 · Original Poster (OP)

One major question I have about this is how the drivers will work and interact. AMD GPUs with Nvidia GPUs will probably be the most problematic combination and may never come to fruition, since it would require both parties to work together, and I can't see them collaborating on something like that. AMD would be willing, but Nvidia? I doubt it. At least we know for sure that it works when combining an APU with a dGPU.
Posted June 4, 2015

This explains why Nvidia is hyping NVLink so much. They need to make a version of SLI that's so much better than DX12 shared resources that it encourages people to just buy Nvidia cards.

That's not the reason. NVLink is for compute tasks where bandwidth is a problem. IBM has had NVLink 1.0 implemented in their POWER8-series motherboards for a long time already.