AMD R9 390X with 8 GB of RAM?

cluelessgenius

no...

It is not. It's barely enough for 1440p.

Any game that does not utilize 4 GB of VRAM at 4K is NOT taking advantage of the resolution... Assets in the game lack the texture detail that a 4K screen can present, and many assets in the distance are missing entirely or are low-quality versions as a result of current GPU and VRAM limitations.

4 GB of VRAM is HOLDING BACK modern GPUs from using high-quality assets (and, by extension, developers and consoles with 8 GB of RAM). I hit the VRAM limit frequently on my 980 and am bombarded with low LOD in games... I run some AAA games at 2560x1440 because it offers no loss in fidelity versus 4K, just a few more jaggies; everything in the distance is a low-quality wash in most games anyway.

I commend developers who release high-quality texture packs and provide LOD adjustment sliders, so that in the future people can experience the game the way the developers imagined it, without the hardware limitations. Games that ship with low-quality assets just so that people with 2 GB video cards can play them are not worth playing at anything over 1080p.

6-8 GB will be adequate for 1080p at least until this current generation of consoles is finished, as that's what they have on tap.

You could argue that current GPUs can't handle the quality that would see 4 GB of VRAM exceeded at 4K; however, once you SLI a 4 GB 980/970/290X, you are limited by the VRAM.

Oh please, so much of your VRAM isn't being used properly that it's astonishing. Run some memory profiling on your VRAM while gaming: more than 30% of your occupied VRAM at any given point in time is idling. It's bad programming, not the actual requirements of the textures and rendering techniques.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Oh please, so much of your VRAM isn't being used properly that it's astonishing. Run some memory profiling on your VRAM while gaming: more than 30% of your occupied VRAM at any given point in time is idling. It's bad programming, not the actual requirements of the textures and rendering techniques.

 

Do you have a suggested application to use to check this?

 

Look... I seriously doubt several major AAA games are all wasting 30% of their claimed VRAM... would you care to explain?

Sim Rig:  Valve Index - Acer XV273KP - 5950x - GTX 2080ti - B550 Master - 32 GB ddr4 @ 3800c14 - DG-85 - HX1200 - 360mm AIO


Long Live VR. Pancake gaming is dead.


That's still way too high, IMO, but if performance outweighs everything else then I guess it's justifiable.

 

It's a 40% improvement in TDP over the last generation. That is a huge improvement (and more than nVidia managed in the Kepler -> Maxwell switch). Also, that 300W is the peak load; nobody runs their graphics card at 100% load for hours, and nVidia cards can get rather close to 300W when you thrash them too. In addition, I would posit that the people who are in the market for a 700+ dollar graphics card are not in a situation where they need to care, from a cost point of view, about a graphics card drawing 50W more. Realistically, it wouldn't cost you more than 20 dollars per year; a single take-out pizza can cost you that much. Even if you really thrashed the card for 4 hours per night (running it at 100% load, which is totally unrealistic because DX11.2 cannot deliver instructions to the GPU fast enough to saturate a card with those rumoured specs), 5 nights a week, 50W more works out to only about 5 dollars per year at a 10 cent per kWh rate (and you could probably save some of that off your electricity bill :)).
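For anyone who wants to sanity-check that figure, here is the back-of-the-envelope arithmetic in Python, using the assumptions from the paragraph above (4 hours/night, 5 nights/week, 10 cents per kWh):

# Rough yearly cost of an extra 50 W under the usage pattern described above.
# Inputs are the post's assumptions, not measurements.
extra_watts = 50
hours_per_year = 4 * 5 * 52            # 4 h/night, 5 nights/week, 52 weeks
kwh_per_year = extra_watts * hours_per_year / 1000
cost_per_kwh = 0.10                    # USD, assumed rate
print(f"Extra energy: {kwh_per_year:.0f} kWh/year")
print(f"Extra cost:   ${kwh_per_year * cost_per_kwh:.2f}/year")
# Roughly 52 kWh, or about $5 per year for the 50 W difference.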

 

I will buy a 300 series card. It may very well be the last series of cards AMD ever makes.

 

AMD reduced their losses from nearly 300 million in Q4 2014 to 180 million in Q1 2015 without a major product launch. I don't think they're going out of business. They may be bought out by another company (as Samsung is rumoured to be considering), but I sincerely doubt AMD will be shut down.

Intel i7 5820K (4.5 GHz) | MSI X99A MPower | 32 GB Kingston HyperX Fury 2666MHz | Asus RoG STRIX GTX 1080ti OC | Samsung 951 m.2 nVME 512GB | Crucial MX200 1000GB | Western Digital Caviar Black 2000GB | Noctua NH-D15 | Fractal Define R5 | Seasonic 860 Platinum | Logitech G910 | Sennheiser 599 | Blue Yeti | Logitech G502

 

Nikon D500 | Nikon 300mm f/4 PF  | Nikon 200-500 f/5.6 | Nikon 50mm f/1.8 | Tamron 70-210 f/4 VCII | Sigma 10-20 f/3.5 | Nikon 17-55 f/2.8 | Tamron 90mm F2.8 SP Di VC USD Macro | Neewer 750II


Lol no.

http://www.tweaktown.com/tweakipedia/68/amd-radeon-r9-290x-4gb-vs-8gb-4k-maxed-settings/index.html

The only time there's a minor difference is with shit like 4x SSAA or uncompressed textures. And they didn't even mention clock speed, which might be different. When you have the 4 GB and 8 GB models trading blows in games, it's a sign that any difference between the two is random variance and not an actual real-world difference.

Don't get me wrong, there are benefits to more VRAM with multi-monitor setups, extreme anti-aliasing, etc. But even for 4K, 4 GB is plenty right now. At some point, memory bandwidth is going to matter more than the amount of VRAM, and that's where AMD is kicking ass while Nvidia is held back.

AMD is not going to throw another 4 GB of expensive HBM memory onto a card because people think they need it. They'd much rather release a cheaper 4 GB card that will still perform like a champ for 99% of users.

The point is not what requirements games have now; it's what they'll need in the future. A 390X will remain relevant as a card until late 2018 or longer - the 7970 is still a thing, look at how long it's lasted. I think AMD looked at the 680 and 580 and said to themselves: those cards are falling behind because they simply don't have the VRAM to feed their cores. I mean, at 1080p games have been using over 2 GB for a long while. The 680 is struggling as a result, not to mention the 580. SLI 580s are plenty powerful, but they simply can't keep up with VRAM requirements.

If AMD puts 8 GB of VRAM on it, they're planning for the longevity of the card. Crossfire 390Xs with 8 GB might be relevant until 2020, if current trends continue. But 4 GB won't be.

Daily Driver:

Case: Red Prodigy CPU: i5 3570K @ 4.3 GHZ GPU: Powercolor PCS+ 290x @1100 mhz MOBO: Asus P8Z77-I CPU Cooler: NZXT x40 RAM: 8GB 2133mhz AMD Gamer series Storage: A 1TB WD Blue, a 500GB WD Blue, a Samsung 840 EVO 250GB


The point is not what requirements games have now; it's what they'll need in the future. A 390X will remain relevant as a card until late 2018 or longer - the 7970 is still a thing, look at how long it's lasted. I think AMD looked at the 680 and 580 and said to themselves: those cards are falling behind because they simply don't have the VRAM to feed their cores. I mean, at 1080p games have been using over 2 GB for a long while. The 680 is struggling as a result, not to mention the 580. SLI 580s are plenty powerful, but they simply can't keep up with VRAM requirements.

If AMD puts 8 GB of VRAM on it, they're planning for the longevity of the card. Crossfire 390Xs with 8 GB might be relevant until 2020, if current trends continue. But 4 GB won't be.

 

Yeah, my 7950 (i.e. an R9 280) still has a lot of life left in it. It has massive overclocking headroom, since it's an early SKU with the VRM unlocked up to 1300mV (stock is something like 1025mV), and 3 GB of VRAM (also overclockable) is plenty for the foreseeable future at 1080p.

Intel i7 5820K (4.5 GHz) | MSI X99A MPower | 32 GB Kingston HyperX Fury 2666MHz | Asus RoG STRIX GTX 1080ti OC | Samsung 951 m.2 nVME 512GB | Crucial MX200 1000GB | Western Digital Caviar Black 2000GB | Noctua NH-D15 | Fractal Define R5 | Seasonic 860 Platinum | Logitech G910 | Sennheiser 599 | Blue Yeti | Logitech G502

 

Nikon D500 | Nikon 300mm f/4 PF  | Nikon 200-500 f/5.6 | Nikon 50mm f/1.8 | Tamron 70-210 f/4 VCII | Sigma 10-20 f/3.5 | Nikon 17-55 f/2.8 | Tamron 90mm F2.8 SP Di VC USD Macro | Neewer 750II


Double the memory gives the 290X a 14% average boost at 4K, though the price difference is more than that. It would make sense to use that much on the flagship card.

 

http://www.tomshardware.com/reviews/sapphire-vapor-x-r9-290x-8gb,3977-6.html

 

If the 390X has a 300W TDP and performs similarly to the Titan X, it means AMD has caught up to Nvidia quite a bit; it's only a 50W difference.


Do you have a suggested application to use to check this?

Look... I seriously doubt several major AAA games are all wasting 30% of their claimed VRAM... would you care to explain?

KCachegrind, if you can find a Windows binary. It's quite simple: they throw almost everything into the frame buffer and see what sticks. The thing is, the bulk of game textures will not be used moment to moment. You can easily leave them in system RAM until you get close to an area where you're about to switch maps. I've seen this in Shadow of Mordor, ACU, and some older titles too.
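A minimal sketch of the streaming idea being described: textures stay in system RAM and only get "uploaded" when the player is near the region that needs them. All names here are made up for illustration; this is not engine code.

import math

class TextureStreamer:
    """Toy model: textures live in system RAM until a region comes into range."""
    def __init__(self, upload_radius):
        self.upload_radius = upload_radius
        self.system_ram = {}   # texture_id -> raw data (always resident)
        self.vram = set()      # texture_ids currently "uploaded" to the GPU

    def load(self, texture_id, data):
        self.system_ram[texture_id] = data

    def update(self, player_pos, regions):
        # regions: list of (region_center, [texture_ids used by that region])
        for center, texture_ids in regions:
            near = math.dist(player_pos, center) < self.upload_radius
            for tid in texture_ids:
                if near:
                    self.vram.add(tid)        # upload only when actually needed
                else:
                    self.vram.discard(tid)    # evict once the player moves away

streamer = TextureStreamer(upload_radius=100.0)
streamer.load("castle_walls", b"...")
streamer.update(player_pos=(10.0, 0.0), regions=[((50.0, 0.0), ["castle_walls"])])
print(streamer.vram)   # {'castle_walls'}, because the region is within 100 units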

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


You forget that not once has AMD confirmed the specs of Fiji. Everything has been rumorville up to now. Also, Hassan reported on what was sitting in plain sight. That's typical daily journalism.

As for the faked benches, simply Google '390X faked benchmarks'. SiSoft and 3DMark were both fooled.

Also, it's entirely unreasonable. Internal bandwidth has not been a bottleneck for gaming in a very long time; games are still shader- and TMU/ROP-bound. The Asynchronous Shading demo is proof of this for AMD. The boost in memory bandwidth will yield negligible performance gains. As for the shader boost, scaling is not 100%! A 30% boost at the same clocks is a generous estimate.

They don't need to confirm the specifications of the product for us to get a good idea of what's to come. We've predicted several cards in the past with 100% accuracy including the most recent TITAN X.

 

You said the benchmarks leaked from AMD were fake so can you provide a source?

 

Bandwidth is a problem for a lot of graphics cards today. Take a look at how the 290X overtakes the 980 in 4K gaming. Being able to poke twice as many registers at one time surely has its advantages. The Asynchronous Shading concept is just another way of boosting performance: executing several tasks at once so that they complete in a timely manner, just like any other threading model.
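As a loose CPU-side analogy for that "several tasks at once" idea (plain Python threading, not actual GPU or driver code; the task names and timings are invented):

from concurrent.futures import ThreadPoolExecutor
import time

def gpu_task(name, ms):
    # stand-in for an independent piece of GPU work (e.g. a compute job
    # filling otherwise idle shader time alongside the graphics queue)
    time.sleep(ms / 1000)
    return name

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    # independent tasks overlap instead of running back to back
    results = list(pool.map(gpu_task, ["graphics", "physics", "postfx"], [8, 5, 3]))
elapsed = (time.perf_counter() - start) * 1000
print(results, f"~{elapsed:.0f} ms total (vs ~16 ms if run serially)")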

 

[Image: The Tomorrow Children screenshot]

 

The performance advantage of HBM will show itself at 1440p and even more so in 4K gaming. You're forgetting that the GCN revision being used is not just a marginal update; the latest GCN revision is quite a giant step forward in both performance and power efficiency, and AMD is extremely close to Maxwell's power efficiency with this new generation. On average, the 390X is 57% faster than the 290X in DirectX 11 titles, according to AMD.


They don't need to confirm the specifications of the product for us to get a good idea of what's to come. We've predicted several cards in the past with 100% accuracy including the most recent TITAN X.

 

You said the benchmarks leaked from AMD were fake so can you provide a source?

 

Bandwidth is a problem for a lot of graphics cards today. Take a look at how the 290X overtakes the 980 in 4K gaming. Being able to poke twice as many registers at one time surely has its advantages. The Asynchronous Shading concept is just another way of boosting performance: executing several tasks at once so that they complete in a timely manner, just like any other threading model.

 

[Image: The Tomorrow Children screenshot]

 

The performance advantage of HBM will show itself at 1440p and even more so in 4K gaming. You're forgetting that the GCN revision being used is not just a marginal update; the latest GCN revision is quite a giant step forward in both performance and power efficiency, and AMD is extremely close to Maxwell's power efficiency with this new generation. On average, the 390X is 57% faster than the 290X in DirectX 11 titles, according to AMD.

AMD should never have bought ATI; maybe co-operated with them, but never bought them outright. Then we might have good CPUs with a nice price/performance ratio, similar to the way we have GPUs with a nice price/performance ratio. And in regard to 4K gaming with the R9 290/290X, AMD designed them for 4K.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


The point is not what requirements games have now; it's what they'll need in the future. A 390X will remain relevant as a card until late 2018 or longer - the 7970 is still a thing, look at how long it's lasted. I think AMD looked at the 680 and 580 and said to themselves: those cards are falling behind because they simply don't have the VRAM to feed their cores. I mean, at 1080p games have been using over 2 GB for a long while. The 680 is struggling as a result, not to mention the 580. SLI 580s are plenty powerful, but they simply can't keep up with VRAM requirements.

If AMD puts 8 GB of VRAM on it, they're planning for the longevity of the card. Crossfire 390Xs with 8 GB might be relevant until 2020, if current trends continue. But 4 GB won't be.

 

That's why I hope it's an 8 GB card.

 

Yeah, my 7950 (i.e. an R9 280) still has a lot of life left in it. It has massive overclocking headroom, since it's an early SKU with the VRM unlocked up to 1300mV (stock is something like 1025mV), and 3 GB of VRAM (also overclockable) is plenty for the foreseeable future at 1080p.

 

That extra 1 GB of VRAM is really worthwhile; I used my 280X's 3 GB for 4K games and triple 1080p.

Sim Rig:  Valve Index - Acer XV273KP - 5950x - GTX 2080ti - B550 Master - 32 GB ddr4 @ 3800c14 - DG-85 - HX1200 - 360mm AIO


Long Live VR. Pancake gaming is dead.


So what I'm getting from this thread is that people are fine getting yearly remakes of the same AAA game... err, I mean graphics card? What is the point of buying a new graphics card if the specs are virtually the same year to year (imagine a 1570 being a slight upgrade from a 970, still with a total of 4 GB of VRAM)?

 

Sorry, but if a company wants me to purchase their new and improved product, they need to have something in that product that makes me want to buy it. Seeing the same amount of VRAM on cards for, what, three generations now is the reason I'm still sitting on an HD 6850. (And before anyone says there weren't, there were 4 GB GTX 680s made.)


So what I'm getting from this thread is that people are fine getting yearly remakes of the same AAA game... err, I mean graphics card? What is the point of buying a new graphics card if the specs are virtually the same year to year (imagine a 1570 being a slight upgrade from a 970, still with a total of 4 GB of VRAM)?

 

Sorry, but if a company wants me to purchase their new and improved product, they need to have something in that product that makes me want to buy it. Seeing the same amount of VRAM on cards for, what, three generations now is the reason I'm still sitting on an HD 6850. (And before anyone says there weren't, there were 4 GB GTX 680s made.)

So you would be against buying the 4096 SPU R9 390X behemoth because it only has 8GB of HBM?

 

There's no game on the consumer market that can saturate that much memory.


So you would be against buying the 4096 SPU R9 390X behemoth because it only has 8GB of HBM?

 

There's no game on the consumer market that can saturate that much memory.

Yep, even at 4K you'd be lucky to use 7 GB of VRAM.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


They don't need to confirm the specifications of the product for us to get a good idea of what's to come. We've predicted several cards in the past with 100% accuracy including the most recent TITAN X.

You said the benchmarks leaked from AMD were fake so can you provide a source?

Bandwidth is a problem for a lot of graphics cards today. Take a look at how the 290X overtakes the 980 in 4K gaming. Being able to poke twice as many registers at one time surely has its advantages. The Asynchronous Shading concept is just another way of boosting performance: executing several tasks at once so that they complete in a timely manner, just like any other threading model.

[Image: The Tomorrow Children screenshot]

The performance advantage of HBM will show itself at 1440p and even more so in 4K gaming. You're forgetting that the GCN revision being used is not just a marginal update; the latest GCN revision is quite a giant step forward in both performance and power efficiency, and AMD is extremely close to Maxwell's power efficiency with this new generation. On average, the 390X is 57% faster than the 290X in DirectX 11 titles, according to AMD.

The 290X does not beat the 980 in 4K. What crack are you on? Also, no, it's an incremental update. I've seen the 285 and it does not live up to the hype; the 390X won't either.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Too bad it'll all cost an arm and a leg to buy. t-t

Source?

My PC: i7 3770k @ 4.4 Ghz || Hyper 212 Evo || Intel Extreme Motherboard DZ77GA || EVGA Hybrid 980ti || Corsair Vengeance Blue 16GB || Samsung 840 Evo 120 GB || WD Black 1TB

 

Peripherals: Corsair K70 RGB || Sentey Pro Revolution Gaming Mouse || Beyerdynamic DT 990 Premium 250 Ohm Headphone || Benq XL2420Z Monitor


The 290X does not beat the 980 in 4K. What crack are you on? Also, no, it's an incremental update. I've seen the 285 and it does not live up to the hype; the 390X won't either.

Going from performing on par with the GTX 970 at FHD, it jumps up to competing with the GTX 980 in 4K benchmarks. Also, Fiji is not based on Volcanic Islands; it's another revision of the GCN architecture. Rumor has it that AMD went back to the drawing board, using Tahiti as the base for Fiji.


Going from performing on par with the GTX 970 at FHD, it jumps up to competing with the GTX 980 in 4K benchmarks. Also, Fiji is not based on Volcanic Islands; it's another revision of the GCN architecture. Rumor has it that AMD went back to the drawing board, using Tahiti as the base for Fiji.

The 980 across the board beats the 290X in 4K.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


The 980 across the board beats the 290X in 4K.

The point is that the gap between the two shrinks significantly, and something like the 970 goes from "winning sometimes" to falling behind, down to 290 levels.

Compare that to, say, the Titan X, which retains the same relative performance against the 980 as resolution increases.

4K // R5 3600 // RTX2080Ti


The point is not what requirements games have now; it's what they'll need in the future. A 390X will remain relevant as a card until late 2018 or longer - the 7970 is still a thing, look at how long it's lasted. I think AMD looked at the 680 and 580 and said to themselves: those cards are falling behind because they simply don't have the VRAM to feed their cores. I mean, at 1080p games have been using over 2 GB for a long while. The 680 is struggling as a result, not to mention the 580. SLI 580s are plenty powerful, but they simply can't keep up with VRAM requirements.

If AMD puts 8 GB of VRAM on it, they're planning for the longevity of the card. Crossfire 390Xs with 8 GB might be relevant until 2020, if current trends continue. But 4 GB won't be.

 

The 580's VRAM was a mistake at the time. You had aftermarket 3 GB versions because this flagship product was being marketed on the basis of Surround and didn't have the VRAM needed for Surround back in 2011. If you weren't using multiple monitors, though, I think the time when games start to need more VRAM tends to come around the same time that the GPU itself starts to feel not so high-end in general, so it's a good time to upgrade.

 

AMD are better at thinking "what might people want to do with multiples of these cards" and giving enough VRAM for that, but I think it's a mistake to assume that means Nvidia just doesn't have enough for single-card use. The 290X 8GB is stupid; there's no other word for it. 290X CF will not demand that much at 4K; you will struggle to use more than 4 GB almost anywhere. If, however, the 390X and the Titan X have so much VRAM because they're expecting 4K Eyefinity/Surround, 5K, or 6K with two or more of these GPUs together to be a thing, then that's when I think the 6, 8 and 12 GB frame buffers might actually make sense.


So you would be against buying the 4096 SPU R9 390X behemoth because it only has 8GB of HBM?

 

There's no game on the consumer market that can saturate that much memory.

No, I would dislike it if said GPU only had 4 GB of RAM. What I'm amazed at is people saying it shouldn't have 8 GB of VRAM because 90% of current games don't even use 4.

 

super late edit.


Having a 40" 2160p screen myself and trying to run all games at that resolution with a 290, I really want the next card to have 8 GB of VRAM, just to be somewhat future-proof. There is currently no option on the market with the horsepower plus VRAM to handle 4K comfortably except the Titan Z or Titan X, and those are immensely expensive. GTA 5 uses just under 4 GB of VRAM at maximum settings without MSAA at 2160p.

Vulkan and DX12 may hopefully open the gates for combined VRAM usage between GPUs, but I think it will take another three years until engines incorporate the full Vulkan & DX12 spec.

That's no moon, that's a death ball !
K'Nex Server -- R9 290 Alpenföhn Peter Review -- Philips BDM4065UC Review
CPU Intel i5-4760K @ 4.3Ghz MEM 4x 4GB Cucial Ballistix 1600 LP MOBO Asus Maximus VI Gene GPU 980Ti G1 @ 1.47Ghz SSD 3x Samsung 840 EVO 240GB Raid0 CASE Silverstone SG10 DISPLAY Philips BDM4065UC 40" UHD


Having a 40" 2160p screen myself and trying to run all games at that resolution with a 290, I really want the next card to have 8 GB of VRAM, just to be somewhat future-proof. There is currently no option on the market with the horsepower plus VRAM to handle 4K comfortably except the Titan Z or Titan X, and those are immensely expensive. GTA 5 uses just under 4 GB of VRAM at maximum settings without MSAA at 2160p.

Vulkan and DX12 may hopefully open the gates for combined VRAM usage between GPUs, but I think it will take another three years until engines incorporate the full Vulkan & DX12 spec.

 

The problem with sharing VRAM is that the interconnect between the cards will never be fast enough... The PCIe 4.0 standard is about 2,000 MB/s per lane, so even with PCIe 4.0 x16 SLI the cards could only share RAM at roughly 38 GB/s (including 6 GB/s from the SLI bridge).

 

The Titan X, by comparison, has around 336 GB/s of VRAM bandwidth...
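For reference, the arithmetic behind that comparison, using the figures quoted in this post (the per-lane and SLI-bridge numbers are the post's assumptions):

# Inter-GPU link vs local VRAM bandwidth, using the numbers quoted above.
pcie4_per_lane_gbs = 2.0        # ~2 GB/s per PCIe 4.0 lane (approximate)
lanes = 16
sli_bridge_gbs = 6.0            # assumed SLI-bridge contribution from the post
interconnect_gbs = pcie4_per_lane_gbs * lanes + sli_bridge_gbs
titan_x_vram_gbs = 336.0        # Titan X memory bandwidth
print(f"Inter-GPU path: ~{interconnect_gbs:.0f} GB/s")
print(f"Local VRAM:     ~{titan_x_vram_gbs:.0f} GB/s "
      f"(about {titan_x_vram_gbs / interconnect_gbs:.0f}x faster)")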

Sim Rig:  Valve Index - Acer XV273KP - 5950x - GTX 2080ti - B550 Master - 32 GB ddr4 @ 3800c14 - DG-85 - HX1200 - 360mm AIO

Quote

Long Live VR. Pancake gaming is dead.


The problem with sharing VRAM is that the interconnect between the cards will never be fast enough... The PCIe 4.0 standard is about 2,000 MB/s per lane, so even with PCIe 4.0 x16 SLI the cards could only share RAM at roughly 38 GB/s (including 6 GB/s from the SLI bridge).

There would be no need for one GPU to access the memory of another, since the specific workload and VRAM can be managed per GPU, which is not possible in the current gfx pipeline.

Practical example:

Since mesh data is small in GPU memory, this could be copied to each GPU.

From there, you can distribute the different render passes and corresponding textures to each GPU separately. The only real cost you'll get is moving each buffer to the display GPU to merge them, which is really not a problem at all in the grand scheme of things.
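A rough sketch of that kind of per-GPU split (the pass names and two-GPU layout are illustrative only, not a real DX12/Vulkan API):

# Illustrative split of render passes across two GPUs: mesh data is duplicated
# (it is small), each GPU renders its own passes with only the textures those
# passes need, and only the finished buffers travel to the display GPU.
PASS_ASSIGNMENT = {
    "gpu0": ["depth", "albedo"],
    "gpu1": ["normals", "shadow_maps"],
}

def render_frame(scene):
    buffers = {}
    for gpu, passes in PASS_ASSIGNMENT.items():
        for render_pass in passes:
            # each GPU works from its own copy of the (small) mesh data
            buffers[render_pass] = f"{render_pass} buffer of {scene} ({gpu})"
    # the only cross-GPU traffic: shipping finished buffers for compositing
    return "composited from: " + ", ".join(sorted(buffers))

print(render_frame("frame_001"))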

That's no moon, that's a death ball !
K'Nex Server -- R9 290 Alpenföhn Peter Review -- Philips BDM4065UC Review
CPU Intel i5-4760K @ 4.3Ghz MEM 4x 4GB Cucial Ballistix 1600 LP MOBO Asus Maximus VI Gene GPU 980Ti G1 @ 1.47Ghz SSD 3x Samsung 840 EVO 240GB Raid0 CASE Silverstone SG10 DISPLAY Philips BDM4065UC 40" UHD


There would be no need for one GPU to access the memory of another, since the specific workload and VRAM can be managed per GPU, which is not possible in the current gfx pipeline.

Practical example:

Since mesh data is small in GPU memory, this could be copied to each GPU.

From there, you can distribute the different render passes and corresponding textures to each GPU separately. The only real cost you'll get is moving each buffer to the display GPU to merge them, which is really not a problem at all in the grand scheme of things.

 

I'm making assumptions here.

 

This would help for render passes that are additive, but any that must be performed sequentially would see one GPU waiting on the other, and frame time would then get blown out. I imagine most shaders must be applied in order, one after the other.

 

AFR (alternate frame rendering) might benefit from simultaneous passes between GPUs without sharing memory resources, though other SLI methods would not, as both GPUs render a shared portion of one frame.
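A toy numbers example of that frametime concern (the pass timings are invented): independent passes can overlap across two GPUs, but a pass that depends on their output still has to wait.

# Made-up pass timings in milliseconds.
gbuffer_ms, shadows_ms, lighting_ms = 4.0, 3.0, 5.0

# gbuffer and shadows are independent, so two GPUs can run them side by side;
# lighting depends on both, so it still runs afterwards.
split_across_gpus = max(gbuffer_ms, shadows_ms) + lighting_ms
fully_sequential = gbuffer_ms + shadows_ms + lighting_ms

print(f"split across GPUs: ~{split_across_gpus:.1f} ms per frame")
print(f"single GPU, sequential: ~{fully_sequential:.1f} ms per frame")
# The win comes only from the independent portion; a long dependent chain
# would leave one GPU idle and blow the frame time out, as noted above.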

Sim Rig:  Valve Index - Acer XV273KP - 5950x - GTX 2080ti - B550 Master - 32 GB ddr4 @ 3800c14 - DG-85 - HX1200 - 360mm AIO

Quote

Long Live VR. Pancake gaming is dead.


...

This would help for render passes that are additive, but any that must be performed sequentially would see one GPU waiting on the other, and frame time would then get blown out. I imagine most shaders must be applied in order, one after the other.

UE4, Unity 5, CryEngine & Frostbite all use deferred rendering techniques TODAY.

Rendering the depth, albedo and normal buffers can all be done separately from each other; it's only when we're doing lighting and final compositing that we're actually waiting for the prerequisite buffers to finish rendering.

On top of that, you can also utilize these GPUs for general compute, so I don't think some rendering "downtime" is a bad thing to have. Note that this is already in place now; it's just that engine makers can now actually fully control what each GPU does, which should boost performance (reduce "downtime" versus SLI/Crossfire) and efficiency (avoid VRAM duplication).

For some things where data duplication is unavoidable due to performance considerations, the workload can be split across however many GPUs are present. They could potentially even offer you the choice of VRAM versus rendering speed (I doubt we'll ever see that outside of a tech demo). Frostbite already gets its performance by rendering in tiles across GPUs; that's why you see really good SLI/Crossfire scaling in BF4. But they could do so much more.
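And a tiny illustration of the tile-based split mentioned for Frostbite (tile size and GPU count are arbitrary here; this is not engine code):

# Cut a 4K frame into tiles and deal them out round-robin to the GPUs.
width, height, tile = 3840, 2160, 256
gpu_count = 2

tiles = [(x, y) for y in range(0, height, tile) for x in range(0, width, tile)]
assignment = {gpu: tiles[gpu::gpu_count] for gpu in range(gpu_count)}

for gpu, owned in assignment.items():
    print(f"GPU {gpu}: renders {len(owned)} of {len(tiles)} tiles")
# Each GPU only touches its own tiles, which is why this scheme scales well
# in SLI/Crossfire, as with BF4.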

 

AFR (alternate frame rendering) might benefit from simultaneous passes between GPUs without sharing memory resources, though other SLI methods would not, as both GPUs render a shared portion of one frame.

Well, you cannot render a normal buffer without all the normal texture data being present for the thing you are trying to render. In other words, either the texture data would reside on both GPUs (copied, as it is now) or one GPU would need to fetch it from the other every single rendered frame (which would be shitty). AFR would only be helpful in terms of pure rendering performance, but it would put you in the exact same boat as SLI/Crossfire in terms of VRAM. This is, however, the easiest and laziest way to implement some DX12 & Vulkan benefits, which means it is the first thing you'll actually see being used in games :)

I'm not a graphics programmer by trade (I'm a 3D artist / level designer), but I did study it at university and have written my own little engine in DX11, while also doing some shader work in Unity 4, Unreal 3-4 and CryEngine (the dev version just before Crysis 2 dropped). So I somewhat know what I'm talking about, but I haven't kept up to speed on graphics programming in the last 1-2 years, and this industry moves very fast.

I'd like to see some technical talks about it, but at the moment I doubt you'll see any official statements from the major game engine houses, as the spec isn't even out yet, let alone properly implemented. Sorry for the rant, btw.

That's no moon, that's a death ball !
K'Nex Server -- R9 290 Alpenföhn Peter Review -- Philips BDM4065UC Review
CPU Intel i5-4760K @ 4.3Ghz MEM 4x 4GB Cucial Ballistix 1600 LP MOBO Asus Maximus VI Gene GPU 980Ti G1 @ 1.47Ghz SSD 3x Samsung 840 EVO 240GB Raid0 CASE Silverstone SG10 DISPLAY Philips BDM4065UC 40" UHD

