AMD R9 390X with 8 gigs of ram?

cluelessgenius

UE4, Unity 5, CryEngine & Frostbite all use deferred rendering techniques TODAY.

Rendering the depth, albedo and normal buffers can all be done separately from each other; it's only the lighting and final compositing passes that actually have to wait for those prerequisite buffers to finish rendering.
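To make that dependency concrete, here's a minimal C++ sketch of the pass graph (hypothetical types and pass names, not any real engine's API): the three G-buffer passes run independently, and only the lighting/compositing step has to wait on all of them.

```cpp
#include <cstdio>
#include <future>
#include <vector>

// Hypothetical stand-ins for real render targets; not any engine's actual API.
struct Buffer { const char* name; };

Buffer renderDepth()  { return {"depth"};  }  // no dependency on the other passes
Buffer renderAlbedo() { return {"albedo"}; }  // no dependency on the other passes
Buffer renderNormal() { return {"normal"}; }  // no dependency on the other passes

// Lighting/compositing is the only stage that has to wait for all of them.
void lightAndComposite(const std::vector<Buffer>& gbuffer) {
    for (const Buffer& b : gbuffer)
        std::printf("compositing with %s buffer\n", b.name);
}

int main() {
    // Each independent pass could be farmed out to a different GPU.
    auto depth  = std::async(std::launch::async, renderDepth);
    auto albedo = std::async(std::launch::async, renderAlbedo);
    auto normal = std::async(std::launch::async, renderNormal);

    // The one real sync point: lighting needs every prerequisite buffer.
    lightAndComposite({depth.get(), albedo.get(), normal.get()});
}
```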

On top of that you can also utilize these GPUs for general compute, so I don't think some rendering "downtime" is a bad thing to have. Note that this is already in place now; it's just that engine makers can now actually fully control what each GPU does, which should boost performance (reduce "downtime" vs SLI/Crossfire) and efficiency (avoid VRAM duplication).

For some things where data duplication is unavoidable due to performance considerations, the workload can be split up across any number of GPUs. They could potentially even offer you the choice of VRAM capacity vs rendering speed (though I doubt we'll ever see that outside of a tech demo). Frostbite gets its performance by rendering in tiles across GPUs; that's why you see really good SLI/Crossfire scaling in BF4. But they could do so much more.
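As a rough illustration of that tile split (the resolution, tile size and round-robin assignment here are all assumptions, not Frostbite's actual scheme), dividing a 4K frame across GPUs could look like this:

```cpp
#include <cstdio>

int main() {
    const int width = 3840, height = 2160;  // a 4K frame
    const int tile  = 256;                  // hypothetical tile size
    const int gpus  = 2;

    const int tilesX = (width  + tile - 1) / tile;  // ceiling division
    const int tilesY = (height + tile - 1) / tile;

    int assigned[gpus] = {};  // tiles handed to each GPU
    for (int ty = 0; ty < tilesY; ++ty)
        for (int tx = 0; tx < tilesX; ++tx)
            ++assigned[(ty * tilesX + tx) % gpus];  // round-robin split

    for (int g = 0; g < gpus; ++g)
        std::printf("GPU %d renders %d of %d tiles\n", g, assigned[g], tilesX * tilesY);
}
```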

 

Well, you cannot render a normal buffer without all the normal texture data being present for the thing you are trying to render. In other words, either the texture data would reside on both GPUs (copied, as it is now) or one GPU would need to fetch it from the other each and every rendered frame (that would be shitty). AFR would only be helpful in terms of pure rendering performance, but it would put you in the exact same boat as SLI/Crossfire in terms of VRAM. It is, however, the easiest and laziest way to implement some DX12 & Vulkan benefits, which means it's the first thing you'll actually see being used in games :)
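Back-of-the-envelope, with assumed numbers, this is why AFR doesn't pool VRAM while an explicit split can:

```cpp
#include <cstdio>

int main() {
    const int gpus          = 2;
    const int perGpuGiB     = 4;  // e.g. two 4 GiB cards
    const int duplicatedGiB = 1;  // hypothetical assets every GPU must keep a copy of

    // AFR: each GPU renders whole frames, so the entire working set is duplicated.
    const int afrUsable = perGpuGiB;

    // Explicit split: only the shared assets are duplicated on the extra GPUs.
    const int splitUsable = gpus * perGpuGiB - (gpus - 1) * duplicatedGiB;

    std::printf("AFR usable VRAM: %d GiB, split usable VRAM: %d GiB\n",
                afrUsable, splitUsable);
}
```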

I'm not a graphics programmer by trade (I'm a 3D Artist / Level Designer), but I did study it at university and have written my own little engine in DX11, while also doing some shader work in Unity 4, Unreal 3-4 and CryEngine (the dev version just before Crysis 2 dropped). So I somewhat know what I'm talking about, but I haven't kept up to speed on graphics programming the last 1-2 years, and this industry moves very fast.

I'd like to see some technical talks about it, but atm I doubt you'll see any official statements from major game engine houses, as the spec isn't even out yet, let alone properly implemented. Sorry for the rant btw.

 

Thank you for taking the time to post this.

Sim Rig:  Valve Index - Acer XV273KP - 5950x - RTX 2080 Ti - B550 Master - 32 GB ddr4 @ 3800c14 - DG-85 - HX1200 - 360mm AIO

Long Live VR. Pancake gaming is dead.


The 980 beats the 290X across the board at 4K.

it beats it by like 6-8 fps... which is nothing...

AMD Rig - (Upgraded): FX 8320 @ 4.8 Ghz, Corsair H100i GTX, ROG Crosshair V Formula, 16 GB 1866 Mhz Ram, MSI R9 280x Gaming 3G @ 1150 Mhz, Samsung 850 Evo 250 GB, Win 10 Home

(My first Intel + Nvidia experience - recently bought): MSI GT72S Dominator Pro G (i7 6820HK, 16 GB RAM, 980M SLI, GSync, 1080p, 2x128 GB SSD + 1TB HDD...) FeelsGoodMan


it beats it by like 6-8 fps... which is nothing...

1) That difference is a lot when no single card is getting 60 fps at 4K anyway, and 60Hz is currently the fastest we can drive that standard. Did someone never learn proportions?

2) Since the newest WHQL drivers the difference is closer to 10-12 fps, and 15% is certainly an improvement worthy of note.
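To put numbers on it (using the ~36 fps 4K averages quoted later in this thread): a 6 fps gain on a 36 fps baseline is 6/36 ≈ 17%, and 8 fps is 8/36 ≈ 22%.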

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


1) That difference is a lot when no single card is getting 60 fps at 4K anyway, and 60Hz is currently the fastest we can drive that standard. Did someone never learn proportions?

2) Since the newest WHQL drivers the difference is closer to 10-12 fps, and 15% is certainly an improvement worthy of note.

the new drivers improve performance in GTA 5, not across the board...

AMD Rig - (Upgraded): FX 8320 @ 4.8 Ghz, Corsair H100i GTX, ROG Crosshair V Formula, 16 GB 1866 Mhz Ram, MSI R9 280x Gaming 3G @ 1150 Mhz, Samsung 850 Evo 250 GB, Win 10 Home

(My first Intel + Nvidia experience - recently bought): MSI GT72S Dominator Pro G (i7 6820HK, 16 GB RAM, 980M SLI, GSync, 1080p, 2x128 GB SSD + 1TB HDD...) FeelsGoodMan


1) That difference is a lot when no single card is getting 60 fps at 4K anyway, and 60Hz is currently the fastest we can drive that standard. Did someone never learn proportions?

2) Since the newest WHQL drivers the difference is closer to 10-12 fps, and 15% is certainly an improvement worthy of note.

  1. 60 FPS at 4K is perfectly possible with most high-tier cards; maxed-out in-game settings is not a valid argument.
  2. The GTX 980 only performs roughly 3-8 FPS better than the R9 290X at 4K resolution.

 

  1. 60 FPS at 4K is perfectly possible with most high-tier cards; maxed-out in-game settings is not a valid argument.
  2. The GTX 980 only performs roughly 3-8 FPS better than the R9 290X at 4K resolution.

 

1) No, max settings (no AA) is and has been the standard for years.

2) No, 8 is easily the minimum now.

 

 

the new drivers improve performance in GTA 5, not across the board...

 

WHQL, not beta. Please be more careful. Look at the driver just before the GTA V-specific one. 

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


1) No, max settings (no AA) is and has been the standard for years.

2) No, 8 is easily the minimum now.

  1. There is no standard when it comes to benchmarking, as long as every game is benched at the same settings.
  2. Do you have sources to back up these claims?

 

  1. There is no standard when it comes to benchmarking, as long as every game is benched at the same settings.
  2. Do you have sources to back up these claims?

 

Tom's Hardware and overclockers.uk both keep up-to-date bench lists for all new drivers. You should know that by now.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Tom's Hardware and overclockers.uk both keep up-to-date bench lists for all new drivers. You should know that by now.

Tom's Hardware is a joke of a source (you should know that by now) and I don't browse around just to look at benchmarks. Also overclockers.uk is not a valid domain. Just paste the direct links to them here to make it easier on everybody that's interested.


first result I found

BF4 @ 4K:

290    = 35 fps avg
290X   = 36 fps avg
980    = 41 fps avg
980 G1 = 45 fps avg
295X2  = 67 fps avg

Sim Rig:  Valve Index - Acer XV273KP - 5950x - RTX 2080 Ti - B550 Master - 32 GB ddr4 @ 3800c14 - DG-85 - HX1200 - 360mm AIO

Long Live VR. Pancake gaming is dead.


first result I found

BF4 @ 4K:

290    = 35 fps avg
290X   = 36 fps avg
980    = 41 fps avg
980 G1 = 45 fps avg
295X2  = 67 fps avg

Mantle vs. DX 11. Not remotely valid. When BF4 goes DX 12 we can discuss that.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Tom's Hardware is a joke of a source (you should know that by now) and I don't browse around just to look at benchmarks. Also overclockers.uk is not a valid domain. Just paste the direct links to them here to make it easier on everybody that's interested.

It's a better source than almost anything. Who would you put above them? Anandtech is about the only one which adheres to a higher standard.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


It's a better source than almost anything. Who would you put above them? Anandtech is about the only one which adheres to a higher standard.

Anandtech is where I validated my previous claims.


Anandtech is where I validated my previous claims.

1) You validated nothing, as you did nothing to prove bandwidth was an issue in gaming (it isn't). You quoted the async compute pipeline, not the async shading pipeline which, omg, has a 32ms delay just like the standard one!

 

2) HBM will do nothing for most people outside the scientific computing field. Games are currently choked on the shaders, not what's feeding them.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


1) You validated nothing, as you did nothing to prove bandwidth was an issue in gaming (it isn't). You quoted the async compute pipeline, not the async shading pipeline which, omg, has a 32ms delay just like the standard one!

 

2) HBM will do nothing for most people outside the scientific computing field. Games are currently choked on the shaders, not what's feeding them.

Bandwidth is not an issue per se, as long as the shaders are kept fed. I hinted at the benefit of poking twice as many registers; there's a difference and a similarity.

 

The graph is for The Tomorrow Children, which utilizes asynchronous shading (compute) just like Thief does, cutting 6-10 ms off the graphics stack.
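For a toy illustration of where a saving in that 6-10 ms ballpark comes from (the per-frame timings below are assumptions, not measurements from either game), compare running compute serially after graphics vs overlapping the two queues:

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical per-frame costs; the real numbers depend on the game.
    const double graphicsMs = 22.0;
    const double computeMs  = 8.0;

    const double serial     = graphicsMs + computeMs;           // one queue, back to back
    const double overlapped = std::max(graphicsMs, computeMs);  // compute alongside graphics

    std::printf("serial: %.1f ms, overlapped: %.1f ms, saved: %.1f ms\n",
                serial, overlapped, serial - overlapped);
}
```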


 

  1. 60 FPS at 4K is perfectly possible with most high-tier cards; maxed-out in-game settings is not a valid argument.
  2. The GTX 980 only performs roughly 3-8 FPS better than the R9 290X at 4K resolution.

 

That's Nvidia's own decision to go with minimal memory bandwidth biting them in the ass.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


holy snot, EIGHT?

most gamers could probably get away with 6.

if this is true, WP amd

4690K // 212 EVO // Z97-PRO // Vengeance 16GB // GTX 770 GTX 970 // MX100 128GB // Toshiba 1TB // Air 540 // HX650

Logitech G502 RGB // Corsair K65 RGB (MX Red)


Bandwidth is not an issue per se, as long as the shaders are kept fed. I hinted at the benefit of poking twice as many registers; there's a difference and a similarity.

 

The graph is for The Tomorrow Children, which utilizes asynchronous shading (compute) just like Thief does, cutting 6-10 ms off the graphics stack.

It's well within margin of error for 5 tests. Bull to the shit.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


do you guys think this will make nvidia cut the price of the titan x, or release a 980 ti cheaper than the 390x?

CPU- i7 5960x MOTHERBOARD- Asus Rampage V extreme RAM- 32gb Corsair Dominator Platinum ddr4 2800mhz GPU-  2X EVGA GTX 980 SC in SLI PSU- Corsair ax860 CASE- Corsair Obsidian 750d COOLING- EK cpu+dual gpu custom loop (ek supremacy evo, dual gtx 980 copper/acetal waterblocks) MOUSE- Logitech g502 proteus core KEYBOARD- Ducky shine 3 cherry mx blue switches and blue LED MONITOR- Samsung u28d590d UHD  STORAGE -  120 gb samsung 850 evo ssd, 960 gb ocz trion ssd OS- Windows 10 pro http://pcpartpicker.com/p/jtP8GX


do you guys think this will make nvidia cut the price of the titan x, or release a 980 ti cheaper than the 390x?

a 980 Ti would probably be comparable to a 390X in both price and performance imo. (full disclosure: entirely speculation on all points just made)


do you guys think this will make nvidia cut the price of the titan x, or release a 980 ti cheaper than the 390x?

Nvidia will likely re-brand the TITAN X as the GTX 980 Ti, shave off half of its VRAM to cut costs, and raise its reference clocks to compete with Fiji.


Nvidia will likely re-brand the TITAN X as the GTX 980 Ti, shave off half of its VRAM to cut costs, and raise its reference clocks to compete with Fiji.

That's what I'm hoping they're going to do

CPU- i7 5960x MOTHERBOARD- Asus Rampage V extreme RAM- 32gb Corsair Dominator Platinum ddr4 2800mhz GPU-  2X EVGA GTX 980 SC in SLI PSU- Corsair ax860 CASE- Corsair Obsidian 750d COOLING- EK cpu+dual gpu custom loop (ek supremacy evo, dual gtx 980 copper/acetal waterblocks) MOUSE- Logitech g502 proteus core KEYBOARD- Ducky shine 3 cherry mx blue switches and blue LED MONITOR- Samsung u28d590d UHD  STORAGE -  120 gb samsung 850 evo ssd, 960 gb ocz trion ssd OS- Windows 10 pro http://pcpartpicker.com/p/jtP8GX

