
AMD R9 390X with 8 gigs of ram?

cluelessgenius

If the 980Ti is real and will *only* have 6GB vram, this will be a nice move by AMD to one-up Nvidia. :D

 

This is going to be an awesome year for high-end GPUs. My R9 290 now suddenly feels... mediocre. lol

My Systems:

Main - Work + Gaming:


Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:


FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

 

HTPC:


SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV  // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:


MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

 

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

 

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:


Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS

 


I know, I just wish I had it. ;_;

Who doesn't?

"You know it'll clock down as soon as it hits 40°C, right?" - "Yeah ... but it doesn't hit 40°C ... ever  😄"

 

GPU: MSI GTX1080 Ti Aero @ 2 GHz (watercooled) CPU: Ryzen 5600X (watercooled) RAM: 32GB 3600MHz Corsair LPX MB: Gigabyte B550i PSU: Corsair SF750 Case: Hyte Revolt 3

 


Fiji is estimated to be around 12% faster than the TITAN X. I don't see any backup plan other than re-branding the TITAN X and overclocking it.

It's also in question whether AMD will update the architecture in their next series. Some rumors suggest all re-brands, while others suggest an entirely new series.

Estimated ~12% faster than TITAN X based on leaked benchmarks.

Those benchmarks were proven fake; it was actually 290X Crossfire. It's not that hard to spoof benchmarks.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I was running some random games on my 480 with DSR (the most demanding one being Insurgency, still got 25+ FPS with that maxed), and I don't think I hit the limit even then.

Minecraft really hammers a card when you've got all settings maxed out with shaders in use. It's what I used to test the stability of my RMA'd 970s, and they all failed after a couple of hours of gameplay.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


They're not going to redesign the card just to add another 4 GB of expensive HBM VRAM.

 

What is everyone's obsession with large amounts of VRAM?  Barely anything even uses 3, let alone 4.  The only time you see more than that is with lots of anti-aliasing or uncompressed texture packs.
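For a sense of scale, here's a back-of-envelope sketch of what the raw render targets themselves cost; the formula and figures are my own simplification (real engines allocate many more intermediate buffers, plus all the textures and geometry):

```python
# Rough render-target cost at a given resolution (a simplification:
# double-buffered color plus one depth/stencil buffer, optional MSAA).

def framebuffer_mib(width, height, msaa=1, bytes_per_pixel=4, buffers=2):
    """Approximate color + depth/stencil footprint in MiB."""
    color = width * height * bytes_per_pixel * buffers * msaa
    depth = width * height * 4 * msaa  # 24-bit depth + 8-bit stencil
    return (color + depth) / 2**20

print(round(framebuffer_mib(1920, 1080)))          # ~24 MiB at 1080p
print(round(framebuffer_mib(3840, 2160, msaa=4)))  # ~380 MiB at 4K + 4x MSAA
```

Even a worst-case 4K target with MSAA is a few hundred MiB; the rest of a 3-4 GB budget goes to textures, geometry and caches, which is why heavy AA and uncompressed texture packs are what finally push past it.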

4K // R5 3600 // RTX2080Ti


Those benchmarks were proven fake; it was actually 290X Crossfire. It's not that hard to spoof benchmarks.

You have a source? I'm curious about these 290X Crossfire findings proving the leaked slides wrong. Even your almighty Hassan Mujtaba took the time to acknowledge them. The card already has 45% more shaders, coupled with a big step in architecture revision and HBM. It's only reasonable to expect at least a ~50% faster GPU, especially with the massive memory interface.


Minecraft really hammers a card when you've got all settings maxed out with shaders in use. It's what I used to test the stability of my RMA'd 970s, and they all failed after a couple of hours of gameplay.

That's because Minecraft is a(n) *insert part here* whore. :P

Main rig on profile

VAULT - File Server


Intel Core i5 11400 w/ Shadow Rock LP, 2x16GB SP GAMING 3200MHz CL16, ASUS PRIME Z590-A, 2x LSI 9211-8i, Fractal Define 7, 256GB Team MP33, 3x 6TB WD Red Pro (general storage), 3x 1TB Seagate Barracuda (dumping ground), 3x 8TB WD White-Label (Plex) (all 3 arrays in their respective Windows Parity storage spaces), Corsair RM750x, Windows 11 Education

Sleeper HP Pavilion A6137C


Intel Core i7 6700K @ 4.4GHz, 4x8GB G.SKILL Ares 1800MHz CL10, ASUS Z170M-E D3, 128GB Team MP33, 1TB Seagate Barracuda, 320GB Samsung Spinpoint (for video capture), MSI GTX 970 100ME, EVGA 650G1, Windows 10 Pro

Mac Mini (Late 2020)


Apple M1, 8GB RAM, 256GB, macOS Sonoma

Consoles: Softmodded 1.4 Xbox w/ 500GB HDD, Xbox 360 Elite 120GB Falcon, XB1X w/2TB MX500, Xbox Series X, PS1 1001, PS2 Slim 70000 w/ FreeMcBoot, PS4 Pro 7015B 1TB (retired), PS5 Digital, Nintendo Switch OLED, Nintendo Wii RVL-001 (black)


I have yet to go past 2GB of VRAM usage, so this is kinda weird for me.

My laptop has 512 MB of dedicated VRAM.

Getting livable framerates on some games from about 2010-2012 at 720p x'D

Watch out for each other. Love everyone and forgive everyone, including yourself. Forgive your anger, forgive your guilt. Your shame. Your sadness. Embrace and open up your love, your joy, your truth, and most especially your heart. 
-Jim Henson


They're not going to redesign the card just to add another 4 GB of expensive HBM VRAM.

 

What is everyone's obsession with large amounts of VRAM?  Barely anything even uses 3, let alone 4.  The only time you see more than that is with lots of anti-aliasing or uncompressed texture packs.

That, and 4K.

Which... I don't think more than 20% of people do.

6GB or 7GB would be smart, but if you're at that point, why the fuck not just go 8GB? Which is impractical for the money, from what I hear.

Just stick with the 4 gig versions and we'll see.

This is why I hate speculation.



I am with you. I am really tempted right now to go for an ultrawide + R9 390X combo, but that will be hella expensive. I hope that ultrawides at least become cheaper when the 400 series is released, and that by that time A/F-Sync will be more polished.

*sittin' here with my 7870 GHz Edition in a system that's not even working*

*my only stable computer being an APU with an HD 7660G/D (whatever it says; it's been switching back a lot, IDK why) and half a gig of VRAM*



They're not going to redesign the card just to add another 4 GB of expensive HBM VRAM.

What is everyone's obsession with large amounts of VRAM? Barely anything even uses 3, let alone 4. The only time you see more than that is with lots of anti-aliasing or uncompressed texture packs.

The 8GB 290X has been demonstrated to benefit from the extra VRAM at high resolutions. We're starting to enter the 4K era, and AMD is taking steps to make their cards more appealing to that growing user base.

Ever since I started PC gaming people have always asked why we need more VRAM, and as games have evolved they've been using more and more.

No advancements were ever made by standing still. We need game developers to see that the hardware is available to push the envelope.

Intel i7 6700k @ 4.8ghz, NZXT Kraken X61, ASUS Z170 Maximus VIII Hero, (2x8GB) Kingston DDR4 2400, 2x Sapphire Nitro Fury OC+, Thermaltake 1050W

All in a Semi Truck!:

http://linustechtips.com/main/topic/519811-semi-truck-gaming-pc/#entry6905347


I hope the TDP is huge so I can overclock the berjeezus out of it

But summer is coming where I live. I can't suffer, we're already in a drought.

Mobo: Z97 MSI Gaming 7 / CPU: i5-4690k@4.5GHz 1.23v / GPU: EVGA GTX 1070 / RAM: 8GB DDR3 1600MHz@CL9 1.5v / PSU: Corsair CX500M / Case: NZXT 410 / Monitor: 1080p IPS Acer R240HY bidx


But summer is coming where I live. I can't suffer, we're already in a drought.

The R9 390X "King Of The Hill" is rumored by AMD employees to have a 300W TDP.


The 8GB 290X has been demonstrated to benefit from the extra VRAM at high resolutions. We're starting to enter the 4K era, and AMD is taking steps to make their cards more appealing to that growing user base.

Ever since I started PC gaming people have always asked why we need more VRAM, and as games have evolved they've been using more and more.

No advancements were ever made by standing still. We need game developers to see that the hardware is available to push the envelope.

 

Lol no.

http://www.tweaktown.com/tweakipedia/68/amd-radeon-r9-290x-4gb-vs-8gb-4k-maxed-settings/index.html

 

The only time there's a minor difference is with shit like 4x SSAA or uncompressed textures. And they didn't even mention clock speed, which might be different. When you have the 4 GB and 8 GB models trading blows in games, it's a sign that any difference between the two is random variance, not any actual real-world difference.

 

Don't get me wrong, there are benefits to more VRAM with multimonitor displays or extreme anti-aliasing etc.  But even for 4K, 4 GB is lots right now.  At some point, memory bandwidth is going to matter more than the amount of VRAM and that's where AMD is kicking ass while Nvidia is held back.  

 

AMD is not going to throw another 4 GB of expensive HBM memory onto a card because people think they need it.  They'd much rather release a cheaper 4 GB card that will still perform like a champ for 99% of users.
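The bandwidth half of that argument is easy to put in numbers. A quick sketch using the commonly cited 290X GDDR5 and first-gen HBM specs (assumed figures, not confirmed Fiji numbers):

```python
# Peak memory bandwidth = bus width (in bytes) x effective data rate (GT/s).

def bandwidth_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

print(bandwidth_gbs(512, 5))   # 290X GDDR5: 512-bit @ 5 GT/s -> 320.0 GB/s
print(bandwidth_gbs(4096, 1))  # first-gen HBM: 4096-bit @ 1 GT/s -> 512.0 GB/s
```

That's a 60% bandwidth jump without touching capacity, which is the sense in which HBM helps even a 4 GB card.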



You can just feel the Nvidia fanboys clenching their buttocks together in this thread. :P


Actually, I don't care if it's AMD or Nvidia; all I want is something that can drive 3440x1440 well.

Now that's a good, thinking consumer right here.


The R9 390X "King Of The Hill" is rumored by AMD employees to have a 300W TDP.

That's still way too high IMO, but if performance outweighs all, then I guess it's justifiable.



That's still way too high IMO, but if performance outweighs all, then I guess it's justifiable.

I'm certain we will see an uproar from the community the day Fiji does launch, with the usual "omgerd such hot card wtf burn my houze down", when really, if you were to cut the stream processor count down to 2816 to match the R9 290X, the TDP would be around 206W, compared to the R9 290X's 290W. That would be a decent step forward in power efficiency (the 290X draws roughly 40% more at the same shader count).
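The 206W figure is just linear scaling of the rumored TDP by shader count. A quick sanity check (numbers are the thread's rumored/known specs; linear scaling is a rough assumption, since real power doesn't scale perfectly with unit count):

```python
# Scale the rumored Fiji TDP down to the 290X's shader count, linearly.
fiji_shaders, fiji_tdp = 4096, 300      # rumored Fiji specs
hawaii_shaders, hawaii_tdp = 2816, 290  # R9 290X

scaled_tdp = fiji_tdp * hawaii_shaders / fiji_shaders
print(round(scaled_tdp))                  # ~206 W at the 290X's shader count
print(round(hawaii_tdp / scaled_tdp, 2))  # 290X draws ~1.41x as much
```

So the "roughly 40%" is the 290X drawing ~40% more than the hypothetical cut-down Fiji, not Fiji drawing 40% less.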


"But even for 4K, 4 GB is lots right now."

no...

It is not. It's barely enough for 1440p.

Any game that does not utilize 4GB of VRAM at a resolution of 4K is NOT taking advantage of the resolution... Assets in the game lack the texture detail that can be presented on a 4K screen, and many assets in the distance are not present at all, or are low-quality versions, as a result of current GPU and VRAM limitations.

4GB of VRAM is HOLDING BACK modern GPUs from using high-quality assets (and, as such, developers and consoles with 8GB of RAM). I hit the VRAM limit frequently on my 980 and am bombarded with low LOD in games... I run some AAA games at 2560x1440 as it offers no loss in fidelity vs 4K, only some more jaggies; everything in the distance is a low-quality wash in most games anyway...

I commend developers who release high-quality texture packs and provide LOD adjustment sliders so that in the future people can take advantage of the game in the way the developers imagined it, without the hardware limitations. Games that ship with low-quality assets just so that people with 2GB video cards can play them are not worth playing at anything over 1080p.

6GB-8GB will be adequate for 1080p at least until this current gen of consoles is finished, as that's what they have on tap.

You could argue that current GPUs can't handle the quality that would see 4GB of VRAM exceeded at 4K; however, when considering SLI with a 4GB 980/970/290X, you are limited by the VRAM.

Sim Rig:  Valve Index - Acer XV273KP - 5950x - GTX 2080ti - B550 Master - 32 GB ddr4 @ 3800c14 - DG-85 - HX1200 - 360mm AIO


Long Live VR. Pancake gaming is dead.


no...

It is not. It's barely enough for 1440p.

Any game that does not utilize 4GB of VRAM at a resolution of 4K is NOT taking advantage of the resolution... Assets in the game lack the texture detail that can be presented on a 4K screen, and many assets in the distance are not present at all, or are low-quality versions, as a result of current GPU and VRAM limitations.

4GB of VRAM is HOLDING BACK modern GPUs from using high-quality assets (and, as such, developers and consoles with 8GB of RAM). I hit the VRAM limit frequently on my 980 and am bombarded with low LOD in games... I run some AAA games at 2560x1440 as it offers no loss in fidelity vs 4K, only some more jaggies; everything in the distance is a low-quality wash in most games anyway...

 

 

 

 

You have no idea what you are talking about.

 

Extra texture resolution at this point is largely useless except for very large surfaces, even at 4K.  We have excellent texture compression technologies (see: https://youtu.be/7bJ-D1xXEeg ) and we're at the point where it's near impossible to even tell a difference between compressed and uncompressed textures. (See: http://wccftech.com/shadow-of-mordor-ultra-hd-texture-6gb-vram/)  

 

Larger framebuffers will not make "pop in" or texture streaming/LOD technology any less prevalent.  Larger framebuffers will not stop unimportant textures from being lower quality.  Larger framebuffers will not increase the amount of detail devs put into inaccessible or minor areas of the game.  They only exist to sell video cards to fucking idiots who don't know the first thing about how games work.  The biggest improvements are going to be in lighting and shading, not texture detail. (We've had photorealistic textures for over a decade.  Even very simple textures look amazing with proper lighting and shading.  See: http://www.cs.berkeley.edu/~ravir/refshare.jpg )
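To put the compression point in numbers, here's what a single large texture costs under the standard block-compression bit rates (BC1 at 4 bpp and BC7 at 8 bpp, versus 32 bpp uncompressed RGBA8; the optional 4/3 factor approximates a full mipmap chain):

```python
# Size of one square texture under different per-pixel bit rates.

def texture_mib(size, bits_per_pixel, mipmaps=False):
    bits = size * size * bits_per_pixel
    if mipmaps:
        bits *= 4 / 3  # geometric series for the full mip chain
    return bits / 8 / 2**20

print(texture_mib(4096, 32))  # uncompressed RGBA8: 64.0 MiB
print(texture_mib(4096, 4))   # BC1/DXT1:            8.0 MiB
print(texture_mib(4096, 8))   # BC7:                16.0 MiB
```

A 4-8x saving per texture, at nearly indistinguishable quality, is why compressed assets fit comfortably in 4 GB.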



You have no idea what you are talking about.

 

Extra texture resolution at this point is largely useless except for very large surfaces, even at 4K.  We have excellent texture compression technologies (see: https://youtu.be/7bJ-D1xXEeg ) and we're at the point where it's near impossible to even tell a difference between compressed and uncompressed textures. (See: http://wccftech.com/shadow-of-mordor-ultra-hd-texture-6gb-vram/)  

 

Larger framebuffers will not make "pop in" or texture streaming/LOD technology any less prevalent.  Larger framebuffers will not stop unimportant textures from being lower quality.  Larger framebuffers will not increase the amount of detail devs put into inaccessible or minor areas of the game.  They only exist to sell video cards to fucking idiots who don't know the first thing about how games work.  The biggest improvements are going to be in lighting and shading, not texture detail. (We've had photorealistic textures for over a decade.  Even very simple textures look amazing with proper lighting and shading.  See: http://www.cs.berkeley.edu/~ravir/refshare.jpg )

 

Did you just insinuate that I am a fucking idiot? Feel like getting a mod on here? VRAM is relative to requirement: having 2GB of DDR3 on an ATI HD 5450 is to sell 5450s to idiots... having 12GB of GDDR5 on a Titan X is to sell Titan Xs to idiots, and having 4GB of GDDR5 on a GTX 980 is to save money on production.

I don't care about texture LOD and uncompressed textures and their impact on VRAM. There is no excuse not to use lossless compression, and using uncompressed textures is daft. (Do you even know what a shadow map is? It's a texture, you realise...)

I care about asset LOD... cars, trees, people, buildings... when this shit pops in, and low-quality models are used in the distance, it defeats a large benefit of having a higher resolution. If you do not have enough VRAM for the assets (including textures, shadow frame buffer, etc.), shit's going to get rough.

I will test this tonight... I will report back with frame rate, GPU utilization and VRAM usage in GTA5. I will reduce the LOD and population density, and we will see how that affects those three factors.

If you are right, VRAM and GPU utilization will stay the same and frame rate will increase; if I am right, VRAM will decrease and GPU utilization and frame rate will increase.

Let's see if I will stand corrected or if you will be apologizing. I want to get the facts right here; I am not afraid to admit I am wrong, but I will make sure of it first.

On topic: 4GB or 8GB, whatever modern games require is what it should have. Games are beginning to require more than 4GB, so it should have 8GB.



Did you just insinuate that I am a fucking idiot? Feel like getting a mod on here? VRAM is relative to requirement: having 2GB of DDR3 on an ATI HD 5450 is to sell 5450s to idiots... having 12GB of GDDR5 on a Titan X is to sell Titan Xs to idiots, and having 4GB of GDDR5 on a GTX 980 is to save money on production.

I don't care about texture LOD and uncompressed textures and their impact on VRAM. There is no excuse not to use lossless compression, and using uncompressed textures is daft. (Do you even know what a shadow map is? It's a texture, you realise...)

I care about asset LOD... cars, trees, people, buildings... when this shit pops in, and low-quality models are used in the distance, it defeats a large benefit of having a higher resolution. If you do not have enough VRAM for the assets (including textures, shadow frame buffer, etc.), shit's going to get rough.

I will test this tonight... I will report back with frame rate, GPU utilization and VRAM usage in GTA5. I will reduce the LOD and population density, and we will see how that affects those three factors.

If you are right, VRAM and GPU utilization will stay the same and frame rate will increase; if I am right, VRAM will decrease and GPU utilization and frame rate will increase.

Let's see if I will stand corrected or if you will be apologizing. I want to get the facts right here; I am not afraid to admit I am wrong, but I will make sure of it first.

On topic: 4GB or 8GB, whatever modern games require is what it should have. Games are beginning to require more than 4GB, so it should have 8GB.

 

No, I'm just saying that their marketing department works very hard to sell people a new GPU more frequently, and one of the ways to do that is to make people think they need to upgrade to more VRAM when they don't.

What I'm saying is that increasing a game's fidelity doesn't necessarily mean increasing VRAM. Being able to move things in and out of memory very quickly as needed is much more valuable than storing them in a larger framebuffer. This is the main perk of HBM over GDDR5, and partially why Maxwell GPUs tend to fall behind AMD's GPUs at higher resolutions: they have less raw memory bandwidth and rely on compression to keep up.

(See: http://www.goldfries.com/computing/gddr3-vs-gddr5-graphic-card-comparison-see-the-difference-with-the-amd-radeon-hd-7750/ )

Very, very few games actually exceed 4 GB of VRAM until you slap a ton of post-processing effects over it (anti-aliasing, depth of field, motion blur, chromatic aberration, the usual shit). Even at 4K, the only time I've seen anything exceed my VRAM was Shadow of Mordor's uncompressed texture pack. (Going to try GTA V tonight though, so maybe that will finally do it too.) Most games sit in the 2.5-3 GB range, while some, like Advanced Warfare, use the extra available VRAM to cache as much as possible to decrease load times and such.

Throwing an 8 GB card in, instead of a 4 GB one, will not make any difference with pop-in because those are in-engine constraints, not hardware constraints. With console commands etc. you might be able to reduce it in some games though.
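The streaming-versus-capacity trade-off can be sketched with a toy LRU cache; the asset names and sizes below are made up purely for illustration:

```python
# Toy model of texture streaming: an LRU cache with a fixed VRAM budget.
from collections import OrderedDict

class VramCache:
    def __init__(self, budget_mib):
        self.budget = budget_mib
        self.used = 0
        self.cache = OrderedDict()  # asset -> size, in LRU order
        self.evictions = 0

    def request(self, asset, size_mib):
        if asset in self.cache:
            self.cache.move_to_end(asset)  # hit: mark most-recently-used
            return True
        while self.used + size_mib > self.budget and self.cache:
            _, evicted = self.cache.popitem(last=False)  # evict LRU asset
            self.used -= evicted
            self.evictions += 1
        self.cache[asset] = size_mib
        self.used += size_mib
        return False  # miss: had to stream in over the bus

# Same access pattern, two budgets: the smaller card works, it just streams more.
pattern = [("city", 1500), ("cars", 800), ("trees", 900), ("city", 1500),
           ("interiors", 1200), ("cars", 800), ("city", 1500)]
for budget in (4096, 8192):
    cache = VramCache(budget)
    hits = sum(cache.request(a, s) for a, s in pattern)
    print(budget, hits, cache.evictions)  # 4096: 2 hits, 2 evictions; 8192: 3 hits, 0
```

With the smaller budget the same workload still runs; it just misses and refetches more often, which is exactly where raw bus bandwidth matters more than capacity.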



I will buy a 300 series card. It may very well be the last series of cards AMD ever makes.

"If a Lobster is a fish because it moves by jumping, then a kangaroo is a bird" - Admiral Paulo de Castro Moreira da Silva

"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown


Intel Core i7-3960X @ 4.6 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1


No, I'm just saying that their marketing department works very hard to sell people a new GPU more frequently, and one of the ways to do that is to make people think they need to upgrade to more VRAM when they don't.

What I'm saying is that increasing a game's fidelity doesn't necessarily mean increasing VRAM. Being able to move things in and out of memory very quickly as needed is much more valuable than storing them in a larger framebuffer. This is the main perk of HBM over GDDR5, and partially why Maxwell GPUs tend to fall behind AMD's GPUs at higher resolutions: they have less raw memory bandwidth and rely on compression to keep up.

(See: http://www.goldfries.com/computing/gddr3-vs-gddr5-graphic-card-comparison-see-the-difference-with-the-amd-radeon-hd-7750/ )

Very, very few games actually exceed 4 GB of VRAM until you slap a ton of post-processing effects over it (anti-aliasing, depth of field, motion blur, chromatic aberration, the usual shit). Even at 4K, the only time I've seen anything exceed my VRAM was Shadow of Mordor's uncompressed texture pack. (Going to try GTA V tonight though, so maybe that will finally do it too.) Most games sit in the 2.5-3 GB range, while some, like Advanced Warfare, use the extra available VRAM to cache as much as possible to decrease load times and such.

Throwing an 8 GB card in, instead of a 4 GB one, will not make any difference with pop-in because those are in-engine constraints, not hardware constraints. With console commands etc. you might be able to reduce it in some games though.

 

All good points.

 

I am just a little bruised by the fact that my current-gen, rather expensive, flagship graphics card is starting to run out of steam... Both Shadow of Mordor and COD: AW use up 4GB; Far Cry 4 did it too, and GTA5 does as well. I expect The Witcher 3 and Star Wars Battlefront will both use more than 4GB of VRAM at their highest fidelity.

As above, I commend devs who build for future hardware and/or provide the ability for users to modify those engine-based constraints, as they are often not well matched to 4K. Having 8GB of fast VRAM just removes another potential bottleneck (a very real one).

The real issue for me is that if I want to increase those settings to ultra (or use command lines/mods to decrease the pop-in), I need more GPU power... and I am happy to get more... So I have to go SLI to share the rendering load... but both cards are going to be using just as much of their available VRAM as each other, and currently it seems that I can't actually use the very high/ultra settings in titles at 4K because I will hit the 4GB limit...

So by going SLI I am only improving frame rate, and given I am getting 45-50 FPS in many titles with a single GTX 980, it seems like a waste to just get those extra 10-15 frames... I could increase the visuals and push the GPUs harder... but doing so regularly crosses that line into over 4GB of VRAM used...



You have a source? I'm curious about these 290X Crossfire findings proving the leaked slides wrong. Even your almighty Hassan Mujtaba took the time to acknowledge them. The card already has 45% more shaders, coupled with a big step in architecture revision and HBM. It's only reasonable to expect at least a ~50% faster GPU, especially with the massive memory interface.

You forget that not once has AMD confirmed the specs of Fiji. Everything's been rumorville up to now. Also, Hassan reported on what was sitting in plain sight. That's typical daily journalism.

As for the faked benches, simply Google "390X faked benchmarks". SiSoft and 3DMark were both fooled.

Also, that expectation is entirely unreasonable. Memory bandwidth has not been a bottleneck in gaming for a very long time; games are still shader- and TMU/ROP-bound. The Asynchronous Shading demo is proof of this for AMD. The boost in memory bandwidth will yield negligible performance gains. As for the shader boost, scaling is not 100%! A 30% boost at the same clocks is a generous estimate.


