AMD Radeon Fury X 3DMark performance

BonSie

Well, that explains why they won't do it.

320GB/s is still plenty. It doesn't need 640GB/s for gaming. For compute, yes, bandwidth is an issue, though the PCIe bus is a bigger issue.

That's only a microcode change to disable it in the 780TI. It's not an architectural difference or die difference.

How do you know that?

Games should not be using more than 2GB for 1080p, flat out. It's just bad programming and design. Loading every texture in the game into the frame buffer and not using even 20% of them at any time is just using up memory for no good reason.

 

Not sure about that. Remember that the resolution and the number of unique textures visible on screen at any given time are increasing dramatically. Also, for a high framerate you need some assets preloaded, or you'd have to fill VRAM from system memory on demand, which would cause a lot of delay. But yeah, when the new COD uses something like 6GB of VRAM, that's pretty wasteful.

To be honest, if I had that cash I would just push a few more monies into it and get a less power-hungry volcanic card.


Yes and no... the 8GB GDDR5 version is basically what you say... but the 4GB HBM version... *cough*

The R9 290X won't get HBM memory; the card you are thinking of is the Fury.


That 390X is not the HBM version; the HBM version has 4GB... so wait.

We're talking about Fiji, not the 390X!

How do you know that?

Because it's been proven by overclockers and BIOS modders such as K|NGP|N and 8Pack, and it should be obvious anyway. The absolute best-performing dies go into the most expensive products. The lesser dies go to consumers with some features disabled (not recognizing the 64-bit SPs saves a lot of power). Intel disables some business-oriented features in its consumer chips despite the fact that the 4790K and a quad-core Xeon E3 V3 are the same die.


Not sure about that. Remember that the resolution and the number of unique textures visible on screen at any given time are increasing dramatically. Also, for a high framerate you need some assets preloaded, or you'd have to fill VRAM from system memory on demand, which would cause a lot of delay. But yeah, when the new COD uses something like 6GB of VRAM, that's pretty wasteful.

I've run memory profilers on a lot of different modern games. GPU memory is not being used properly. Lacking model aliases is another big waste of space. If the utilization of textures stored in the GPU were >50% all the time, I'd be more inclined to agree with you, but the truth is the usage in modern titles is about 20% at any one time. Textures can be streamed in and out preemptively if you're about to change areas/maps. The programming should be good enough to anticipate that.
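
For what it's worth, the logic he's describing isn't complicated. Here's a minimal sketch in Python of what "stream preemptively based on where the player can go next" could look like; every name and structure here is hypothetical, not from any real engine:

```python
# Minimal sketch of preemptive texture streaming (all names hypothetical,
# not from any real engine). The idea: only textures referenced by the
# current area or an adjacent one stay resident in VRAM; everything else
# gets evicted instead of sitting there for the whole game.

class TextureStreamer:
    def __init__(self, adjacency, area_textures):
        self.adjacency = adjacency          # area -> set of neighbouring areas
        self.area_textures = area_textures  # area -> set of texture ids
        self.resident = set()               # texture ids currently in VRAM

    def on_area_entered(self, area):
        # Keep the current area's textures plus anything one hop away,
        # so the next transition never stalls on a cold upload.
        wanted = set(self.area_textures[area])
        for neighbour in self.adjacency[area]:
            wanted |= self.area_textures[neighbour]
        for tex in wanted - self.resident:
            self.upload(tex)                # stream in before it's needed
        for tex in self.resident - wanted:
            self.evict(tex)                 # reclaim VRAM we can't use yet
        self.resident = wanted

    def upload(self, tex):
        print(f"upload {tex}")              # stand-in for the real GPU copy

    def evict(self, tex):
        print(f"evict {tex}")               # stand-in for freeing the texture


streamer = TextureStreamer(
    adjacency={"town": {"forest"}, "forest": {"town", "cave"}, "cave": {"forest"}},
    area_textures={"town": {"brick", "roof"}, "forest": {"bark", "leaf"},
                   "cave": {"rock"}},
)
streamer.on_area_entered("town")    # uploads brick, roof, bark, leaf
streamer.on_area_entered("forest")  # uploads rock; town assets stay (adjacent)
streamer.on_area_entered("cave")    # evicts brick, roof; keeps forest assets
```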


I read something I hadn't thought of before... the PCIe bottleneck with Fury.


Not everyone is rich

 

so why didn't you go with AMD lel


I read something I hadn't thought of before... the PCIe bottleneck with Fury.

For gaming, PCIe 2.0 isn't even fully saturated, much less PCIe 3.0. For compute, yes, PCIe is a problem. Gaming? Not so much.


Games should not be using more than 2GB for 1080p, flat out. It's just bad programming and design. Loading every texture in the game into the frame buffer and not using even 20% of them at any time is just using up memory for no good reason.

That's just BS! What about hi-res textures? Image quality? With my previous 280X at 1080p under Mantle, Sniper Elite 3 would use about 2.7GB of VRAM!

What about the people who modded the shit out of Skyrim and discovered the GTX 970 VRAM issue?

 

 

Games should not be using more than 2GB

That's the biggest BS I've ever heard from someone who labels himself a PC gamer.

Do we actually want console-style texture streaming on the PC? What would be the reason for new generations of GPUs then? Why did AMD put HBM on Fiji?

That's a joke excuse for Fury's 4GB of VRAM!

Yeah... I guess we should all stop buying discrete graphics cards and game on IGPs. <_<


For gaming, PCIe 2.0 isn't even fully saturated, much less PCIe 3.0. For compute, yes, PCIe is a problem. Gaming? Not so much.

Idk, but I'm curious whether it affected this benchmark.


For gaming, PCIe 2.0 isn't even fully saturated, much less PCIe 3.0. For compute, yes, PCIe is a problem. Gaming? Not so much.

Oh yes it is! Go from PCIe 2.x to PCIe 3.0 and you will see an increase in framerate; that's proof!

And AMD... AMD is still limited to PCIe 2.0 on their chipsets. I wonder: will they demonstrate these cards on AMD chipsets or Intel's? :lol:


Oh yes it is! Go from PCIe 2.x to PCIe 3.0 and you will see an increase in framerate; that's proof!

And AMD... AMD is still limited to PCIe 2.0 on their chipsets. I wonder: will they demonstrate these cards on AMD chipsets or Intel's? :lol:

Hahaha... nah, AM4 is coming alongside Zen, so they'll build the whole package.


Hahaha... nah, AM4 is coming alongside Zen, so they'll build the whole package.

Zen first needs to exist in physical form, just like the Fiji card Lisa Su showed at Computex.


That's just BS! What about hi-res textures? Image quality? With my previous 280X at 1080p under Mantle, Sniper Elite 3 would use about 2.7GB of VRAM!

What about the people who modded the shit out of Skyrim and discovered the GTX 970 VRAM issue?

 

 

That's the biggest BS I've ever heard from someone who labels himself a PC gamer.

Do we actually want console-style texture streaming on the PC? What would be the reason for new generations of GPUs then? Why did AMD put HBM on Fiji?

That's a joke excuse for Fury's 4GB of VRAM!

Yeah... I guess we should all stop buying discrete graphics cards and game on IGPs. <_<

Texture compression can be lossless and still offer 40% compaction with minimal decompression overhead for the GPU side. Also, you don't need a higher-res texture than your display resolution. Seriously, you don't. That's what tiling and other replication methods are for: getting the same effect more efficiently and leaving room for even greater detail.
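
To put rough numbers on that claim (the texture size and scene count below are made up for illustration; only the 40% figure comes from the paragraph above):

```python
# Rough illustration of the compaction claim above. A 2048x2048 RGBA8
# texture is 2048 * 2048 * 4 bytes raw; the unique-texture count for a
# scene is a hypothetical budget, not a measurement.
raw_tex_mib = 2048 * 2048 * 4 / 2**20     # ~16 MiB per uncompressed texture
unique_textures = 200                     # hypothetical scene budget
raw_gib = raw_tex_mib * unique_textures / 1024
compressed_gib = raw_gib * (1 - 0.40)     # the 40% lossless compaction claim
print(f"raw: {raw_gib:.2f} GiB, compressed: {compressed_gib:.2f} GiB")
# Tiling cuts the unique-texture count itself, so its savings multiply
# with compression rather than merely adding to it.
```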

 

Modding the shit out of Skyrim still shouldn't need it. The problem with Skyrim is that EVERYTHING is loaded ALL THE TIME, even though it's unnecessary. If you're not using a texture for anything at the moment, and it's not coming up in the immediate area, it doesn't need to be in VRAM!

 

I'm a gamer, and I'm also a high-performance computing programmer. I'm very much aware of how to reduce the footprint of ANY dataset.

 

Yes. A good game should load textures to the GPU just before they are needed, not leave them sitting there for a rainy day. Conservative design is highly beneficial and would in fact allow you to do MORE. Using your resources to the utmost is as much art as science. Don't talk down to one of the actual programming experts (relative term, for @Opcode, @LukaP, and any other potential flamers) in the room.

 

Also, no! Jesus! There is about 30% of performance being left on the table today by bad programming in games, and that's just the GPU side of it; 75%+ is missing on the CPU side. Do yourself a favor: sit down, shut up, and do the research before you type a single word more at any of us who challenge the bullshit tossed at us by game studios today.


Oh yes it is! Go from PCIe 2.x to PCIe 3.0 and you will see an increase in framerate; that's proof!

And AMD... AMD is still limited to PCIe 2.0 on their chipsets. I wonder: will they demonstrate these cards on AMD chipsets or Intel's? :lol:

Did you mean just FX, or AMD in general? If AMD in general, I would point out that the A88X chipset for the FM2+ socket is PCIe 3.0.


Oh yes it is! Go from PCIe 2.x to PCIe 3.0 and you will see an increase in framerate; that's proof!

And AMD... AMD is still limited to PCIe 2.0 on their chipsets. I wonder: will they demonstrate these cards on AMD chipsets or Intel's? :lol:

No, that's just the power of 128b/130b encoding reducing the per-transfer overhead vs. the older 8b/10b encoding. It has nothing to do with actual bandwidth; it's little more than a latency reduction. Increasing RAM speeds on the CPU side doesn't benefit gaming except in a couple of edge cases, and even then the difference is practically nothing. The PCIe bus is not a bottleneck in gaming, and it won't be for a very long time.
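
For anyone wondering where those encodings leave the actual numbers, here's the per-lane math; signalling rates and encodings are spec values, nothing here is measured:

```python
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding (20% line overhead).
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding (~1.5% line overhead).
gen2_bps = 5e9 * (8 / 10)     # usable bits per second, one 2.0 lane
gen3_bps = 8e9 * (128 / 130)  # usable bits per second, one 3.0 lane
print(f"PCIe 2.0: {gen2_bps / 8 / 1e6:.0f} MB/s per lane")   # ~500 MB/s
print(f"PCIe 3.0: {gen3_bps / 8 / 1e6:.0f} MB/s per lane")   # ~985 MB/s
print(f"x16 slot, 3.0: {gen3_bps * 16 / 8 / 1e9:.2f} GB/s")  # ~15.75 GB/s
```

Note that most of 3.0's headline gain comes from the 8 GT/s signalling rate; the encoding change mainly shrinks the per-transfer overhead, which is the latency point being made above.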


That's just BS! What about hi-res textures? Image quality? With my previous 280X at 1080p under Mantle, Sniper Elite 3 would use about 2.7GB of VRAM!

What about the people who modded the shit out of Skyrim and discovered the GTX 970 VRAM issue?

 

 

That's the biggest BS I've ever heard from someone who labels himself a PC gamer.

Do we actually want console-style texture streaming on the PC? What would be the reason for new generations of GPUs then? Why did AMD put HBM on Fiji?

That's a joke excuse for Fury's 4GB of VRAM!

Yeah... I guess we should all stop buying discrete graphics cards and game on IGPs. <_<

 

Firstly, there isn't an actual issue with the 970. The segmented 0.5GB of VRAM is just slower than the primary partition, but it doesn't actually affect anything, according to my own testing as well as other LTT members' testing.

 

Secondly, the argument against games using more than 2GB of VRAM at 1080p is not BS. AMD is putting HBM on their new cards because that's what companies do: they push new tech out the door for you to have. The point is, games should not be using more than 2GB of VRAM at 1080p; hitting almost 4GB is a representation of how bad the developers are at optimizing their own games, or of how little they give a shit.

 

Another reason AMD is putting new tech on their new cards (beyond the obvious fact that it's new) is that more people are moving to higher-resolution panels, so they're providing the tech those users need.

 

Lastly, if you're going to have a discussion with someone, you should do so in a more civil manner, because you're coming across like an older gentleman I know who is new to the internet and doesn't know when to lay off the exclamation points and use proper sentence structure; he talks that way on the interwebs because he feels like his point is more important than anyone else's while he's behind a keyboard, but in person he's completely different.


Idk, but I'm curious whether it affected this benchmark.

1-2fps is expected because 3DMark does streaming, and it tests latency as well. The PCIe 3.0 bus uses 128b/130b encoding vs. 2.0's 8b/10b. You can send more in a single burst, reducing latency in the protocol, but it's not a bandwidth bottleneck.


color me disappointed: all that bragging from AMD, and in the end HBM didn't do that much for it

HBM did a lot for Fiji. AMD is not marketing HBM as some kind of magic RAM that's going to boost performance tenfold. They are capitalizing on performance per watt, which is where HBM really shines: they were able to cut power consumption by 30-55W for the entire card just by switching to HBM. HBM2 will bring even further performance-per-watt improvements, with double the bandwidth in 4/8GB densities, so there won't be a need for more than two stacks on this card.
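
Quick sanity check on what those watts mean for perf/W. Only the 30-55W range comes from the post above; the board power is an assumed round number for illustration, not an official figure:

```python
# If the same card would draw 30-55 W more with GDDR5 at identical
# performance, perf/W improves by P_gddr5 / P_hbm. The 275 W board power
# is an assumption for illustration only.
board_w = 275.0
for saved_w in (30.0, 55.0):
    gain = (board_w + saved_w) / board_w - 1
    print(f"{saved_w:.0f} W saved -> ~{gain * 100:.0f}% better perf/W")
```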


HBM did a lot for Fiji. AMD is not marketing HBM as some kind of magic RAM that's going to boost performance tenfold. They are capitalizing on performance per watt, which is where HBM really shines: they were able to cut power consumption by 30-55W for the entire card just by switching to HBM. HBM2 will bring even further performance-per-watt improvements, with double the bandwidth in 4/8GB densities, so there won't be a need for more than two stacks on this card, further increasing performance per watt.

It'll be interesting to see how Fiji and Greenland do in HPC with the new memory and higher efficiency.


No, that's just the power of 128b/130b encoding reducing the per-transfer overhead vs. the older 8b/10b encoding. It has nothing to do with actual bandwidth; it's little more than a latency reduction. Increasing RAM speeds on the CPU side doesn't benefit gaming except in a couple of edge cases, and even then the difference is practically nothing. The PCIe bus is not a bottleneck in gaming, and it won't be for a very long time.

 

At what point do you think it'll become a bottleneck?

 

I would guess never because there may be a new standard before that happens?


At what point do you think it'll become a bottleneck?

 

I would guess never because there may be a new standard before that happens?

We'll most likely have PCIe 4.0, thanks to the HPC space, long before that. There is a potential for PCIe 3.0 to be saturated under DX12, but even in theory you'd need at least six decently OC'd 980 Ti/Titan X cards being separately addressed by the CPU (do we know if the cards will inter-communicate and pass partial frames directly to each other over the PCIe bus, or is it back to the CPU and then to the displaying card?) for that to be possible, based on today's standards for draw calls per frame. And depending on how the multi-core handling works out, we may hit a CPU bottleneck first on all of Intel's hex-core and quad-core chips, as well as all of AMD's current offerings, since that's not an operation SIMD can assist with (at least not yet; frankly, Intel should have come up with such a multi-GPU dispatch system using SIMD instructions by now. It probably exists, and Intel is waiting for AMD to catch up or get ahead before releasing it).
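
Back-of-envelope on that six-card scenario. Every workload number below is an assumption for illustration, not a benchmark; the x16 figure comes from the per-lane math earlier in the thread:

```python
# Could DX12-era draw-call traffic saturate PCIe 3.0? All inputs are guesses.
calls_per_frame = 20_000   # aggressive DX12-era draw-call count, per GPU
bytes_per_call = 2_000     # guess: command + state changes + small constants
fps = 120
gpus = 6                   # the six-card scenario above

per_gpu_gbs = calls_per_frame * bytes_per_call * fps / 1e9
total_gbs = per_gpu_gbs * gpus
print(f"per GPU: {per_gpu_gbs:.1f} GB/s, CPU-side aggregate: {total_gbs:.1f} GB/s")
# One x16 gen3 link moves ~15.75 GB/s, so a single card is nowhere close,
# but six cards sharing consumer lane counts (x8/x4 each) start to press it.
```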

