
AMD Radeon Fury X 3DMark performance

BonSie

Firstly, there isn't an actual issue with the 970.

how much you wanna bet!?! let's see what Anand has to say:

 

[image: GTX970_ROP_575px.png — AnandTech's GTX 970 ROP/memory subsystem diagram]

 

This in turn is why the 224GB/sec memory bandwidth number for the GTX 970 is technically correct and yet still not entirely useful as we move past the memory controllers, as it is not possible to actually get that much bandwidth at once when doing a pure read or a pure write. In the case of pure reads for example, GTX 970 can read the 3.5GB segment at 196GB/sec (7GHz * 7 ports * 32-bits), or it can read the 512MB segment at 28GB/sec, but it cannot read from both at once; it is a true XOR situation. The same is also true for writes, as only one segment can be written to at a time.

there's no issue with 970 VRAM segmentation riight  <_<
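For what it's worth, the segment figures in the AnandTech quote are easy to reproduce. A quick sketch, using only the numbers given in the quote (7GHz effective rate, 32-bit ports, 7 controllers vs 1):

```python
# GTX 970 per-segment bandwidth, using the figures from the AnandTech quote:
# 7GHz effective data rate, one 32-bit port per memory controller,
# 7 controllers behind the 3.5GB segment and 1 behind the 0.5GB segment.
EFFECTIVE_RATE_GBPS = 7   # Gbps per pin (7GHz effective)
PORT_WIDTH_BITS = 32      # bits per memory controller port

def segment_bandwidth_gbs(ports):
    """Peak read (or write) bandwidth of a segment in GB/s."""
    return EFFECTIVE_RATE_GBPS * ports * PORT_WIDTH_BITS / 8

print(segment_bandwidth_gbs(7))  # 196.0 GB/s -- the 3.5GB segment
print(segment_bandwidth_gbs(1))  # 28.0 GB/s -- the 0.5GB segment
print(segment_bandwidth_gbs(7) + segment_bandwidth_gbs(1))  # 224.0 -- the marketed total
```

And since the quote says the two segments can't be read at once, 196GB/s is the real peak for pure reads, not the 224GB/s on the spec sheet.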


HBM did a lot for Fiji. AMD is not marketing HBM as some type of magic RAM that's going to boost performance tenfold. They are capitalizing on performance per watt, which is where HBM really shines, as they are able to cut power consumption by 30-55W for the entire card just by switching to HBM. HBM2 will bring even further performance-per-watt improvements, with double the bandwidth in 4/8GB densities, so there won't be a need for more than two stacks on this card.
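The two-stack claim checks out on paper. A quick sketch, assuming HBM1 at 1Gbps per pin and HBM2 at 2Gbps (spec figures, not from this thread):

```python
# Per-stack HBM bandwidth: 1024-bit interface * per-pin data rate / 8 bits per byte
def stack_bandwidth_gbs(rate_gbps, bus_width_bits=1024):
    return bus_width_bits * rate_gbps / 8

hbm1 = stack_bandwidth_gbs(1.0)  # 128.0 GB/s per stack
hbm2 = stack_bandwidth_gbs(2.0)  # 256.0 GB/s per stack (double the bandwidth)

print(hbm1 * 4)  # 512.0 GB/s -- Fiji's four HBM1 stacks
print(hbm2 * 2)  # 512.0 GB/s -- two HBM2 stacks match it
```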

please tell that to @patrickjp93 who said, and I quote:

Because more RAM != more bandwidth, or the claim AMD had to reach for HBM to match Nvidia's performance? Both are idiotic.

:lol:



there's no issue with 970 VRAM segmentation riight  <_<

Yes, but Nvidia fixed most of the performance issues with drivers that conveniently put things not in constant use in that last segment to smooth over the stuttering. It's practically non-existent now.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


That performance difference is purely down to latency, not bandwidth. The bus isn't the bottleneck. Sure, it may be a slight detriment, but it's not the main issue, and it's way down the list of things holding back gaming performance. Now, if you go to a compute workload and toss a Tesla K40 on PCIe 2.0 vs 3.0, the difference is obscene.

no!

the point was you said it doesn't make a difference ("bottleneck" was the exact word you used), and yet... the difference is there


 


 

perf/watt upgrades do not require performance upgrades, and single-component upgrades do not necessarily boost the performance of a whole system, especially when that component wasn't a performance bottleneck anyway! Jeez, the level of system analysis needed to get the basics of all this is not difficult! Is LTT really so full of arrogant teenagers who have no clue about the tech they describe, yet spout off as though those of us actually steeped in the advanced education thereof know nothing and couldn't possibly have more information and authority on the matter?

 

*rubs temples*


What the hell makes you think that Fiji will be the first AMD GPU to not have a FirePro version? Are you insane, especially with the compute benchmark leaks!?

God freakin' dammit... I was referring to the fact that Fiji, in its current state, will not go into FirePros. It will probably be tweaked and improved a bit, then moved to professional grade.

MARS_PROJECT V2 --- RYZEN RIG


 CPU: R5 1600 @3.7GHz 1.27V | Cooler: Corsair H80i Stock Fans@900RPM | Motherboard: Gigabyte AB350 Gaming 3 | RAM: 8GB DDR4 2933MHz(Vengeance LPX) | GPU: MSI Radeon R9 380 Gaming 4G | Sound Card: Creative SB Z | HDD: 500GB WD Green + 1TB WD Blue | SSD: Samsung 860EVO 250GB  + AMD R3 120GB | PSU: Super Flower Leadex Gold 750W 80+Gold(fully modular) | Case: NZXT  H440 2015   | Display: Dell P2314H | Keyboard: Redragon Yama | Mouse: Logitech G Pro | Headphones: Sennheiser HD-569

 


Yes, but Nvidia fixed most of the performance issues with drivers that conveniently put things not in constant use in that last segment to smooth over the stuttering. It's practically non-existent now.

non-existent, meaning they hid the 0.5GB segment from the API


in the end HBM didn't do that much for it

You should see the size of the PCB. Additionally, wait till full HBM comes (maybe a couple generations?).

Intel i5 6600k~Asus Maximus VIII Hero~G.Skill Ripjaws 4 Series 8GB DDR4-3200 CL-16~Sapphire Radeon R9 Fury Tri-X~Phanteks Enthoo Pro M~Sandisk Extreme Pro 480GB~SeaSonic Snow Silent 750~BenQ XL2730Z QHD 144Hz FreeSync~Cooler Master Seidon 240M~Varmilo VA87M (Cherry MX Brown)~Corsair Vengeance M95~Oppo PM-3~Windows 10 Pro~http://pcpartpicker.com/p/ynmBnQ


 

 

I'll bet you my whole pack of gum that there isn't an issue with it.


no!

the point was you said it doesn't make a difference ("bottleneck" was the exact word you used), and yet... the difference is there

I said the bus isn't a bottleneck. A performance boost from the switch (which in this instance is entirely tied to latency, not bandwidth) does not prove it's a bottleneck. For a component to be a bottleneck, it must prevent the rest of the components from performing to their maximum capacity, and improvements to that component must bring 1:1 improvements in performance metrics.
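That definition can be sketched as a toy throughput model (hypothetical component names and numbers, purely for illustration):

```python
# Toy model: a system's throughput is capped by its slowest component.
def system_throughput(components):
    return min(components.values())

components = {"gpu_core": 100, "vram": 150, "pcie_bus": 400}  # arbitrary units

baseline = system_throughput(components)          # 100: gpu_core is the bottleneck
components["pcie_bus"] = 800                      # speed up a non-bottleneck component
assert system_throughput(components) == baseline  # nothing changes

components["gpu_core"] = 120                      # improve the actual bottleneck
assert system_throughput(components) == 120       # throughput scales with it
```

A latency improvement doesn't show up in a pure-throughput model like this one, which is his point: the PCIe 2.0 to 3.0 switch can help without the bus having been a bandwidth bottleneck.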


You should see the size of the PCB. Additionally, wait till full HBM comes (maybe a couple generations?).

[image: xs.png]

 

I'll bet you my whole pack of gum that there isn't an issue with it.

 

Throw in a KitKat bar and you're on.

 

EDIT: mostly I just want a KitKat bar.

Somebody give me a KitKat bar.

CPU: Intel i5 4690k W/Noctua nh-d15 GPU: Gigabyte G1 980 TI MOBO: MSI Z97 Gaming 5 RAM: 16Gig Corsair Vengance Boot-Drive: 500gb Samsung Evo Storage: 2x 500g WD Blue, 1x 2tb WD Black 1x4tb WD Red

 

 

 

 

"Whatever AMD is losing in suddenly becomes the most important thing ever." - Glenwing, 1/13/2015

 


non-existent, meaning they hid the 0.5GB segment from the API

No, the performance hit/stuttering is now (nearly) non-existent. The memory is still perfectly visible, though as the programmer you can't allocate and position data on a byte-by-byte basis under DirectX. If you had the virtual instruction set itself, you could. You generally have little control over where in the frame buffer your data ends up; the DX compiler hides most of that from you.



 

 

Throw in a KitKat bar and you're on.

 

I just so happen to have a Twix.



 

 

I play loads of games with 3GB at 1440p; the only issues I've ever had are Crysis 3 at absolutely max settings, and Shadow of Mordor at max as well.

All my other games run fine at around 2-2.7GB usage.

Not sure why everyone is saying Nvidia abandoned Kepler; they released a driver that fixed the issues the previous driver was having.

Stuff:  i7 7700k @ (dat nibba succ) | ASRock Z170M OC Formula | G.Skill TridentZ 3600 c16 | EKWB 1080 @ 2100 mhz  |  Acer X34 Predator | R4 | EVGA 1000 P2 | 1080mm Radiator Custom Loop | HD800 + Audio-GD NFB-11 | 850 Evo 1TB | 840 Pro 256GB | 3TB WD Blue | 2TB Barracuda

Hwbot: http://hwbot.org/user/lays/ 

FireStrike 980 ti @ 1800 Mhz http://hwbot.org/submission/3183338 http://www.3dmark.com/3dm/11574089



Not sure why everyone is saying Nvidia abandoned Kepler; they released a driver that fixed the issues the previous driver was having.

It's not that Nvidia has abandoned Kepler. It's just a lower driver priority now. It's pretty obvious.


It's not that Nvidia has abandoned Kepler. It's just a lower driver priority now. It's pretty obvious.

 

 

It happened for like one game, didn't it? Witcher 3?


It happened for like one game, didn't it? Witcher 3?

No, it's happening in a lot of games: the 780 Ti is starting to trail the 290X despite having led it for a long time before Maxwell came out.


No one, minus developers already posting pictures of their units on Twitter... and that was 1-2 weeks ago.

Still, we don't know anything :).

All rumors.


No, it's happening in a lot of games: the 780 Ti is starting to trail the 290X despite having led it for a long time before Maxwell came out.

 

 

Which games? I have seen the 780 Ti and 290X trade blows for quite some time now, and AMD's drivers have been getting a bit better since the Omega release. It could be that Nvidia's drivers aren't improving for the 780 Ti while AMD's still are.


Might grab a 390x for my Phenom II.

AMD Phenom II B55 Quad / unlocked dual core 4.3ghz CB R15 = CB 422
XFX R9 390 8GB MY RIG: http://uk.pcpartpicker.com/p/MVwQsY
Fastest 7770 on LTT . 3rd Fastest Phenom II Quad on LTT

PCSX2 on AMD CPU? http://linustechtips.com/main/topic/412377-pcsx2-emulator-4096x2160-amd-phenom-ii/#entry5550588


The memory is still perfectly visible

if it's so perfectly visible, how can Dying Light only use 3.5GB of it? In the same situation a 980 can go over  <_<

 

skip to @2:40

if it's so perfectly visible, how can Dying Light only use 3.5GB of it? In the same situation a 980 can go over  <_<

Dying Light has more fundamental issues in its structure, and some people (even around here, you can ask) can get it up to 3.9GB no problem (you never reach 4GB), and that was before the driver updates came out.

And in the same situation a 980 used to stutter as well, just less. Dying Light has (or at least had; I stopped following it because it's just not that great a game anyway) deeper structural issues in the way it handled video memory.

This video is also way older than the fixes Nvidia released through March and April, and patches for the game have come out as well.


Dying Light has more fundamental issues in its structure, and some people (even around here, you can ask) can get it up to 3.9GB no problem (you never reach 4GB), and that was before the driver updates came out.

 


jump to about @3:00

 

I've seen it. The explanation is flawed and lacks insight into the design of the game (I've disassembled, decompiled, and pretty much reverse-engineered it) and it's still older than the MARCH AND APRIL fixes, not to mention the patches. This guy is to me what I am to K|NGP|N, an amateur.

 

This test accesses the data uniformly, not in a biased distribution such as a game's texture pack. It's not a proper test for what you want to see with regard to Nvidia's handling of the memory.


How do you know that??

The bigger question is, how don't you know that?

 

320GB/s is still plenty. It doesn't need 640GB/s for gaming. For compute, yes, bandwidth is an issue, though the PCIe bus is a bigger issue.
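For scale, the gap between on-card bandwidth and the PCIe bus is easy to put in numbers (assumed figures, not from this thread: PCIe 3.0 x16 with 128b/130b encoding, the 980 Ti's 7Gbps 384-bit GDDR5, Fiji's four 1Gbps HBM stacks):

```python
# Peak bandwidth figures, in GB/s
pcie3_x16 = 8 * (128 / 130) * 16 / 8   # 8 GT/s per lane * encoding overhead * 16 lanes
gddr5_384 = 7 * 384 / 8                # 980 Ti: 7 Gbps/pin * 384-bit bus
hbm_fiji  = 1 * 1024 * 4 / 8           # Fiji: 1 Gbps/pin * 1024-bit * 4 stacks

print(round(pcie3_x16, 2), gddr5_384, hbm_fiji)  # 15.75 336.0 512.0
```

Whichever card you pick, anything that has to cross the PCIe bus is more than an order of magnitude slower than local VRAM, which is why the bus dominates in compute workloads.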

But less than the 980 Ti...

