
Nvidia CEO and President on GTX 970

hitsu1

The next biggest part is people who do own 970s trying to justify their purchases to anonymous people on the internet. (I would say "jk," but I'm really not kidding.)

Nothing needs to be justified. I demonstrated, as many others have, how well the card performs.

You can't be serious.  Hyperthreading is a market joke?

 

 


lol, so the equivalent of what AMD did there would be Nvidia forcing the 0.5GB memory bandwidth limitation in the BIOS just to push people toward the 980, instead of getting the more defective GPUs to run as effectively as possible. That would be quite a bit worse than what Nvidia has announced here, jussayin' :P

Well, consumers benefited from AMD's poor job of locking the cores, so I wouldn't call it worse.


You honestly believe that they don't manufacture chips intended to be 970s?

 

You're telling me that ALL 970s are 980s that didn't make the cut?

 

Seems like they could work on their process a little better, or just recycle those chips and try another run through.

Yes, that's exactly what we're saying. Intel's E-series chips are those which didn't make the cut to be Xeons. The same holds true for GPUs.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Yes, that's exactly what we're saying. Intel's E-series chips are those which didn't make the cut to be Xeons. The same holds true for GPUs.

 

Isn't it generally the case that the chips in the middle of the wafer are going to be the BEST, or close to it, and once you start radiating outwards it's a toss of the dice as to how the chips will turn out?

You could have an entire wafer of 4790Ks or you could have none at all. Even with modern fab techniques we don't get yields good enough to remove the need for segregated product lines based on how well the chips came off the wafer. Even if we did, there's an incentive to have middle- and entry-tier chips that come from defective top-tier chips: sell them for less, recoup your costs, and don't waste silicon.

 

It's not like AMD, Nvidia and Intel do this on purpose. They don't like seeing their wafers show up partway useless for the chip they intended to make in the first place; that doesn't mean they can't take strong advantage of the scenario and make money from it.
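To make the salvage idea concrete, here's a toy yield model (purely illustrative; the defect rate and binning cutoffs are made up, and real binning is far more involved). It only borrows the classic first-order assumption that defects land on dies roughly Poisson-distributed:

```python
import numpy as np
from collections import Counter

# Toy wafer-yield model. All numbers below are invented for illustration;
# the one real idea is the Poisson defect assumption, which gives the
# classic first-order yield formula P(0 defects) = exp(-D * A).
rng = np.random.default_rng(seed=0)

DIES_PER_WAFER = 400
MEAN_DEFECTS_PER_DIE = 0.4  # hypothetical defect density x die area

def bin_die(n_defects: int) -> str:
    # Hypothetical binning: a flawless die ships as the full part, a die
    # with one defect is salvaged as a cut-down SKU, anything worse is
    # scrapped or pushed further down the stack.
    if n_defects == 0:
        return "full part (980-class)"
    if n_defects == 1:
        return "salvaged (970-class)"
    return "scrap / lower tier"

defects = rng.poisson(MEAN_DEFECTS_PER_DIE, DIES_PER_WAFER)
print(Counter(bin_die(d) for d in defects))
# With these made-up numbers roughly two thirds of dies come out flawless,
# so one wafer really can be almost all full parts while another is not --
# and selling the one-defect dies recoups cost instead of wasting silicon.
```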

 


You honestly believe that they don't manufacture chips intended to be 970s?

 

You're telling me that ALL 970s are 980s that didn't make the cut?

 

Seems like they could work on their process a little better, or just recycle those chips and try another run through.

There is a reason Nvidia and AMD both release their higher-tier cards first. Then, once the enthusiasts buy those up, they work with the chips from the batches that didn't quite meet the requirements for the highest tier, trimming those down and reusing them as a separate card entirely. It is actually genius: consumers save money while still getting a relatively good product, and manufacturers save money by reusing silicon they already have.

 

This concept has been in use since long before I was alive, and is still being used today. It is the engineering equivalent of recycling. If they started with lower-tier cards first, it would be far more difficult to add to a card than it is to take away from it. By hitting their limits on a current generation, they can find their average performance bins and then use the less competent chips for an entirely different market. I probably do not need to explain this any further, as there are far more knowledgeable people who could go into more detail than I possibly could.
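A sketch of the "performance bins" part of that, separate from the defect-salvage sketch above (clock figures and cutoffs are invented; real binning also weighs leakage, voltage, and defects):

```python
import numpy as np
from collections import Counter

# Toy speed-binning sketch: every die is tested for its maximum stable
# clock, then sorted into a tier. All figures here are hypothetical.
rng = np.random.default_rng(seed=1)
max_stable_mhz = rng.normal(loc=1250, scale=80, size=1_000)

def assign_tier(mhz: float) -> str:
    if mhz >= 1350:   # only the best dies hit flagship clocks
        return "flagship"
    if mhz >= 1150:   # the fat middle of the bell curve: the volume part
        return "volume"
    return "entry"    # stragglers get sold into a cheaper bracket

print(Counter(assign_tier(m) for m in max_stable_mhz))
```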

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


"Feature"... It is a shitty architecture designed for one purpose: Higher yields, so fewer chips had to be scrapped/sold as even lower end. In the end, this is just about making more money for Nvidia. I cannot blame them for that, they are a company after all, but in the end, you have a worse card, that will suffer vram limitations just like the 600 and 700 series of cards did. GPU is powerful enough for current and future games, but will need more vram. I guess Nvidia (and their customers) have not learned from the last 2 generations of cards. What a pity.

 

Dying Light already uses more than 3.5GB of VRAM at 1080p, and this will only continue in the years to come. (Remember, only spoiled hardware reviewers, who get their cards for free, use a card for a year max. The rest of us tend to use a card for 3-4 years before getting a new high- or mid-end card.)

 

I could get Dying Light to use 3.7GB on my 970 at 1080p and I didn't notice any kind of stutter. The performance was rock solid.


There is a reason Nvidia and AMD both release their higher-tier cards first. Then, once the enthusiasts buy those up, they work with the chips from the batches that didn't quite meet the requirements for the highest tier, trimming those down and reusing them as a separate card entirely. It is actually genius: consumers save money while still getting a relatively good product, and manufacturers save money by reusing silicon they already have.

 

This concept has been in use since long before I was alive, and is still being used today. It is the engineering equivalent of recycling. If they started with lower-tier cards first, it would be far more difficult to add to a card than it is to take away from it. By hitting their limits on a current generation, they can find their average performance bins and then use the less competent chips for an entirely different market. I probably do not need to explain this any further, as there are far more knowledgeable people who could go into more detail than I possibly could.

 

Hopefully they'll be able to get another card out of GM204 to fill in the huge gulf between the 960 and 970. The fact that there isn't one right now makes me think they must be getting really nice yields out of GM204. Otherwise they're stockpiling lots of failed 970s for what, half a year now?


"Unfortunately, we failed to communicate this internally to our marketing team, and externally to retailers at launch."

yet somehow there was no miscommunication between Jen-Hsun's wallet and the retailers...?

4790k @ 4.6 (1.25 adaptive) // 2x GTX 970 stock clocks/voltage // Dominator Platinum 4x4 16GB // Maximus Formula VII // WD Black 1TB + 128GB 850 PRO // RM1000 // NZXT H440 // Razer Blackwidow Ultimate 2013 (MX Blue) // Corsair M95 + Steelseries QCK // Razer Adaro DJ // AOC I2757FH


I could get Dying Light to use 3.7GB on my 970 at 1080p and I didn't notice any kind of stutter. The performance was rock solid.

Doesn't have to be stutter; it could just be lower FPS. How is the rest of your system? TotalBiscuit's non-gaming Extreme Edition Haswell maxed out core 1, meaning he was CPU-limited on his system. It also depends on whether the game just allocates the memory or actually uses it all the time. Are the settings, sans AA, maxed out?

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Not sure how far along the discussion is, and at this point I am too afraid to ask.

 

TL;DR: Nvidia made a shitty statement. Basically they're saying we should appreciate that they added 1GB of VRAM to the 3GB they had until now, instead of complaining about the issue. Well, thanks Nvidia; AMD had that going a year ago. What a poor statement.

who cares...


Hopefully they'll be able to get another card out of GM204 to fill in the huge gulf between the 960 and 970. The fact that there isn't one right now makes me think they must be getting really nice yields out of GM204. Otherwise they're stockpiling lots of failed 970s for what, half a year now?

I think they're getting nice yields; 28nm is a very mature process now.

 

Though I've always wondered where GK110 chips that aren't up to the performance of a 780 go...

AD2000x Review  Fitear To Go! 334 Review

Speakers - KEF LSX

Headphones - Sennheiser HD650, Kumitate Labs KL-Lakh


Personally, I'm not seeing a major issue with their choice. Yeah, the last 0.5GB is not as fast. But you have it. You have 1GB more than you previously had in any generation. You now have more RAM. Shut up and be grateful that you have that extra RAM. If you didn't have it, you'd likely bitch and moan that you didn't have 4GB and that you were stuck at 3GB. But what do I know, I'm just an opinionated person on the Internet.

FX-8350 | GA-990FXA-UD3 | G.SKILL 2x8GB 1600MHz | 1TB WD RE4 | CM Hyper 212 EVO | MSI R9 290x Lightning | Corsair AX860i | Silverstone FT05B-W

Pentium G3258 | MSI Z97 PC Mate | G.SKILL 4x4GB 1066MHz | 500GB Samsung 2.5" | Stock cooler | Pending GPU | EVGA 500B | Antec DF-35

GoPro Hero 3 Silver | Netgear R7000 Nighthawk with DD-WRT | HP Officejet Pro 8610 | Canon iP110 | AudioTechnica ATR2500 USB

Downdraft cooler for mITX board (new build) | Desk mount mic stand | Pop filter | Anti-vibration mount for microphone | mITX case | 3rd monitor (matching existing 23.1" | Intel Core i7-4790K (for mITX build)


Doesn't have to be stutter; it could just be lower FPS. How is the rest of your system? TotalBiscuit's non-gaming Extreme Edition Haswell maxed out core 1, meaning he was CPU-limited on his system. It also depends on whether the game just allocates the memory or actually uses it all the time. Are the settings, sans AA, maxed out?

 

That was the initial release of the game. It has since been patched to use four threads pretty evenly, and one core is no longer pinned at 100% usage like it was when TotalBiscuit made that video. I run it on a Xeon E3-1231v3 (4C/8T Haswell with 3.6GHz all-core turbo) with everything maxed out (including AA) except draw distance (it's at 50%). When I turn VSync off, my 970 is at 100% usage and I'm almost always at 60 FPS or higher.


They have to cripple the last half gig because, when they cut down the core, they lose three SMs and a segment of L2 cache. That segment of L2 cache and those SMs are attached to a memory controller. Now what's left has to handle two memory controllers, but it can't do so at its absolute capacity. I'm pretty sure that's how it's described.

Actually, they don't lose the SMs by cutting one L2 slice off; as you can see from the picture, in the fourth quadrant both SMs are up. Which SMs are disabled just depends on which ones are defective.

It's basically just a binning process: one of the L2 cache slices is defective, and the 980 becomes a 970. On GM204 you can disable one L2 slice while keeping the full 256-bit bus for the rest of the DRAM. If this were Kepler, e.g. a GK104 like a 670 or something (take 4GB as an example to explain it more easily), a defective L2 slice would have nerfed the memory bus down to 192-bit for all of the VRAM, literally turning it into a 660 Ti; that is exactly what Maxwell's buddy interface prevents. The benefit for Nvidia is obviously cost savings, something every chip manufacturer pursues.
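For concreteness, here's the arithmetic behind that partition, using the 970's published specs (256-bit bus, 7 Gbps effective GDDR5, eight 32-bit controllers); just a back-of-the-envelope sketch, not anything Nvidia ships:

```python
# Back-of-the-envelope math for the GTX 970's partitioned memory,
# based on its published specifications.
BUS_WIDTH_BITS = 256
NUM_CONTROLLERS = 8          # eight 32-bit GDDR5 controllers on GM204
GDDR5_GBPS_PER_PIN = 7.0     # effective data rate per pin

per_controller_gbs = (BUS_WIDTH_BITS / NUM_CONTROLLERS) / 8 * GDDR5_GBPS_PER_PIN
print(f"per controller: {per_controller_gbs:.0f} GB/s")       # 28 GB/s
print(f"full GM204 (980): {per_controller_gbs * 8:.0f} GB/s") # 224 GB/s

# On the 970, one L2 slice is fused off and its DRAM controller is reached
# through the neighbouring slice (the "buddy interface"). The fast
# partition spans 7 controllers, the slow one just 1 -- and the two
# partitions cannot be accessed at the same time.
print(f"3.5 GB partition: {per_controller_gbs * 7:.0f} GB/s")  # 196 GB/s
print(f"0.5 GB partition: {per_controller_gbs * 1:.0f} GB/s")  #  28 GB/s
```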

 

 

Telling people it is something that it is not also helps your sales and market share.

 

only that it is just false advertising.

If you google the launch specifications of the 3930K you'll find it listed VT-d support; in fact it was broken and only later fixed with a new stepping (C2). At the time, 3930Ks were the only CPUs which could overclock while having VT-d, so it was pretty much a ripoff if you were looking for such a CPU. That example is a lot closer to false advertising, as the feature didn't work at all, while the 970 does perform as it was reviewed.

http://www.techpowerup.com/152978/sandy-bridge-e-vt-d-broken-in-c1-stepping-fixed-in-c2-stepping-shortly-after-launch.html

Tell me whether Intel offered people a replacement when they had the RMA process completely in their own hands, unlike Nvidia.


 

Actually, they don't lose the SMs by cutting one L2 slice off; as you can see from the picture, in the fourth quadrant both SMs are up. Which SMs are disabled just depends on which ones are defective.

It's basically just a binning process: one of the L2 cache slices is defective, and the 980 becomes a 970. On GM204 you can disable one L2 slice while keeping the full 256-bit bus for the rest of the DRAM. If this were Kepler, e.g. a GK104 like a 670 or something (take 4GB as an example to explain it more easily), a defective L2 slice would have nerfed the memory bus down to 192-bit for all of the VRAM, literally turning it into a 660 Ti; that is exactly what Maxwell's buddy interface prevents. The benefit for Nvidia is obviously cost savings, something every chip manufacturer pursues.

 

 

If you google the launch specifications of the 3930K you'll find it listed VT-d support; in fact it was broken and only later fixed with a new stepping (C2). At the time, 3930Ks were the only CPUs which could overclock while having VT-d, so it was pretty much a ripoff if you were looking for such a CPU. That example is a lot closer to false advertising, as the feature didn't work at all, while the 970 does perform as it was reviewed.

http://www.techpowerup.com/152978/sandy-bridge-e-vt-d-broken-in-c1-stepping-fixed-in-c2-stepping-shortly-after-launch.html

Tell me whether Intel offered people a replacement when they had the RMA process completely in their own hands, unlike Nvidia.

 

Thanks for the correction, appreciate it.

Main Rig: CPU: AMD Ryzen 7 5800X | RAM: 32GB (2x16GB) KLEVV CRAS XR RGB DDR4-3600 | Motherboard: Gigabyte B550I AORUS PRO AX | Storage: 512GB SKHynix PC401, 1TB Samsung 970 EVO Plus, 2x Micron 1100 256GB SATA SSDs | GPU: EVGA RTX 3080 FTW3 Ultra 10GB | Cooling: ThermalTake Floe 280mm w/ be quiet! Pure Wings 3 | Case: Sliger SM580 (Black) | PSU: Lian Li SP 850W

 

Server: CPU: AMD Ryzen 3 3100 | RAM: 32GB (2x16GB) Crucial DDR4 Pro | Motherboard: ASUS PRIME B550-PLUS AC-HES | Storage: 128GB Samsung PM961, 4TB Seagate IronWolf | GPU: AMD FirePro WX 3100 | Cooling: EK-AIO Elite 360 D-RGB | Case: Corsair 5000D Airflow (White) | PSU: Seasonic Focus GM-850

 

Miscellaneous: Dell Optiplex 7060 Micro (i5-8500T/16GB/512GB), Lenovo ThinkCentre M715q Tiny (R5 2400GE/16GB/256GB), Dell Optiplex 7040 SFF (i5-6400/8GB/128GB)


 

Actually, they don't lose the SMs by cutting one L2 slice off; as you can see from the picture, in the fourth quadrant both SMs are up. Which SMs are disabled just depends on which ones are defective.

It's basically just a binning process: one of the L2 cache slices is defective, and the 980 becomes a 970. On GM204 you can disable one L2 slice while keeping the full 256-bit bus for the rest of the DRAM. If this were Kepler, e.g. a GK104 like a 670 or something (take 4GB as an example to explain it more easily), a defective L2 slice would have nerfed the memory bus down to 192-bit for all of the VRAM, literally turning it into a 660 Ti; that is exactly what Maxwell's buddy interface prevents. The benefit for Nvidia is obviously cost savings, something every chip manufacturer pursues.

 

 

If you google the launch specifications of the 3930K you'll find it listed VT-d support; in fact it was broken and only later fixed with a new stepping (C2). At the time, 3930Ks were the only CPUs which could overclock while having VT-d, so it was pretty much a ripoff if you were looking for such a CPU. That example is a lot closer to false advertising, as the feature didn't work at all, while the 970 does perform as it was reviewed.

http://www.techpowerup.com/152978/sandy-bridge-e-vt-d-broken-in-c1-stepping-fixed-in-c2-stepping-shortly-after-launch.html

Tell me whether Intel offered people a replacement when they had the RMA process completely in their own hands, unlike Nvidia.

 

Correct me if I am wrong, but didn't Intel also disable TSX/HLE on Haswell? I do not recall a giant backlash over that either, lol.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


Because these chips have defects in them. Before, they would have to cut away an entire ROP cluster and thus sell the chip as an even lower tier. With Maxwell, they can cut just the portion of the ROP cluster that contains the defective ROP, thus improving yields and letting lower-end chips go into higher-end cards. Like I said, it's to make more money on poorer-quality chips. And people buy them in buckets.

So with the 900 series, you take a 980, remove part of an ROP cluster... and lose half a gig of VRAM that, for lack of a better term, shits the bed... Yet with the 780 Ti you remove an entire ROP cluster and keep the same amount of VRAM with no side effects? To me this doesn't sound like a feature of Maxwell but yet another oversight the R&D dept didn't think about... Also, I know people who have experience working with ARM and, to a lesser degree, Intel and AMD... Not all of the cut-down chips (like the 970) are defective; some were fully functional and just cut down to meet the higher demand of the lower price bracket. This has also been shown in the past with, say, the 290X, where you could buy a 290 and BIOS-flash it to a 290X, depending on whether your chip was actually defective or just cut down to meet a quota.

 

The reason a 970 is a cut-down 980 is that some SMMs didn't function properly and were disabled, thus creating the memory issue.

^^ Read above response

5820K 4GHz / 16GB (4x4) DDR4 / MSI X99 SLI+ / Corsair H105 / R9 Fury X / Corsair RM1000i / 128GB SM951 / 512GB 850 Evo / 1+2TB Seagate Barracudas


That was the initial release of the game. It has since been patched to use four threads pretty evenly, and one core is no longer pinned at 100% usage like it was when TotalBiscuit made that video. I run it on a Xeon E3-1231v3 (4C/8T Haswell with 3.6GHz all-core turbo) with everything maxed out (including AA) except draw distance (it's at 50%). When I turn VSync off, my 970 is at 100% usage and I'm almost always at 60 FPS or higher.

Interesting. I assumed the 100% on core one was due to DX11's stupid core 1 overhead. It probably is, but if a patch just lowered the game's usage of core 1, that would fix it.

 

Like I said, the GPU is powerful, but the VRAM will be the limit in modern graphics-heavy games before the GPU will. I do wonder how much the draw distance influences the VRAM usage.

 

I don't know if Dying Light is an AMD-optimized title per se. I very much doubt it, as it uses Nvidia's proprietary GameWorks middleware. In this test (using plain HBAO rather than HBAO+), the 290 beats a 780 by 7 FPS. That is a LOT. The biggest difference is the amount of VRAM on the two cards:

 

http://www.techspot.com/review/956-dying-light-benchmarks/page3.html

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Like I said, the GPU is powerful, but the VRAM will be the limit in modern graphics-heavy games before the GPU will.

 

I doubt that. Not what I have experienced, anyway. I'm sure eventually games will use more than 4GB, but it isn't an issue right now at reasonable settings.

You can't be serious.  Hyperthreading is a market joke?

 

 


I don't know if Dying Light is an AMD-optimized title per se. I very much doubt it, as it uses Nvidia's proprietary GameWorks middleware. In this test (using plain HBAO rather than HBAO+), the 290 beats a 780 by 7 FPS. That is a LOT. The biggest difference is the amount of VRAM on the two cards:

 

http://www.techspot.com/review/956-dying-light-benchmarks/page3.html

 

Nvidia's driver support is also pretty lousy for Kepler now. But overall I think the 290 is a little stronger than the 780 at the same VRAM usage.


3.5GB is better than 3.0GB though. Unless what they are saying is bullshite.

Exactly. Most games are capped at 3.5GB anyway to prevent the performance issues. That's how this whole thing came about.

 Asus M5A99X Evo  - AMD FX-8350 - 16GB Corsair Vengeance 1866Mhz - Corsair 120mm Quiet Edition Fans BenQ XL2411Z- EVGA GTX 980 Superclocked Fractal Design Define R4 - Corsair H100i - 2 TB 7200rpm HDD - Samsung 840 Evo 120GB - Corsair RM750w PSU - Logitech G502 Proteus Core - Corsair K70 RGB MX Red - Audio Technica M50x + Modmic 4.0 - LG 23EA63V x2


Spinthat Spinthat Spinthat Spinthat


 

If you insist on running Dying Light on a 780 with what are essentially 6K textures (which is pretty much what that slider pushed as far right as it goes means) for the sake of a 1080p display, you are frankly an idiot. The other example often brought up, Shadow of Mordor, has a sodding Titan in its recommended specs (implied by its stated 6GB requirement). How about limiting your settings to what you can actually display before screaming that your GPU isn't powerful enough, because you've missed an important part of what those settings actually mean.

 

Or, if you'd rather not think about what you are actually doing by customising your graphics settings, maybe you'd be better off sticking to consoles.

 

I hope I misunderstood you, but are you claiming that the texture/shadow/etc. resolution needs to match your screen resolution? Because that would be an incredibly silly thing to say! Higher-resolution textures benefit lower screen resolutions a lot as well. In Mordor, the 6GB ultra textures were just uncompressed high textures: not much of a difference, and a waste of VRAM.

 

Why would I limit my settings when my graphics card can actually use all 4GB of VRAM? If a card is designed in such a way that VRAM will be the first bottleneck in high-end games, then it's a bad design. Like I said, it is NOT GPU-limited, but VRAM-limited.

 

I can customize my settings just fine. On my 290, I just set texture resolution to ultra. It's really easy :lol:

 

Have you used a 970 or are you just going off of what everyone else who is all of a sudden anti-Nvidia is saying?

No, why would I have one? I have a 290. I'm not suddenly anything. I have always criticized Nvidia for their business strategies and ethics, as well as for VRAM-starving their 680/770/780 series of cards. I was pleasantly surprised to see both the 980 and 970 be 4GB cards. Now it turns out that the 970 is a 4GB* card, where the * means you get screwed over as soon as games actually need more than 3.5GB.

 

So with the 900 series, you take a 980, remove part of an ROP cluster... and lose half a gig of VRAM that, for lack of a better term, shits the bed... Yet with the 780 Ti you remove an entire ROP cluster and keep the same amount of VRAM with no side effects? To me this doesn't sound like a feature of Maxwell but yet another oversight the R&D dept didn't think about... Also, I know people who have experience working with ARM and, to a lesser degree, Intel and AMD... Not all of the cut-down chips (like the 970) are defective; some were fully functional and just cut down to meet the higher demand of the lower price bracket. This has also been shown in the past with, say, the 290X, where you could buy a 290 and BIOS-flash it to a 290X, depending on whether your chip was actually defective or just cut down to meet a quota.

 

^^ Read above response

 

PCPer had a really good video about it. The 780 Ti, and the 780 for that matter, could run all 3GB of VRAM at full speed, so it's not really a comparison.

 

Indeed, sometimes supply and demand means some lower-end cards will have higher-tier chips in them. That is why some people could flash their reference 290 to a 290X, and why some of the first Haswell i5-4670Ks could overclock a LOT.

 

But generally you set a lower limit on what these chips can do; and by doing this fiddly partial ROP-cluster cutoff, Nvidia is able to raise the yield of 970 chips and use what would otherwise be a lower-end chip in a higher-end card. All of this is a money-making procedure at the cost of the consumer. And the consumer seems to be dumb enough to defend Nvidia for doing it. Go figure. THAT is why lawsuits are a good thing. It does not matter if the end user only gets $15, like in the Intel case; it matters that the company gets punished for its dishonest actions.

 

Nvidia's driver support is also pretty lousy for Kepler now.

 

I'd say so, when a 290X beats a 780 Ti at higher resolutions. But again, the 1.5+ year-old AMD cards are doing very nicely against even the 900 series.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


.......Release the LAWYERS DAMN IT!!!!.....*pets white cat, waits for lawyers to appear*


Correct me if I am wrong, but didn't Intel also disable TSX/HLE on Haswell? I do not recall a giant backlash over that either, lol.

Yes. I didn't mention it because I couldn't be bothered trying to find proof of it; I only heard half a year later that Haswell didn't have TSX at all.

 

 

Interesting. I assumed the 100% on core one was due to DX11's stupid core 1 overhead. It probably is, but if a patch just lowered the game's usage of core 1, that would fix it.

 

Like I said, the GPU is powerful, but the VRAM will be the limit in modern graphics-heavy games before the GPU will. I do wonder how much the draw distance influences the VRAM usage.

 

I don't know if Dying Light is an AMD-optimized title per se. I very much doubt it, as it uses Nvidia's proprietary GameWorks middleware. In this test (using plain HBAO rather than HBAO+), the 290 beats a 780 by 7 FPS. That is a LOT. The biggest difference is the amount of VRAM on the two cards:

 

http://www.techspot.com/review/956-dying-light-benchmarks/page3.html

Except that, according to your link, a 2GB card (the 770) does a tiny bit better than a 3GB card (the 7970 GHz), which kinda points out that you didn't need more than 2GB.

 

 

Doesn't have to be stutter; it could just be lower FPS. How is the rest of your system? TotalBiscuit's non-gaming Extreme Edition Haswell maxed out core 1, meaning he was CPU-limited on his system. It also depends on whether the game just allocates the memory or actually uses it all the time. Are the settings, sans AA, maxed out?

When the GPU is flatlining at 99% usage, that shows the GPU is performing at its best, and that's the load you want. If you can't get such usage, your frame rate will be lower than it should be: you either have a CPU bottleneck (hardly relevant here), a lack of VRAM (which we don't have), or a memory controller bottleneck, which applies to the 970 only.

So far, all the proof I've seen people link was YouTube videos lacking a GPU usage overlay. Far Cry 4 even stutters at 1500MB usage and shows artifacts, not that this proves anything, etc. I'm pretty sure you'll find those kinds of videos for the 290X as well; basically for every GPU.

That's my 970: I needed 4x MSAA (which I would never use on a 4K monitor) to reach that 4GB buffer, and there isn't a single dip in my GPU usage, which perfectly points out that I'm GPU-limited.
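If you want to check the same thing on your own machine, here's a minimal logger; a sketch assuming an NVIDIA card and the pynvml bindings (`pip install nvidia-ml-py`), not whatever overlay was used for the result above:

```python
import time
import pynvml

# Minimal GPU-load / VRAM logger. If GPU utilization stays pinned near
# 99-100% while you play, you're GPU-limited as described above; dips in
# utilization while the VRAM buffer is full point at a memory bottleneck.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {util.gpu:3d}%  VRAM "
              f"{mem.used / 2**20:6.0f} / {mem.total / 2**20:.0f} MiB")
        time.sleep(1.0)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```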

