
NVidia CEO and President on GTX 970


I really can't believe they called it a feature...

It is a feature though. I have a 4GB card instead of a 3.5GB card or a 3GB card. That's a feature, imo.


Nobody got screwed, they got what they paid for.

You're right. I heard the 970 also performs great at 720p.


You're right. I heard the 970 also performs great at 720p.

Performs like a 980 @ 1440p while also using 4GB of VRAM... mine does, anyway! No stutter... no apology needed from Nvidia, imo.

 

http://youtu.be/lAdyj3NO9iA



Performs like a 980 @ 1440p while also using 4GB of VRAM... mine does, anyway! No stutter... no apology needed from Nvidia, imo.

 

http://youtu.be/lAdyj3NO9iA

Thanks for this. I'll scale back my outrage.


Thanks for this. I'll scale back my outrage.

Part of the problem is that a lot of the people screaming murder don't even own a 970, have never had an issue, or are blaming any little issue they do have on the VRAM...


You mean you invented a way to increase the VRAM by 1GB, whereas EVGA has been offering 6GB variants of 780s and 4GB 770s for quite some time...

 

Did it not occur to you that the cards you are comparing this to use exactly double the original VRAM? The 6GB 780, the 4GB 770, the 3GB 580... It's an entirely different issue to double up the VRAM compared with finding a way to actually use all of what's already there.


That is one Hell of a defect rate.  Someone might want to look into getting new production engineers...

 

More seriously - 

 

Consider what it costs to set up and operate production of any given die/chipset.

 

Now, having sunk your costs, what is the marginal cost of making one more chip (or a few thousand more) that you intend to gimp in order to capture sales at a lower price point?

 

And given the opportunity to do that, how in the world could you justify the massive fixed costs of setting up an entire, separate operation in order to produce nothing but low-priced chips?

 

It is much like the way cruise lines upgrade people into higher-priced cabins in order to re-sell the mid-price cabins. Fixed costs being, well, fixed.
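(As a toy illustration of that fixed-versus-marginal-cost argument, here is a quick Python sketch; every figure in it is invented for illustration and does not describe any real chip.)

```python
# All figures below are invented purely for illustration.
FIXED_COST = 500_000_000   # hypothetical design + tape-out cost for one die
WAFER_COST = 5_000         # hypothetical cost per processed wafer
DIES_PER_WAFER = 150
VOLUME = 2_000_000         # hypothetical chips sold over the product's lifetime

marginal_per_chip = WAFER_COST / DIES_PER_WAFER
all_in_per_chip = FIXED_COST / VOLUME + marginal_per_chip

print(f"marginal cost of one more chip:  ${marginal_per_chip:,.2f}")
print(f"all-in cost per chip at volume:  ${all_in_per_chip:,.2f}")
# A second, dedicated low-end die would pay the fixed cost all over again,
# which is why harvesting partially defective dies for the cheaper SKU wins.
```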

Defect rates are high. They usually only yield about 50-60% to begin with. I'm not denying they gimp some chips, but they definitely don't gimp them all for the hell of it.


Part of the problem is that a lot of the people screaming murder don't even own a 970, have never had an issue, or are blaming any little issue they do have on the VRAM...

The next biggest part is people that do own 970s that try to justify their purchases to anonymous people on the internet. (I would jk, but I'm really not)


Defect rates are high. They usually only yield about 50-60% to begin with.

Which does not remotely begin to account for the bulk of the market. You'd need a defect rate of 90%+ to do things as you suggest.


Defect rates are high. They usually only yield about 50-60% to begin with.

 

And 60% is a bloody joy, anything above that is party time. 

 

I don't think some people here understand exactly how foundries run and how chips are actually made from the original wafers... Back in the Opteron/Prescott days, Intel could pull a 40% yield and still get more chips than AMD's 60%, only because Intel started out with far larger wafers, but 40% is still a shockingly low number, all things considered.

Things haven't really improved a great deal since then. 
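(For anyone curious how die size feeds into those yield numbers, here is a minimal Poisson yield sketch in Python. The wafer size, edge loss, defect density, and die areas are illustrative assumptions, not figures for any real product.)

```python
# Minimal sketch of the classic Poisson yield model; all inputs are assumptions.
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dies expected to have zero killer defects."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

WAFER_DIAMETER_MM = 300
wafer_area_mm2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2

for die_area in (100, 200, 400, 600):                 # hypothetical die sizes in mm^2
    gross = int(wafer_area_mm2 / die_area * 0.85)     # assume ~15% lost to edges and scribe lines
    y = poisson_yield(die_area, defects_per_cm2=0.1)  # assumed defect density
    print(f"{die_area:>3} mm^2 die: ~{gross:3d} candidates per wafer, "
          f"yield {y:.0%}, ~{int(gross * y)} fully working dies")
```

The bigger the die, the more defects it collects, which is exactly why salvaging partially defective big dies as cut-down parts recovers so much of a wafer.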


Which does not remotely begin to account for the bulk of the market. You'd need a defect rate of 90%+ to do things as you suggest.

You'd be surprised how few fully-fledged i7s can come out of a single silicon wafer. Not only are the best chips pretty much limited to the very middle of the wafer, some of them also fail tests, hence the existence of i5s.

 

"Defect" sounds like the chip just doesn't work, but in the case of something like an i5, it's just an i7 that missed the mark on voltage and heat output at a specific clockspeed.  I think the thing to take away is that lower-end chips are a convenient side effect of defective high-end chips.


Part of the problem is that a lot of the people screaming murder don't even own a 970, have never had an issue, or are blaming any little issue they do have on the VRAM...

I returned mine because of coil whine and got myself a 290X. Thing is, they knew exactly what happened, and to this day they still advertise it as 4GB. It's something that cripples SLI rigs.

It's a 350-euro midrange GPU on a very mature 28nm process. The GPU market is really shitty nowadays. The fact that Nvidia is pulling shit like this and blocking overclocking on mobile GPUs is hurting consumers.

 

And this "apology" ? It only shows that they'll continue doing crap like this .


"Feature"... It is a shitty architecture designed for one purpose: Higher yields, so fewer chips had to be scrapped/sold as even lower end. In the end, this is just about making more money for Nvidia. I cannot blame them for that, they are a company after all, but in the end, you have a worse card, that will suffer vram limitations just like the 600 and 700 series of cards did. GPU is powerful enough for current and future games, but will need more vram. I guess Nvidia (and their customers) have not learned from the last 2 generations of cards. What a pity.

 

Dying Light already uses more than 3.5GB of VRAM at 1080p, and this will only get worse in the years to come (remember, only spoiled hardware reviewers, who get their cards for free, use a card for a year at most; the rest of us tend to use a card for 3-4 years before getting a new high- or mid-end card).


Dying Light already uses more than 3.5GB of VRAM at 1080p, and this will only get worse in the years to come (remember, only spoiled hardware reviewers, who get their cards for free, use a card for a year at most; the rest of us tend to use a card for 3-4 years before getting a new high- or mid-end card).

Then you'll have to turn settings down like the rest of us. You can't expect a video card to last forever.


Oh yeah, I forgot about this.

 

Realistically, while most would shit on the idea, the architecture enables this when previous models couldn't. This makes it a feature at least in my mind.

Just noticed this comment and it stuck out to me: why couldn't they just find something else to disable on the 970 that wouldn't impact the VRAM amount, if Maxwell is known to be "modular"? And one thing almost everyone has seemingly forgotten is that this isn't a new feature; as people posted earlier, the 970 is a cut-down 980 (much like the 290X vs. the 290, or an i7 vs. an i5, and so on). But my question is: how is still having 4GB (of segmented memory) any better than just having 3.5GB? People have seemingly forgotten that's an option. And how come the 290 still has 4GB of totally usable memory when some of its GPU is cut down from a 290X, much like the 970 from the 980? The 780 was a cut-down 780 Ti and they both used the same 3GB (and some 780s were even given 6GB, but let's ignore that); Titan/Titan Black, same thing.


Then you'll have to turn settings down like the rest of us. You can't expect a video card to last forever.

 

No, the point is that these cards get VRAM-limited before they get GPU-limited. The 680/770 had more than enough performance to run Titanfall at ultra textures, but not enough VRAM. The 780 series will be limited by VRAM in Dying Light, not by GPU performance. That is the point.

 

Of course a 3-4 year old card will run at lower settings, but the settings and performance will be much lower because of VRAM limitations rather than GPU performance limitations.


Just noticed this comment and it stuck out to me: why couldn't they just find something else to disable on the 970 that wouldn't impact the VRAM amount, if Maxwell is known to be "modular"? And one thing almost everyone has seemingly forgotten is that this isn't a new feature; as people posted earlier, the 970 is a cut-down 980 (much like the 290X vs. the 290, or an i7 vs. an i5, and so on). But my question is: how is still having 4GB (of segmented memory) any better than just having 3.5GB? People have seemingly forgotten that's an option. And how come the 290 still has 4GB of totally usable memory when some of its GPU is cut down from a 290X, much like the 970 from the 980? The 780 was a cut-down 780 Ti and they both used the same 3GB (and some 780s were even given 6GB, but let's ignore that); Titan/Titan Black, same thing.

 

Because these chips have defects in them. Before, they would have had to cut away an entire ROP cluster and sell the chip as an even lower tier. With Maxwell, they can cut just the portion of the ROP cluster that contains the defective ROP, improving yields and letting them use lower-end chips in higher-end cards. Like I said, it's to make more money on poorer-quality chips. And people buy them by the bucket.
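(For anyone who wants to look for the two memory segments themselves, the idea behind the well-known fill-the-card-and-time-it tests can be roughed out in Python with PyTorch. This sketch assumes a CUDA-capable GPU with PyTorch installed; whether the slow segment actually shows up depends on how the driver places the allocations, so treat it as an illustration rather than a definitive benchmark.)

```python
# Rough sketch of a "fill the VRAM in chunks and time each chunk" probe.
# Requires PyTorch with CUDA; block size and repeat count are arbitrary.
import time
import torch

CHUNK_MIB = 128
ELEMS = CHUNK_MIB * 1024 * 1024 // 4          # float32 elements per 128 MiB block
chunks = []

# Keep grabbing 128 MiB blocks until the GPU refuses to allocate any more.
while True:
    try:
        chunks.append(torch.empty(ELEMS, dtype=torch.float32, device="cuda"))
    except RuntimeError:                      # out of memory
        break

# Time an in-place pass over each block and report a rough bandwidth figure.
for i, chunk in enumerate(chunks):
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(10):
        chunk.mul_(1.0)                       # reads and writes every byte of the block
    torch.cuda.synchronize()
    dt = (time.perf_counter() - t0) / 10
    gib_per_s = 2 * CHUNK_MIB / 1024 / dt     # read + write traffic per pass
    print(f"block {i:2d} ({(i + 1) * CHUNK_MIB:5d} MiB cumulative): {gib_per_s:6.1f} GiB/s")
```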


Just noticed this comment and it stuck out to me: why couldn't they just find something else to disable on the 970 that wouldn't impact the VRAM amount, if Maxwell is known to be "modular"? And one thing almost everyone has seemingly forgotten is that this isn't a new feature; as people posted earlier, the 970 is a cut-down 980 (much like the 290X vs. the 290, or an i7 vs. an i5, and so on). But my question is: how is still having 4GB (of segmented memory) any better than just having 3.5GB? People have seemingly forgotten that's an option. And how come the 290 still has 4GB of totally usable memory when some of its GPU is cut down from a 290X, much like the 970 from the 980? The 780 was a cut-down 780 Ti and they both used the same 3GB (and some 780s were even given 6GB, but let's ignore that); Titan/Titan Black, same thing.

The reason why a 970 is a cut-down 980 is that some of the SMMs didn't function properly, so they were disabled, thus creating the memory issue.


Alright, people, fine! It's over! Are you happy now? NVIDIA's own CEO apologized for the company's mistake and said it won't happen again. No matter what the circumstances, NVIDIA screwed up. We know that. And if it screws this up again, we can be almost certain it's a lie, so we can hate on NVIDIA then.  I don't care if it was a lie or a miscommunication this time, but we need to just stop with this. Forgive NVIDIA if you want or don't buy its products again, I don't care, but just please let it go. Enough of you people have already said what you think, and we don't need to hear it every time someone posts an article about this scandal. It's quite enough!

 

Thank you. :)


No, the point is that these cards get VRAM-limited before they get GPU-limited. The 680/770 had more than enough performance to run Titanfall at ultra textures, but not enough VRAM. The 780 series will be limited by VRAM in Dying Light, not by GPU performance. That is the point.

 

Of course a 3-4 year old card will run at lower settings, but the settings and performance will be much lower because of VRAM limitations rather than GPU performance limitations.

 

Have you used a 970 or are you just going off of what everyone else who is all of a sudden anti-Nvidia is saying?


The next biggest part is people that do own 970s that try to justify their purchases to anonymous people on the internet. (I would jk, but I'm really not)

Saying your product works is not trying to justify a purchase. Justifying a purchase would be trying to make a bad product seem good simply because you paid for it. The GTX 970 is not bad, it was not considered bad when it was released, and it is still not a bad card even after looking at this "issue". I do not own a GTX 970, but I have two friends who both own 970s (one even went out and got another one AFTER knowing of this issue, because he wanted SLI and felt the cards were worth the money anyway).

 

The point that @Razzaa is trying to make is that people are blaming this particular aspect of the card without being properly informed about what exactly is causing their performance problems. He showed the card performing fine in a game that people often cited as not performing up to task, at settings considered standard by gamers today.

 

Less of a justification, more of a counter-argument to the people who still claim the 3.5GB issue is a serious detriment to their gaming experience.


Most of the comments in this thread seem to ignore how chips are actually manufactured.  There are going to be imperfections.  The chips with manufacturing defects are effectively "gimped" so that they still work but are sold at a lower price point.  If you have any actual engineering knowledge, then by all means state your case.

This thread is composed of a lot of speculation and not very much fact. Did Nvidia fuck up by not explaining the situation in the first place? Sure. I would even go as far as to say perhaps they didn't mention it on purpose because they figured such a discrepancy wouldn't change the overall performance of the card anyway. The performance of the card is what it is, and it performs quite well for its price point.

Speculating that the engineers could've approached the design differently when the people suggesting so have no experience in that field is a waste of time.


And 60% is a bloody joy, anything above that is party time.

I am not so sure about that. Doesn't anyone remember the 6950? By flashing a 6970 BIOS onto it you could unlock extra cores, and the success rate was insane (well above 90%). AMD even went as far as to remove the dual BIOS (which people used as a fail-safe when flashing) because barely anyone bothered with the 6970 once it was discovered. TechPowerUp tracked the success rate and here are the results:

[attached image: TechPowerUp 6950-to-6970 unlock success-rate results]

 

 

 

I don't have any stats for unlocking cores on AMD's old pre-Bulldozer CPUs, but they had a really high success rate as well. Makes me wonder what percentage of chips get locked down even though they could have been sold as a higher-end part.



Might be time to merge all the 970 threads together again, they are just going down the same path.


No, the point is that these cards get VRAM-limited before they get GPU-limited. The 680/770 had more than enough performance to run Titanfall at ultra textures, but not enough VRAM. The 780 series will be limited by VRAM in Dying Light, not by GPU performance. That is the point.

 

Of course a 3-4 year old card will run at lower settings, but the settings and performance will be much lower because of VRAM limitations rather than GPU performance limitations.

 

If you insist on running Dying Light on a 780 with what are essentially 6K textures, which is what that slider positioned as far to the right as it goes pretty much means, for the sake of a 1080p display, you are frankly an idiot. The other example often brought up, Shadow of Mordor, has a sodding Titan in its recommended specs (implied by its stated 6GB requirement). How about you limit your settings to what you can actually display before screaming that your GPU isn't powerful enough, because you've missed an important part of what those settings actually mean.

 

Or if you'd rather not think about what you are actually doing by customising your graphics settings maybe you'd be better off sticking to consoles.
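(To put a rough number on why maxed-out texture settings blow past 3.5GB regardless of the display resolution, here is a small Python sketch of the footprint of a single uncompressed RGBA8 texture with a full mip chain. Real engines use block compression, so these are deliberately pessimistic, illustrative figures.)

```python
# Approximate VRAM footprint of one uncompressed RGBA8 texture.
# Real games use compressed formats, so actual numbers are much lower;
# this only shows how quickly texture resolution adds up.
def texture_mib(width, height, bytes_per_texel=4, mip_chain=True):
    base = width * height * bytes_per_texel
    total = base * 4 / 3 if mip_chain else base   # a full mip chain adds roughly one third
    return total / 2**20

for size in (1024, 2048, 4096, 6144):
    print(f"{size:>4} x {size:<4}: {texture_mib(size, size):7.1f} MiB")
```

A few hundred of the larger textures resident at once is enough to fill a 3.5GB or 4GB card, no matter what resolution the final frame is rendered at.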

 

 

I am not so sure about that. Doesn't anyone remember the 6950? By flashing a 6970 BIOS onto it you could unlock extra cores, and the success rate was insane (well above 90%). AMD even went as far as to remove the dual BIOS (which people used as a fail-safe when flashing) because barely anyone bothered with the 6970 once it was discovered. TechPowerUp tracked the success rate and here are the results:


 

 

 

I don't have any stats for unlocking cores on AMD's old pre-Bulldozer CPUs, but they had a really high success rate as well. Makes me wonder what percentage of chips get locked down even though they could have been sold as a higher-end part.

 
lol, so the equivalent of what AMD did there would be Nvidia forcing the 0.5GB memory bandwidth limitation in the BIOS just to make people get the 980 instead, rather than getting the more defective GPUs to run as effectively as possible. That's quite a bit worse than what Nvidia have announced here, just sayin' :P
