
GK110 Was Never Meant To Be Used In A GTX 680-Like Product.

This post is in direct response to @LinusTech's & @Slick's last WAN Show, in which Linus cited the long-standing rumor that GK110 was supposed to be used in what would have been the GTX 680.
This rumor is completely false for several reasons.

GK110 was never meant to be a consumer product; it was engineered to be a compute accelerator from the beginning. This is evidenced by numerous architectural differences between it and all the other consumer Kepler GPUs (GK104, GK106 & GK107).
http://semiaccurate.com/2012/10/15/will-nvidia-make-a-consumer-gk110-card/

 

With GK110, the 48 KB texture cache is unlocked for compute workloads. In compute, the texture cache becomes a read-only data cache, specializing in unaligned memory access workloads.
Furthermore, GK110 has error detection capabilities that make it better suited to workloads that rely on ECC. The maximum register count per thread is also raised from 63 to 255.
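
As a rough illustration of how that read-only path gets used (my own minimal sketch, not something from the WAN Show or the article): on compute capability 3.5 parts like GK110, loads through a const __restrict__ pointer, or an explicit __ldg(), can be served by that 48 KB read-only data cache.

// Minimal sketch, compute capability 3.5 (GK110) and up.
// Build with: nvcc -arch=sm_35 ro_cache_demo.cu
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(const float* __restrict__ in, float* __restrict__ out,
                      float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = k * __ldg(&in[i]);   // __ldg() routes the load through the read-only cache
}

int main()
{
    const int n = 1 << 20;
    float *in = nullptr, *out = nullptr;
    cudaMalloc(&in,  n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));
    scale<<<(n + 255) / 256, 256>>>(in, out, 2.0f, n);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(in);
    cudaFree(out);
    return 0;
}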


Another important point: not only was GK110 never meant to compete with AMD's Radeon consumer lineup in 2012, it actually couldn't... the yields were so low it was impossible.

Josh Walrath from PCPer.com even calculated the yields for the GK110.

Over 50% of the GK110 dies on a 300 mm wafer would come out fully defective and unsalvageable, and of the roughly 50% that did work, only a few would be fully functional with all SMX units active; on the rest, a number of SMX units would have to be disabled for the chip to work at all.
 

Not only were the yields on GK110 a nightmare, but every single working GK110 chip was reserved for the Titan supercomputer; this alone added months and months of delay before GK110 could even be considered commercially viable to produce and use in a consumer product.

The first GK110-based consumer product was the GTX Titan, launched in February 2013. That's 13 months after AMD released the 7970, so even IF Nvidia had intended to use GK110 for the GTX 680, it would have come out more than a year after the competition, which is commercial suicide.


Interesting. Explains a bit behind the pricing. 


GK110 was never meant to be used as a GeForce card... this must be right; after all, it was posted by GPUXPert, and he's some kind of expert.



GK110 was never meant to be used as a GeForce card... this must be right; after all, it was posted by GPUXPert, and he's some kind of expert.

If you're here to argue the facts of the matter, you're welcome, but this is nothing more than a troll post, so tread carefully.


If you're here to argue the facts of the matter, you're welcome, but this is nothing more than a troll post, so tread carefully.

I'm not really here to argue or troll your thread. What you said makes a good bit of sense; I just got a kick out of your name, which is a bit aspirational to say the least. Also, I'll tread however I please. If the mods deem my behavior unfriendly or disruptive, so be it, they can do what they wish, but I'll not be told by the likes of a normal user to "tread carefully".



The first GK110 card was the Titan, and it wasn't ready until 2013, so Nvidia had to answer the 7970 somehow, and that was GK104's job.


I think Nvidia had the GK110 there just in case, and when they felt under threat (or one of the board members got drunk) they turned the GTX Titan into a thing.



Having some workstation features doesn't mean the chip was never meant for use as a GTX card. It could well have been intended from the start to be a GPU usable both in a GTX card and as a compute card, two-birds-with-one-stone style.

 

If it was never meant to be a consumer card either, then why did it eventually come to consumers in the form of the Titan and the 780, when Nvidia already had the 680 right on the heels of the 7970 anyway?

 

On top of that, if GK110 was never meant to be the 680, then what other reason would Nvidia have for the massive shortage of GK104 chips at the launch of the 680 and 670, other than that they simply weren't expecting to have so many GK104-based cards?

 

You bring up good points, but none of them proves that the 680/GK110 rumor is 'completely false'.

 

:)

The shortages were due to the poor yields, not because they didn't expect the demand. If a ~292 mm² chip faced shortages, imagine what a nightmare making a ~552 mm² chip would be. Keep in mind that yield percentages don't fall linearly with die size; they fall exponentially.

So yields on GK104 would be 357% higher than those on GK110.
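
To make the exponential relationship concrete, here is a toy defect-density (Poisson) model in plain host code. The defect density is an assumption picked purely for illustration, and the die areas are approximate, so the exact ratio it prints won't match the 357% figure above; the point is only how quickly yield drops as die area grows.

// Toy yield sketch: probability a die has zero fatal defects ~ exp(-area * defect_density).
// The defect density below is assumed purely for illustration.
#include <cmath>
#include <cstdio>

int main()
{
    const double defects_per_mm2 = 0.0025;  // assumed defect density (early 28 nm ramp)
    const double gk104_mm2 = 292.0;         // approximate die area, as quoted above
    const double gk110_mm2 = 552.0;         // approximate die area, as quoted above

    const double y104 = std::exp(-gk104_mm2 * defects_per_mm2);
    const double y110 = std::exp(-gk110_mm2 * defects_per_mm2);

    printf("GK104: ~%.0f%% of dies with zero fatal defects\n", 100.0 * y104);
    printf("GK110: ~%.0f%% of dies with zero fatal defects\n", 100.0 * y110);
    printf("ratio: ~%.1fx, before counting that far fewer GK110 dies fit on a wafer\n",
           y104 / y110);
    return 0;
}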


GK110 was never meant to be used as a GeForce card... this must be right; after all, it was posted by GPUXPert, and he's some kind of expert.

Yeah, like Mr. Expert @CoolBeans


I think Nvidia had the GK110 there just in case, and when they felt under threat (or one of the board members got drunk) they turned the GTX Titan into a thing.

Did you even read what the OP said? He basically addressed why GK110 wasn't 'just there', and it wasn't ready until this year.





It doesn't take a genius to work this out; GK110 was always their Tesla/Quadro GPU, so it stands to reason that the yields on those chips would be pretty low and that there would be a fairly large number which were functional, but only with SMX units disabled. GTX Titan was essentially only released because Nvidia had a tonne of partially defective GK110s they wanted to get rid of, and they realised that by rehashing them for use in a gaming environment, they'd have a pretty awesome card to bring to market. The logic then follows that you use these GK110 chips for the highest end of the 7xx series too, to stay ahead of AMD.

Aside from that, the 7xx series is just a refresh of the Kepler architecture, as shown on their published product roadmaps, which have been out since before Fermi showed up.


I covered this very briefly at the end of my "Memory bus size and how it effects VRAM usage" thread, but it seems a lot of what I say gets overlooked :P Thank you for the more in-depth thread on the subject.



Did you even read what the OP said? He basically addressed why GK110 wasn't 'just there', and it wasn't ready until this year.

I call BS. Nvidia have lied a lot in the past, and they are likely to do it in the future.



I call BS. Nvidia have lied a lot in the past, and they are likely to do it in the future.

There's clear evidence to support it. It also explains why the R9 290X is superior to the Titan in many ways: it has a higher yield rate, and therefore lower production costs, plus high performance, allowing AMD to sell it for cheaper and literally undercut it.



About damn time someone clarified this, +1

This also means that Nvidia has nothing to compete with AMD at the moment unless they churn out a much improved GK104/new chip.


A card having compute capability doesn't mean that it isn't gaming or consumer oriented.

For that matter, the main difference between the Titan and the 780, apart from the number of CUDA cores, is the double-precision capability. And both of these cards are GK110 based.

With that said, it justifies the price of the Titan. I've said it before: even at its price point, the Titan is good value for CAD, in the sense that it's priced between consumer and professional products, given of course that the Titan has the double-precision feature.

The GTX 480 had double precision and it was a good compute card, and so was the 580 (not sure about the double precision on that one, but it performed well in compute). So both of these cards were compute capable and gaming oriented.
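
As a rough way to see that double-precision gap for yourself, here is a small CUDA sketch (my own, not from this thread) that times a loop of double-precision FMAs. On a Titan with the FP64 option enabled in the driver it should report a rate several times higher than on a GTX 780, which runs FP64 at a reduced rate; treat the numbers as indicative only.

// Rough FP64 throughput probe; numbers are only indicative.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fp64_fma(double* out, int iters)
{
    double a = 1.000000001, b = 0.999999999, c = threadIdx.x * 1e-9;
    for (int i = 0; i < iters; ++i)
        c = fma(a, c, b);                 // dependent double-precision FMAs
    out[blockIdx.x * blockDim.x + threadIdx.x] = c;
}

int main()
{
    const int blocks = 1024, threads = 256, iters = 100000;
    double* out = nullptr;
    cudaMalloc(&out, blocks * threads * sizeof(double));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    fp64_fma<<<blocks, threads>>>(out, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    const double flops = 2.0 * iters * blocks * threads;   // 2 flops per FMA
    printf("FP64: ~%.0f GFLOP/s (%.1f ms)\n", flops / (ms * 1e6), ms);

    cudaFree(out);
    return 0;
}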

Regarding the 480, do you remember the 460? It didn't have double-precision capabilities. Maybe there's a resemblance in strategy (if we assume that the 680 was meant to be a 660).

This is obviously Nvidia strategically segmenting their offering for different market segments.

Another thing that delayed GK110's entry into the consumer market was the BIG order from the US government for Tesla cards (yes, there was one). With that die size the chip is hard, or at least slow, to manufacture, and when you have a pre-order you will prioritize it. So it was a gamble to go into the consumer space with GK104 only.

I will remind you that GK104, when it launched, performed better than the 7970 and was also cheaper. Not only that, the chip was cheaper to produce than GK110 (because of its size). If the chip was meant to be mid-range then it came out expensive, and that's why Nvidia did a launch like that, knowing that it performed better than the red team's part.

More evidence of that is the story behind the development of the GTX 690 and Titan coolers.

You can read it here; it's a long read but interesting.

 

http://www.tomshardware.com/reviews/nvidia-history-geforce-gtx-690,3605.html

 

I don't care about AMD or NVIDIA; what worries me is that AMD is not pushing Nvidia. Nvidia's top card is the Titan atm, and it's based on a chip that launched (in a Tesla card, I don't recall which one) around the time the 7970 launched.

And now the AMD 290X seems to be competing with the Titan's performance (I hope it's a better performer than the Titan)... so like Linus said, Nvidia is probably yawning at this.

What I hope is that the Mantle API and AMD's monopoly in consoles become a game changer and that game portability from consoles to PC is well optimized. As a result I want Nvidia pushed really hard, because the improvements in the last year or so aren't that impressive.


A card having compute capability doesn't mean that it isn't gaming or consumer oriented. [...]

They just made a huge die; does that make them better for having had one around since the 600 series launch? Because believe me, AMD could also have done that, but they wished to keep the die small. Also, for general compute the 7970 wipes the floor with the Titan, and double precision makes the Titan slower in some applications.


I call BS. Nvidia have lied a lot in the past, and they are likely to do it in the future.

 

That's a pretty bold statement. I remember several years ago when AMD got caught with their TDP versus ACP(?) methods. Here's a news flash: everyone lies to some extent. No one is perfect, and if someone claims to be perfect, I'll show you a liar with one simple test. A woman asks, "Do I look fat in this?" (I know I'd lie. My wife may be small, but her right hook is something to be feared! Ouch! lol.)


Meh, this may or may not be true. I mean, sure, the 7970 was the king last year, and whatever anyone says, that is a fact.

On the other hand, GK104 is a great chip, and the only things Nvidia did wrong with the 600 series were the voltage locking and giving the cards a 256-bit memory interface; AMD has much better memory bandwidth at lower clocks.

I saw people doing BIOS hacks to unlock the voltage regulation, and the ones I saw were able to go even over 1300 MHz on the core with just over 1.2 V (GK104 is locked at 1.175 V), so there was good potential in the chip from the start. MSI Power Edition, anyone?

The revision of the chip brought some improvement, but the stupid 256-bit memory interface remained. What is even worse, some manufacturers strapped 4 GB onto that same interface; the result was lower performance at stock, but after going over 2 GB of usage the performance hit was smaller than on the 2 GB card.

I believe they could have made GK104 perform better: with more memory bandwidth and just a bit more voltage, the cards would have been great. But since they already had the finished Tesla cards, I believe it was just a trick to say that they have the fastest card around. They had to have known from the start that they were not going to sell a lot of them, but they also made a desktop gaming/workstation hybrid that would make short work of the competition, albeit at a high price.

The thing is, I don't agree or disagree with the OP. I believe the chip was the ace Nvidia had up their sleeve, just in case, so it could have gone either way, and until Nvidia says 100% that GK110 was not meant to be a 680-like product, we can only speculate...


They just made a huge die; does that make them better for having had one around since the 600 series launch? Because believe me, AMD could also have done that, but they wished to keep the die small. Also, for general compute the 7970 wipes the floor with the Titan, and double precision makes the Titan slower in some applications.

I never said the 7970 wasn't a good performer; in fact it shatters the Titan in OpenCL software. The thing is, some Adobe programs don't support it, and AutoCAD, as far as I know, is the same story. But when CUDA applications kick in, the double precision shines! I use AutoCAD a lot and I don't have 3000€ to spend on a Tesla card. I also like to game, and with Tesla cards gaming isn't great, so 1000€ is a sweet spot: it lets me game and have double precision available for AutoCAD.

Like I said, it's just Nvidia addressing different segments of the market. They could have done a 780 from the start and priced it at 600 bucks with the double precision disabled, but they launched the Titan with double precision instead... why?

So 1000 bucks kinda makes sense for a product whose capabilities sit between professional and gaming products. But at the same time we had those capabilities available before in the 480 for half the price, and that bothers me, because as a consumer I want to get the most out of the cheapest option. But I can also understand the pricing from a business perspective.

As for the die size... well, consider that what made the 7970 so relevant this last year was in part its price/performance ratio. If they had done a bigger die they would have increased the production cost, so that would go against their strategy. I really don't know if AMD had much margin on their high-end cards; just look at their stock market value...

Just a thought, no proof here...

But still, they could have done it, and they didn't.


Meh, this may or may not be true. I mean, sure, the 7970 was the king last year, and whatever anyone says, that is a fact. [...]

Well, the 7970 wasn't the king last year... the 7970 GHz Edition was, but that's the same chip in two different products; the 7970 GHz Edition is like the GTX 770, a powered-up 680.

And talking about memory bandwidth here... Nvidia had 384-bit on its flagship with the 580, then moved to 256-bit on the 680... and then back to 384-bit again for the Titan... why?

If GK104 was in fact meant to be mid-range, it also makes sense that the memory bandwidth was a bit capped.
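
The bus-width point is easy to put numbers on. Here is a quick back-of-the-envelope sketch (my own, using approximate reference memory clocks): theoretical bandwidth is just the bus width in bytes times the effective data rate, which is why the 256-bit 680 with faster GDDR5 lands roughly where the 384-bit 580 did, while the 384-bit Titan pulls well ahead.

// Back-of-the-envelope memory bandwidth; clocks are approximate reference values.
#include <cstdio>

static double gb_per_s(int bus_bits, double effective_mhz)
{
    return (bus_bits / 8.0) * effective_mhz * 1e6 / 1e9;   // bytes/s -> GB/s
}

int main()
{
    printf("GTX 580   (384-bit @ ~4008 MHz effective): ~%.0f GB/s\n", gb_per_s(384, 4008));
    printf("GTX 680   (256-bit @ ~6008 MHz effective): ~%.0f GB/s\n", gb_per_s(256, 6008));
    printf("GTX Titan (384-bit @ ~6008 MHz effective): ~%.0f GB/s\n", gb_per_s(384, 6008));
    return 0;
}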


 

Not only were the yields on GK110 a nightmare, but every single working GK110 chip was reserved for the Titan supercomputer; this alone added months and months of delay before GK110 could even be considered commercially viable to produce and use in a consumer product.

 

I could be wrong, but didn't Nvidia put the GK110 chip into the Titan supercomputer only after the 7970 was released and they knew they didn't have to compete with it at such a high level?

I didn't think it was made for the industrial market from the start, only after AMD's 7xxx series release.

 

About damn time someone clarified this, +1

This also means that Nvidia has nothing to compete with AMD at the moment unless they churn out a much improved GK104/new chip.

 

They have their GK110 chip, which still has CUDA cores available, and Maxwell in less than half a year.



That's a pretty bold statement. I remember several years ago when AMD got caught with their TDP versus ACP(?) methods. Here's a news flash: everyone lies to some extent. No one is perfect, and if someone claims to be perfect, I'll show you a liar with one simple test. A woman asks, "Do I look fat in this?" (I know I'd lie. My wife may be small, but her right hook is something to be feared! Ouch! lol.)

Hey, I never said AMD are perfect; I think all big companies have done something stupid in the past, whether through natural human error or just trying to appease someone. It's just my opinion, and I'm fine if you argue against it.


