Origin PC Was Paid, Handsomely, To Remove AMD GPU Options.

So who paid off Origin? It doesn't really seem like the thing Nvidia would do; neither of those companies is that low. I could be wrong, of course.



Yeah, go ahead Nvidia, bribe the Xbox One and PS4 makers to change their processors too, and ask them not to use anything from AMD. Let's see how much money you have :P

I really hope that's a joke, because Nvidia came out and said they didn't want to deal with developing the chips for the consoles, or something like that.


@Lukiose The GK110 wasn't in a Quadro until about last month, or possibly the month before; it was first used in the Tesla K20.

Aight, shoot me for that. But the point is, the Titan is not supposed to be the 680. The GK110 is a chip designed for extremely high-end compute cards; the dies that come out of production half-botched are sold as the Titan.

Nvidia likely can't churn out their 800 series before Q1 2014, or anytime within the next two months, which is why they had to resort to this kind of crap, but I think it pretty much backfired.


I really hope that's a joke, because Nvidia came out and said they didn't want to deal with developing the chips for the consoles, or something like that.

It's pretty much sour grapes really...


So who paid off Origin? It doesn't really seem like the thing Nvidia would do; neither of those companies is that low. I could be wrong, of course.

You can view it this way: they tightened their partnership. Origin dropped AMD because they would now have a significantly better handle on Nvidia products, and they want to provide the best for their consumers.


@beebskadoo What you said makes no sense at all, because the GK110 is a large chip used for Quadros. The dies that come out half-fucked will be sold as the Titan, with half of the cores disabled; power consumption will likewise decrease. This, however, doesn't change the fact that a large die like the GK110 is immensely expensive to produce. Why else do you think a Quadro card costs upwards of $2,000 to $4,000?

Half of the cores disabled? Titan only has 1 SMX unit disabled, but it retains a lot of the GPU compute functionality that the Quadro cards have. So what you said makes no sense.

I was not arguing price here, I was speaking of sheer performance.

Quadro cards cost so much because they're geared toward a very specific market, and those in that market are willing to pay for the best. That's why they're pricey. Not saying that R&D isn't involved or anything like that.

If AMD had released a part that performed on par with GK110 (as Nvidia had sort of expected), Nvidia would have eaten the cost in order to release it in the proper time frame and at a competitive price.

Keep in mind, Titan has been out for 9 months. It took AMD 9 months to release something that outperforms Titan, and Nvidia is at most 3 months away from another refresh.



Surely this isn't allowed under antitrust laws?


Yeah, go ahead Nvidia, bribe the Xbox One and PS4 makers to change their processors too, and ask them not to use anything from AMD. Let's see how much money you have :P

That was never really likely. With the architecture developers wanted, it made sense to go with an all-AMD system.



Surely this isn't allowed under antitrust laws?

Sadly, it happens all the time. *Looks at Sony and Microsoft*



The 600 series came out after AMD released the 7000 series, and as you know the 680 performed basically neck and neck with the 7970. Well, the 680 was supposed to be the 660/660 Ti; this was rumored from the beginning. Nvidia saw what AMD released and moved their GPU lineup a few pegs down, so to speak.

Exactly why you don't want to side-grade those GTX 580s you have. Frankly, even a single Titan can't match your GTX 580 2-way SLI. Nvidia's greatest achievement was, and will remain, the GTX 500-series Fermi GPU. The GK110 isn't a significant jump up, and they were boasting that the GTX 680 could out-perform 3-way SLI GTX 580s. Well, as soon as they updated the drivers for the GTX 500 series, they were instant liars: GTX 580s perform nearly identically to the GTX 680. Then they push the GK104 GPU from the GTX 600 series into the GTX 700 series and call it a new line-up? I'll admit there's some optimization in there, but it's the same GPU in all but the 780 and Titan. Admittedly, AMD is now doing much the same, pushing their HD 7000-series GPUs into some of the R-series cards while having a newly designed GPU for their top end.

Here we have Nvidia, doing all they can to BS their customers and even trying to buy off retailers to remove AMD from their line-up; and here we have AMD, pushing innovation any and every way they can, through thick and thin, all while being constantly berated by fans whose loyalty their competition bought.

More details on your point as well: Nvidia was having difficulty getting their architecture down to 28nm like AMD did. The reason GK110 wasn't launched as their flagship when the GTX 600 series came out was that it wasn't done yet, and they were in a rush to stop losing market share to AMD. They poured LOTS of money into marketing, hype, fake benchmarks, public endorsements, and more, just to get their faces back in the market against AMD's overwhelming performance for the cost difference. GK110 was delayed because they couldn't get it ready and working fast enough while they were losing market share to AMD quickly. Yes, the GTX 700 series is what the GTX 600 series should've been, but imagine if they had waited that long to get the GTX 600 series out. They'd likely have gone under by now.



Half of the cores disabled? Titan only has 1 SMX unit disabled, but it retains a lot of the GPU compute functionality that the Quadro cards have. So what you said makes no sense.

I was not arguing price here, I was speaking of sheer performance.

Quadro cards cost so much because they're geared toward a very specific market, and those in that market are willing to pay for the best. That's why they're pricey. Not saying that R&D isn't involved or anything like that.

If AMD had released a part that performed on par with GK110 (as Nvidia had sort of expected), Nvidia would have eaten the cost in order to release it in the proper time frame and at a competitive price.

Keep in mind, Titan has been out for 9 months. It took AMD 9 months to release something that outperforms Titan, and Nvidia is at most 3 months away from another refresh.

http://www.tomshardware.com/answers/id-1817724/nvidia-disable-smx-gtx-titan.html

Alright, not half the cores disabled.

You still don't get the point (after 3 pages). Basically, AMD did not produce the R9 series with the aim of taking down the GK110 crown at all. What they accomplished, however, is a card that outperforms the Titan with a significantly smaller die size, and it is manufactured to be exactly that: a 'gaming grade' chip. Read the link up top. The thing is, Nvidia has nothing to compete with the R9 series at the moment; the GK110 is produced as a workstation/server-grade chip, and defective dies are sold as the 780/Titan [780/Titan grade, not a full-fledged GK110]. This means the GK110 chips are much more expensive to produce than AMD's Hawaii XT, yet they still perform beneath it once the R9 290X is out, if the early benchmarks are true.
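To put rough numbers on that die-cost point, here's a minimal sketch using the standard first-order dies-per-wafer approximation. The die sizes are my own ballpark figures, not from this thread: roughly 561 mm² for GK110 and 438 mm² for Hawaii, both on 300 mm wafers.

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    # Gross dies per wafer: wafer area over die area, minus an edge-loss term.
    radius = wafer_diameter_mm / 2
    return (math.pi * radius ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(round(dies_per_wafer(561)))  # GK110: ~98 candidate dies per wafer
print(round(dies_per_wafer(438)))  # Hawaii: ~129 candidate dies per wafer

That's roughly 30% more Hawaii dies per wafer before yield even enters the picture, and defect yield only gets worse as the die grows.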

And from previous roadmaps, the 800 series is planned for Q1 2014. Even if they tried to push the schedule ahead, unless Nvidia already has the 800 series ready for release in a month [hint: they don't], the earliest we might see an 800 would be December. So there is every reason for Nvidia to be behind this nonsense, to sidetrack AMD's success until they can churn out a competing product. Nothing is confirmed yet, so... I shall hold judgment aside. (Thread connection made :P)

-Thread derail ended-


The 680 had a much smaller die size at 294 mm², a lower TDP, lower power consumption, was quieter, and ran cooler than the 7970 and 7970 GHz Edition.

GK110 was the standard set for the top-tier card; AMD launched well below that standard, and Nvidia was still able to beat it.

Yes, the die size on GK110 was big, but they were still able to keep the TDP at 250W, matching the 7970.

You could look at it a number of ways. I'm just presenting the facts, just as you are :)

I want to point out a few errors in your statements first: a Titan does in fact consume more power than a 7970, and it does in fact create more heat, because heat is directly tied to power consumption (virtually every watt a GPU draws is dissipated as heat). I'm not sure where you got the impression that this was not the case, unless you were quoting Nvidia's marketed TDP, which takes me back again to the Nvidia marketing argument.

[Chart: power consumption comparison]

Another thing I want to point out: for the 680 to match the 7970, Nvidia had to clock it 14% higher.

GTX 680 average clock: 1058 MHz (source)

HD 7970 average clock: 925 MHz (source)

At the same clock speeds, the 7970 is significantly faster, and this is even before the driver breakthrough in October of last year that improved memory management and upped performance across the board by 10-15%.

Keep in mind that the advertised 1006 MHz clock speed is not the actual average clock when running games; the actual average clock is higher, at 1058 MHz.
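That 14% falls straight out of those two averages, if you want to check the arithmetic:

1058 MHz / 925 MHz ≈ 1.144, i.e. the 680 ran about 14% above the 7970's average clock.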

[Chart: 7970 GHz Edition vs. 680]

So the Tahiti XT is indeed larger, but it is faster, and significantly so at 2560x1440. You have to remember that the reason the 7970 is much faster than the 680 at higher resolutions and with anti-aliasing is its wider memory bus, which takes more die area on the chip.

As for the power consumption argument, the 680 consumes less power than the 7970 because of two things: a smaller chip overall and, most importantly, a lack of compute performance.

Even the Titan is outclassed by the 7970 in computational performance.

http://www.tomshardware.com/reviews/geforce-gtx-titan-performance-review,3442-10.html



Exactly why you don't want to side-grade those GTX 580s you have. Frankly, even a single Titan can't match your GTX 580 2-way SLI. Nvidia's greatest achievement was, and will remain, the GTX 500-series Fermi GPU. The GK110 isn't a significant jump up, and they were boasting that the GTX 680 could out-perform 3-way SLI GTX 580s. Well, as soon as they updated the drivers for the GTX 500 series, they were instant liars: GTX 580s perform nearly identically to the GTX 680. Then they push the GK104 GPU from the GTX 600 series into the GTX 700 series and call it a new line-up? I'll admit there's some optimization in there, but it's the same GPU in all but the 780 and Titan. Admittedly, AMD is now doing much the same, pushing their HD 7000-series GPUs into some of the R-series cards while having a newly designed GPU for their top end.

Here we have Nvidia, doing all they can to BS their customers and even trying to buy off retailers to remove AMD from their line-up; and here we have AMD, pushing innovation any and every way they can, through thick and thin, all while being constantly berated by fans whose loyalty their competition bought.

More details on your point as well: Nvidia was having difficulty getting their architecture down to 28nm like AMD did. The reason GK110 wasn't launched as their flagship when the GTX 600 series came out was that it wasn't done yet, and they were in a rush to stop losing market share to AMD. They poured LOTS of money into marketing, hype, fake benchmarks, public endorsements, and more, just to get their faces back in the market against AMD's overwhelming performance for the cost difference. GK110 was delayed because they couldn't get it ready and working fast enough while they were losing market share to AMD quickly. Yes, the GTX 700 series is what the GTX 600 series should've been, but imagine if they had waited that long to get the GTX 600 series out. They'd likely have gone under by now.

The 680 does have a legitimate 20+ fps boost in a lot of games. Do I think it would be worth an upgrade? No, but an upgrade in a year or two sounds pretty great. I know Nvidia was having 28nm manufacturing woes, but that doesn't mean they didn't have Titan completed; it just meant that they didn't have enough chips for their gaming GPUs. Keep in mind that nearly 20,000 K20 GK110 GPUs power the TITAN supercomputer. That was 20,000 cards they had to dedicate to that. So it was a combination of things that delayed the GK110 chip from the enthusiast desktop. Nvidia could probably have launched Titan with the 600 series, but supply would have been very low at first. Look at the 7970 at launch: it sold out quickly because stock was lower than had been hoped for.

They would only have had to wait roughly 6 months (after the 600 series launch) in order to have a 'Titan' as the 680. Yeah, that's a lot of time, but considering it has been almost 2 years since the 7000 series, it's not that long. Nvidia wouldn't have gone under, no way.



I want to point out a few errors in your statements first: a Titan does in fact consume more power than a 7970, and it does in fact create more heat, because heat is directly tied to power consumption (virtually every watt a GPU draws is dissipated as heat). I'm not sure where you got the impression that this was not the case, unless you were quoting Nvidia's marketed TDP, which takes me back again to the Nvidia marketing argument.

[Chart: power consumption comparison]

Another thing I want to point out: for the 680 to match the 7970, Nvidia had to clock it 14% higher.

GTX 680 average clock: 1058 MHz (source)

HD 7970 average clock: 925 MHz (source)

At the same clock speeds, the 7970 is significantly faster, and this is even before the driver breakthrough in October of last year that improved memory management.

Keep in mind that the advertised 1006 MHz clock speed is not the actual average clock when running games; the actual average clock is higher, at 1058 MHz.

[Chart: 7970 GHz Edition vs. 680]

So the Tahiti XT is indeed larger, but it is faster, and significantly so at 2560x1440. You have to remember that the reason the 7970 is much faster than the 680 at higher resolutions and with anti-aliasing is its wider memory bus, which takes more die area on the chip.

As for the power consumption argument, the 680 consumes less power than the 7970 because of two things: a smaller chip overall and, most importantly, a lack of compute performance.

Even the Titan is outclassed by the 7970 in computational performance.

http://www.tomshardware.com/reviews/geforce-gtx-titan-performance-review,3442-10.html

I wasn't saying Titan used less power or was cooler; I was saying the 680 was. I acknowledge that the bus size on the 680 is its Achilles heel, and I understand Fermi has better compute and better bandwidth performance than Kepler.

OK, you say the average speed of the Nvidia card is higher, but the speed is higher due to GPU Boost. AMD has something similar, but the Nvidia card pulls ahead because it uses less power, which gives it more headroom for GPU Boost.

BTW, you compare the average clocks of the 680 vs. the 7970, while the graphs you show are the 7970 GHz Edition vs. the 680, which is a tangible difference.

The reason number crunching isn't Nvidia's strong point compared to the 7970 is that CUDA is Nvidia's thing, and they push it first and foremost; OpenCL is an afterthought with Nvidia. However, when OpenCL vs. CUDA comes into play, such as with Adobe programs, CUDA destroys it.



I wasn't saying Titan used less power or was cooler; I was saying the 680 was. I acknowledge that the bus size on the 680 is its Achilles heel, and I understand Fermi has better compute and better bandwidth performance than Kepler.

OK, you say the average speed of the Nvidia card is higher, but the speed is higher due to GPU Boost. AMD has something similar, but the Nvidia card pulls ahead because it uses less power, which gives it more headroom for GPU Boost.

BTW, you compare the average clocks of the 680 vs. the 7970, while the graphs you show are the 7970 GHz Edition vs. the 680, which is a tangible difference.

The reason number crunching isn't Nvidia's strong point compared to the 7970 is that CUDA is Nvidia's thing, and they push it first and foremost; OpenCL is an afterthought with Nvidia. However, when OpenCL vs. CUDA comes into play, such as with Adobe programs, CUDA destroys it.

I compared the 7970 and the 680 clock for clock, so to speak; that's why I put the 7970 GHz Edition against the 680.

OpenCL is significantly faster than CUDA in Adobe CC.

I want to point out another example where the situation is reversed, i.e. AMD has the smaller but faster chip.

The HD 7870 from AMD uses a Pitcairn GPU sized at 212 mm²; the 7870 proved to be consistently ahead of the GTX 660 in all of Linus's graphics showdowns by 10-15%, even though the GK106 chip in the 660 is sized at 221 mm².
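Putting those two numbers together (taking the low end, 10%, of that range; my arithmetic, not from the post), the performance-per-area gap is even wider than the raw lead:

(1.10 × 221) / 212 ≈ 1.15, i.e. roughly 15% more performance per mm² for Pitcairn.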


The 680 does have a legitimate 20+ fps boost in a lot of games. Do I think it would be worth an upgrade? No, but an upgrade in a year or two sounds pretty great. I know Nvidia was having 28nm manufacturing woes, but that doesn't mean they didn't have Titan completed; it just meant that they didn't have enough chips for their gaming GPUs. Keep in mind that nearly 20,000 K20 GK110 GPUs power the TITAN supercomputer. That was 20,000 cards they had to dedicate to that. So it was a combination of things that delayed the GK110 chip from the enthusiast desktop. Nvidia could probably have launched Titan with the 600 series, but supply would have been very low at first. Look at the 7970 at launch: it sold out quickly because stock was lower than had been hoped for.

They would only have had to wait roughly 6 months (after the 600 series launch) in order to have a 'Titan' as the 680. Yeah, that's a lot of time, but considering it has been almost 2 years since the 7000 series, it's not that long. Nvidia wouldn't have gone under, no way.

Well, not gone under as in bankrupt; I meant, and should've said, "under AMD's market share."

20+ fps? In what games? I have two GTX 500-series cards - both EVGA Ultra Classified, one the 560 Ti-448 as a backup and the other a 580 - and the 560 Ti-448 scores on par with a stock GTX 670 or GTX 580 in MOST tests and in-game benchmarks, while the 580 scores as well as the GTX 680 and GTX 770 in the same ways. I also wouldn't dare consider the GTX 600 series or the GTX 700 series an upgrade based on that. Frankly, even if the GK110 was ready and could've been produced fast enough, it's still an overwrought chip with too much production cost behind it to be practical in the consumer-desktop market. Only people with too much money and not enough sense to care would or should be buying these. Nvidia is clearly getting desperate; all the signs are there. I'm just wondering if they will return to their past calling of being a workstation GPU for rendering and such, or if they truly intend to keep pushing the gaming market without trying to innovate it in any way. Frankly, I feel dirty having been lured by the hype into trying Nvidia. I will never deny their performance, but only the performance of the GTX 500 series being ahead of its time. I can't help but feel betrayed and lied to. They exaggerate their products' performance like fanboys, promoting and encouraging that attitude across all public displays. Pulling under-handed things like this only makes matters worse. I'll gladly switch back to AMD. The only thing they've ever over-hyped was Bulldozer, but unlike Nvidia's over-hyping, they got slammed for it. Nvidia's over-hyping just gets dismissed like it never happened. Frankly, I'm sick of it.



I compared the 7970 and the 680 clock for clock, so to speak; that's why I put the 7970 GHz Edition against the 680.

OpenCL is significantly faster than CUDA in Adobe CC.

I want to point out another example where the situation is reversed, i.e. AMD has the smaller but faster chip.

The HD 7870 from AMD uses a Pitcairn GPU sized at 212 mm²; the 7870 proved to be consistently ahead of the GTX 660 in all of Linus's graphics showdowns by 10-15%, even though the GK106 chip in the 660 is sized at 221 mm².

I can't find anything on Adobe CC performance, so I'll have to take your word for it. But to my understanding, CS6 was better with CUDA.

The 7870 was the 660 Ti's competitor, unless I'm completely wrong.

7970-680

7950-670

7870-660ti

7850-660

7790-650ti boost

7770-650ti



Well, not gone under as in bankrupt; I meant, and should've said, "under AMD's market share."

20+ fps? In what games? I have two GTX 500-series cards - both EVGA Ultra Classified, one the 560 Ti-448 as a backup and the other a 580 - and the 560 Ti-448 scores on par with a stock GTX 670 or GTX 580 in MOST tests and in-game benchmarks, while the 580 scores as well as the GTX 680 and GTX 770 in the same ways. I also wouldn't dare consider the GTX 600 series or the GTX 700 series an upgrade based on that. Frankly, even if the GK110 was ready and could've been produced fast enough, it's still an overwrought chip with too much production cost behind it to be practical in the consumer-desktop market. Only people with too much money and not enough sense to care would or should be buying these. Nvidia is clearly getting desperate; all the signs are there. I'm just wondering if they will return to their past calling of being a workstation GPU for rendering and such, or if they truly intend to keep pushing the gaming market without trying to innovate it in any way. Frankly, I feel dirty having been lured by the hype into trying Nvidia. I will never deny their performance, but only the performance of the GTX 500 series being ahead of its time. I can't help but feel betrayed and lied to. They exaggerate their products' performance like fanboys, promoting and encouraging that attitude across all public displays. Pulling under-handed things like this only makes matters worse. I'll gladly switch back to AMD. The only thing they've ever over-hyped was Bulldozer, but unlike Nvidia's over-hyping, they got slammed for it. Nvidia's over-hyping just gets dismissed like it never happened. Frankly, I'm sick of it.

Metro Last Light

BF3

Dirt 3

Skyrim

just to name a few



I can't find anything on Adobe CC performance, so I'll have to take your word for it. But to my understanding, CS6 was better with CUDA.

The 7870 was the 660 Ti's competitor, unless I'm completely wrong.

7970-680

7950-670

7870-660ti

7850-660

7790-650ti boost

7770-650ti

7970GE-680

7970-670... and so on.



I can't find anything on Adobe CC performance, so I'll have to take your word for it. But to my understanding, CS6 was better with CUDA.

The 7870 was the 660 Ti's competitor, unless I'm completely wrong.

7970-680

7950-670

7870-660ti

7850-660

7790-650ti boost

7770-650ti

The 7870 is the competitor to the 660: the 660 released at $230 when the 7870 was selling for $240 and the 7850 was selling for $200.

Even at the architectural level, the GTX 660 has exactly 62.5% of the compute unit count of the 680, and the 7870 likewise has exactly 62.5% of the compute unit count of the 7970.

Compute unit count as in unified shaders and texture mapping units.
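For reference, the unit counts behind that 62.5% (spec-sheet numbers, not quoted from this thread), so you can check the arithmetic:

960 / 1536 = 0.625 (GTX 660 vs. GTX 680 CUDA cores)
1280 / 2048 = 0.625 (HD 7870 vs. HD 7970 stream processors)

The TMU counts, 80 vs. 128 on both sides, give the same 0.625.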

The fact that you thought the 660 was the competitor to the 7850 proves that AMD does have a great architecture.


I read the other story last night (well, my time :D) and I said it made no sense to remove AMD, as they would just be removing the possibility of some sales; however few, they would be removing that opportunity.

IF this is true, what a bunch of scumbags: one for accepting the payment, and two for actually going through with it and then making that statement. I mean, companies make exclusive deals all the time; why the cloak and dagger? Now they look like scum. If this is correct, then it makes more sense than before, but... dirty scumbags.

I hope it's not accurate, TBH, but then we would be back at it not making sense.


Metro Last Light

BF3

Dirt 3

Skyrim

just to name a few

Metro LL and Skyrim are flat-out wrong. The 560 Ti-448 I have will match a 670 in both, and the 580 I use will match a 680 in both. Just a heads-up here: I'm my city's primary custom-PC builder. I benchmark every system I build for all performance vectors based on my client's usage intent. I do everything I can to make sure their PCs perform as well as they can. Meanwhile, I'm on an 1100T @ 4GHz on an AM3+ 990FX, 2x8GB 1600MHz 8-8-8-24, and the aforementioned GTX 580 Ultra Classified. I've yet to see a GTX 680 do more than match my PC in any build. The one GTX 770 I've used came close to doing somewhat better. Nobody around here has been dumb enough to get a GTX 780 or Titan that I'm aware of. And the many AMD-based builds I do have me frequently lamenting my choice to try Nvidia.



Guys, take the talk of GPU performance to another thread, as that's not what this one is really about, so it's completely off topic.


The 7870 is the competitor to the 660: the 660 released at $230 when the 7870 was selling for $240 and the 7850 was selling for $200.

Even at the architectural level, the GTX 660 has exactly 62.5% of the compute unit count of the 680, and the 7870 likewise has exactly 62.5% of the compute unit count of the 7970.

Compute unit count as in unified shaders and texture mapping units.

The fact that you thought the 660 was the competitor to the 7850 proves that AMD does have a great architecture.

The fact that you think I don't think AMD cards have a great architecture means you're overlooking something. I love AMD <3 and I love their products. I am just giving a counter-argument based on information I have gathered.

From an AnandTech article

[AnandTech benchmark charts]

7870 wins, but it's within the margin of error.

Same margin of error.

7870 wins.

Tie.

660 wins, but within the margin of error.

660 wins.

660 wins.

CPU bottleneck, so basically a tie.

Sorry for all of the graphs, but they trade blows and mostly stay within the margin of error, according to AnandTech, that is...



Guys, take the talk of GPU performance to another thread, as that's not what this one is really about, so it's completely off topic.

The OP is in on the discussion; it's sort of his call, TBH.


