
[UPDATE 3 - Sapphire's Reference Costs $649 USD] AMD Reveals R9 Nano Benchmarks Ahead of Launch

HKZeroFive

He is just Intel's holy crusader. He will always defend Intel, even when the situation does not call for it. He also gets overly aggressive when you ask him to supply any proof of his claims, and will only cite cryptic metaphors based on Intel's history rather than a source confirming them. Fighting against a man with blind devotion is almost entirely futile. Luckily, I love to argue.

 

If you want to see a truly hilarious post from him, take a look at this thread, where he claims Skylake's GT3e will match the GTX 950. He genuinely believes it! 

http://linustechtips.com/main/topic/433764-intel-plans-to-support-vesas-adaptive-sync/?p=5834861

 

Notice how the man brings up Intel in a thread that has nothing to do with them, completely unprovoked. http://linustechtips.com/main/topic/436531-update-gtx-970-mitx-comparison-amd-reveals-r9-nano-benchmarks-ahead-of-launch/?p=5861464

 

Face it @patrickjp93, you have a problem. I know it is petty of me to call you out on it, but our last two fights ended with you not providing proof of your claims. Perhaps if you are shown your shortcomings, you will be less likely to fall prey to them in the future.

I provided proof based on a few facts and mathematical extrapolation. Refute the math and you win. That said, you can't.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Where is AMD getting design wins? Consoles? That's pennies per chip, which barely offsets the R&D costs.

You do realize that both Sony and MS contributed heavily to the R&D? Mostly MS, because of their more complicated design.

The beauty of semi-custom is that they already had most of the IP.

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


I provided proof based on a few facts and mathematical extrapolation. Refute the math and you win. That said, you can't.

I wouldn't call pulling random numbers out of thin air "a few facts". You made the assumption that the architectural improvement from Broadwell to Skylake in GT3e would provide a 20% performance boost on its own. Then you factored in a raw 50% increase in performance due to the 50% increase in EUs, yet the clock rates for the Skylake iGPUs were lowered from 1150 MHz to 1000 MHz, so performance would not scale linearly.
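To put numbers on that, here is a minimal sketch of the naive scaling arithmetic (the 48 → 72 EU counts for Broadwell GT3e → Skylake GT4e are my addition; the clocks are the ones quoted above, and treating throughput as EUs × clock is itself a best-case simplification):

```python
# Best-case scaling estimate: throughput ~ EU count * clock.
# Real games scale worse than this, which is the point being argued.
broadwell_eus, broadwell_mhz = 48, 1150  # Iris Pro 6200 (Broadwell GT3e)
skylake_eus, skylake_mhz = 72, 1000      # Skylake GT4e, per the clocks cited above

scaling = (skylake_eus / broadwell_eus) * (skylake_mhz / broadwell_mhz)
print(f"{scaling:.2f}x")  # ~1.30x, not the flat 1.5x a raw EU count suggests
```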

 

Let's not forget your awesome mathage here. 

 

 

Iris Pro 6200 at stock is only 10-15% behind the 750. Overclock the 6200 and that gap disappears. With a flat 20% boost over the previous generation, it'll line up perfectly with the 950 at stock. Mind you, it will require top-notch RAM like the Ripjaws V @ 3200 MHz or higher, but that's where the math lines up as of now.

 
First of all, the Iris Pro 6200 is roughly 20% behind the GTX 750 with both at stock. Secondly, overclocking one card and not the other is senseless in any real test, because Maxwell overclocks insanely high. Thirdly, you provided no proof of that flat 20% boost coming from the newer generation. No website or source on the net provides this number anywhere. That is what I asked you to prove, and you failed to do so. By the way, the GTX 950 is 30% faster than the GTX 750 Ti, which is already 15% faster than the GTX 750 on average. How can a flat 20% boost over the 750 make it on par with a 950?
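Compounding the two quoted gaps shows why the claim doesn't add up; a quick sketch using only the percentages stated above:

```python
# Stack the quoted gaps, with the GTX 750 as the 1.0 baseline.
gtx_750 = 1.00
gtx_750_ti = gtx_750 * 1.15    # "15% faster than the GTX 750"
gtx_950 = gtx_750_ti * 1.30    # "30% faster than the GTX 750 Ti"

iris_6200 = gtx_750 / 1.20     # "roughly 20% behind the GTX 750"
iris_next = iris_6200 * 1.20   # the claimed flat 20% generational boost

print(f"GTX 950: {gtx_950:.2f}, next-gen Iris: {iris_next:.2f}")
# GTX 950 lands at ~1.49; next-gen Iris at ~1.00 -- it only catches the 750,
# not the 950.
```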
 
For someone who proclaims his own intelligence in damn near every post, you sure are bad at math.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!



Where is AMD getting design wins? Consoles? That's pennies per chip, which barely offsets the R&D costs. And now AMD has a monopoly there. In laptops and tablets, Kaveri and Carrizo are only selling on the low end with low margins. In dGPUs, the market is shrinking and AMD is losing market share. In accelerators, they've now fallen behind the Xeon Phi in performance, efficiency, and popularity. What's left? In CPUs, it's the same story. All AMD has is price on its side there. If you'd like to refute these claims, feel free.

 

You dismiss their ability to do custom solutions right off; you assume their choice was based on price. Period.

 

It doesn't matter if it's pennies or dollars when you dismiss AMD's capacity to produce something that, currently, no one else can offer in the market. So right off the bat, it's not price.

 

Price is one factor among many others. AMD offers solutions that perfectly serve many purposes, and you know it. For many tasks, going Intel or AMD won't make a difference.

 

In fact, AMD GPUs are the proof that PRICE means jack shit against NVIDIA marketing. It's not about tech quality, because on that front they are more than on par with NVIDIA; in fact, AMD has the advantage in hardware for the new APIs, and game devs have been craving to use it for a long time. It's about how they sell it.

 

I don't get it; you get so tangled up in such details that you completely lose track of the big picture. You shouldn't be like that. It's not good for you at all, both as a person and as a professional. Your statement in another post about dumping AMD stocks shows it...


 

I wouldn't call pulling random numbers out of thin air "a few facts". You made the assumption that the architectural improvement from Broadwell to Skylake in GT3e would provide a 20% performance boost on its own. Then you factored in a raw 50% increase in performance due to the 50% increase in EUs, yet the clock rates for the Skylake iGPUs were lowered from 1150 MHz to 1000 MHz, so performance would not scale linearly.

 

Let's not forget your awesome mathage here. 

 

 
 
First of all, the Iris Pro 6200 is roughly 20% behind the GTX 750 with both at stock. Secondly, overclocking one card and not the other is senseless in any real test, because Maxwell overclocks insanely high. Thirdly, you provided no proof of that flat 20% boost coming from the newer generation. No website or source on the net provides this number anywhere. That is what I asked you to prove, and you failed to do so. By the way, the GTX 950 is 30% faster than the GTX 750 Ti, which is already 15% faster than the GTX 750 on average. How can a flat 20% boost over the 750 make it on par with a 950?
 
For someone who proclaims his own intelligence in damn near every post, you sure are bad at math.

 

TBH, only being 20% behind a GTX 750 is really impressive for an iGPU without the benefit of dedicated vRAM.

 

Also, people should never assume. Otherwise they'll "make an ass out of you and me".

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


TBH, only being 20% behind a GTX 750 is really impressive for an iGPU without the benefit of dedicated vRAM.

Oh, no doubt. It blew my mind when I saw it benched and reviewed. Nobody is questioning how impressive it is. What irks me is that he is spreading misinformation about just how strong it is, and how strong its successor will be, without providing any proof. The Iris Pro 6200 is about 20% weaker than the GTX 750 in 1080p benchmarks on medium settings. His original claim, that the next generation of Iris Pro graphics will rival the GTX 950 after receiving a 20% boost in speed, just makes no sense. Anyone that has followed the GTX 950 sees that it is 30% faster than the GTX 750 Ti, which is already 15% faster than the GTX 750.

 

Intel's iGPUs are making great strides, and it won't be too long before they have a serious impact on budget gaming machines (assuming they start putting these things on i3s and in Celeron laptops), but that time is not now. With the 50% increase in EUs coming next generation, we might see the next-generation Iris Pro graphics surpass the GTX 750 and maybe even rival a GTX 750 Ti. That alone would still be an impressive feat.


Oh, no doubt. It blew my mind when I saw it benched and reviewed. Nobody is questioning how impressive it is. What irks me is that he is spreading misinformation about just how strong it is, and how strong its successor will be, without providing any proof. The Iris Pro 6200 is about 20% weaker than the GTX 750 in 1080p benchmarks on medium settings. His original claim, that the next generation of Iris Pro graphics will rival the GTX 950 after receiving a 20% boost in speed, just makes no sense. Anyone that has followed the GTX 950 sees that it is 30% faster than the GTX 750 Ti, which is already 15% faster than the GTX 750.

 

Intel's iGPUs are making great strides, and it won't be too long before they have a serious impact on budget gaming machines (assuming they start putting these things on i3s and in Celeron laptops), but that time is not now. With the 50% increase in EUs coming next generation, we might see the next-generation Iris Pro graphics surpass the GTX 750 and maybe even rival a GTX 750 Ti. That alone would still be an impressive feat.

Which is only compounded by how much their iGPUs have changed since the days of Intel's GMA (Graphics Media Accelerator), less than 10 years ago.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

I didn't contest that, ever.

 

 

Where is AMD getting design wins? Consoles? That's pennies per chip, which barely offsets the R&D costs. And now AMD has a monopoly there. In laptops and tablets, Kaveri and Carrizo are only selling on the low end with low margins. In dGPUs, the market is shrinking and AMD is losing market share. In accelerators, they've now fallen behind the Xeon Phi in performance, efficiency, and popularity. What's left? In CPUs, it's the same story. All AMD has is price on its side there. If you'd like to refute these claims, feel free.

To be fair, AMD makes $15 off of every console that's produced.

This is a signature.


 

I wouldn't call pulling random numbers out of thin air "a few facts". You made the assumption that the architectural improvement from Broadwell to Skylake in GT3e would provide a 20% performance boost on its own. Then you factored in a raw 50% increase in performance due to the 50% increase in EUs, yet the clock rates for the Skylake iGPUs were lowered from 1150 MHz to 1000 MHz, so performance would not scale linearly.

 

Let's not forget your awesome mathage here. 

 

 
 
First of all, the Iris Pro 6200 is roughly 20% behind the GTX 750 with both at stock. Secondly, overclocking one card and not the other is senseless in any real test, because Maxwell overclocks insanely high. Thirdly, you provided no proof of that flat 20% boost coming from the newer generation. No website or source on the net provides this number anywhere. That is what I asked you to prove, and you failed to do so. By the way, the GTX 950 is 30% faster than the GTX 750 Ti, which is already 15% faster than the GTX 750 on average. How can a flat 20% boost over the 750 make it on par with a 950?
 
For someone who proclaims his own intelligence in damn near every post, you sure are bad at math.

 

They weren't random numbers. They were the performance ratios of various GPUs in their existing SKUs, then accounting for the performance gain of Skylake and a 50% larger SKU WITH improved eDRAM. Are you seriously this defensive, or is it actual stupidity? The facts were provided.

 

It's not an assumption. It's already proven. Again, already taken care of, since the 1050 MHz GT2 of Skylake is 20% more powerful than the GT2 SKUs of Broadwell. And that's before eDRAM gets involved. I reiterate, are you just this stupid?

 

It's 15% behind in the newest benchmark rounds. You're quoting numbers almost a year old now. Iris Pro also overclocks much better than Maxwell. There are records of it at 2.2 GHz on air.

 

AnandTech provides it in their benchmark of the 6700K: http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation.

 

Let's take the Iris Pro 6200 as 1.0 on the performance scale. The 750 is 1.2.

950 is 1.3 * 1.2 = 1.56 in your fantasy land where the 750 is still that far ahead (which it isn't after 9 new Intel drivers since the launch).

Skylake GT3e = 1.2 (before accounting for the improvements to the eDRAM which aren't yet documented).

GT4e = 1.5 * 1.2 = 1.8, and this is all at stock clocks. Overclock the 950 against that, and the 950 will lose until the eDRAM isn't enough cache to make up the overall bandwidth and memory buffer difference.

 

Excuse me, but whose math is flawed, and why? Prove me wrong, because the math doesn't lie, so either the base numbers are wrong (and I've provided evidence that they aren't), or there's some secret sauce in the 950. I eagerly await this half-baked response of yours. It's not cockiness when you actually are the best in the room. Bring someone better.
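For the record, here is that ladder of ratios spelled out as a minimal sketch; it restates the numbers asserted above without endorsing them:

```python
# The claimed performance ladder, with the Iris Pro 6200 as the 1.0 baseline.
iris_6200 = 1.0
gtx_750 = 1.2                   # the asserted stock-vs-stock gap
gtx_950 = 1.3 * gtx_750         # = 1.56, using the disputed 750-vs-950 figures
skylake_gt3e = 1.2 * iris_6200  # the claimed flat 20% generational gain
gt4e = 1.5 * skylake_gt3e       # = 1.8, taking 50% more EUs as 50% more speed

# Whether the flat 1.2x generational gain and the linear 1.5x EU scaling
# are justified is exactly what the following posts dispute.
print(f"{gtx_950:.2f} {gt4e:.2f}")  # 1.56 1.80
```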


To be fair, AMD makes $15 off of every console that's produced.

$6.50, if you actually read the books. AMD way oversold their profit margins on that one, especially when not accounting for the actual surpluses of it all. It's a 15% margin on a low-volume product set.


You dismiss their ability to do custom solutions right off; you assume their choice was based on price. Period.

 

It doesn't matter if it's pennies or dollars when you dismiss AMD's capacity to produce something that, currently, no one else can offer in the market. So right off the bat, it's not price.

 

Price is one factor among many others. AMD offers solutions that perfectly serve many purposes, and you know it. For many tasks, going Intel or AMD won't make a difference.

 

In fact, AMD GPUs are the proof that PRICE means jack shit against NVIDIA marketing. It's not about tech quality, because on that front they are more than on par with NVIDIA; in fact, AMD has the advantage in hardware for the new APIs, and game devs have been craving to use it for a long time. It's about how they sell it.

 

I don't get it; you get so tangled up in such details that you completely lose track of the big picture. You shouldn't be like that. It's not good for you at all, both as a person and as a professional. Your statement in another post about dumping AMD stocks shows it...

AMD has no advantage in custom design. IBM has been king of that realm for a decade, and now Intel stands to challenge it. AMD isn't even on the custom radar; it only has consoles. Anything AMD can make, Intel can make. The only differences are Intel's vastly better CPU architecture and AMD's more advanced graphics architecture, which has been completely eclipsed by Intel's offerings in the integrated segment. AMD's only win is on budget. The FX 8350 beats i5s and i7s below 6 cores for some CAD workloads and game streaming, but that's it, and it comes down to cost, because the 8350 used to be a $350 part. Now it's $150.

 

AMD has no advantage in hardware at all other than their theoretical FLOPs numbers, which no computational workload designed by the best engineers on the planet can seem to get near, because either AMD's hardware or its OpenCL drivers suck. In DX 12, AMD has no real advantage. Asynchronous shaders will mean nothing for the first generation of DX 12 games, and Nvidia will still win once it actually invests in its DX 12 drivers, which it will do to drop the hammer right before game launches around Christmas.

 

I make money for myself. Sue me. What I say here has no impact on the stock market. I can speak the truth here and not give a damn. The truth is AMD has no advantage outside price. If HSA takes off, maybe it will gain one, but as difficult as HSA is to program in vs. OpenMP and OpenACC, the chances of that happening are about the same as all the gas molecules in a room randomly shifting to one half of it, even for a brief moment.


AMD has no advantage in custom design. IBM has been king of that realm for a decade, and now Intel stands to challenge it. AMD isn't even on the custom radar; it only has consoles. Anything AMD can make, Intel can make. The only differences are Intel's vastly better CPU architecture and AMD's more advanced graphics architecture, which has been completely eclipsed by Intel's offerings in the integrated segment. AMD's only win is on budget. The FX 8350 beats i5s and i7s below 6 cores for some CAD workloads and game streaming, but that's it, and it comes down to cost, because the 8350 used to be a $350 part. Now it's $150.

 

AMD has no advantage in hardware at all other than their theoretical FLOPs numbers, which no computational workload designed by the best engineers on the planet can seem to get near, because either AMD's hardware or its OpenCL drivers suck. In DX 12, AMD has no real advantage. Asynchronous shaders will mean nothing for the first generation of DX 12 games, and Nvidia will still win once it actually invests in its DX 12 drivers, which it will do to drop the hammer right before game launches around Christmas.

 

I make money for myself. Sue me. What I say here has no impact on the stock market. I can speak the truth here and not give a damn. The truth is AMD has no advantage outside price. If HSA takes off, maybe it will gain one, but as difficult as HSA is to program in vs. OpenMP and OpenACC, the chances of that happening are about the same as all the gas molecules in a room randomly shifting to one half of it, even for a brief moment.

 

 

change your title

 

Professional AMD Hater

I am impelled not to squeak like a grateful and frightened mouse, but to roar...



 

Exactly. That's why IBM won this console design era, due to their ability to offer an x86 CPU with a powerful enough GPU in a semi-custom envelope. AMD had no advantage at all. Zero. They got all of the designs, but they have no advantage.

 

Do you even listen to yourself? You said anything AMD can make, Intel can make... but then you say AMD has a more advanced graphics arch... so Intel can't make it... so they can't do anything, am I right?

 

In fact, in all three paragraphs you contradict yourself, plus you seem to not know much about the DX12 API...

 

Lol, sue you for narrow vision? You only speak YOUR TRUTH here, preacher... only what you believe to be the truth, even when you contradict yourself... and it's just tiring, you know? Pages and pages of this shit... it's like that guy who gives weird speeches filled with nonsense every Sunday... it's a common practice in religion, you know, insisting upon the message. It works on the weak-minded and the ignorant, though...

 

Like I said, you should try to get out of the Intel box and look at the world. 


Exactly. That's why IBM won this console design era, due to their ability to offer an x86 CPU with a powerful enough GPU in a semi-custom envelope. AMD had no advantage at all. Zero. They got all of the designs, but they have no advantage.

 

Do you even listen to yourself? You said anything AMD can make, Intel can make... but then you say AMD has a more advanced graphics arch... so Intel can't make it... so they can't do anything, am I right?

 

In fact, in all three paragraphs you contradict yourself, plus you seem to not know much about the DX12 API...

 

Lol, sue you for narrow vision? You only speak YOUR TRUTH here, preacher... only what you believe to be the truth, even when you contradict yourself... and it's just tiring, you know? Pages and pages of this shit... it's like that guy who gives weird speeches filled with nonsense every Sunday... it's a common practice in religion, you know, insisting upon the message. It works on the weak-minded and the ignorant, though...

 

Like I said, you should try to get out of the Intel box and look at the world. 

On the graphics architecture note: unlike AMD, Intel has had to design their architecture from the ground up, and each generation has seen significant improvements, to the point that some low-end dGPUs in OEM builds and laptops might actually start to disappear, since the newer CPUs from Intel offer low power consumption and better overall performance than any of AMD's APUs. That actually makes them worth the extra cost over an APU, some of which are used in computers that are quite expensive while performing worse than a Phenom II with an ATI mobile dGPU from 2009/2010, a combination that doesn't actually consume much more power than a single APU.

(I also noticed that the memory controller in the AMD Phenom II P920 is better than the one used in the A8 4555: the read and write memory transfer rates with 4GB of dual-channel DDR3-1066 on the P920 are higher than those of the A8 4555 with 4GB of dual-channel DDR3-1600.)

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

They weren't random numbers. They were the performance ratios of various GPUs in their existing SKUs, then accounting for the performance gain of Skylake and a 50% larger SKU WITH improved eDRAM. Are you seriously this defensive, or is it actual stupidity? The facts were provided.

 

It's not an assumption. It's already proven. Again, already taken care of, since the 1050 MHz GT2 of Skylake is 20% more powerful than the GT2 SKUs of Broadwell. And that's before eDRAM gets involved. I reiterate, are you just this stupid?

 

It's 15% behind in the newest benchmark rounds. You're quoting numbers almost a year old now. Iris Pro also overclocks much better than Maxwell. There are records of it at 2.2 GHz on air.

 

AnandTech provides it in their benchmark of the 6700K: http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation.

 

Let's take the Iris Pro 6200 as 1.0 on the performance scale. The 750 is 1.2.

950 is 1.3 * 1.2 = 1.56 in your fantasy land where the 750 is still that far ahead (which it isn't after 9 new Intel drivers since the launch).

Skylake GT3e = 1.2 (before accounting for the improvements to the eDRAM which aren't yet documented).

GT4e = 1.5 * 1.2 = 1.8, and this is all at stock clocks. Overclock the 950 against that, and the 950 will lose until the eDRAM isn't enough cache to make up the overall bandwidth and memory buffer difference.

 

Excuse me, but whose math is flawed, and why? Prove me wrong, because the math doesn't lie, so either the base numbers are wrong (and I've provided evidence that they aren't), or there's some secret sauce in the 950. I eagerly await this half-baked response of yours. It's not cockiness when you actually are the best in the room. Bring someone better.

Patrick, Patrick, Patrick. You are only making this worse for yourself. Why not just give up and admit that a 21-year-old with a sub-3.0 GPA in high school has successfully delivered a beatdown to your college-educated ass? Let's go into the details once again, because god knows you never have any sources for your claims, other than your predictions based on Intel's history.

 

Your entire logic is "GT2 Skylake is 20% faster than GT2 Broadwell, therefore GT3e Skylake has to be 20% faster than GT3e Broadwell!" Since when has this ever been true in the world of computing? Performance has never translated that way, ever. Seeing as Broadwell never even had a GT2 desktop CPU, how can you possibly think that desktop GT2 Skylake vs. mobile GT2 Broadwell would make for a fair, 100% perfect translation?

 

Again, this is just you making an assumption (pay attention to this word, it is very important) that it has to perform 20% faster, because it did in this very specific test. How about we dig deeper into Intel's history, since that is where all of your claims come from? The past, because god knows nothing exists in the present to validate your claims.

 

Intel HD 4600 (GT2 Haswell) http://www.notebookcheck.net/Intel-HD-Graphics-4600.86106.0.html

Intel HD 5600 (GT2 Broadwell) http://www.notebookcheck.net/Intel-HD-Graphics-5600.125595.0.html

 

Notice the average 10% difference in performance between these chips. Again, not a fair test, since the 4600 is in a desktop CPU and the 5600 is in a mobile CPU, but still: we see a 10% difference in performance between these two GT2 chips, right? Now let's compare Haswell GT3 vs. Broadwell GT3.

 

Intel HD 5000 (GT3 Haswell) http://www.notebookcheck.net/Intel-HD-Graphics-5000.91978.0.html

Intel HD 6000 (GT3 Broadwell) http://www.notebookcheck.net/Intel-HD-Graphics-6000.125588.0.html

 

Notice the amazing 70% difference in frame rate that the HD 6000 has over the HD 5000 in Battlefield Hardline. Notice the 48% difference in frame rate in Evolve. Then, moving to games like Dragon Age: Inquisition, the HD 6000 only has a 20% advantage in frame rate. The HD 6000 is 30% faster in CoD: AW. Games like F1 2014 only show an 8-9% difference between the two. Alien Isolation? Again, a 10% difference. There is a 10% difference in the new Tomb Raider between the two, Crysis 3 reports the exact same frame rates on both low and medium settings (no idea why this is), and Metro: Last Light is the same, with both getting 19 FPS on low settings.

 

Point is, looking at the history of the performance differences between these iGPUs means absolutely nothing. They jump all over the place in terms of performance, and you cannot make a prediction based on history. As you can see, depending on the scenario, a generational improvement can perform as much as 70% better, or make no difference at all. To pull 20% out of your ass and call it fact is just... how do I put this...

 

Stupid. Yeah, your words work perfectly here. You would have to be stupid to believe that. BTW, since you suck at math, I'll go ahead and do it for you. The HD 5000 has exactly 20% fewer EUs (it is missing one slice), but a 10% higher boost clock and a 10% lower base clock. You also have to factor in that the Broadwell chips were clocked at 2200 MHz while the Haswell chips were clocked at 1500 MHz. Then take into consideration the generational IPC improvement of the GPUs, and suddenly the difference in performance between these iGPUs becomes far less impressive. But hey, you are an IBM prodigy who has been offered hundreds of thousands of dollars to work for them; surely you've already taken this into consideration.
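A rough sketch of that normalization, using only the figures stated in this post (assuming, generously, that EUs and clocks multiply cleanly, which is itself the kind of assumption under dispute):

```python
# Normalize the HD 5000 vs HD 6000 hardware gap with the figures quoted above.
hd5000_eus = 0.80    # HD 5000 has "exactly 20% fewer EUs" than the HD 6000
hd5000_boost = 1.10  # but a ~10% higher boost clock

# Hardware alone would predict the HD 6000 leading by roughly:
hw_gap = (1.0 / hd5000_eus) * (1.0 / hd5000_boost)
print(f"{hw_gap:.2f}x")  # ~1.14x

# Yet the observed gaps above run from 0% (Crysis 3, Metro: Last Light) to
# 70% (Battlefield Hardline), so no flat per-generation percentage falls out.
```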

 

Also, I looked through that article you linked. Nowhere did I see a GTX 750 listed. And how can I be quoting numbers almost a year old when the Iris Pro 6200 has not even been out for a year? I am good, but even I am not "travel back in time to fabricate benchmarks" good.

 

So, in conclusion: I am stupid. However, I also proved you wrong. Therefore, you lost a numbers game to a stupid person. Good luck getting that IBM job if you can't even out-math someone like myself.


The R9 390/X trade blows with the 970/980. They do not outright beat them at all. Further, Nvidia doesn't screw up new techs. Nvidia would not have released a 4GB flagship. It would have been 6GB, guaranteed. They'd have made a bigger interposer for a wider card and put 6 chips on it. Or they'd have waited for Hynix or Samsung to come out with HBM2 to get rid of the BS limitation.

Well, trading blows while having 8GB of VRAM, versus 3.5GB at full speed, pretty much is beating them. And there is nothing Nvidia could have done if the yields weren't there. And no, they would have been on HMC if it had worked out in time without yield worries, and people would have praised them; if HBM hadn't worked out, people would have been shitting on AMD.


This isn't small enough for ya?
 


Open your eyes and break your chains. Console peasantry is just a state of mind.

 

MSI 980Ti + Acer XB270HU 


This isn't small enough for ya?

 

 

Eh, I think the EVGA Hybrid looks sexier. Something about it being a blower and matching the Asus black/gold boards just turns me on.


Eh, I think the EVGA Hybrid looks sexier. Something about it being a blower and matching the Asus black/gold boards just turns me on.

 

This one is way shorter and very cheap.


This one is way shorter and very cheap.

 
Really? It's 10.5 inches, just like the EVGA Hybrid. As far as pricing goes, it's 800 euros, or $907 USD. The EVGA Hybrid is $750, and it is sold in the UK for 647 euros after VAT.


Update 2:

A bunch of performance claims from AMD, stating that the Nano has a 75-degree target operating temperature and a 42 dBA noise level. It's best to just wait a little longer and see if these claims are true.

Images are in the OP.

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


To be fair, AMD makes $15 off of every console that's produced.

Unfortunately, that is but pennies...

Consider the R&D costs... so far, what, 30 million consoles with that AMD chipset have been sold in total? Then take 30 million x $15 = $450 million... not a whole lot when you consider that the cost of developing a new CPU alone can be over $300 million...

As of today, they haven't made huge money on consoles, but given that consoles last for upwards of 6+ years, they still have time to earn some more revenue. If they also get the next iteration of consoles, then man, they are really going to make money for sure...


AMD has no advantage in custom design. IBM has been king of that realm for a decade, and now Intel stands to challenge it. AMD isn't even on the custom radar; it only has consoles. Anything AMD can make, Intel can make. The only differences are Intel's vastly better CPU architecture and AMD's more advanced graphics architecture, which has been completely eclipsed by Intel's offerings in the integrated segment. AMD's only win is on budget. The FX 8350 beats i5s and i7s below 6 cores for some CAD workloads and game streaming, but that's it, and it comes down to cost, because the 8350 used to be a $350 part. Now it's $150.

 

AMD has no advantage in hardware at all other than their theoretical FLOPs numbers, which no computational workload designed by the best engineers on the planet can seem to get near, because either AMD's hardware or its OpenCL drivers suck. In DX 12, AMD has no real advantage. Asynchronous shaders will mean nothing for the first generation of DX 12 games, and Nvidia will still win once it actually invests in its DX 12 drivers, which it will do to drop the hammer right before game launches around Christmas.

 

I make money for myself. Sue me. What I say here has no impact on the stock market. I can speak the truth here and not give a damn. The truth is AMD has no advantage outside price. If HSA takes off, maybe it will gain one, but as difficult as HSA is to program in vs. OpenMP and OpenACC, the chances of that happening are about the same as all the gas molecules in a room randomly shifting to one half of it, even for a brief moment.

Intel doesn't have shit on AMD in terms of graphical power.

Sure, Iris Pro (HD 6200) is the best Intel can offer, and it is strong when looking at the APU segment. BUT, in the big scheme of things, it is nothing. Take Fiji: it completely wrecks anything Intel is capable of making in the GPU space.

Remember, making a GPU, especially a strong one, is miles away from making a CPU. It doesn't even follow the same design principles...

AMD is hitting a limit of physical size. Sure, going down to 16nm will help, but even then, the silicon manufacturers have machine limits. GloFo, TSMC, Samsung and even Intel cannot make dies over a certain physical area. This is just how it is.

However, at the moment AMD has no socket able to handle a bigger die either, while Intel does. So AMD has to compete on a larger node and with a smaller die area than Intel can use... This should change with AM4 and 16nm FinFET, which should put them on par in terms of physical limitations, at least until Intel reaches 10nm.

There is also the poor DX11 performance to take into account. It would be interesting, although probably not going to affect things that much, to see how Kaveri or Carrizo would fare against Iris Pro under DX12... Freeing up the CPU overhead and fully unleashing HSA should give AMD more performance out of their chips, although I do not think it would be a whole lot.

Kaveri APUs are mostly bandwidth-limited, so high-speed, low-latency DDR4 or HBM would make a bigger performance impact than any node reduction or die area increase would.


With the pieces that went into the console processors, I doubt the costs were that high; most of the integral parts were cobbled together from other R&D streams.


$6.50, if you actually read the books. AMD way oversold their profit margins on that one, especially when not accounting for the actual surpluses of it all. It's a 15% margin on a low-volume product set.

What books have you been reading? :lol:

 

Big pile of poop

Holy sheeet. 10/10 would laugh again.

I wonder why other companies also offer semi-custom, if IBM is the king? How come there is a business in semi-custom outside of IBM?

You truly are making some claims now...

You sure are naive.

