[Updated] Oxide responds to AotS Conspiracies, Maxwell Has No Native Support For DX12 Asynchronous Compute

Déjà vu. It's almost like I said the exact same thing to @patrickjp93 several times.

 

http://linustechtips.com/main/topic/436531-update-3-sapphires-reference-costs-649-usd-amd-reveals-r9-nano-benchmarks-ahead-of-launch/?p=5875824

http://linustechtips.com/main/topic/433764-intel-plans-to-support-vesas-adaptive-sync/?p=5837723

 

Oh wait, it's because I have. Patrick still has yet to reply to my demands for evidence because he simply has none. He gets destroyed every single time he opens his mouth in my direction, because it's easy to see through pseudo-intellectuals.

 

He has officially become my favorite game to play these days.

 

Why people even care that AMD is leading in DX12 is beyond me. AMD has amazing hardware. Nvidia has amazing software. DX12 removed a lot of the driver overhead, and AMD's amazing hardware finally gets put to use. It makes Nvidia look bad in comparison, but that is because the drivers Nvidia used were already highly refined. Very little extra performance came of it. That being said, the results are still rather close to each other as far as performance goes. 

source: http://www.extremetech.com/gaming/212314-directx-12-arrives-at-last-with-ashes-of-the-singularity-amd-and-nvidia-go-head-to-head/2

 

I personally love that DX12 is having a great impact on AMD cards. It means Nvidia has to step up its hardware game. Win-win for consumers once again.

I gave you the evidence. You just don't accept that it is evidence. 

 

Every time? You're 0 and everything against me. There are exactly 6 people on this forum who have ever proven me wrong about anything, and of those, only one has done it twice.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Good to know.

I wouldn't say RIP to high-end GPUs just yet; we can't base all our data on one game. ARK: Survival Evolved will give us another crucial data point once DX12 support is added next week.

As for the NVIDIA testing, I'd be pretty skeptical about the benchmarks if they come directly from NVIDIA. While it's true that 1440p or 4K would show minimal difference from the current pattern, it may just be that the numbers were a bit too low for PR. I don't know. Don't take my word for it.

I would be as well, but really they are all CPU (and API) benchmarks, so I don't really care about the exact numbers so much as the differences.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


I gave you the evidence. You just don't accept that it is evidence. 

 

Every time? You're 0 and everything against me. There are exactly 6 people on this forum who have ever proven me wrong about anything, and of those, only one has done it twice.

So you did show me the receipt of you buying the 4790k when it first came out? 

 

http://linustechtips.com/main/topic/387149-intel-plans-job-cuts-across-the-company-internal-memo-says-and-reduce-rd-spending/page-4#entry5229898

 

Nope, because you were locked out of your account, and the physical copy was at some obscure location you did not have access to. Guess that's one point for me.

 

Also, what evidence did you give me? Because all I see are your "1.5 x 1.2" math equations that still factor into nothing. That 1.5 (50%) boost from the EU count is invalid because the clock speeds have changed, meaning it does not scale linearly. The 1.2 (20%) number is a fallacy because you are comparing the GT2 of mobile Broadwell to the GT2 of desktop Skylake. Need I continue? Because this marks the second time I've proven you wrong.

 

The article you linked in that post also did not contain a single reference to the GTX 750, so I have no idea what you were trying to prove with that. Face it: you pulled numbers out of your ass, and you got called on it. Let me know when that "best there is" person shows up. I'd love to meet him.
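For anyone trying to follow the scaling argument being disputed here, the sketch below lays out both versions of the estimate. The EU counts, clocks, and the 1.2 generational factor are placeholders standing in for the figures the two posters are arguing about, not confirmed numbers.

```python
# The "1.5 x 1.2" estimate under dispute, in two forms (illustrative numbers only).
# Naive form: assume performance scales linearly with EU count, times a flat
# generational gain. The objection above: the graphics clock also changes
# between parts, so a clock-ratio term belongs in the estimate as well.

def naive_estimate(eu_new, eu_old, gen_gain=1.2):
    """EU ratio times a flat generational uplift (the disputed 1.5 x 1.2)."""
    return (eu_new / eu_old) * gen_gain

def clock_aware_estimate(eu_new, eu_old, clk_new_mhz, clk_old_mhz, gen_gain=1.2):
    """Throughput ~ EUs x clock, so the clock ratio has to be included too."""
    return (eu_new / eu_old) * (clk_new_mhz / clk_old_mhz) * gen_gain

# Placeholder figures: 48 -> 72 EUs gives the 1.5x in the quote; clocks are invented.
print(naive_estimate(72, 48))                    # 1.5 * 1.2 = 1.8x
print(clock_aware_estimate(72, 48, 950, 1050))   # drops below 1.8x once a lower clock is assumed
```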

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


1. No one will run the shittier (lower-performing) of the two DX implementations just because they can.

2. You missed the point, because if those benchmarks are true, then Hawaii also outperforms Fiji (or ties it, which would indicate some other bottleneck here, or it just doesn't make sense), because Fury X vs 980 Ti was a straight draw.

3. This is the first benchmark I have ever seen for this game where, straight up and without exception, DX12 makes Nvidia worse.

Even the people bitching about the 980's scaling showed 10-20% positive gains most of the time.

Just pointing these things out.

Just going to reply to your 2nd point.

NO, the 290X would not beat Fiji, not by a long shot. The reason is the difference in computational power. I posted the FLOP performance of the 290X, 980 Ti and Fury X earlier... but to repeat myself, and perhaps this will shed light on the matter...

 

 

R9 290X

Pixel Rate: 64.0 GPixel/s

Texture Rate: 176 GTexel/s

Floating-point performance: 5,632 GFLOPS

 

R9 Fury X

Pixel Rate: 67.2 GPixel/s

Texture Rate: 269 GTexel/s

Floating-point performance: 8,602 GFLOPS

 

GTX 980

Pixel Rate: 72.1 GPixel/s

Texture Rate: 144 GTexel/s

Floating-point performance: 4,616 GFLOPS

 

GTX 980Ti

Pixel Rate: 96.0 GPixel/s

Texture Rate: 176 GTexel/s

Floating-point performance: 5,632 GFLOPS

 

GTX Titan X

Pixel Rate: 96.0 GPixel/s

Texture Rate: 192 GTexel/s

Floating-point performance: 6,144 GFLOPS

 

 

We can clearly see that in DX12 the Fury X would utterly destroy the 980 Ti and Titan X in sheer computational performance. However, the Fury X is ROP-limited, so its pixel fill rate would hold it back.
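For context on where those figures come from: each theoretical rate is just unit count times clock, with two FLOPs per shader per clock for FP32. A minimal sketch below reproduces the quoted numbers; the unit counts and reference clocks are taken from the commonly published spec sheets rather than from this post, so treat them as assumptions.

```python
# Theoretical rates as quoted above: (units) x (clock), with 2 FLOPs (one FMA)
# per shader per clock for FP32. Unit counts and clocks are the commonly
# published reference specs, listed here as assumptions.

def rates(shaders, tmus, rops, clock_mhz):
    gpixel = rops * clock_mhz / 1000          # pixel fill rate, GPixel/s
    gtexel = tmus * clock_mhz / 1000          # texture fill rate, GTexel/s
    gflops = 2 * shaders * clock_mhz / 1000   # FP32 throughput, GFLOPS
    return gpixel, gtexel, gflops

print("R9 290X   ", rates(2816, 176, 64, 1000))   # ~64.0 / 176 / 5632
print("R9 Fury X ", rates(4096, 256, 64, 1050))   # ~67.2 / 268.8 / 8601.6
print("GTX 980 Ti", rates(2816, 176, 96, 1000))   # ~96.0 / 176 / 5632
```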


Actually, you do in the long term. Nvidia is playing a dangerous game by ripping off its customers with vendor lock-in in conjunction with planned obsolescence. At some point these people will wake up, see Nvidia is ripping them off, and start to react. Maybe Nvidia will see the writing on the wall and adapt to such a market situation before that happens, but that is only speculation, of course. Consumers are far more emotional than business customers, so mistreating them will have long-lasting effects. I know your stance is that the consumer is king, but it depends on the market.

 

When people buy graphics cards, they usually buy them for many years (3-4, perhaps). Only hardware nerds like us in here want to upgrade more often. Aggressive planned obsolescence of just a year or so will have consequences for consumer trust. Such trust is difficult to attain and very difficult to regain once lost.

 

Arctic Islands began either earlier or at about the same time. AMD has priority access to SK Hynix HBM, but that doesn't mean AMD cannot or won't use Samsung HBM if SKH cannot meet the required demand. SKH can produce HBM2 in 8-Hi stacks as well, and considering HBM1 can be overclocked 100%, I doubt there will be any speed differences between the two manufacturers once products launch. It's not like Samsung makes faster RAM than SKH in any other market.

 

It was hard enough for Nvidia, it seems. I assume Pascal will have proper DX12 support, just as Arctic Islands GPUs will have 12.1. So far we know nothing of the Pascal or Arctic Islands (Greenland) architectures, so cache systems, etc., are pure speculation.

 

Again, speculation. So far Maxwell isn't even fully DX 12.0 compliant, lacking async shader support. That is very surprising. But in typical Nvidia fashion, it's all about blaming everyone else.

Nvidia is playing a perfectly stable game with the same strategies IBM uses to this day to maintain its near-monopoly in mainframes and financial servers. It's not dangerous at all, because AMD isn't a threat, and frankly neither is Intel unless Intel finds a way to surpass IBM in scale-up architectures and beat Nvidia in GPGPU compute. And each year Intel fails to do that, both of its competitors are building up R&D with new features ready for deployment as needed. Nvidia has a 5-year lead on Intel in GPGPU compute that isn't shrinking. IBM has a similar length lead. Nvidia is perfectly safe for now.

 

No, because consumers are as stupid as cattle when they go shopping. You and I, and the LTT community in general, are exceptions, outliers, not the rule nor average. We can't turn that tide. It's pointless to think you can. Unless you want to slander Nvidia and end up in jail via a smear campaign, you're not going to have any effect. Exactly, their emotions tell them to go with what is trusted and bought the most, and cheap price tags generally mean cheap products. Most consumers are not discerning.

 

Yes, old folks and non-savvy parents buy them for 3-4 years, maybe more, but they buy something they're sure will suit their needs. Gamers tend to buy every two years, and they like to buy the best of the best with whatever wallet size they have. Nvidia has taken that crown for the past 4 years, give or take. Consumers are stupid and will follow Nvidia clear to their financial bloodletting rooms like they always do.

 

AMD has priority access over Nvidia to Hynix's HBM 2, but it also has an exclusivity deal in that same contract. AMD cannot ask Samsung for any chips until the successor to Arctic Islands. AMD is screwed on that front because, unlike Nvidia, it doesn't have the money to buy itself out of such a contract. Actually, Samsung makes all of the fastest DDR4 right now, and it plans to release 4266MHz chips by December, which puts it ahead of Hynix for the foreseeable future. In GDDR5, Samsung builds the highest-density chips at 7GHz, whereas Hynix produces 8GHz GDDR5 but has no buyers for it and won't have any, since all of Pascal is going to be HBM 2.

 

It's not speculation when you actually have insider knowledge (perks of actually being an investor). I'm not surprised at all. Nvidia built the minimum needed to kick AMD's tail for the last years of DX 11. Now it's going to build the minimum required to beat AMD for the first 1.5 years of DX 12, and it will include 12.1 and try to get its developers to use 12.1 features to further snub AMD and further damage AMD's hold on the market, as is its right to protect and further its own sales. AMD is naive and will fall short once again by chasing features no API will use for 5 years instead of building enough of a card to compete with Nvidia on equal or better footing, something Koduri could do if he weren't such a prideful snob.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


You're operating under the delusion the bus wasn't redesigned long before that. DX 12 was fully specced and ratified for 6 months before risk production of Pascal began. The revision already happened.

 

DX 12 removes the need for complex drivers. AMD, Nvidia, and Intel have all said this.

 

Read my first point. Development and deployment of products happens in staggered parallel much like a CPU's superscalar operations.

Funny you should say that. Because if it was revised 6 months ago, why hasn't Nvidia leaked any news of async shaders at E3 or Gamescom? And it doesn't seem like we will be hit with any massive news during PAX either...

 

Your speculation has NO confirmation, unless this was written in some paper article in some magazine that was, by every account, NOT uploaded or talked about on the internet...

 

Again, please, PROVE that Nvidia did add async shaders. Because if you have no articles or mentions of it, it is all BULLSHIT that you have made up.


Pascal's already past risk production and is in volume production at TSMC right now. This was announced 2 months ago.

The "beating" is within margin of error, and no one should be surprised given both the 290/390X and 980TI both have 2816 SPs. When you get all the overhead out of the way, DUH! Then AMD runs into its ROP imbalance issues and Nvidia runs into its lack of Async Shaders for Maxwell. The problem is Koduri hasn't gotten his ROP count right in 5 generations, and I have 0 faith in him doing correctly for Arctic Islands. The fact is Koduri hasn't shown he can learn from his mistakes, as made clear by Fiji's terrible, imbalanced design that doesn't outperform the 390X in AOTS or any other DX 12 title right now. Nvidia consistently proves it will deliver the absolute best of what is desired for a market at a given time. Maxwell was designed with DX 12 in mind, but then 20nm never happened, so Nvidia scaled back and delivered the best for DX 11. Now all it has to do is put everything back and add a couple bells and whistles and deploy it for DX 12. AMD won't win. It's not a guess.

No one should be surprised that the 290X beats the 980 Ti?

Let me spell it out: the two-year-old 290X is, at the very least, closely contesting NVIDIA's current flagship, the 980 Ti. If that's not surprising, I don't know what is.

The remainder of your argument is all assumptions, saying, 'because of AMD's 'poor' performance in the past, and NVIDIA's superior technology with all the bells and whistles in all the cards they've made so far, NVIDIA WILL WIN'. The only way you could show me that NVIDIA will 'destroy' AMD is by physically showing me benchmarks.

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


mmmmhmmm? And?

Annnndddd not too many programs or games seem to utilise said power. Mining seems to be an exception.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


So you did show me the receipt of you buying the 4790k when it first came out? 

 

http://linustechtips.com/main/topic/387149-intel-plans-job-cuts-across-the-company-internal-memo-says-and-reduce-rd-spending/page-4#entry5229898

 

Nope, because you were locked out of your account, and the physical copy was at some obscure location you did not have access to. Guess that's one point for me.

 

Also, what evidence did you give me? Because all I see are your "1.5 x 1.2" math equations that still factor into nothing. That 1.5 (50%) boost from the EU count is invalid because the clock speeds have changed, meaning it does not scale linearly. The 1.2 (20%) number is a fallacy because you are comparing the GT2 of mobile Broadwell to the GT2 of desktop Skylake. Need I continue? Because this marks the second time I've proven you wrong.

 

The article you linked in that post also did not contain a single reference to the GTX 750, so I have no idea what you were trying to prove with that. Face it: you pulled numbers out of your ass, and you got called on it. Let me know when that "best there is" person shows up. I'd love to meet him.

No, but there was no point in doing that. You can go see my build logs for Black Beast 2 that I made for my Dad this past January.

 

That math is perfectly valid and sound, and is based on known information about Skylake benchmarks versus the previous generation. Since graphics is an embarrassingly parallel problem, performance will scale almost linearly with the number of EUs, since Intel makes each EU its own balanced collection of TMUs, ROPs, and SPs. Strong induction is a valid proof technique both in mathematics and in informal logic (debates of reason). It's also not remotely a fallacy when both can perform at max boost with no throttling (thank you, MacBook Air design, for references). The clock speeds will go up, meaning I'm being conservative, and it only makes my point stronger for you to argue it's a fallacy.

 

I pulled the scaling numbers between Broadwell and Skylake. You can independently find the 750 numbers for yourself. I figured I didn't have to quote them since you claimed to know them, or do I now have to do all your work for you and make both sides of the argument? Seriously kiddo, go back to the minor leagues where you belong. This is chess, and you were checkmated 6 moves ago.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


No one should be surprised that the 290X beats the 980 Ti?

Let me spell it out: the two-year-old 290X is, at the very least, closely contesting NVIDIA's current flagship, the 980 Ti. If that's not surprising, I don't know what is.

The remainder of your argument is all assumptions, saying, 'because of AMD's 'poor' performance in the past, and NVIDIA's superior technology with all the bells and whistles in all the cards they've made so far, NVIDIA WILL WIN'. The only way you could show me that NVIDIA will 'destroy' AMD is by physically showing me benchmarks.

No, no one should be surprised when we know DX 12 is more compute-intensive and requires less back-and-forth communication with the CPU because the routines don't demand it, meaning more of the metadata processing can be done on-card, removing a lot of latency between operations. Furthermore, asynchronous shading removes the idle points where the shaders couldn't all be saturated under DX 11's synchronous model, so that makes up for the clock speed difference. And the bandwidth is nearly the same, and memory overclocks are more significant for the 290/390X due to its 512-bit bus. It shouldn't be surprising at all. It's a direct result of pure logic based on the design.
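To make the idle-shader point concrete, here is a toy timing model of synchronous versus asynchronous scheduling. Every number in it is invented purely to illustrate the shape of the argument; it does not measure any real GPU or game.

```python
# Toy model of the async-shading argument: synchronously, compute work queues
# up behind graphics work; asynchronously, compute can be overlapped into the
# idle gaps the graphics queue leaves on the shader array.
# All figures below are invented for illustration only.

graphics_ms   = 12.0   # hypothetical per-frame graphics workload
compute_ms    = 4.0    # hypothetical per-frame compute workload
idle_fraction = 0.30   # hypothetical share of the graphics pass where shaders sit idle

serial_frame = graphics_ms + compute_ms

# Async case: compute first fills the idle shader time "for free";
# only whatever is left over extends the frame.
absorbed    = min(compute_ms, graphics_ms * idle_fraction)
async_frame = graphics_ms + (compute_ms - absorbed)

print(f"synchronous: {serial_frame:.1f} ms   asynchronous: {async_frame:.1f} ms")
# synchronous: 16.0 ms   asynchronous: 12.4 ms -> the gain comes from filling idle units
```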

 

It's also based on what I know about Koduri as an IC designer: he refuses to be told "no." He overbuilds and underdelivers for the time at hand, and that continues to cost AMD as it cost ATI back in the day. Nvidia will destroy AMD because it can last through the fight. AMD can't. When the R&D bills come in for Zen and AI, if the sales are less than spectacular, the investors are going to jump ship, because it's the last chance AMD has to make serious money ahead of the 2019 deadline to dig itself out of its grave. Nvidia will win in the short term because it designs for the here and now and has huge brand recognition. Beyond that, SLI is going out the window in favor of a new scaling connector called NVLink (there are 2 different NVLinks: one to replace PCIe as the interface for PowerPC and other more direct integration architectures/fabrics, and one to cover inter-GPU communication, which will replace SLI fingers). With SLI scaling now equaling AMD's scaling, sorry, but where does AMD have left to run? It had HBM 2 early, but now Samsung has taken out that advantage too, and it's offering better HBM 2 SKUs in the same 4-stack configuration.

 

It's not an assumption anymore. It's an inevitability, just like the U.S. losing the fabric business to China, something everyone said Buffett was an idiot for claiming, and yet he was the one laughing all the way to the bank. You may say wait for benchmarks, but I have no skin in the purchasing game, and I'm calling it. Feel free to come back and laugh at me if I'm wrong. The odds say you won't.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Funny you should say that. Because if it was revised 6 months ago, why hasn't Nvidia leaked any news of async shaders at E3 or Gamescom? And it doesn't seem like we will be hit with any massive news during PAX either...

 

Your speculation has NO confirmation, unless this was written in some paper article in some magazine that was, by every account, NOT uploaded or talked about on the internet...

 

Again, please, PROVE that Nvidia did add async shaders. Because if you have no articles or mentions of it, it is all BULLSHIT that you have made up.

You don't have to leak what's obvious. Furthermore, there are plenty of conventions between now and Q2 2016, and Nvidia has been pulling from Steve Jobs' old playbook of underselling and overdelivering since Kepler. There's also the sticky bit that Microsoft confirmed Pascal to be fully DX 12 Tier 3 and DX 12.1 compliant a bit over 2 weeks ago.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


No, but there was no point in doing that. You can go see my build logs for Black Beast 2 that I made for my Dad this past January.

 

That math is perfectly valid and sound, and is based on known information about Skylake benchmarks versus the previous generation. Since graphics is an embarrassingly parallel problem, performance will scale almost linearly with the number of EUs, since Intel makes each EU its own balanced collection of TMUs, ROPs, and SPs. Strong induction is a valid proof technique both in mathematics and in informal logic (debates of reason). It's also not remotely a fallacy when both can perform at max boost with no throttling (thank you, MacBook Air design, for references). The clock speeds will go up, meaning I'm being conservative, and it only makes my point stronger for you to argue it's a fallacy.

 

I pulled the scaling numbers between Broadwell and Skylake. You can independently find the 750 numbers for yourself. I figured I didn't have to quote them since you claimed to know them, or do I now have to do all your work for you and make both sides of the argument? Seriously kiddo, go back to the minor leagues where you belong. This is chess, and you were checkmated 6 moves ago.

Okay. I checked the build log. Nowhere in that log did you mention buying the 4790K for $320 on Amazon. Also, the log was dated December 2014. The 4790K launched in Q2 of 2014, not Q4. You also said you got it for your friend, not your dad. Therefore, your build log has nothing to do with your claim of purchasing the CPU $30 below MSRP on the day it came out, when it shows no price and is several months after the fact. I might not be good at chess, but I am pretty sure you suck at Russian roulette, because you just shot yourself in the foot again.

 

Again, you keep bringing up the comparison of Skylake vs Broadwell, but you are comparing two entirely different CPUs, 2.7GHz (3.4GHz boost) vs 4.0GHz (4.2GHz boost), and trying to pretend the CPUs themselves have no impact on the frame rate in the tests. Come on, you should already know this, lol.

 

Also, you yourself already gave in to my 750 numbers. You said the 6200 was almost as fast as the 750; I was the one who gave you the 15-20% number. Then you ran with it, putting the 6200 at 1.0 and the 750 at 1.2. You recently linked an article and said it mentioned the GTX 750. It did not. It mentioned a $70 Radeon card, but nothing else. It is almost as if you are purposely making this easy on me.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


Just going to reply to your 2nd point.

NO, the 290X would not beat Fiji, not by a long shot. The reason is the difference in computational power. I posted the FLOP performance of the 290X, 980 Ti and Fury X earlier... but to repeat myself, and perhaps this will shed light on the matter...

We can clearly see that in DX12 the Fury X would utterly destroy the 980 Ti and Titan X in sheer computational performance. However, the Fury X is ROP-limited, so its pixel fill rate would hold it back.

Sigh. We all know that in sheer compute AMD > Nvidia; however, even with DX12, compute isn't everything.

We have benchmarks for AotS that have the Fury X and 980 Ti completely deadlocked.

I believe the ExtremeTech one does Fury X vs 980 Ti, but I'm sure by now there are many others around.

So really, for the 290X to beat the 980 Ti it would also have to beat the Fury X.

Compute isn't everything, obviously.

Anyway, we had moved past that point.

You can say that the Fury X is held back by pixel fill rate, but the Fury X and the 290X share practically the same base fill rate, with the 290X having a much lower bilinear filtering texel rate.

So unless something else is holding the Fury X back relative to the 290X... it still doesn't make sense.

Again, we had finished this conversation already, and you jumping in with an off-the-cuff remark adds nothing constructive.

Feel free to discuss with Patrick. I'm rather done with this.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


Okay. I checked the build log. Nowhere in that log did you mention buying the 4790K for $320 on Amazon. Also, the log was dated December 2014. The 4790K launched in Q2 of 2014, not Q4. You also said you got it for your friend, not your dad. Therefore, your build log has nothing to do with your claim of purchasing the CPU $30 below MSRP on the day it came out, when it shows no price and is several months after the fact. I might not be good at chess, but I am pretty sure you suck at Russian roulette, because you just shot yourself in the foot again.

 

Again, you keep bringing up the comparison of Skylake vs Broadwell, but you are comparing two entirely different CPUs, 2.7GHz (3.4GHz boost) vs 4.0GHz (4.2GHz boost), and trying to pretend the CPUs themselves have no impact on the frame rate in the tests. Come on, you should already know this, lol.

 

Also, you yourself already gave in to my 750 numbers. You said the 6200 was almost as fast as the 750; I was the one who gave you the 15-20% number. Then you ran with it, putting the 6200 at 1.0 and the 750 at 1.2. You recently linked an article and said it mentioned the GTX 750. It did not. It mentioned a $70 Radeon card, but nothing else. It is almost as if you are purposely making this easy on me.

Yeah, the build date doesn't always coincide with purchase dates. I've been purchasing the Skylake parts for my mother's new build piecemeal over the past month, waiting for the Z170 boards to dip a bit in price and for better 6700K stock so I can get a cheaper price. You're making assumptions with no basis.

 

Unless I was inebriated when I made that post, I believe the only 4790K I've ever bought was for my Dad, so I will correct the record on that if need be.

 

I've never lost Russian Roulette. It's always nice when you know how to jam the gun in-between chambers.

 

Perfectly GPU-bound scenarios at 4K, duh. Firestrike Extreme.

 

I make it easy on no one. I only provided the 6200 numbers as late as I knew how to find them. We could go into the 3DMark databases, but you don't seem to be the kind who knows how to use them properly or quote stock speed benchmarks instead of cherry picking. If you'd like to prove you're not a child, feel free. It'll be fun watching you death spiral.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Sigh. We all know that in sheer compute AMD > Nvidia; however, even with DX12, compute isn't everything.

We have benchmarks for AotS that have the Fury X and 980 Ti completely deadlocked.

I believe the ExtremeTech one does Fury X vs 980 Ti, but I'm sure by now there are many others around.

So really, for the 290X to beat the 980 Ti it would also have to beat the Fury X.

Compute isn't everything, obviously.

Anyway, we had moved past that point.

You can say that the Fury X is held back by pixel fill rate, but the Fury X and the 290X share practically the same base fill rate, with the 290X having a much lower bilinear filtering texel rate.

So unless something else is holding the Fury X back relative to the 290X... it still doesn't make sense.

Again, we had finished this conversation already, and you jumping in with an off-the-cuff remark adds nothing constructive.

Feel free to discuss with Patrick. I'm rather done with this.

No, AMD only wins theoretically. Why do you think AMD is dead last in enterprise compute accelerator sales and market share? CUDA on its own has nothing to do with it. Libraries are easy to rewrite when the algorithms are already made. Intel has been flipping CUDA libraries to OpenMP for years now and helping data scientists move from Teslas to Xeon Phis. Transferring CUDA code to OpenCL 2 isn't too bad. OpenCL 1.x is still terrible to program in, but that's still not the main issue. AMD can't match Nvidia's performance in compute at all. The thing about mining is that it's DP-intense, so Nvidia's consumer cards don't cut it because they run DP at 1/32 of SP speed, and the Quadros aren't cost-effective. AMD has never in real life beaten Nvidia in compute apples to apples. Thanks to its much smaller data buffer, the Fury X loses hands down to the 980 Ti in CompuBench and Linpack, a collection of OpenCL and GPGPU data-crunching benchmarks.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Yeah, the build date doesn't always coincide with purchase dates. I've been purchasing the Skylake parts for my mother's new build piecemeal over the past month, waiting for the Z170 boards to dip a bit in price and for better 6700K stock so I can get a cheaper price. You're making assumptions with no basis.

 

Unless I was inebriated when I made that post, I believe the only 4790K I've ever bought was for my Dad, so I will correct the record on that if need be.

 

I've never lost Russian Roulette. It's always nice when you know how to jam the gun in-between chambers.

 

Perfectly GPU-bound scenarios at 4K, duh. Firestrike Extreme.

 

I make it easy on no one. I only provided the 6200 numbers as late as I knew how to find them. We could go into the 3DMark databases, but you don't seem to be the kind who knows how to use them properly or quote stock speed benchmarks instead of cherry picking. If you'd like to prove you're not a child, feel free. It'll be fun watching you death spiral.

You said you didn't need to show me a receipt to prove you paid $320 for it, and that the thread you mentioned would show it. It did not. I don't believe you paid $320 for that chip. I did not believe it back then, and I still do not believe you now. You won't prove it because you cannot. That is why I won that argument back then.

 

The reason I am winning now is that you still cannot provide any links for your claims. Also, looking at the 3DMark database, none of the 28 results for the 6200 match the numbers you throw out. Now your one tangible piece of evidence is against you. Anything else you would like to add?

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


You said you didn't need to show me a receipt to prove you paid $320 for it, and that the thread you mentioned would show it. It did not. I don't believe you paid $320 for that chip. I did not believe it back then, and I still do not believe you now. You won't prove it because you cannot. That is why I won that argument back then.

 

The reason I am winning now is that you still cannot provide any links for your claims. Also, looking at the 3DMark database, none of the 28 results for the 6200 match the numbers you throw out. Now your one tangible piece of evidence is against you. Anything else you would like to add?

Really? http://www.3dmark.com/search?_ga=1.159418683.786654399.1440966265#/?mode=advanced&url=/proxycon/ajax/search/gpu/fs/R/1039/416?minScore=0&gpuName=Intel Iris Pro Graphics 6200

 

http://www.3dmark.com/search?_ga=1.211331347.786654399.1440966265#/?mode=basic&url=/proxycon/ajax/search/gpuname/fs/R/NVIDIA%20GeForce%20GTX%20750&gpuName=NVIDIA GeForce GTX 750

 

Discounting the cherry-picked, obnoxiously overclocked 750s, I believe the average delta is 19.4% with a variance of 2.6%. Checkmate was decided 8 moves ago. Why are you still trying?
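For what it's worth, a figure like an "average delta" with a variance would be computed roughly as below. The score lists are placeholders, not results actually pulled from the 3DMark database, so only the method is meaningful here.

```python
# How an "average delta with variance" figure is derived from two sets of
# Fire Strike graphics scores. The lists are placeholders, not real database
# results; only the calculation itself is the point.
from statistics import mean, pstdev

iris_pro_6200_scores = [1450, 1480, 1500, 1465]   # hypothetical Iris Pro 6200 runs
gtx_750_scores       = [1740, 1770, 1795, 1755]   # hypothetical stock GTX 750 runs

delta_pct  = (mean(gtx_750_scores) / mean(iris_pro_6200_scores) - 1) * 100
spread_pct = pstdev(gtx_750_scores) / mean(gtx_750_scores) * 100

print(f"GTX 750 leads by {delta_pct:.1f}% on average (spread {spread_pct:.1f}%)")
```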

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


You said you didn't need to show me a receipt to prove you paid $320 for it, and that the thread you mentioned would show it. It did not. I don't believe you paid $320 for that chip. I did not believe it back then, and I still do not believe you now. You won't prove it because you cannot. That is why I won that argument back then.

 

The reason I am winning now is that you still cannot provide any links for your claims. Also, looking at the 3DMark database, none of the 28 results for the 6200 match the numbers you throw out. Now your one tangible piece of evidence is against you. Anything else you would like to add?

Kaveri vs Broadwell

 

http://www.3dmark.com/fs/2385195

http://www.3dmark.com/fs/5492117

 

I cannot explain why Broadwell scores so low... maybe a driver issue? It should score much higher... but on the flip side, I couldn't find a single Intel HD product that beat that Kaveri score...


Was just gonna say, the Broadwell link he gave was broken. Thanks for that.

 

@patrickjp93 http://www.3dmark.com/fs/2495489

 

Clocked in at 1040MHz, a 20MHz overclock. I do not see it being anywhere near as low as the Iris Pro 6200 scores. Care to share exactly which two results you are comparing?

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


ctrl+c then ctrl+v

That's exactly what I did, and it separated the spaced parts from the link when I posted.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Was just gonna say, the Broadwell link he gave was broken. Thanks for that.

 

@patrickjp93 http://www.3dmark.com/fs/2495489

 

Clocked in at 1040MHz, a 20MHz overclock. I do not see it being anywhere near as low as the Iris Pro 6200 scores. Care to share exactly which two results you are comparing?

@patrickjp93 and MageTank, let us continue this in private, shall we?

 

We are taking up a LOT of "thread space"... and we have thoroughly derailed this topic...


@patrickjp93 and MageTank, let us continue this in private, shall we?

 

We are taking up a LOT of "thread space"... and we have thoroughly derailed this topic...

How do we take it privately? Is there a group chat feature on this forum?

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 

