
Can Intel's Iris Pro inside Broadwell be a low-end discrete video card killer?

I run the latest driver on my laptop, but it's an Ivy Bridge ULV i3, so there isn't much to work with there. :P

 

However, to my understanding, Intel has never really optimised its drivers for games beyond making sure they don't run like garbage on the titles they can run.

Since Gen 7.5, and especially since Gen 8 came out, the newer generations can run pretty much anything if you're willing to sacrifice some (or a lot) of the settings.

 

It seems Skylake is getting Gen 8.5 rather than a full new generation of graphics cores; that seems to come with Cannonlake instead (smart of Intel, maximising the sales potential of its ticks by making the next generation of graphics exclusive to them, for now anyway). When HBM/HMC moves onboard, we'll get to see what Intel and AMD can both do, though they really need quad-channel memory or more to keep the iGPUs fed, 2GB of HBM/HMC or not.
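
For a rough sense of the bandwidth gap (my own back-of-the-envelope, assuming peak theoretical numbers for plain DDR3-1600; real sustained figures are lower):

```cpp
#include <cstdio>

// Peak DRAM bandwidth: channels * transfer rate (MT/s) * 8 bytes per transfer.
static double peak_gb_per_s(int channels, int megatransfers) {
    return channels * megatransfers * 8.0 / 1000.0;
}

int main() {
    std::printf("dual-channel DDR3-1600: %.1f GB/s\n", peak_gb_per_s(2, 1600)); // 25.6
    std::printf("quad-channel DDR3-1600: %.1f GB/s\n", peak_gb_per_s(4, 1600)); // 51.2
    // Even a modest discrete card's GDDR5 sits well above both, which is why
    // an on-package eDRAM/HBM cache matters so much for an iGPU.
}
```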

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Yeah, the GTA benchmark is a little weird, but I guess it's all about the CPU as well.

 

Like in GTA V, going from an Athlon 860K to an Intel chip with the same R9 280, you get a boost from 45 to 70+ FPS.

 

Sauce:

Yes, it's that bad.


Yeah, the GTA benchmark is a little weird, but I guess it's all about the CPU as well.

 

Like in GTA V, going from an Athlon 860K to an Intel chip with the same R9 280, you get a boost from 45 to 70+ FPS.

-snip

Yes, it's that bad.

If AMD can keep some APUs close to the $200 price range under Zen, it will have some real winners for compact, cheap PC builds; I don't doubt that. It will all come down to how hard they can push HDL to make their process cheap enough to be both price-competitive and margin-competitive with Intel and stay afloat financially.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


This is interesting, but since these are relatively expensive chips, I don't see how it really changes anything at the low end. Laptops are really the best place to have them, but from my experience owning a mobile i7, I get constant driver crashes and have to force my Nvidia drivers instead.

 

Either way, the only real issue is that these are desktop parts (correct me if I'm wrong). Not to mention Intel is already playing at 14nm while AMD is sitting at nearly double that feature size right now. I personally think this is a great product to show what is possible and what is most likely coming in 2016 from both camps.

 

I'm waiting for the great die shrink of 2016 from the graphics companies before declaring low-end to mid-range graphics dead.


They're not absurd. Tom's quoted numbers cover several areas of gameplay, whereas Anandtech only uses the bundled benchmark, which is always worse than in-game results. Tom's doesn't even include the previous Iris Pro results, some of which are 20-30% above the 7850K anyway, as Sakkura quoted. And actually, yes, after Intel updated its drivers, it is on that scale.

I can't believe how many people are buying into these numbers.

 

AMD’s fastest APU gets destroyed; Iris Pro 6200 is twice as fast, even with its slow connection to the shared DDR3-1600.

 

[benchmark chart: 74940.png]

 

So the i5-5675C will hold 122 FPS on average in GTA V regardless of the scenario? Because if you disagree (like Anandtech does), then you've been agreeing with me this whole time. I would not urge anyone to go out and buy one under the impression that it performs that well in games. Numbers can be skewed and misleading, as Tom's Hardware is known for. I can't believe you guys would egg people on by endorsing such results when it only performs at about 45% of what Tom's Hardware advertised (45% of their 122 FPS average is roughly 55 FPS).


I can't believe how many people are buying into these numbers.


So the i5-5675C will hold 122 FPS on average in GTA V regardless of the scenario? Because if you disagree (like Anandtech does), then you've been agreeing with me this whole time. I would not urge anyone to go out and buy one under the impression that it performs that well in games. Numbers can be skewed and misleading, as Tom's Hardware is known for. I can't believe you guys would egg people on by endorsing such results when it only performs at about 45% of what Tom's Hardware advertised (45% of their 122 FPS average is roughly 55 FPS).

I'm not buying into them. I'm just saying your outright dismissal is flawed, because you're putting the same blind faith in Anandtech, which doesn't do in-game testing.

 

I'm also not saying that either. Every game has its highs and lows, and the in-game benchmark is always lower than the real thing anyway. Also, you have no proof that it only performs 45% as well. You have one piece of anecdotal evidence that doesn't tell an ounce of the real story.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I'm not buying into them. I'm just saying your outright dismissal is flawed, because you're putting the same blind faith in Anandtech, which doesn't do in-game testing.

 

I'm also not saying that either. Every game has its highs and lows, and the in-game benchmark is always lower than the real thing anyway. Also, you have no proof that it only performs 45% as well. You have one piece of anecdotal evidence that doesn't tell an ounce of the real story.

They don't need to if the game has its own built-in benchmark. Benchmarks are written purposely to match the performance impact you will experience in-game at that graphics quality. Tom's Hardware could have been driving out in the middle of nowhere for all five of their tests, with two of them staring off into space (skybox frames). They earned their bad reputation among the professional tech communities, and they are continuing to drive the same garbage that earned them that reputation (LTT members seem to be the only ones oblivious to what's in front of them).

 

The in-game benchmark is derived from scenes that the player will actually experience in the game (in-game camera movement). So you can keep turning a blind eye to boost the company you root for, although anyone with common sense can easily tell Tom's results are beyond skewed, as no one cares how many frames you get while you sit there and look at the sky. Wait until more benchmarks leak, and I guarantee they will fall into place with Anandtech's numbers. 122 FPS... shit, I'd be lucky to get that with my HD 5870. Once I get my other hard drive installed, I'll benchmark it to prove it.


They don't need to if the game has its own built-in benchmark. Benchmarks are written purposely to match the performance impact you will experience in-game at that graphics quality. Tom's Hardware could have been driving out in the middle of nowhere for all five of their tests, with two of them staring off into space (skybox frames). They earned their bad reputation among the professional tech communities, and they are continuing to drive the same garbage that earned them that reputation (LTT members seem to be the only ones oblivious to what's in front of them).

 

The in-game benchmark is derived from scenes that the player will actually experience in the game (in-game camera movement). So you can keep turning a blind eye to boost the company you root for, although anyone with common sense can easily tell Tom's results are beyond skewed, as no one cares how many frames you get while you sit there and look at the sky. Wait until more benchmarks leak, and I guarantee they will fall into place with Anandtech's numbers. 122 FPS... shit, I'd be lucky to get that with my HD 5870. Once I get my other hard drive installed, I'll benchmark it to prove it.

Except internal benchmarks don't live up to that at all.

 

You also have no proof that Tom's Hardware did that. Present proof and I will believe; that's how this works and you know it. I'm waiting for independent reviews like everyone else, and since neither outlet provides video of its testing, we can conclude nothing. Hell, maybe Tom's overclocked the living hell out of the RAM and iGPU and got a golden chip. No one knows.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I hate looking at desktop CPUs where half the damn chip is dedicated to an iGPU that will never be used.

With DX12, that iGPU will eventually come into use to augment your dGPU's work, especially since CPU-based PhysX went open-source and can now be re-purposed for OpenCL/OpenACC/OpenMP and become platform-agnostic. Intel, IBM, AMD, and ARM all see the future as heterogeneous integration. I'm sorry, but betting against Intel in the long run is stupidity on its own. Betting that all four of them are full of crap when they agree on the same thing? There's not a word strong enough to describe that level of lunacy.
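
Just to make the "iGPU as a compute helper" idea concrete, here's a minimal sketch in plain OpenCL 1.x (my own illustration, not code from any game and not the DX12 multi-adapter API; it only enumerates devices): on a box with an iGPU plus a dGPU, both show up, and a ported workload like the open-sourced CPU PhysX could in principle be queued to whichever has spare cycles.

```cpp
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_uint nplat = 0;
    clGetPlatformIDs(0, nullptr, &nplat);   // how many platforms (vendors)?
    cl_platform_id plats[16];
    if (nplat > 16) nplat = 16;
    clGetPlatformIDs(nplat, plats, nullptr);

    for (cl_uint p = 0; p < nplat; ++p) {
        cl_uint ndev = 0;
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 0, nullptr, &ndev) != CL_SUCCESS)
            continue;                        // this platform exposes no GPUs
        cl_device_id devs[8];
        if (ndev > 8) ndev = 8;
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, ndev, devs, nullptr);
        for (cl_uint d = 0; d < ndev; ++d) {
            char name[256] = {0};
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("GPU %u.%u: %s\n", p, d, name); // iGPU and dGPU both appear
        }
    }
}
```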

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Except internal benchmarks don't live up to that at all.

 

You also have no proof that Tom's Hardware did that. Present proof and I will believe; that's how this works and you know it. I'm waiting for independent reviews like everyone else, and since neither outlet provides video of its testing, we can conclude nothing. Hell, maybe Tom's overclocked the living hell out of the RAM and iGPU and got a golden chip. No one knows.

Have you ever developed a game?

 

Now you're trying to reverse the roles. What I am asking for is proof from you that Tom's had a valid testing methodology behind their results, and an explanation for why the numbers they present are completely out of whack across all of the hardware listed. Right now Anandtech's numbers are about as real as it gets, and we will see additional benchmarks (like I've stated above) that coincide with them. If Tom's did overclock, then they are advertising the product as if they took it out of the box and just ran with it. Are you starting to grasp why hardcore techies don't label Tom's Hardware as a reliable source? It's always some half-assed vomit of an excuse with everything they do.


Have you ever developed a game?

 

Now you're trying to reverse the roles. What I am asking for is proof from you that Tom's had a valid testing methodology behind their results, and an explanation for why the numbers they present are completely out of whack across all of the hardware listed. Right now Anandtech's numbers are about as real as it gets, and we will see additional benchmarks (like I've stated above) that coincide with them. If Tom's did overclock, then they are advertising the product as if they took it out of the box and just ran with it. Are you starting to grasp why hardcore techies don't label Tom's Hardware as a reliable source? It's always some half-assed vomit of an excuse with everything they do.

A couple of 3D ones, but nothing AAA, and most of my work is admittedly for the Atari 2600 in raw assembly language (no cheating with the BASIC that came out for it later). But the point still stands: the bundled benchmarks are usually several fps slower than what you'll experience at the same scene in-game, and it's usually the overhead of the scripted camera movement that does it, along with the extra utility built into the game (which isn't even accurate most of the time) to track the frame rate.

 

No, you made the claim that they were fabricated/outright false. I said you couldn't conclude that. The burden of proof is on you, since you made the initial claim, and I provided very plausible reasons why you couldn't draw that conclusion. And no, Anandtech has been very wrong if you compare its numbers against JayzTwoCents and other independents who actually show you, in real time, the numbers they get.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Whoa, now that is a nice iGPU, with its own fast eDRAM.

 

Now, Intel, all you need to do is put that nice Iris Pro IN A DAMN PENTIUM AND i3!

 

That would be the perfect thing to do, because most i5/i7 users will have a dedicated GPU. Let people on a budget play their MOBAs at high FPS on the iGPU.

 

An Iris Pro Pentium would sell like hotcakes.


I don't get that argument either. No one has experience until they do something, and Intel is doing something. Intel has agreements with both sides for their GPU tech.

Intel didn't just pull this out of their asses with no idea of how to sustain it. Iris Pro 5200 was the beta test; 6200 is gen 1. It works, and it only gets better.

Intel knows that APUs are part of the future; theirs just happen to be... better? It's all fun to say Zen will crush Broadwell. It had better, since we'll be on 7xxx-series Iris Pro by this fall with Skylake. We have only current products to compare, and currently Intel is winning. You can't argue with that.

 

I'll be the one that doesn't get this either. Sure, Intel doesn't have drivers on the same level as AMD or Nvidia, but they do have the money to throw at the problem. I mean, they can afford to throw a couple of billion at their mobile sector for R&D and report that as a financial loss. Pretty sure even Nvidia, as the leader of the dedicated GPU market, would have a hard time doing that, not to mention AMD.

CPU: AMD Ryzen 7 3800X Motherboard: MSI B550 Tomahawk RAM: Kingston HyperX Predator RGB 32 GB (4x8GB) DDR4 GPU: EVGA RTX3090 FTW3 SSD: ADATA XPG SX8200 Pro 512 GB NVME | Samsung QVO 1TB SSD  HDD: Seagate Barracuda 4TB | Seagate Barracuda 8TB Case: Phanteks ECLIPSE P600S PSU: Corsair RM850x


I am a gamer, not because I don't have a life, but because I choose to have many.

 


 

I can't believe you guys would egg people on by endorsing such results when it only performs at about 45% of what Tom's Hardware advertised.

 

I don't think I'm egging people on. As I've said before, Intel now has the strongest integrated graphics solution on the market, but it's generally not worth buying unless you don't have room for a dedicated graphics card in your system. That conclusion doesn't really change whether you go by Anandtech's numbers or Tom's.

 


Whoa, now that is a nice iGPU, with its own fast eDRAM.

 

Now, Intel, all you need to do is put that nice Iris Pro IN A DAMN PENTIUM AND i3!

 

That would be the perfect thing to do, because most i5/i7 users will have a dedicated GPU. Let people on a budget play their MOBAs at high FPS on the iGPU.

 

An Iris Pro Pentium would sell like hotcakes.

Intel needs AMD alive for a few more years, until Nvidia's been kicked out of the HPC space. At that point it can just buy Nvidia outright, and then grinding AMD to dust leaves Intel uncontested. Before that point, it would just hand lead time to whoever acquired an x86 license out of AMD's dissolution.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


A couple of 3D ones, but nothing AAA, and most of my work is admittedly for the Atari 2600 in raw assembly language (no cheating with the BASIC that came out for it later). But the point still stands: the bundled benchmarks are usually several fps slower than what you'll experience at the same scene in-game, and it's usually the overhead of the scripted camera movement that does it, along with the extra utility built into the game (which isn't even accurate most of the time) to track the frame rate.

 

No, you made the claim that they were fabricated/outright false. I said you couldn't conclude that. The burden of proof is on you, since you made the initial claim, and I provided very plausible reasons why you couldn't draw that conclusion. And no, Anandtech has been very wrong if you compare its numbers against JayzTwoCents and other independents who actually show you, in real time, the numbers they get.

Then you would know there's no benefit to making the benchmark heavier on the hardware than it needs to be. Developers take into account any extra AI involvement and everything else in between when writing these benchmark routines. I have several games whose benchmarks actually give me higher frame-rate results than what I see in-game, and I have yet to see a benchmark heavier on hardware than real-time in-game performance. There shouldn't be any overhead in calculating frame rate either, as that is extremely fast and easily done. I've done a lot of EndScene hooking for aimbots I've written that draw chams, ESP, hitboxes, FPS, etc. in-game without any performance impact at all. So keeping track of the frame rate, and developers accounting for any additional AI overhead, shouldn't have any negative effect on benchmarking (and even if it did, you could manipulate the scene to compensate, such as cutting back the population count).
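
To put a number on how cheap frame-rate tracking is, here's a generic sketch of the per-frame work such a counter does (my own illustration, not code from any game, benchmark, or actual hook): one clock read and a few arithmetic operations, which is noise next to rendering the frame.

```cpp
#include <chrono>

// Minimal frame-rate tracker of the kind an overlay or EndScene hook would
// run once per frame.
struct FpsCounter {
    using clock = std::chrono::steady_clock;
    clock::time_point last = clock::now();
    int frames = 0;
    double fps = 0.0;   // an overlay would simply draw this value

    void on_frame() {                     // call once per present/EndScene
        ++frames;
        auto now = clock::now();
        std::chrono::duration<double> elapsed = now - last;
        if (elapsed.count() >= 1.0) {     // refresh the reading once a second
            fps = frames / elapsed.count();
            frames = 0;
            last = now;
        }
    }
};
```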

 

No, I made the claim that they are likely skewed in some way (which they clearly are if you look at them on their own). There's no way you're going to average 122 FPS in GTA V when a GPU easily 5x as strong isn't capable of it. There's a fine line between buying into these results (especially from a non-trusted source) and weighing them against what Intel specified as its own performance margins, along with what you can predict from the hardware. If you can find me someone running one of these chips who gets a 122 FPS average while racing through town in GTA V, you let me know. Wait until we get more numbers; I guarantee you no one else is going to establish an average frame rate of 122 FPS in GTA V with this chip. There's not even a chance that the Iris Pro 6200 will outperform my HD 5870. That alone is reason enough to take Tom's numbers with a grain of salt.

 

[image: a pile of salt]

 

I don't think I'm egging people on. As I've said before, Intel now has the strongest integrated graphics solution on the market, but it's generally not worth buying unless you don't have room for a dedicated graphics card in your system. That conclusion doesn't really change whether you go by Anandtech's numbers or Tom's.

The debate has never been about a performance comparison with existing products. It's about how one source establishes twice the frame rate of another credible source, especially with that average sitting far beyond what much stronger hardware could ever achieve. Tom's is clearly pumping some kind of Intel blood to establish the results they have, when more reliable sources are debunking their flawed testing.

 

Intel needs AMD alive for a few more years, until Nvidia's been kicked out of the HPC space. At that point it can just buy Nvidia outright, and then grinding AMD to dust leaves Intel uncontested. Before that point, it would just hand lead time to whoever acquired an x86 license out of AMD's dissolution.

Zen is slated to outperform Skylake by a decent margin, and AMD is still far ahead in terms of graphics architecture. How exactly is Intel going to grind AMD's bones to make its bread if another K8-vs-P68 era is upon us? Have movies taught you anything? You never count out the underdog.


Then you would know there's no benefit to making the benchmark heavier on the hardware than it needs to be. Developers take into account any extra AI involvement and everything else in between when writing these benchmark routines. I have several games whose benchmarks actually give me higher frame-rate results than what I see in-game, and I have yet to see a benchmark heavier on hardware than real-time in-game performance. There shouldn't be any overhead in calculating frame rate either, as that is extremely fast and easily done. I've done a lot of EndScene hooking for aimbots I've written that draw chams, ESP, hitboxes, FPS, etc. in-game without any performance impact at all. So keeping track of the frame rate, and developers accounting for any additional AI overhead, shouldn't have any negative effect on benchmarking (and even if it did, you could manipulate the scene to compensate, such as cutting back the population count).

 

No, I made the claim that they are likely skewed in some way (which they clearly are if you look at them on their own). There's no way you're going to average 122 FPS in GTA V when a GPU easily 5x as strong isn't capable of it. There's a fine line between buying into these results (especially from a non-trusted source) and weighing them against what Intel specified as its own performance margins, along with what you can predict from the hardware. If you can find me someone running one of these chips who gets a 122 FPS average while racing through town in GTA V, you let me know. Wait until we get more numbers; I guarantee you no one else is going to establish an average frame rate of 122 FPS in GTA V with this chip. There's not even a chance that the Iris Pro 6200 will outperform my HD 5870. That alone is reason enough to take Tom's numbers with a grain of salt.

 

-snip-


The debate has never been about a performance comparison with existing products. It's about how one source establishes twice the frame rate of another credible source, especially with that average sitting far beyond what much stronger hardware could ever achieve. Tom's is clearly pumping some kind of Intel blood to establish the results they have, when more reliable sources are debunking their flawed testing.

Just to name a few: GTA V, BF4, BF Hardline, and (I think, but I'm double-checking) Metro: Last Light all have integrated benches that perform worse than in-game. The benchmark is about the last thing a studio cares about when trying to launch a game on time; those finer nuances don't matter as much as "does it work?"

 

At this resolution the 750 would easily average that. And no, it's not totally clear. Disagreeing with another source means nothing when their testing methods are different and not directly comparable due to missing information. I withhold judgment until independent reviews come in with results I can verify with my own eyes.

 

And no one has debunked their "flawed" testing, unless you'd like to provide a credible source to that effect. And while I'm here, did you get the message I sent you requesting a bit of assistance? I wouldn't be surprised if you'd tossed me onto your ignore list (though I haven't put you, or anyone on this forum, on mine), so I'm double-checking.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I'll be the one that doesn't get this either. Sure, Intel doesn't have drivers on the same level as AMD or Nvidia, but they do have the money to throw at the problem. I mean, they can afford to throw a couple of billion at their mobile sector for R&D and report that as a financial loss. Pretty sure even Nvidia, as the leader of the dedicated GPU market, would have a hard time doing that, not to mention AMD.

 

That's what pisses me off about the fanboys.

 

Intel spends on mobile write-offs in a quarter what most companies make in an entire year. Intel does not play around in areas it wants to develop in. Intel could have outstanding driver support in 6-12 months if they cared, and with Iris Pro they care a lot.

 

And unlike Nvidia or AMD, Intel has plenty of spare labour and money to spend on collaborating with developers to refine its drivers and ensure the best performance. Intel is what the people at AMD's GPU division and at Nvidia should be afraid of. They have the patents and they have the desire; they could make dGPUs to rival the two, let alone come up with very competent if not class-leading iGPUs.


Just to name a few: GTA V, BF4, BF Hardline, and (I think, but I'm double-checking) Metro: Last Light all have integrated benches that perform worse than in-game. The benchmark is about the last thing a studio cares about when trying to launch a game on time; those finer nuances don't matter as much as "does it work?"

 

At this resolution the 750 would easily average that. And no, it's not totally clear. Disagreeing with another source means nothing when their testing methods are different and not directly comparable due to missing information. I withhold judgment until independent reviews come in with results I can verify with my own eyes.

 

And no one has debunked their "flawed" testing, unless you'd like to provide a credible source to that effect. And while I'm here, did you get the message I sent you requesting a bit of assistance? I wouldn't be surprised if you'd tossed me onto your ignore list (though I haven't put you, or anyone on this forum, on mine), so I'm double-checking.

I can compile a list of games as well where benchmarking has given me higher frame rates than in-game performance. I posted some numbers on here a while back (in a thread about overclocking) that I could have validated if I hadn't wiped my machine to install Windows 10.

 

I don't agree with Tom's results because they are irrational. You would need in excess of 100 Gen8 EUs to achieve an average frame rate like that with a real-world testing methodology. I don't think anyone here (who knows better) secretly agrees with them establishing such numbers. We can dig up GTA V benchmarks where even the GTX 980 only hits 130 FPS at those settings. Like I said, I don't care how they sat there juicing up the frame rate with skybox frames. We want to know where the GPU stands in real-world performance, which is where Anandtech's numbers look completely accurate (slightly faster than Kaveri).
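
For what it's worth, the back-of-the-envelope behind that EU figure looks like this (a hedged sketch: it assumes the 6200's 48 EUs, takes the ~45% figure from earlier in the thread as the "real-world" average, and pretends performance scales linearly with EU count, which it never quite does):

```cpp
#include <cstdio>

int main() {
    const double eus      = 48.0;   // Iris Pro 6200 (Broadwell GT3e) EU count
    const double toms_fps = 122.0;  // Tom's reported GTA V average
    const double real_fps = 0.45 * toms_fps;  // ~55 FPS, per the 45% estimate above
    // Naive linear scaling: how many EUs would it take to really average 122?
    std::printf("%.0f EUs\n", eus * toms_fps / real_fps); // ~107
}
```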

 

Their testing is clearly flawed in order to achieve frame rates that high. Like I said, I don't care about skybox frames or standing in an empty desert if your frame rate is going to dip from 122 down to the 40-50s once you hit areas with population and objects. The game doesn't take place in the sky or an empty wasteland, which are the only two places I can see them even remotely establishing such frame rates (and even then I question how they got them that high). I did get the message, but I've been busy working on LTT lately (some big updates and changes), so I haven't addressed it yet.

 

That's what pisses me off about the fanboys.

 

Intel spends on mobile write-offs in a quarter what most companies make in an entire year. Intel does not play around in areas it wants to develop in. Intel could have outstanding driver support in 6-12 months if they cared, and with Iris Pro they care a lot.

 

And unlike Nvidia or AMD, Intel has plenty of spare labour and money to spend on collaborating with developers to refine its drivers and ensure the best performance. Intel is what the people at AMD's GPU division and at Nvidia should be afraid of. They have the patents and they have the desire; they could make dGPUs to rival the two, let alone come up with very competent if not class-leading iGPUs.

It won't happen any time soon, because the majority of games are written off: developers have stopped supporting them and won't go back to help Intel work on its crappy drivers (which Intel doesn't even develop itself a lot of the time). There are simply too many developers for Intel to ever get in contact with, so it would have to build its optimisations off the game binaries themselves (quite a few of which can't even be purchased anymore). The only thing Intel can do is start now and work off today's games to build up its own optimised drivers. So maybe in 5-10 years they will have enough coverage of the gaming industry to throw out a discrete-grade graphics card. Although I don't think Intel takes much interest in graphics for gaming; they are more oriented towards compute performance.


That's what pisses me off about the fanboys.

 

Intel spends on mobile write-offs in a quarter what most companies make in an entire year. Intel does not play around in areas it wants to develop in. Intel could have outstanding driver support in 6-12 months if they cared, and with Iris Pro they care a lot.

 

And unlike Nvidia or AMD, Intel has plenty of spare labour and money to spend on collaborating with developers to refine its drivers and ensure the best performance. Intel is what the people at AMD's GPU division and at Nvidia should be afraid of. They have the patents and they have the desire; they could make dGPUs to rival the two, let alone come up with very competent if not class-leading iGPUs.

Intel lacks the patents on GPU IP, and if Nvidia pulls the plug, Intel only has access to what AMD allows it in their cross-license, which AMD could renegotiate if Zen goes south against the Skylake custom Xeons with Cannonlake graphics and the more tightly integrated Altera FPGAs (Intel just bought out Altera and has begun shifting production to older Intel fabs, which are being retrofitted). Now, Intel does have the drive and the money; that's indisputable. For the time being, AMD is safe, because Intel needs that shield against being forced to grant an x86 license to someone with better financial standing (like Samsung, IBM, Apple, or Nvidia). But if Intel brings down Nvidia's HPC empire (and IBM's in the process, since IBM is the biggest beneficiary of Tesla accelerators), then Nvidia will no longer be in competition with Intel, leaving it open for buyout, which would render AMD powerless. At that point AMD could be crushed underfoot, and Intel wouldn't have to care who gets an x86 license and AMD's IP; it would likely poach Papermaster, Keller, and Koduri, leaving the new parent company stranded for years, even if it was one of the big four I mentioned.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Well, it looks like both the $1.5B Intel paid Nvidia and the AMD cross-license for various IP have paid off.

 

It's a shame some people whinge about companies not pushing tech further, and then, when Intel does, they whinge because it's not AMD doing the pushing.

Grammar and spelling are not indicative of intelligence/knowledge. Not having the same opinion does not always mean a lack of understanding.


Well, it looks like both the $1.5B Intel paid Nvidia and the AMD cross-license for various IP have paid off.

 

It's a shame some people whinge about companies not pushing tech further, and then, when Intel does, they whinge because it's not AMD doing the pushing.

Intel needs to do the pushing, as there would be no point in AMD pushing forward while the competition sits dead in the water; that's the same thing Intel has been doing since Sandy Bridge. Now that Intel has turned up a little heat on AMD, hopefully we will see much beefier iGPUs with Zen, as the one thing AMD doesn't want to lose for an extended period is its known dominance in APUs, which drive 70% of its total revenue. Then again, AMD hasn't been big on APUs lately, other than Carrizo, as it wants to transition back to being a high-performance x86 supplier. That said, I don't think they will let up on innovating their APUs enough to at least remain competitive.


I can compile a list of games as well where benchmarking has given me higher frame rates than in-game performance. I posted some numbers on here a while back (in a thread about overclocking) that I could have validated if I hadn't wiped my machine to install Windows 10.

 

I don't agree with Tom's results because they are irrational. You would need in excess of 100 Gen8 EUs to achieve an average frame rate like that with a real-world testing methodology. I don't think anyone here (who knows better) secretly agrees with them establishing such numbers. We can dig up GTA V benchmarks where even the GTX 980 only hits 130 FPS at those settings. Like I said, I don't care how they sat there juicing up the frame rate with skybox frames. We want to know where the GPU stands in real-world performance, which is where Anandtech's numbers look completely accurate (slightly faster than Kaveri).

 

Their testing is clearly flawed in order to achieve frame rates that high. Like I said, I don't care about skybox frames or standing in an empty desert if your frame rate is going to dip from 122 down to the 40-50s once you hit areas with population and objects. The game doesn't take place in the sky or an empty wasteland, which are the only two places I can see them even remotely establishing such frame rates (and even then I question how they got them that high). I did get the message, but I've been busy working on LTT lately (some big updates and changes), so I haven't addressed it yet.


It won't happen any time soon, because the majority of games are written off: developers have stopped supporting them and won't go back to help Intel work on its crappy drivers (which Intel doesn't even develop itself a lot of the time). There are simply too many developers for Intel to ever get in contact with, so it would have to build its optimisations off the game binaries themselves (quite a few of which can't even be purchased anymore). The only thing Intel can do is start now and work off today's games to build up its own optimised drivers. So maybe in 5-10 years they will have enough coverage of the gaming industry to throw out a discrete-grade graphics card. Although I don't think Intel takes much interest in graphics for gaming; they are more oriented towards compute performance.

Fair enough, though I'll wait for independent proof, as I've said. And thank you. It's a rudimentary pair of programs, but I think my problem is that I've stared at it too long and gotten locked into a pattern; I've run out of ideas.

 

And I partly disagree with your thoughts on drivers. Intel should already have access, since devs SHOULD be coming to it for help optimising the CPU-based portions of their games. Getting graphics access can't be much more difficult at that point.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Intel needs to do the pushing, as there would be no point in AMD pushing forward while the competition sits dead in the water; that's the same thing Intel has been doing since Sandy Bridge. Now that Intel has turned up a little heat on AMD, hopefully we will see much beefier iGPUs with Zen, as the one thing AMD doesn't want to lose for an extended period is its known dominance in APUs, which drive 70% of its total revenue. Then again, AMD hasn't been big on APUs lately, other than Carrizo, as it wants to transition back to being a high-performance x86 supplier. That said, I don't think they will let up on innovating their APUs enough to at least remain competitive.

AMD needs APUs and HSA to take off in order to gain competitive status in more lucrative markets, and even then, Intel could join the HSA Foundation and put the fire right back on AMD. 2016-2020 will be an interesting dance, for certain.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

