
Intel Says Iris and Iris Pro Graphics Can Outperform 80% of Discrete GPUs – Casual and Mainstream Users Don’t Need dGPUs

Mr_Troll

Cuz of DDR4

Read the test machine data and you'll find that it has nothing to do with it.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd



I'd say you're talking crazy.

One day I will be able to play Monster Hunter Frontier in French/Italian/English on my PC, it's just a matter of time... 4 5 6 7 8 9 years later: It's finally coming!!!

Phones: iPhone 4S/SE | LG V10 | Lumia 920 | Samsung S24 Ultra

Laptops: Macbook Pro 15" (mid-2012) | Compaq Presario V6000

Other: Steam Deck

<>EVs are bad, they kill the planet and remove freedoms to some</>


1.15 TFLOPS???

That's worse than the Xbox One, which has 1.31 TFLOPS. The PS4 sits at 1.84 TFLOPS...

If you cannot even beat consoles, you can fuck right off with your "can beat 80% of dGPUs".

Yes, it can beat 80% of what was around FIVE YEARS AGO...

@Partickjp93

You said GT4e would beat or match a 750 Ti, right???

You claimed this. RIGHT!!!

Because at 1.15 TFLOPS, well...

AMD Radeon R9 360 - 1.5 TFLOPS

http://www.techpowerup.com/gpudb/2661/radeon-r9-360-oem.html

Nvidia GTX 750 Ti - 1.3 TFLOPS

http://www.techpowerup.com/gpudb/2548/geforce-gtx-750-ti.html

These are reference models. Custom PCB versions WILL be faster than this, notably so.

Now let's see how the new 14/16 nm GPUs do... if we go by previous gains from node shrinks, we should see around a 20-30% raw performance increase.
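
For reference, here is a rough sketch of where those theoretical FP32 numbers come from. The shader counts and clocks are approximate reference specs, not figures from this thread's test data, so treat them as assumptions; the last column simply applies the 20-30% uplift mentioned above.

```python
# Theoretical FP32 throughput: shader lanes * 2 ops per clock (FMA) * clock.
# Shader counts and clocks below are approximate reference specs (assumptions).
def tflops(lanes: int, clock_ghz: float) -> float:
    return lanes * 2 * clock_ghz / 1000.0  # GFLOPS -> TFLOPS

cards = {
    "Skylake GT4e (72 EUs x 8 FP32 lanes)": tflops(72 * 8, 1.00),  # ~1.15 TFLOPS
    "GTX 750 Ti": tflops(640, 1.02),                               # ~1.3 TFLOPS
    "Xbox One": tflops(768, 0.853),                                # ~1.31 TFLOPS
    "PS4": tflops(1152, 0.80),                                     # ~1.84 TFLOPS
}

for name, t in cards.items():
    print(f"{name}: ~{t:.2f} TFLOPS (+20-30% from a node shrink: "
          f"~{1.2 * t:.2f}-{1.3 * t:.2f} TFLOPS)")
```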

Nice try, Intel. You still suck.

Xbone and Pisspoor... and think of it this way: at least the CPU side of things doesn't blow.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Xbone and Pisspoor... and think of it this way: at least the CPU side of things doesn't blow.

Honestly man... the CPU side doesn't matter.

At 0.737 TFLOPS, the iGPU still isn't strong enough to even push an Athlon 860K to its limit.

 

http://www.techpowerup.com/gpudb/2532/radeon-r7-graphics.html

 

To give you an idea of how much GPU power is needed before an 860K becomes the bottleneck:

http://www.techpowerup.com/gpudb/2734/radeon-r9-380.html

AMD R9 380, 3.47 TFLOPS

That is the limit of the 860K...

 

How much power is needed to limit the Broadwell 5675C?

Correct answer: 2x Fury in CrossFire (at least that's when the 4690K starts showing bottlenecks).

http://www.techpowerup.com/gpudb/2736/radeon-r9-fury.html

So that means roughly 14.2 TFLOPS.
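
To make the comparison concrete, here is a toy sketch. The per-CPU TFLOPS ceilings are just the rough estimates quoted above, not measured data.

```python
# Toy bottleneck check using the rough "GPU ceiling per CPU" figures quoted in
# this post. The ceilings and pairings are estimates, purely illustrative.
CPU_GPU_CEILING_TFLOPS = {
    "Athlon X4 860K": 3.47,   # ~R9 380, per the TechPowerUp link above
    "Core i5-5675C": 14.2,    # ~2x R9 Fury in CrossFire, per the estimate above
}

def limiting_side(cpu: str, gpu_tflops: float) -> str:
    """Say which side runs out of headroom first under this crude model."""
    return "CPU-limited" if gpu_tflops > CPU_GPU_CEILING_TFLOPS[cpu] else "GPU-limited"

print(limiting_side("Athlon X4 860K", 0.737))  # Kaveri R7 iGPU -> GPU-limited
print(limiting_side("Core i5-5675C", 1.15))    # GT4e estimate  -> GPU-limited
```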

 

 

Meaning the iGPUs in Intel's and even AMD's APUs aren't strong enough to overpower even a shitty Bulldozer-derived "quad" core...


@Prysin FLOPS aren't everything, and DDR4 has nothing to do with Intel's Skylake graphics being better. You can get 3000+ MHz kits of DDR3 too. Intel's architecture is what's improving, along with their drivers, which, while they don't fix broken games, are very well optimized for a UMA system and support the full API of both DX11 and OpenGL 4.x, as well as OpenCL 2.x as of Broadwell.
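
A quick peak-bandwidth sketch illustrates the DDR3 point; the kit speeds below are arbitrary examples, not the actual Skylake test-bench configuration.

```python
# Peak theoretical bandwidth: transfer rate (MT/s) * 8 bytes per 64-bit channel
# * number of channels. Kit speeds below are illustrative examples only.
def bandwidth_gb_s(mt_per_s: int, channels: int = 2) -> float:
    return mt_per_s * 8 * channels / 1000.0

print(bandwidth_gb_s(3000))  # DDR3-3000, dual channel -> 48.0 GB/s
print(bandwidth_gb_s(2133))  # DDR4-2133, dual channel -> ~34.1 GB/s
print(bandwidth_gb_s(2400))  # DDR4-2400, dual channel -> ~38.4 GB/s
```

A fast DDR3 kit has more raw bandwidth than the DDR4 speeds Skylake launched with, so the memory type by itself can't be what's driving the iGPU gains.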

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


@Prysin FLOPS aren't everything, and DDR4 has nothing to do with Intel's Skylake graphics being better. You can get 3000+ MHz kits of DDR3 too. Intel's architecture is what's improving, along with their drivers, which, while they don't fix broken games, are very well optimized for a UMA system and support the full API of both DX11 and OpenGL 4.x, as well as OpenCL 2.x as of Broadwell.

Correct. FLOPS are not indicative of gaming performance. If FLOPS were all we needed to compare which cards perform better in games, everyone would be using Kepler Teslas.

 

I was hoping to avoid this thread, because normally when Intel's iGPUs get brought up I get yelled at, but I actually agree with your statements here. Besides, it's not like that 80% number is far-fetched. There are plenty of dGPUs that are not meant to be used for gaming; the entire "GT" series comes to mind. Not to mention MXM modules in laptops are technically dGPUs too, and GT4e should be competing with the GTX 960M and everything below it.

 

To reiterate my previous feelings on Skylake GT4e, I feel it will be stronger than a GTX 750 but weaker than a GTX 750 Ti. You felt it would match or surpass a GTX 950. Regardless of who is right, that kind of performance in an iGPU is going to be groundbreaking, and easily worth a $100-$150 premium on a CPU in my eyes. Think about it. You could make extremely small form factor builds (sub-2L) and power them with laptop bricks. Or just imagine laptops themselves using them: ultra-thin netbooks with gaming-class iGPUs. There is not a doubt in my mind that if this thing delivers GTX 750 performance, it could max a few titles at 720p. If it matches the GTX 950, then it's even better. It also means Nvidia and AMD will have to start releasing better entry-level cards, so budget gamers still win in absolutely every regard.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


 

 

Intel Says Iris and Iris Pro Graphics Can Outperform 80% of Discrete GPUs – Casual and Mainstream Users Don’t Need dGPUs

Totally agree - it's because 80% of discrete GPUs are probably things like the GeForce 210 in random prebuilts. Not to mention iGPUs are actually improving really fast, so they can tie with some "good" low-end cards. Obviously Intel isn't saying they can beat a 980 Ti, but for lesser setups, yeah, it's getting good :)

Solve your own audio issues  |  First Steps with RPi 3  |  Humidity & Condensation  |  Sleep & Hibernation  |  Overclocking RAM  |  Making Backups  |  Displays  |  4K / 8K / 16K / etc.  |  Do I need 80+ Platinum?

If you can read this you're using the wrong theme.  You can change it at the bottom.


The figure is more than likely correct too. It's easy to forget as enthusiasts that a 970 or a 380 seems common, but most people who game play on 1080p monitors, often on laptops, and at that resolution with lower quality settings these current iGPUs will run things smoothly without needing a dGPU to help.

I mean, games like CS:GO run on a potato with decent FPS.


1.15 TFLOPS???

That's worse than the Xbox One, which has 1.31 TFLOPS. The PS4 sits at 1.84 TFLOPS...

If you cannot even beat consoles, you can fuck right off with your "can beat 80% of dGPUs".

Yes, it can beat 80% of what was around FIVE YEARS AGO...

@Partickjp93

You said GT4e would beat or match a 750 Ti, right???

You claimed this. RIGHT!!!

Because at 1.15 TFLOPS, well...

AMD Radeon R9 360 - 1.5 TFLOPS

http://www.techpowerup.com/gpudb/2661/radeon-r9-360-oem.html

Nvidia GTX 750 Ti - 1.3 TFLOPS

http://www.techpowerup.com/gpudb/2548/geforce-gtx-750-ti.html

These are reference models. Custom PCB versions WILL be faster than this, notably so.

Now let's see how the new 14/16 nm GPUs do... if we go by previous gains from node shrinks, we should see around a 20-30% raw performance increase.

Nice try, Intel. You still suck.

5 years ago? We had the GTX 580 and such back then; those cards can game at 4K with lowered details, Iris cannot, LOL, and Fermi is far faster in every direction. TFLOPS mean jack shit when crunching graphics.

CONSOLE KILLER: Pentium III 700mhz . 512MB RAM . 3DFX VOODOO 3 SLi

 

 


Correct. FLOPS are not indicative of gaming performance. If FLOPS were all we needed to compare which cards perform better in games, everyone would be using Kepler Teslas.

I was hoping to avoid this thread, because normally when Intel's iGPUs get brought up I get yelled at, but I actually agree with your statements here. Besides, it's not like that 80% number is far-fetched. There are plenty of dGPUs that are not meant to be used for gaming; the entire "GT" series comes to mind. Not to mention MXM modules in laptops are technically dGPUs too, and GT4e should be competing with the GTX 960M and everything below it.

To reiterate my previous feelings on Skylake GT4e, I feel it will be stronger than a GTX 750 but weaker than a GTX 750 Ti. You felt it would match or surpass a GTX 950. Regardless of who is right, that kind of performance in an iGPU is going to be groundbreaking, and easily worth a $100-$150 premium on a CPU in my eyes. Think about it. You could make extremely small form factor builds (sub-2L) and power them with laptop bricks. Or just imagine laptops themselves using them: ultra-thin netbooks with gaming-class iGPUs. There is not a doubt in my mind that if this thing delivers GTX 750 performance, it could max a few titles at 720p. If it matches the GTX 950, then it's even better. It also means Nvidia and AMD will have to start releasing better entry-level cards, so budget gamers still win in absolutely every regard.

The thing is, though, you need to look not only at architecture but also at the process node.

Kaveri performs slightly better than the HD 530. However, the crucial thing to notice is that in BOTH Intel's and AMD's cases, these iGPUs are taking up nearly 40-50% of the die area.

These are facts. Known, proven facts.

So with Intel at 14nm and AMD at 28nm, you have to ask yourself: how many more transistors does Intel need to match AMD?

Because let's face it, they can cram A LOT more transistors in there given that they have 14nm FF+...
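
To put a rough number on that headroom, here is a back-of-the-envelope sketch assuming ideal area scaling; real 28nm-to-14nm transitions deliver noticeably less than this.

```python
# Ideal area scaling goes with the square of the feature-size ratio, so this is
# an upper bound on density gain, not a measurement of any real process.
def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

print(ideal_density_gain(28, 14))  # 4.0x more transistors per mm^2, ideally
print(ideal_density_gain(28, 16))  # ~3.1x for the 16nm-class foundry nodes
```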

The real kicker is not GT4e, which will match or beat a custom 750.

The real kicker is what happens once AMD and Nvidia can cram MUCH more transistors into their chips without suffering insane leakage issues.

This is why Carrizo is a laptop part: AMD simply couldn't get a good sample to run at higher than 35 W on the desktop without so much leakage that it would effectively have to be down-clocked and down-volted to the point of being worse than Kaveri.


The real kicker is what happens once AMD and Nvidia can cram MUCH more transistors into their chips without suffering insane leakage issues.

It means pretty much exactly what I said in my post.

 

 

Regardless of who is right, that kind of performance in an iGPU is going to be groundbreaking, and easily worth a $100-$150 premium on a CPU in my eyes. Think about it. You could make extremely small form factor builds (sub-2L) and power them with laptop bricks. Or just imagine laptops themselves using them: ultra-thin netbooks with gaming-class iGPUs. There is not a doubt in my mind that if this thing delivers GTX 750 performance, it could max a few titles at 720p. If it matches the GTX 950, then it's even better. It also means Nvidia and AMD will have to start releasing better entry-level cards, so budget gamers still win in absolutely every regard.

 

It will be amazing for consumers regardless of how it pans out. I don't care which name is on the product, as long as we get faster parts in the long run. Gaming NUCs are just a step away from where we are now. Custom SFF HTPC builds running games at 1080p medium at 60 fps, being able to carry one of these in the palm of your hand as you travel to a friend's house for a LAN party, or simply taking it into the family room to watch a movie or play a game with the family (assuming you can find a decent local co-op game on the PC platform).

 

I did not come to this thread to defend or attack a brand. I only agreed with the notion that FLOPS cannot be used to accurately determine how a GPU will perform in a game. You can try to draw parallels, and it may indeed help you make an educated guess, but it is not solid enough to matter in a discussion. I stand by my earlier words on how well I think this product will perform. If it matches my expectations, I will be very happy. If it exceeds them, I will still be very happy. Win-win either way.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


5 years ago? We had the GTX 580 and such back then; those cards can game at 4K with lowered details, Iris cannot, LOL, and Fermi is far faster in every direction. TFLOPS mean jack shit when crunching graphics.

FLOP = floating point operation.

AKA compute power, so to speak.

The less compute power, the weaker the card.

With more and more functions in games moving onto compute, FLOPS matter.

Why do you think we saw an R9 290X get so close to a 980 Ti in AotS???

Because compute-wise, the 290X is matched with the stock 980 Ti... yes, FLOPS-wise, they are matched. The 980 Ti has more ROPs and TMUs, which help it pull ahead in raw graphics work.
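
Working that out from the published shader counts, with reference clocks assumed; board-partner boost clocks will shift the exact numbers.

```python
# FP32 throughput = shaders * 2 (FMA) * clock in GHz, reference clocks assumed.
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000.0

print(tflops(2816, 1.00))  # R9 290X: 2816 shaders @ ~1.0 GHz       -> ~5.6 TFLOPS
print(tflops(2816, 1.00))  # GTX 980 Ti: 2816 cores @ ~1.0 GHz base -> ~5.6 TFLOPS
```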

Look at the upcoming DX11-based Rise of the Tomb Raider.

It uses Async Compute for several effects.


It will be amazing for consumers regardless of how it pans out. I don't care which name is on the product, as long as we get faster parts in the long run. Gaming NUCs are just a step away from where we are now. Custom SFF HTPC builds running games at 1080p medium at 60 fps, being able to carry one of these in the palm of your hand as you travel to a friend's house for a LAN party, or simply taking it into the family room to watch a movie or play a game with the family (assuming you can find a decent local co-op game on the PC platform).

 

Would the name of that step happen to be Skull Canyon? ;)


The thing is, though, you need to look not only at architecture but also at the process node.

Kaveri performs slightly better than the HD 530. However, the crucial thing to notice is that in BOTH Intel's and AMD's cases, these iGPUs are taking up nearly 40-50% of the die area.

These are facts. Known, proven facts.

So with Intel at 14nm and AMD at 28nm, you have to ask yourself: how many more transistors does Intel need to match AMD?

Because let's face it, they can cram A LOT more transistors in there given that they have 14nm FF+...

The real kicker is not GT4e, which will match or beat a custom 750.

The real kicker is what happens once AMD and Nvidia can cram MUCH more transistors into their chips without suffering insane leakage issues.

This is why Carrizo is a laptop part: AMD simply couldn't get a good sample to run at higher than 35 W on the desktop without so much leakage that it would effectively have to be down-clocked and down-volted to the point of being worse than Kaveri.

First, Carrizo for desktop is coming in a couple of months. Second, Intel currently uses way fewer transistors in its chips compared to AMD: even the Skylake quads with GT2 are only ~2 billion transistors, compared to the 3.1 billion in Carrizo quads. And the number of transistors does not determine leakage issues.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


First, Carrizo for desktop is coming in a couple of months. Second, Intel currently uses way fewer transistors in its chips compared to AMD: even the Skylake quads with GT2 are only ~2 billion transistors, compared to the 3.1 billion in Carrizo quads. And the number of transistors does not determine leakage issues.

There's no reliable evidence Carrizo is coming to the desktop, AFAIK. All the launch reviews were clear about it being laptop-only. WCCFTech had of course "confirmed" it would be coming to the desktop, until it didn't, and then they waited a while and started the rumor mill up about it again.

5 years ago? We had the GTX 580 and such back then; those cards can game at 4K with lowered details, Iris cannot, LOL, and Fermi is far faster in every direction. TFLOPS mean jack shit when crunching graphics.

Loki, are you sure about a 580 being able to output 4K? I can't find anywhere that shows it can. Max resolution is 2560x1600, and the 590 doesn't seem to support over that either.

The GTX 680 is the first card that seems to support over 2560x1600.


There's no reliable evidence Carrizo is coming to the desktop, AFAIK. All the launch reviews were clear about it being laptop-only. WCCFTech had of course "confirmed" it would be coming to the desktop, until it didn't, and then they waited a while and started the rumor mill up about it again.

No, there have been articles about Bristol Ridge for desktop coming to AM4 shortly, to launch the motherboards ahead of Zen. And the Zen APU is Raven Ridge. We've had this news in the last 3 weeks. Three SKUs were found, one of which tops out at, I believe, 4.1 GHz.

Yup, from BenchLife, which has been incredibly reliable. http://wccftech.com/amd-bristol-ridge-apu-am4-desktop-fp4-mobility/

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


So good that you can't even browse the internet properly on one of those laptops.

FX-8350 @ 4.4 GHz / Sapphire R9 Fury / 2x 8 GB Kingston DDR3 @ 2030 MHz


No, there have been articles about Bristol Ridge for desktop coming to AM4 shortly, to launch the motherboards ahead of Zen. And the Zen APU is Raven Ridge. We've had this news in the last 3 weeks. Three SKUs were found, one of which tops out at, I believe, 4.1 GHz.

Yup, from BenchLife, which has been incredibly reliable. http://wccftech.com/amd-bristol-ridge-apu-am4-desktop-fp4-mobility/

 

Bristol Ridge is not Carrizo. You can tell by the way it's not called Carrizo.


It's probably a true statement, considering that at this point over 80% of discrete GPUs are... well... old. But is it a useful statement?

 

I'd say if you're building a desktop with gaming as a major use, you can fit a GPU into almost any budget. Most of my friends who built on a tighter budget bought used 960s, for instance.

ExMachina (2016-Present) i7-6700k/GTX970/32GB RAM/250GB SSD

Picard II (2015-Present) Surface Pro 4 i5-6300U/8GB RAM/256GB SSD

LlamaBox (2014-Present) i7-4790k/GTX 980Ti/16GB RAM/500GB SSD/Asus ROG Swift

Kronos (2009-2014) i7-920/GTX680/12GB RAM/120GB SSD


Loki, are you sure about a 580 being able to output 4K? I can't find anywhere that shows it can. Max resolution is 2560x1600, and the 590 doesn't seem to support over that either.

The GTX 680 is the first card that seems to support over 2560x1600.

DSR.

CONSOLE KILLER: Pentium III 700mhz . 512MB RAM . 3DFX VOODOO 3 SLi

 

 


Bristol Ridge is not Carrizo. You can tell by the way it's not called Carrizo.

Yes it is. Summit Ridge is Zen CPU, and Raven Ridge is Zen APU.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Yes it is. Summit Ridge is Zen CPU, and Raven Ridge is Zen APU.

 

No, it's not. Carrizo is Carrizo, and Bristol Ridge is Bristol Ridge - a new product lineup, even if it's mostly based on the same tech. Just like Richland isn't Trinity.

