Intel plans to support VESA's Adaptive Sync

zMeul

Y...you're serious?! Really?

And how much do you think one of the CPUs with GT4e will cost? And will it be possible to get them in laptops?

Edit: I forgot that Intel's most powerful iGPUs tend to be in laptops. And since there's supposedly a 50% improvement over Broadwell GT3e, wow...

Yes I'm serious. The release of the desktop "H" processors will be sometime in December or January. Intel is going to put the screws to Nvidia any way it can.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

Any word on the arrival of laptop processors with GT4e? That's really what I care about at the moment. I'd love a laptop with powerful integrated graphics. <3

Why is the God of Hyperdeath SO...DARN...CUTE!?

 

Also, if anyone has their mind corrupted by an anthropomorphic black latex bat, please let me know. I would like to join you.

The GT3e SKU for Skylake will already match the 950. The GT4e SKU will match the 960.

Do you have even the slightest bit of proof to link here showing that this will happen? Because even Iris Pro 6200 cannot match a GTX 750 head to head, let alone a 750 Ti.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 

Iris Pro 6200 at stock is only 10-15% behind the 750. Overclock the 6200 and that gap disappears. With a flat 20% boost over the previous generation, it'll line up perfectly with the 950 at stock. Mind you, it will require top-notch RAM like the Ripjaws V @ 3200 MHz or higher, but that's where the math lines up as of now.

 

Yes, a very nice iGPU for a 4K HTPC too. Do Skylake motherboards have HDMI 2.0 yet?

Open your eyes and break your chains. Console peasantry is just a state of mind.

 

MSI 980Ti + Acer XB270HU 

Overclock the 750 and the gap between them will appear again. Maxwell GPUs overclock quite easily. While the iGPUs Intel is bringing to the table are extremely impressive, the numbers you are projecting sound extremely unlikely given the physics of it. Being able to match a 120W TDP GPU with an iGPU just sounds next to impossible. The GTX 750, on the other hand, has a 55W TDP, so that is far easier to achieve. The 950 is faster than a GTX 760 in most benches I have seen, and is more than 30% faster than a GTX 750 Ti, which is already 15-20% faster than a 750 on average. A flat 20% boost over the current 6200 will not equate to a 950; it will just put it at 750 Ti levels. It will be a long time before I ever see an iGPU matching a GTX 960 (GTX 770-class performance).
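A quick sketch makes that percentage chain concrete (the 10-15%, 15-20%, and 30% figures are the rough estimates quoted in this thread, not benchmark results):

```python
# Relative performance chain, normalized to a stock GTX 750 = 1.00.
# All ratios are the rough estimates quoted in this thread, not measurements.
gtx_750 = 1.00
gtx_750_ti = gtx_750 * 1.175      # 750 Ti: ~15-20% faster than a 750 (midpoint)
gtx_950 = gtx_750_ti * 1.30       # 950: more than 30% faster than a 750 Ti

iris_6200 = gtx_750 / 1.125       # Iris Pro 6200: ~10-15% behind a 750 (midpoint)
skylake_gt3e = iris_6200 * 1.20   # the claimed flat 20% generational boost

print(f"GTX 950      ~ {gtx_950:.2f}")       # ~1.53
print(f"Skylake GT3e ~ {skylake_gt3e:.2f}")  # ~1.07, between a 750 and a 750 Ti
```

Even with generous midpoints, the claimed 20% boost leaves the GT3e part roughly a third short of a stock 950 on this arithmetic.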

dat 128MB L4 cache  :)

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 

I know the chips have support. I don't know about the board designs though.

Every company which has doubted Intel's ability to make market-altering progress on its signature tech has wound up dead or beaten to a pulp. MOS Technology was a fatality, and Texas Instruments and IBM barely escaped intact when Intel threw them all out of the server business in the 80s and 90s. Intel has had to dance around thousands of patents to get this far, and it still has its superior production tech. With two node shrinks' worth of lead, you think it sounds impossible, when the iGPU already accounts for 2/3 of the 6700K's TDP and Intel intends to get near 3x that performance before throwing on the eDRAM? I say you've grown blind to the changes happening around you. Intel is going to try to undermine Nvidia in both HPC and AIO consumer solutions, bleed it dry, and buy it out when it can no longer compete. A 20% boost over the 6200, and then a 50% boost on top of that for the GT4e SKU. Keep your facts straight.

This shouldn't come as a surprise, since Intel was already interested in AMD's MANTLE and asked them about it; AMD said no.

Wasn't Mantle open source? They could've just "rented" the code :P

-snip-

What did Nvidia do to get this from Intel?

ROG X570-F Strix AMD R9 5900X | EK Elite 360 | EVGA 3080 FTW3 Ultra | G.Skill Trident Z Neo 64gb | Samsung 980 PRO 
ROG Strix XG349C Corsair 4000 | Bose C5 | ROG Swift PG279Q

Logitech G810 Orion Sennheiser HD 518 |  Logitech 502 Hero

 

Great news for everyone; it will probably lower the price of G-Sync monitors.

Or better yet, kill them off entirely. The idea that a 200 USD feature on a monitor only works with one particular brand of computer component just infuriates me. It breaks the entire concept of computer hardware interoperability regardless of brand.

 

Maybe people should stop complaining about the price of an enthusiast-level product.

 

Other than that, I switch monitors and graphics cards pretty much every year, so obviously next year I will sell my 980 Ti and Acer XB270HU for a Pascal card and hopefully an IPS 4K 120 Hz monitor.

This marks the second time you have gotten overaggressive with me when I ask you to provide proof so I can validate your claims. The only blind person here is you. Your blind devotion to Intel and everything they do has driven you to a point where you are beyond reasoning with. Nothing you have provided has given any merit to your claims. I am just as eager to see iGPUs get faster; I have a personal stake in the SFF market, and I want your claims to be true. However, I am not going to take your word for it when you have been wrong in the past.

 

The Iris Pro 6200 is not on par with the GTX 750. This is a solid, provable fact. To think that a 20% performance boost will make it fall on par with a GTX 950 is just asinine. Since you have already decided to insult me yet again for asking a simple question, we can just get this show on the road. Once again, I'll start by providing proof of the claims I have stated before, and cross-referencing them with the claims you are making.

 

 

The 950 is faster than a GTX 760 in most benches that i have seen, and is more than 30% faster than a GTX 750 Ti, which is already 15-20% faster than a 750 on average.

http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-GeForce-GTX-950-2GB-Review-Maxwell-MOBA/GPU-Comparisons-GTA-V-BF4-Bios

 

 

 

Our first title, GTA V, running at 1080p with Very High quality presets starts the GTX 950 down the path we thought it would be on: resting between the performance of the GTX 960 and the GTX 750 Ti with a heavy weight towards its GM206 cousin. Pulling in just under 40 FPS in our testing, the ASUS GTX 950 Strix is 15% slower than the GTX 960, 34% faster than the GTX 750 Ti and 14% faster than the AMD R7 370.

 

 

Iris Pro 6200 at stock is only 10-15% behind the 750. Overclock 6200 and that gap disappears. With a flat 20% boost over the previous generation it'll line up perfectly with the 950 at stock. Mind you it will require top notch RAM like the Ripjaws V @ 3200 MHz or higher, but that's where the math lines up as of now.

Please, tell me how an iGPU that is currently slower than a GTX 750 will be able to match a card that is 30% faster than a 750 Ti, with just a flat 20% boost in performance? You are amazing when it comes to citing Intel's history, but you always fall short when anyone requests real proof from you.

 

 

Every company which has doubted Intel's ability to make market-altering progress on their signature tech has wound up dead or beaten to a pulp. MOS Technologies was a fatality, and Texas Instruments and IBM barely escaped intact the first time Intel threw them all out of the server business in the 80s and 90s. Intel has had to dance around thousands of parents to get this far, but it still also has its superior production tech. On 2 node shrinks worth of lead you think it sounds impossible when the iGPU makes 2/3 of the 6700K's TDP already and we intend to get near 3x that performance before throwing on the eDRAM? I say you've grown blind to the changes happening around you. Intel is going to try to undermine Nvidia in both HPC and AIO consumer solutions to bleed it dry and buy it out when they can no longer compete. 20% boost over 6200 and then a 50% boost on top of that for the GT4e SKU. Keep your facts straight.

This math still does not equate GT4e to a GTX 950. Not even close, even if we are talking raw float performance. At 1 GHz, GT4e will be capable of 1152 GFLOPS. The GTX 750 Ti's float performance is 1728 GFLOPS. Tell me how these two numbers are even remotely close to each other?

 

Seeing as we have absolutely no other metrics to compare these two (unless you have information not available to the public), we can only compare the FP numbers. Feel free to provide some evidence to support your claims. Though, knowing you, I know exactly what you are going to do: you are going to cite how company X doubted Intel before and ended up bankrupt, claim that Intel is superior at everything ever, and tell me I should put my faith in history rather than tangible evidence. My only advice going forward is to remember how the last argument you and I had ended. I am still waiting on that receipt.
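For reference, peak-FP32 figures like these come from the usual throughput formula (a sketch; the 72-EU count and 1 GHz clock are the assumptions used in this post, and the 750 Ti clock is back-solved from the post's 1728 GFLOPS figure rather than taken from a spec sheet):

```python
def intel_gen_peak_gflops(eus: int, clock_ghz: float) -> float:
    # Each Gen EU has two 4-wide SIMD FPUs; with FMA (2 FLOPs per lane)
    # that is 8 lanes * 2 = 16 FP32 FLOPs per EU per cycle.
    return eus * 16 * clock_ghz

gt4e = intel_gen_peak_gflops(72, 1.0)  # 72 EUs @ 1 GHz -> 1152.0, matching the post

# The post's 1728 GFLOPS for the 750 Ti (640 CUDA cores, one 2-FLOP FMA
# per core per cycle) implies this clock in GHz:
implied_clock = 1728 / (640 * 2)

print(gt4e, implied_clock)
```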

MANTLE isn't open source at all

Oh I thought it was.

Apart from reneging on a lot of patents in the original license deal, they've just been a thorn in Intel's side for years. They'll be trampled underfoot where it matters most and have the rest taken by force.

GT3e 6200 = 1

750 = 1.2

760 <==> 950 = 1.3 * 1.2 = 1.56

GT4e = 1.2 * 1.5 = 1.8

There, mathematical proof based on the performance difference between a 750 and an Iris Pro 6200, with the 20% architectural boost and the 50% core boost at the same clocks. Hell, there's even room to account for imperfect scaling. Skylake GT4e will match and surpass the 950 before overclocks, and Intel's graphics have always overclocked insanely well; the 5200 often boosted to 1.65 GHz under Boot Camp.
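Spelled out as code, the index arithmetic in those four lines is (using this post's own assumed ratios, which are disputed later in the thread):

```python
# Normalized performance indices, Iris Pro 6200 = 1.00.
# Every ratio here is this post's assumption, not measured data.
iris_6200 = 1.00
gtx_750 = iris_6200 * 1.20       # assumes the 750 is ~20% ahead of the 6200
gtx_950 = gtx_750 * 1.30         # assumes the 950 is ~30% ahead again -> 1.56
gt4e = iris_6200 * 1.20 * 1.50   # 20% architectural gain, then 50% more EUs -> 1.80

print(gtx_950, gt4e)
```

The whole conclusion rests on the two input ratios; nudge either one down and GT4e no longer clears the 950.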

How many times are you going to needlessly challenge me and have your tail end handed to you before you learn? There's no source needed when we know the facts as they are. Inductive reasoning is a valid, strong proof strategy both in mathematics and informal logic. I suggest you remember that.

So you enjoy paying 200 USD more for the exact same monitor, only to lock yourself in to one hardware brand? You can get a ROG Swift with an IPS display for 600 USD on Amazon right now

http://www.amazon.com/ASUS-MG279Q-Screen-LED-Lit-Monitor/dp/B00ZOO348C/ref=sr_1_1?ie=UTF8&qid=1440284192&sr=8-1&keywords=MG279Q

BUT if you want that monitor with a G-Sync module, you pay 200 USD more

http://www.amazon.com/ASUS-MG279Q-Screen-LED-Lit-Monitor/dp/B00ZOO348C/ref=sr_1_1?ie=UTF8&qid=1440284192&sr=8-1&keywords=MG279Q

 

While on sale at the moment (a difference of 100 USD), the MG lists for 599 while the PG lists for 799. On top of that, the PG is currently the TN version of the ROG Swift, and the IPS version has yet to be released or priced. No one was complaining about the price of enthusiast-grade products (I plan on buying multiple MG279Qs). And you are probably one of a very select few who buy new monitors "every year" (which I doubt you do). I still use a 1080p 60 Hz monitor from 4+ years ago (Asus VW246H), and nothing at the 1080p level even makes me think of upgrading; nothing is worth the price purely to replace this monitor.

 

Beyond the fact that proprietary hardware like the G-Sync module is bad for consumers from a future-upgrade perspective, Nvidia already has drivers that utilize the Adaptive Sync standard, so why not allow them to be used on desktop?

How many times do you think you have handed my ass to me? Because last I checked, I put you in your place in the only other argument we have ever had. You made some lame excuse about keeping your receipts in a physical location while you were not home, and you somehow had no access to any online proof of your purchases. Then the argument subsided, as you had no more ground to stand on.

 

Also, where on God's earth do these numbers even come from? They don't even mean anything.

 

"GT3e 6200 = 1": 1 what? What does that 1 mean? "760 <==> 950 = 1.3 * 1.2 = 1.56": I can't even comprehend what this piece of information is. You can't just throw out random numbers without providing context as to what they are.

If you can't tell those numbers are relative performance compared to a baseline piece of hardware (here, the iGPU in the 6200), I don't know what to say. It's an extremely common way of showing relative performance instead of just posting raw FPS values.

If that is the case, it's the biggest lie I have ever seen on this forum.
