
Low-end Intel Xe Graphics card specs & performance detailed in benchmarks

Results45

Alleged specs of both a 96EU DG1-LP dev kit graphics card (likely the one unveiled at CES) and an unnamed 120EU prototype card surfaced on Geekbench and SiSoft Sandra on March 4th, February 27th, and February 13th.

 

Then as of yesterday, a second benchmark results post of the 96EU card showed up on SiSoft Sandra reaffirming the specifications: https://ranker.sisoftware.co.uk/show_run.php?q=c2ffcaf8debfdee3d1e7d0f684b989afcaaf92a284f7caf2

 

Without further ado, here are the leaked specs:

 

96EU Card (DG1-LP/SDV)

  • 768 stream processors (8 per EU)
  • 1500 MHz boost clock
  • 3GB VRAM

 

120EU Card

  • 960 stream processors (8 per EU)
  • 1000 MHz (base clock?)
  • 9.4GB VRAM
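The leaked figures can be sanity-checked with a quick calculation (note: 96 EUs × 8 works out to 768, not the 786 some posts list; the 8-per-EU multiplier is the leak's own figure, and the peak-GFLOPS formula assumes 2 FP32 ops per FMA per ALU per clock):

```python
# Sanity check on the leaked EU counts and clocks.
def stream_processors(eu_count, per_eu=8):
    # "Stream processors" here is the leak's 8-ALUs-per-EU convention.
    return eu_count * per_eu

def peak_fp32_gflops(eu_count, clock_mhz, per_eu=8):
    # 2 FP32 ops per FMA, per ALU, per clock (a common convention).
    return stream_processors(eu_count, per_eu) * 2 * clock_mhz / 1000

print(stream_processors(96))               # 768
print(stream_processors(120))              # 960
print(round(peak_fp32_gflops(96, 1500)))   # 2304 (GFLOPS at boost)
print(round(peak_fp32_gflops(120, 1000)))  # 1920 (GFLOPS at base)
```

So despite the extra EUs, the 120EU card at 1000 MHz would actually land below the 96EU card at its 1500 MHz boost — which matches the skepticism later in the thread.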

 

It's going to be interesting to see how these stack up against the Tiger Lake 96EU iGPU, Vega8/Ryzen 4800U combo, MX350, and MX450.

 

More likely specs are detailed on TechPowerUp HERE.

 

 

UPDATE: Tiger Lake iGPUs perform 60-80% faster per EU than Ice Lake in 3DMark Fire Strike.
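For clarity, "faster per EU" means normalizing each Fire Strike score by EU count before comparing, since Tiger Lake's G7 carries 96 EUs against Ice Lake's 64. A minimal sketch of that arithmetic — the scores below are made-up placeholders purely to illustrate the normalization, not the leaked results:

```python
# Per-EU comparison: divide each score by its EU count, then compare.
# Scores are hypothetical placeholders, NOT the leaked Fire Strike numbers.
def per_eu_uplift_pct(new_score, new_eus, old_score, old_eus):
    return ((new_score / new_eus) / (old_score / old_eus) - 1) * 100

print(round(per_eu_uplift_pct(6000, 96, 2500, 64)))  # 60 (% per-EU uplift)
```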

 

Tiger Lake-U Core i3 Fire Strike score comparison with Ice Lake Core i3-1005G1. (Image Source: @_rogame on Twitter)

 

Intel Tiger Lake 3DMark Fire Strike results in comparison with Ice Lake Iris Plus Graphics G4 and G7.

https://www.notebookcheck.net/Intel-Tiger-Lake-Core-i3-Fire-Strike-Graphics-score-indicates-Gen12-Xe-can-finally-offer-some-competition-to-AMD-Vega-iGPUs.461602.0.html

https://www.notebookcheck.net/Intel-Tiger-Lake-Gen12-Xe-iGPU-offers-significantly-better-performance-than-Ice-Lake-Gen11-at-same-TDP-finally-surpasses-Vega-8-in-AMD-Renoir.466087.0.html


eh, if these have the same problem as Iris Pro graphics, then they will be great for benchmarks but trash in actual games due to optimization (driver side or application side idk)

CPU: i7-2600K 4751MHz 1.44V (software) --> 1.47V at the back of the socket Motherboard: Asrock Z77 Extreme4 (BCLK: 103.3MHz) CPU Cooler: Noctua NH-D15 RAM: Adata XPG 2x8GB DDR3 (XMP: 2133MHz 10-11-11-30 CR2, custom: 2203MHz 10-11-10-26 CR1 tRFC:230 tREFI:14000) GPU: Asus GTX 1070 Dual (Super Jetstream vbios, +70(2025-2088MHz)/+400(8.8Gbps)) SSD: Samsung 840 Pro 256GB (main boot drive), Transcend SSD370 128GB PSU: Seasonic X-660 80+ Gold Case: Antec P110 Silent, 5 intakes 1 exhaust Monitor: AOC G2460PF 1080p 144Hz (150Hz max w/ DP, 121Hz max w/ HDMI) TN panel Keyboard: Logitech G610 Orion (Cherry MX Blue) with SteelSeries Apex M260 keycaps Mouse: BenQ Zowie FK1

 

Model: HP Omen 17 17-an110ca CPU: i7-8750H (0.125V core & cache, 50mV SA undervolt) GPU: GTX 1060 6GB Mobile (+80/+450, 1650MHz~1750MHz 0.78V~0.85V) RAM: 8+8GB DDR4-2400 18-17-17-39 2T Storage: HP EX920 1TB PCIe x4 M.2 SSD + Crucial MX500 1TB 2.5" SATA SSD, 128GB Toshiba PCIe x2 M.2 SSD (KBG30ZMV128G) gone cooking externally, 1TB Seagate 7200RPM 2.5" HDD (ST1000LM049-2GH172) left outside Monitor: 1080p 126Hz IPS G-sync

 

Desktop benching:

Cinebench R15 Single thread:168 Multi-thread: 833 

SuperPi (v1.5 from Techpowerup, PI value output) 16K: 0.100s 1M: 8.255s 32M: 7m 45.93s


Pretty sure I can't say what the correct specs are and what is specifically right or wrong here (former Intel dGPU employee). But I will say based purely on information gathered from the internet:

Given the EU counts of confirmed parts, and other leaks which have come in the past, 120 EUs seems rather strange. 

 

For a card so marginally more powerful (actually probably less, given the clock speeds) than a DG1 (whose performance has been seen by the public, even if in a pre-formal-release state), the second card has no business having that much RAM.
 

Main Rig: R9 5950X @ PBO, RTX 3090, 64 GB DDR4 3666, InWin 101, Full Hardline Watercooling

Server: R7 1700X @ 4.0 GHz, GTX 1080 Ti, 32GB DDR4 3000, Cooler Master NR200P, Full Soft Watercooling

LAN Rig: R5 3600X @ PBO, RTX 2070, 32 GB DDR4 3200, Dan Case A4-SFV V4, 120mm AIO for the CPU

HTPC: i7-7700K @ 4.6 GHz, GTX 1050 Ti, 16 GB DDR4 3200, AliExpress K39, IS-47K Cooler

Router: R3 2200G @ stock, 4GB DDR4 2400, what are cases, stock cooler
 

I don't have a problem...


So, while I totally understand the sentiment, you can be darn sure Intel isn't going to waste RAM unless it's needed/useful for some specific client use case (or it may not have as much as reported if a given user doesn't explicitly need that much). They aren't exactly ones to add wasteful circuitry or additional features/functionality their users don't effectively force them to add hahaha.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


Can’t wait for this to outcompete NVIDIA


1 hour ago, Jurrunio said:

eh, if these have the same problem as Iris Pro graphics, then they will be great for benchmarks but trash in actual games due to optimization (driver side or application side idk)

I've seen a lot of people say this but I have rarely seen any benchmarks to back it up with.

Do you or anyone else in this thread have any actual evidence that Intel's drivers makes their GPUs perform worse than they should for gaming?

 

In gaming benchmarks, the Iris Plus 655 performs somewhere between a Vega 6 and Vega 8.


On 5/9/2020 at 6:27 PM, LAwLz said:

I've seen a lot of people say this but I have rarely seen any benchmarks to back it up with.

Do you or anyone else in this thread have any actual evidence that Intel's drivers makes their GPUs perform worse than they should for gaming?

 

I've been playing Command & Conquer: Generals, War Thunder, and WoWs on Core i3 and Core i5 laptops with HD 520/530 and UHD 620/630 iGPUs for the past 4 years.

 

C&C: Generals holds a solid ~30fps at 1600x900 on "medium" settings across the board, with mild stuttering and frame drops in cutscenes and when there are dozens of units/effects onscreen.

 

War Thunder and WoWs peak at 40-45fps when there's not much going on and range between 25-35fps during gameplay at 1366x768 (HD 520/UHD 620) or 1600x900 (HD 530/630). I get somewhat playable framerates either on the "low" settings preset or by turning everything all the way down (off if possible) except texture resolution, draw distance detail, default FXAA, and default particle effects.

 

Sure, Intel (driver-side) and game devs (driver support) could probably further optimize performance on integrated graphics (they likely will), but as far as my experience goes, you more or less get what you pay for.

 

Same goes for the other 70% of PC systems that sport solely iGPUs.

Edited by Results45
AFAIK "optimizations" include Variable Rate Shading, the Vulkan API, deep-learning-based scaling/sharpening, and a Volume Tiled Forward+ rendering pipeline with BVH (check out the following page for more info): https://www.3dgep.com/volume-tiled-forward-shading/

1 hour ago, LAwLz said:

I've seen a lot of people say this but I have rarely seen any benchmarks to back it up with.

Do you or anyone else in this thread have any actual evidence that Intel's drivers makes their GPUs perform worse than they should for gaming?

 

In gaming benchmarks, the Iris Plus 655 performs somewhere between a Vega 6 and Vega 8.

https://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested

 

Rather old but there is data. I'm sure you can find more if not limited to Iris Pro.



3 hours ago, tarfeef101 said:

Pretty sure I can't say what the correct specs are and what is specifically right or wrong here (former Intel dGPU employee). But I will say based purely on information gathered from the internet:

Given the EU counts of confirmed parts, and other leaks which have come in the past, 120 EUs seems rather strange. 

 

For a card so marginally more powerful (actually probably less, given the clock speeds) than a DG1 (whose performance has been seen by the public, even if in a pre-formal-release state), the second card has no business having that much RAM.
 

 

I wouldn't be remotely surprised if it's a stand-in for currently non-existent high-end silicon, there to let developers optimize for the larger frame buffers that silicon will have when it's ready. That, or it's reporting the maximum memory the onboard memory controller can support rather than what is actually present.

 

I think it's also important to remember that these are software development platforms; they're almost certainly not intended to be commercial products, so their performance figures in absolute terms aren't really very relevant.


Exciting to see their plans and future direction for the Xe architecture: finally, substantial improvements to integrated graphics.

ʕ•ᴥ•ʔ

MacBook Pro 13" (2018) | ThinkPad x230 | iPad Air 2     

~(˘▾˘~)   (~˘▾˘)~


It's facing the 1650 Max-Q and 5500M, not the 1650 and 570.

 

...also, 1650 for performance and 570 for pricing? Wut? The 570 is head and shoulders above the 1650 in performance.

 

Anyway, yeah, it's laptop-only, and by nature of being an Intel product it will make it into many designs, because Intel.


Following up on the comment I forgot I made:

 

Yeah I just have to make sure to only talk about things I infer/speculate based on information that's been released. It's definitely a balance most people wouldn't bother trying to strike. 

 

Yes, absolutely, DG1 was marketed as a development vehicle for ISVs, not sold to consumers. Whatever that other thing is, it's not something that's been marketed at consumers either, so who knows, perhaps it is real and there's a use case for that. Even still, 9.4 is a very weird number. Memory chips aren't generally sold in denominations of .4GB. The most chips you usually see on a card is 12, and between 1 and 12 chips, 9.4 doesn't divide into a capacity people generally sell memory chips in.
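The divisibility argument above can be sketched quickly: enumerate the totals you can build from 1-12 memory chips at common per-chip densities and see whether 9.4 GB appears. (The density list is an assumption about typical GDDR-class chips, not a claim about what DG1-era boards actually use.)

```python
# Which VRAM totals can you assemble from 1-12 chips of common densities?
chip_densities_gb = [0.5, 1, 2]   # i.e. 4Gb, 8Gb, 16Gb chips (assumed typical)
buildable = {n * d for n in range(1, 13) for d in chip_densities_gb}

print(sorted(buildable))          # all reachable totals end in .0 or .5
print(9.4 in buildable)           # False - which is why 9.4GB looks off
```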



I for one am excited for these when they do come to market. I don't expect amazing graphics performance, but I do expect them to excel at 'work' stuff like accelerating encoding/decoding and compute tasks.


21 hours ago, Jurrunio said:

https://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested

 

Rather old but there is data. I'm sure you can find more if not limited to Iris Pro.

What am I supposed to be looking for in that review exactly?

Remember what I was asking for: evidence that the hardware is held back by the drivers in games.


2 minutes ago, LAwLz said:

What am I supposed to be looking for in that review exactly?

how GT 650M and GT 640 pull further ahead in games than in synthetics?



9 minutes ago, Jurrunio said:

how GT 650M and GT 640 pull further ahead in games than in synthetics?

Can you please give me some examples? I don't feel like reading through the entire review to verify if you're correct. 


As far as I know, Intel historically downclocks their iGPUs to save on power, even in desktop chips.

 

LowSpecGamer demonstrated this when he overclocked one from 1200 MHz (some models boost even lower, at 1050-1150 MHz) to 1650 MHz. Turning up the boost clock will generally net 15-40% fps gains! ^_^
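A rough way to see where a figure in that 15-40% range could come from: if the iGPU is purely core-clock-limited (a big assumption, since iGPUs often bottleneck on shared system memory instead), fps scales roughly linearly with clock:

```python
# Naive clock-scaling estimate: assumes fps is proportional to core clock,
# which ignores memory-bandwidth limits common on iGPUs.
def fps_gain_pct(base_mhz, oc_mhz):
    return (oc_mhz / base_mhz - 1) * 100

print(round(fps_gain_pct(1200, 1650)))  # 38 (% - near the top of the 15-40% range)
print(round(fps_gain_pct(1150, 1650)))  # 43 (% for the lower-boosting models)
```

Real gains landing below these ceilings is consistent with memory bandwidth, not clocks, being the next bottleneck.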



15 hours ago, huilun02 said:

75W discrete card will be facing the GTX 1650 for performance and RX 570 for pricing

Good luck Intel

 

We don't know the power draw of these. However, some info Intel released strongly suggests the 96EU model is significantly sub-75W, with the most likely value probably being 25W. Again, neither of these is intended to be a commercial product that competes with anything from NVIDIA or AMD. They're software development platforms.


4 hours ago, CarlBar said:

 

We don't know the power draw of these. However, some info Intel released strongly suggests the 96EU model is significantly sub-75W, with the most likely value probably being 25W. Again, neither of these is intended to be a commercial product that competes with anything from NVIDIA or AMD. They're software development platforms.

 

Well, if Intel intends to pit competitive options against AMD in budget to mid-range desktop APUs, wouldn't they have to pair 15-40W versions of the same 10-25W 64/96EU iGPUs from 15-45W Ice Lake and Tiger Lake laptops with 65-95W Rocket Lake and Alder Lake desktop CPUs?


For those talking about power consumption of a 96EU Xe GPU, here is an article showing Tiger Lake (a mobile chip) with a 96EU GPU:

 

https://www.anandtech.com/show/15380/i-ran-off-with-intels-tiger-lake-wafer-who-wants-a-die-shot

 

That should imply that it's capable of very low TDP, at the very least.



On 5/9/2020 at 3:51 PM, tarfeef101 said:

Pretty sure I can't say what the correct specs are and what is specifically right or wrong here (former Intel dGPU employee). But I will say based purely on information gathered from the internet:

Given the EU counts of confirmed parts, and other leaks which have come in the past, 120 EUs seems rather strange. 

 

For a card so marginally more powerful (actually probably less, given the clock speeds) than a DG1 (whose performance has been seen by the public, even if in a pre-formal-release state), the second card has no business having that much RAM.
 

 

On 5/9/2020 at 4:18 PM, Curufinwe_wins said:

So, while I totally understand the sentiment, you can be darn sure Intel isn't going to waste RAM unless it's needed/useful for some specific client use case (or it may not have as much as reported if a given user doesn't explicitly need that much). They aren't exactly ones to add wasteful circuitry or additional features/functionality their users don't effectively force them to add hahaha.

Seems reasonable for you both to be right here. There's something odd about those numbers, but that doesn't mean they're mistakes. IIRC there were statements by Intel that these things were not for gaming but were specific-use industrial cards. That 9.4 GB I find particularly telling: there's got to be a specific reason for that number, and it has nothing to do with gaming.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


4 hours ago, LAwLz said:

Can you please give me some examples? I don't feel like reading through the entire review to verify if you're correct. 

The whole source is the example. I don't like giving examples without actual hardware in my hands.



On 5/10/2020 at 4:59 PM, LAwLz said:

Can you please give me some examples? I don't feel like reading through the entire review to verify if you're correct. 

 

21 hours ago, Jurrunio said:

The whole source is the example. I don't like giving examples without actual hardware in my hands.

 

Yeah, it really varies game to game and GPU to GPU. The Iris Pro 5200 was neck and neck with the GT 640/650M in Crysis: Warhead, Sleeping Dogs, and Metro: Last Light.

 

And an overclocked HD/UHD 630 is easily on par with a GT 740:

 

It's kinda sad that it took a comeback from AMD for Intel to finally introduce mainstream integrated graphics faster than the GT 650M. Otherwise it would still be the case that only $1000+ PCs, MacBooks, and luxury ultrabooks have decent* iGPUs.


4 hours ago, Jurrunio said:

The whole source is the example. I don't like giving examples without actual hardware in my hands.

No, you can't just say something very specific like "Intel GPUs are held back by drivers" and then, when asked to provide a source, go "well, I don't want to give examples without owning the hardware myself".

You can't just link an entire review and go "somewhere in this there is evidence that I am right, but you have to find it, not me".

 

I want you to post a few benchmarks that show inconsistencies pointing towards driver issues rather than hardware-related bottlenecks.

Also, the article is 7 years old. You don't think Intel has been working on their drivers since then? A ton of things might have changed.

I think the whole "Intel has bad drivers" thing is one of those sayings people blindly believe because they have heard others say it, without any evidence of it being true.


5 hours ago, Results45 said:

 

Well, if Intel intends to pit competitive options against AMD in budget to mid-range desktop APUs, wouldn't they have to pair 15-40W versions of the same 10-25W 64/96EU iGPUs from 15-45W Ice Lake and Tiger Lake laptops with 65-95W Rocket Lake and Alder Lake desktop CPUs?

 

I'm trying to parse what you just typed and I can't come up with anything that makes sense. You seem to switch from talking about integrated to discrete graphics halfway through.

