
TSMC reportedly won't make extra capacity for Intel

spartaman64
4 minutes ago, straight_stewie said:

More than a thread that refuses to acknowledge a blatant industrial espionage attempt when it sees it?

Well, I doubt Intel is actually considering it; pitching the idea in a meeting is very different from actually making plans and assessing the viability of it. Rumors and speculation are just that. Tom's Hardware could publish a story about how Intel is going to have geostationary orbital fab facilities, but that doesn't actually mean it's true.


52 minutes ago, leadeater said:

Short or long term, I don't think it actually changes much; longer term it just won't be a thing, because that would basically mean Intel exiting leading process technology altogether. I think when people say long term they mean 2-5 years, which actually isn't long term; that's a single-generation product life cycle. Process technology is so pivotal to Intel's architectures and designs that I cannot see how it is actually possible to effectively use TSMC for processors without a giant list of potential and ongoing problems for both TSMC and Intel.

 

Node numbers are irrelevant; Intel's 14nm is market competitive right now, which shows just how good it has been and still is. There is no way Intel could have delivered such high-performance products, as well as low-power laptop products, across the multiple generations that have used 14nm on any fab other than their own 14nm, and the same applies to 22nm. None of Intel's past success could have been achieved on anything other than their own process; it's not a question of whether they could use someone else's, it's that what would have been achievable would be lesser. Sure, Intel's 14nm is at the end of the road now, but that is largely because everyone else has caught up.

 

This is why I say it is such a big problem for Intel to use TSMC for a leading processor design: there is a big assumption that it would actually result in better processors than on Intel's 14nm. Even if it does, that makes the product a dead end, because there is no way Intel is going to give it long-term support without committing to TSMC across multiple generations, which I highly doubt will happen because that would call into question whether Intel should continue developing leading fabrication technology at all.

 

TSMC won't license 7nm to Intel, and even if they did, it's unlikely Intel could do anything with it. Is Intel supposed to just throw out what they have? Can the equipment they have be used? (Probably.) Where is Intel going to do their 7nm development? Are they?

 

Oh, I (almost) completely agree; I was just commenting on how I read the original comment in this chain.

 

Intel's current position is really funky for most people's mindsets. Because of the production capacity realities, Intel is literally too big to fail right now, but at the same time they're really far up a deep creek without any hint of a paddle in sight. That could and probably will change, but they're in a really tough spot right now.


1 minute ago, leadeater said:

orbital fab facilities, but that doesn't actually mean it's true.

It doesn't really seem all that far-fetched for an integrated circuit development firm to contract out some of its manufacturing.

But now I'm wondering if low gravity environments have benefits for semiconductor manufacturing, so thanks for that 😛 


2 minutes ago, straight_stewie said:

But now I'm wondering if low gravity environments have benefits for semiconductor manufacturing, so thanks for that 😛 

 

 

Almost certainly, but it's way too expensive to do right now.


2 minutes ago, straight_stewie said:

But now I'm wondering if low gravity environments have benefits for semiconductor manufacturing, so thanks for that 😛 

I was just thinking about dust and air quality, or the lack of air; maybe low gravity will help too 🤔

 

3 minutes ago, straight_stewie said:

It doesn't really seem all that far-fetched for an integrated circuit development firm to contract out some of its manufacturing.

It's not, but if your need is for high-performance, market-competitive parts, then you have to be able to go to someone who can offer that in a way that suits your needs; TSMC 7nm for Intel's designs could be worse than their own 14nm for all we know. Plus, it is extremely doubtful Intel doesn't know how to get a 7nm process working, even one competitive with TSMC. Intel is trying to do better and also cater specifically to the needs of their products, and it has to basically be better in every way than what they have, all the way through to the products made using it. Intel 10nm is denser than Intel 14nm, and their 10nm offers more performance and even lower power, but all of those are "it depends", which is the problem; those "depends" result in products with traits that are inferior to current products in ways that matter now, for current software.

 

It doesn't do Intel much good if they can have 16-core consumer desktop CPUs on 7nm that only clock around ~4GHz, which results in lower game performance on basically every game out right now. "Trust us, future games will be better" probably won't work too well.


13 hours ago, mariushm said:

Yeah, there's no point in TSMC building more production lines / capacity (it would take 2-3 years and millions of dollars) for Intel to make some budget CPUs and chipsets at TSMC for a couple of years and then move back in-house once they solve their production problems. They'd have a hard time recouping their investment... and next year they're gonna be on 5nm anyway.

 

They freed up some production capacity by not making chips for Huawei, but Nvidia and AMD have probably already placed orders for the available wafers... and AMD / Nvidia are not the only ones that need 7nm. Phone chips are a huge market, for example, plus FPGAs, maybe camera sensors, etc.

TSMC could do a conditional contract where, if Intel wants to manufacture their chips there, they have to sign for at least 15 years or so. Am I right?


2 hours ago, leadeater said:

<nitpick>The SI unit symbol for meter is a lower case m, not an upper case one</nitpick>

 

Dunno why, but that capital M annoys me more than it should lol

Because the capital M is miles? 1 nanomile is not much of a brag in terms of silicon.

 

Unless I have ballsed it up, it'd be something like 1609 nm, which would make the die size about one square family-sized pizza.
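For what it's worth, the arithmetic does check out (taking 1 mile = 1609.344 m):

```latex
1\,\text{nanomile} = 10^{-9} \times 1609.344\,\text{m} = 1.609344\,\mu\text{m} \approx 1609\,\text{nm}
```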


I know it's incredibly unlikely, but I'm beginning to fear for the future of Intel.

 

Within the last 2 to 3 years...

 

First they started losing component sales market share to AMD, then they started losing server sales market share, then we had speculative execution exploits, then it was OEM/laptop sales, then we had them backpedaling on policies they had in place for decades to match AMD's offerings, more spec execs, then it was a delay to 10nm, then more spec execs, then outrage over shitty business practices and naming schemes designed to cause deliberate confusion, more spec execs, even more spec execs, then just as 10nm starts to drip out they announce 7nm is delayed, rich people hire rich lawyers over stock prices, the chief engineer announces he is resigning, the people who could help dig them out of this hole say they won't and can't help, employees go on record saying upper management is destroying the company from the inside, and the competition continues to push forward at breakneck speed.

 

If you wrote a fiction novel based on this story, people would say it exaggerates too much and is too unrealistic, yet it's real and happening.

 

You gotta wonder how much more of a (mostly) self-inflicted beating Intel can take before something gives. They need radical changes; from what I've been reading, it sounds like the entire echelon of upper management needs booting and replacing, for a start.

 

I know it's VERY unlikely Intel will disappear any time soon.


It's as you said, they can take a whole lot of beating.

The reality is, there are limited chip manufacturing resources. If you delete Intel from the equation, it doesn't matter whether it's AMD or Intel buying capacity at other fabs; there won't be enough.

 


2 minutes ago, Loote said:

It's as you said, they can take a whole lot of beating.

The reality is, there are limited chip manufacturing resources. If you delete Intel from the equation, it doesn't matter whether it's AMD or Intel buying capacity at other fabs; there won't be enough.

 

A way out for Intel could be to sell all their fab space to a rival and swap their own designs over to the competition's process instead. The industry wouldn't lose capacity, Intel would get a huge cash injection, and with a little work they could probably have 7nm out faster than their current projections suggest.


55 minutes ago, Master Disaster said:

A way out for Intel could be to sell all their fab space to a rival and swap their own designs over to the competition's process instead. The industry wouldn't lose capacity, Intel would get a huge cash injection, and with a little work they could probably have 7nm out faster than their current projections suggest.

Let's suppose Intel does what AMD did many years ago and spins off their fabs to a new company. Who knows the most about the existing fabs? Intel people, so those would have to go to the new company. Outside people, even in the semiconductor industry, wouldn't be experienced with them. I don't know where some people in this thread get the idea that if they put in some magic settings everything will work fine, or that you can copy them from what are totally different implementations elsewhere. It just doesn't work that way. Intel's setup is oriented in a certain direction, and trying to copy another process would likely mean changes so big you'd never make it in time for it to still be relevant. That's separate from it being costly.

 

It was so long ago, but if memory serves me correctly, when AMD did it they essentially had no choice, as they were about to run out of cash. It was a way to get funding to keep going. Intel are not even close to being that bad financially. AMD had it far worse for far longer and survived. It's not great for Intel right now, but I wouldn't worry about their survival. There will be changes for sure.

 

 


Since there is so much talk about what Intel is doing at TSMC, here's a summary of the current best understanding: https://www.igorslab.de/en/intels-ponte-vecchio-xe-hpc-gpu-ponte-vecchio-is-not-produced-with-the-6-nm-process-by-tsmc/

 

Intel have signed a (new) contract with TSMC for 180,000 wafers at 6nm (an update to TSMC 7nm); not believed to be related to Ponte Vecchio

 

Ponte Vecchio GPU die will be made on Intel 7nm and TSMC 5nm

Intel will make IO die for Ponte Vecchio, as well as RAMBO cache

Connectivity die was always planned to be made at TSMC, remains unchanged

 

Ponte Vecchio is the HPC GPU offering that will be included in the US supercomputer contract Intel won last year, I think (AMD got the other one). There is speculation about whether Intel's 7nm delays will impact the delivery of the supercomputer.


2 hours ago, porina said:

There is speculation about whether Intel's 7nm delays will impact the delivery of the supercomputer.

Although it's unlikely, because Intel was likely picked for a reason, imagine Intel losing that contract due to the delays and it going elsewhere. That would be a big blow to Xe.


6 minutes ago, leadeater said:

Although it's unlikely, because Intel was likely picked for a reason, imagine Intel losing that contract due to the delays and it going elsewhere. That would be a big blow to Xe.

It's not unusual for big projects anywhere to be over budget and/or late. I'm sure there'll be something in the contract which states what happens. The cost will probably have to be eaten by Intel, but there may be some kind of penalty applied if it is late.


5 hours ago, leadeater said:

I was just thinking about dust and air quality, or the lack of air; maybe low gravity will help too 🤔

 

As for the low gravity: it would hugely complicate some parts of the process, and some things would have to be done differently, but most things that are "delicate" in manufacturing terms tend to benefit enormously from microgravity.


4 minutes ago, porina said:

It's not unusual for big projects anywhere to be over budget and/or late. I'm sure there'll be something in the contract which states what happens. The cost will probably have to be eaten by Intel, but there may be some kind of penalty applied if it is late.

Pretty sure Intel will do literally anything to keep it; the way I see it, that will be make or break for Xe getting any traction in the market. Not many people actually want to be the first for something like that, so it's rather important to have a reference customer.


9 minutes ago, leadeater said:

Pretty sure Intel will do literally anything to keep it; the way I see it, that will be make or break for Xe getting any traction in the market. Not many people actually want to be the first for something like that, so it's rather important to have a reference customer.

Isn't the plan to add Xe to consumer CPUs as iGPUs?


4 minutes ago, leadeater said:

Pretty sure Intel will do literally anything to keep it; the way I see it, that will be make or break for Xe getting any traction in the market. Not many people actually want to be the first for something like that, so it's rather important to have a reference customer.

Xe is happening regardless. It'll be in the upcoming Tiger Lake mobile CPUs and Rocket Lake desktop CPUs. Whether it'll gain more traction top down or bottom up, who knows. Either way, once the supercomputer is running it'll be a nice marketing thing to shout about for a bit.

 

Intel are also pushing oneAPI to ease programming across devices of different scales, although I'm unclear on who else is supporting it. The only other implementation I see is a software layer translating it to CUDA so it can also run on Nvidia devices.

 


https://xkcd.com/927/
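For a rough sense of what oneAPI programming looks like in practice, here's a minimal SYCL-style vector add; this is an illustrative sketch, not taken from Intel's samples, and the point is just that the same kernel source runs on whatever device the queue happens to target:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    // The queue binds to whichever device the runtime selects (CPU, GPU, ...);
    // the same kernel source runs on any of them.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    {
        // Buffers hand the host data over to the runtime for the duration of this scope.
        sycl::buffer<float, 1> bufA(a.data(), sycl::range<1>(n));
        sycl::buffer<float, 1> bufB(b.data(), sycl::range<1>(n));
        sycl::buffer<float, 1> bufC(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];  // simple element-wise add
            });
        });
    }  // buffers go out of scope here and results are copied back to the host vectors

    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
    return 0;
}
```

Whether that ever gets picked up beyond Intel's own hardware is exactly the open question above.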


2 minutes ago, porina said:

Xe is happening regardless. It'll be in the upcoming Tiger Lake mobile CPUs and Rocket Lake desktop CPUs. Whether it'll gain more traction top down or bottom up, who knows. Either way, once the supercomputer is running it'll be a nice marketing thing to shout about for a bit.

 

Intel are also pushing oneAPI to ease programming across devices of different scales, although I'm unclear on who else is supporting it. The only other implementation I see is a software layer translating it to CUDA so it can also run on Nvidia devices.

 


https://xkcd.com/927/

 

The big question is whether there's any basis to the rumours regarding DGU3. Though with the management shakeup, that may be subject to change even if the rumours are true.


8 minutes ago, porina said:

Xe is happening regardless. It'll be in the upcoming Tiger Lake mobile CPUs and Rocket Lake desktop CPUs. Whether it'll gain more traction top down or bottom up, who knows. Either way, once the supercomputer is running it'll be a nice marketing thing to shout about for a bit.

 

Intel are also pushing oneAPI to ease programming across devices of different scales, although I'm unclear on who else is supporting it. The only other implementation I see is a software layer translating it to CUDA so it can also run on Nvidia devices.

Bottom up will do nothing for Xe in HPC, just as Intel HD iGPU graphics has done nothing there. Sure, there are no "HPC Intel HD graphics" devices, but it's not like that market does anything for HPC at all; case in point, Radeon.

 

Radeon in HPC is barely a thing, and it's for the exact same reasons that Intel Xe will struggle without good reference customers to show there is any value in investing in the technology.

 

Edit:

As a customer you'll already have Nvidia hardware and CUDA, working and likely doing what you need. So it's actually not good enough just to have competitive hardware performance and a good software ecosystem, when "good" comes without any wide-scale usage or community. The best way to get people to use something is to solve a problem that someone else isn't currently solving, but you need to prove it.


Just now, leadeater said:

Bottom up will do nothing for Xe in HPC, just as Intel HD iGPU graphics has done nothing there. Sure, there are no "HPC Intel HD graphics" devices, but it's not like that market does anything for HPC at all; case in point, Radeon.

OK, I wasn't clear that you were specifically referencing the HPC use case, as opposed to the wider market.

 

I suppose even there we have to separate out HPC from supercomputers. The latter are more customised and fixed for their lifetime, so we see more variety in the architectures used in those, like Power and Arm. Maybe I'm using the terms incorrectly; I imagined HPC as a lower tier than supercomputers, like cloud provider level. I guess I'd agree that in that space people will want to see what Xe can do before they invest significantly in it.

 

7 minutes ago, CarlBar said:

The big question is whether there's any basis to the rumours regarding DGU3. Though with the management shakeup, that may be subject to change even if the rumours are true.

I'm not familiar with the term DGU3 but I assume that's the discrete GPU based on Xe? I don't recall hearing much about it recently. Presumably it is way down on Intel's priorities for now. You know of any recent rumours in that area?


I still find it interesting that 10% of Steam Users run an Intel GPU of some sort.

 

https://store.steampowered.com/hwsurvey/videocard/

 

More people on Steam run a UHD620 than a 1070, 2080 Super & 2080 ti.

 

Kind of impressive for a company that doesn't have a dedicated GPU available, and it shows just how much ground they can gain through iGPUs.


7 minutes ago, porina said:

OK, I wasn't clear that you were specifically referencing the HPC use case, as opposed to the wider market.

 

I suppose even there we have to separate out HPC from supercomputers. The latter are more customised and fixed for their lifetime, so we see more variety in the architectures used in those, like Power and Arm. Maybe I'm using the terms incorrectly; I imagined HPC as a lower tier than supercomputers, like cloud provider level. I guess I'd agree that in that space people will want to see what Xe can do before they invest significantly in it.

 

I'm not familiar with the term DGU3 but I assume that's the discrete GPU based on Xe? I don't recall hearing much about it recently. Presumably it is way down on Intel's priorities for now. You know of any recent rumours in that area?

 

 

As I understand it, DGU1 is the successor architecture to Xe, DGU2 is the successor to that, and so on and so forth. Rumour says they canned DGU3.


1 minute ago, porina said:

I suppose even there we have to separate out HPC from supercomputers. The latter are more customised and fixed for their lifetime, so we see more variety in the architectures used in those, like Power and Arm. Maybe I'm using the terms incorrectly; I imagined HPC as a lower tier than supercomputers, like cloud provider level. I guess I'd agree that in that space people will want to see what Xe can do before they invest significantly in it.

Well, the problem still applies to both; supercomputers are just large-scale implementations of HPC. When Arm gets used, it's because the chip was tailored for the purpose, or Power gets used because of its memory subsystem and CPU NVLink. As far as I know, the question of "what do we want to do" comes first, then you go looking for whatever best does it, which is where the really oddball hardware comes from, and anyone not involved is left wondering why. I'm sure all those systems in the top 50 that use Xeon Phi have a good reason for it, but I have no use for such hardware.

 

The majority of the top 500 are just standard Intel systems with InfiniBand or RDMA Ethernet interconnects, which may or may not have GPUs, running Open MPI with Slurm as the workload manager.
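To illustrate just how "standard" that software stack is, here's a minimal MPI sketch of the kind of code those systems run day in, day out; program and launch parameters are made up for illustration, and it assumes an MPI implementation such as Open MPI is installed:

```cpp
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's ID within the job
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes in the job

    // Each rank contributes one value; the sum is collected on rank 0.
    double local = static_cast<double>(rank);
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::cout << "sum across " << size << " ranks = " << total << "\n";

    MPI_Finalize();
    return 0;
}
```

Typically that gets compiled with mpicxx and launched across nodes with something like srun -n 256 ./reduce inside a Slurm batch job, with the InfiniBand or RDMA Ethernet fabric doing the actual message passing underneath.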


10 minutes ago, Master Disaster said:

I still find it interesting that 10% of Steam Users run an Intel GPU of some sort.

 

https://store.steampowered.com/hwsurvey/videocard/

 

More people on Steam run a UHD620 than a 1070, 2080 Super & 2080 ti.

 

Kind of impressive for a company that doesn't have a dedicated GPU available, and it shows just how much ground they can gain through iGPUs.

Pretty soon we will get Intel CPU and GPU packages; OEMs are going to love that.

