TSMC reportedly won't make extra capacity for Intel

spartaman64
22 minutes ago, CarlBar said:

As I understand it, DGU1 is the successor architecture to Xe, DGU2 is the successor to that, and so on and so forth. Rumour says they canned DGU3.

Is that a typo? I've not managed to find anything about DGU. From what I can tell from a look around, DG1 is the 1st-gen Xe on 10nm, and so far there's a dev board going around that has been shown. DG2 is the next gen on Intel 7nm (and, according to a link in a post I made somewhere, also TSMC 5nm). Rumour sites put that at 2022 availability. I haven't seen anything beyond that so far.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


18 hours ago, porina said:

More clarification from Dr Wafer Eater:

 

The linked article is mostly behind a paywall though, which I don't have access to.

 

"Intel is one of TSMC’s biggest customers and has been for decades. They have had many products at TSMC over the years, mainly chipsets and other things on ‘older’ processes.

Some of this was due to capacity, some of it was due to Intel’s process lead, and some of it was because TSMC had IP or a process tuned for something Intel didn’t, think RF for starters. Intel also used outsourcing as a weapon, in the 40nm days they would buy wafer starts and use them for things they didn’t strictly need to in order to deprive the competition of cutting edge, for them, silicon."

What a shitty company. I'm glad they're having troubles now.


More possible insight on what might be going on inside Intel. I don't follow this tweeter, but he's been around the tech side for a while.

 

 
Edit: I note he is followed by Intel Graphics, and Dr Wafer Eater of Anandtech.

 



20 minutes ago, porina said:

More possible insight on what might be going on inside Intel. I don't follow this tweeter, but he's been around the tech side for a while.

 

 
 
Edit: I note he is followed by Intel Graphics, and Dr Wafer Eater of Anandtech.

 


I guessed that the high density they are trying to achieve is the problem. It's good to see that I'm not completely talking out of my ass lol


11 hours ago, porina said:

Is that a typo? I've not managed to find anything about DGU. From what I can tell from a look around, DG1 is the 1st-gen Xe on 10nm, and so far there's a dev board going around that has been shown. DG2 is the next gen on Intel 7nm (and, according to a link in a post I made somewhere, also TSMC 5nm). Rumour sites put that at 2022 availability. I haven't seen anything beyond that so far.

 

Possibly. I heard about it whilst listening to a video, so I'm going off memory and how I remember hearing it. Either way though, according to the rumours, Intel's discrete graphics is pretty much dead development-wise after a few generations.


1 hour ago, CarlBar said:

 

Possibly. I heard about it whilst listening to a video, so I'm going off memory and how I remember hearing it. Either way though, according to the rumours, Intel's discrete graphics is pretty much dead development-wise after a few generations.

Intel are a bit like MS in that they can throw money at these types of projects in fits and spurts over time. It's one of those things where they know there is money in it; it's just a matter of plugging away when they can until they are ready to go the full monty.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


On 7/28/2020 at 12:31 PM, Benji said:

I don't even think it's that; why would a company decline money? Like a preceding comment said, I would rather assume they are only using this as a temporary measure to bridge the time it takes to fix the issues with their own manufacturing process. And knowing Intel, they do have the theoretical potential to fix it. And then they're gone. As you know, TSMC produces chips for manufacturers that don't have their own fabs. They probably can't even expect a longer-term contract with Intel, because once Intel figures out its issues, they'll be quick to save money by manufacturing in-house as usual and say "Buh-bye!". They see themselves as "rescuers", as stated in that article, and given Intel's very advanced manufacturing technology (which is always ahead of competitors like the here-mentioned TSMC), Intel probably wouldn't even be interested in producing more products at TSMC. Why, if their product is theoretically and even practically better? TSMC knows this, so they're just buffering off Intel's manufacturing and nothing more.

I wouldn't hold my breath on Intel overtaking TSMC in node technology at this point. Intel has struggled with both 10nm and 7nm, while TSMC is already on 5nm. I know some people will point out that Intel's 7nm and TSMC's 7nm are different, but honestly it doesn't even matter if Intel's 7nm is better than TSMC's 7nm, because by the time Intel has 7nm out the door, TSMC will be way past it. Intel has been stuck on the same process node for ages now, and it shows, especially in the heat and power consumption.


9 minutes ago, Brooksie359 said:

I wouldn't hold my breath on Intel overtaking TSMC in node technology at this point. Intel has struggled with both 10nm and 7nm, while TSMC is already on 5nm. I know some people will point out that Intel's 7nm and TSMC's 7nm are different, but honestly it doesn't even matter if Intel's 7nm is better than TSMC's 7nm, because by the time Intel has 7nm out the door, TSMC will be way past it. Intel has been stuck on the same process node for ages now, and it shows, especially in the heat and power consumption.

Isn't Intel betting on stacking now though?


1 hour ago, pas008 said:

Isn't Intel betting on stacking now though?

Honestly, I think they are throwing a Hail Mary at this point. Maybe they come back from this, but it's hard to say whether they will be able to dominate the process node space anytime soon. I am willing to bet that at most they will get close to the competition, but definitely not surpass it.


8 hours ago, CarlBar said:

Possibly. I heard about it whilst listening to a video, so I'm going off memory and how I remember hearing it. Either way though, according to the rumours, Intel's discrete graphics is pretty much dead development-wise after a few generations.

Trying to make some sense of that scenario, and struggling. If Intel were planning to give up on dGPUs in the future, it doesn't make a lot of sense to continue with them in the shorter term either, unless they have fixed contracts in place to make the things. I wonder if it is more of a misunderstanding. For example, with all the changes going on, the old plans might be out (which is what the rumour might be), and a new plan put in place (which the rumour source is not aware of). Unless I just missed it, I haven't seen anything to this effect on the likes of wccftech or videocardz, and you know they report on the slightest hint of anything happening.



8 minutes ago, porina said:

unless they have fixed contracts in place to make the things.

 

Aren't there already a couple of supercomputers signed up to use them?

 

Like I said, with Murthy gone it's subject to change. The supposedly canned one was the first that was all Raja's design, so it's possible its canning was part of Murthy's machinations; apparently he was trying to push Raja out, and finding some excuse he could pin on Raja for canning it would have helped him there.


2 minutes ago, CarlBar said:

Aren't there already a couple of supercomputers signed up to use them?

 

Like I said, with Murthy gone it's subject to change. The supposedly canned one was the first that was all Raja's design, so it's possible its canning was part of Murthy's machinations; apparently he was trying to push Raja out, and finding some excuse he could pin on Raja for canning it would have helped him there.

Sorry, when you said dGPUs I thought you were talking consumer level. I don't think Intel has said anything publicly beyond Ponte Vecchio, which is going into at least one supercomputer. I did post somewhere some observations allegedly from a former Intel employee, and it does sound like there was some infighting going on. I never worked for Intel, but internal politics is certainly one part of "big business" life I don't miss.



Intel can have TSMC 7nm capacity when everyone else moves to 5nm.


On 7/28/2020 at 8:28 PM, mariushm said:

 

Modern processes like 7nm, 12nm and 14nm don't work well with high voltages like 3.3V or 5V, and the chipset needs to work with 5V standby (the motherboard can have voltage regulators to lower the 5V to something like 3.3V or 2.5V, but some parts of the chipset still have to tolerate the higher voltages), so it's also more convenient to use older process nodes for these chips.

 

Sorry, but that point is simply not true. Converting 5V down is an absolute piece of cake and no consideration at all. And just because your IO pads may need to work with 3.3V (I am pretty sure only a few do; most IO is now 1.8V or lower), that by no means justifies an older tech node. The advantages you get within the core of the chip are far more important.

 

And 5-8W can be a lot of juice for e.g. laptops. I agree that chipsets are not made on cutting-edge <10nm tech, but something as old as 65nm? Heck no.
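To put the "converting 5V down is easy" point in rough numbers, here's a back-of-the-envelope sketch. This is plain Ohm's-law arithmetic, not anything from Intel's designs, and the 1A load current is purely an illustrative assumption: it compares the heat a linear (LDO) regulator would waste stepping 5V down to a 1.8V rail against a typical ~90%-efficient buck converter.

```python
# Back-of-the-envelope: cost of stepping a 5V standby rail down to 1.8V.
# All figures are illustrative assumptions, not real chipset numbers.
V_IN = 5.0        # standby rail (V)
V_OUT = 1.8       # assumed modern IO/core rail (V)
I_LOAD = 1.0      # assumed load current (A)
BUCK_EFF = 0.90   # assumed switching-regulator efficiency

# Linear (LDO) regulator: the full voltage drop is dissipated as heat.
p_linear_loss = (V_IN - V_OUT) * I_LOAD        # 3.2 W wasted as heat

# Buck converter: loss is whatever the efficiency figure leaves on the table.
p_out = V_OUT * I_LOAD                         # 1.8 W delivered to the load
p_buck_loss = p_out / BUCK_EFF - p_out         # ~0.2 W wasted

print(f"LDO loss: {p_linear_loss:.1f} W, buck loss: {p_buck_loss:.2f} W")
```

So the down-conversion itself is cheap with a switcher; as the reply argues, the real design question is the handful of pads that must *tolerate* the higher voltage, not the regulation.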

