
Intel CEO Pat Gelsinger: I hope to build chips for Lisa Su and AMD


 


Intel CEO Pat Gelsinger held a question and answer session here today at the company's IFS Direct Connect 2024 event, and in response to a question from Tom's Hardware, he reiterated that Intel is willing to build chips for anyone -- including long-time rival AMD.

 

"We hope that that includes Jensen (Nvidia), Christiano (Qualcomm), and Sundar (Google), and you heard today it includes Satya (Microsoft), and I even hope that includes Lisa (AMD) going forward. I mean, we want to be the foundry for the world, and if we're going to be the Western foundry at scale, we can't be discriminating about who’s participating in that. So, unequivocally, it is to be the foundry for the world. Commit supply chains, your leadership technology - the doors to the ala carte menu are wide open for the industry."

 

There were other signs of cats and dogs living together at the Intel Foundry event, too: Intel Foundry head Stu Pann called long-time rival Arm the company's most important customer, and then invited Arm CEO Rene Haas to the stage for a joint presentation. We certainly couldn't have seen that coming five years ago. In fact, Intel is already working on fabbing Arm Neoverse processors. 

 

How the tables have turned, indeed. A decade ago such a proposition would have been unthinkable, but the competitive environment wasn't nearly as aggressive on all fronts back then, either. Also, Nvidia's market valuation alone is several times that of AMD, Intel and Qualcomm combined.

 

Source:

https://www.tomshardware.com/pc-components/cpus/intel-ceo-pat-gelsinger-i-hope-to-build-chips-for-lisa-su-and-amd


If Intel's 2nm and 1.8nm nodes are on track for production stateside I could see business decisions like this leading to a renaissance for the company.
Right now it looks like they'll be able to leapfrog TSMC next year when they get their cards in order. 


I hope Intel doesn't become a fab for AMD. Not because they are rivals, but because I know for a fact that any time AMD releases a product that isn't as good as people hoped, the argument will be "well, clearly Intel sabotaged it at the fab level".

 

But yes, we do need more foundries across the world; the reliance on TSMC is quite scary.


49 minutes ago, TrigrH said:

that's one way to completely know what your competitor is doing at all times...

Though that's not the primary reason. The primary reason is making money directly off competitors that would otherwise just eat into your market share and make you no money. Imagine all the Ryzen and Radeon chips sold now: they not only compete with Intel's products directly, Intel also makes nothing from them.

 

If Intel fabs Ryzen or Radeon chips however, they are still competition for final products, but with every Ryzen or Radeon sold, Intel makes money from that too. It's basically a win-win situation for Intel. Not sure how conflict of interests is resolved or how AMD would keep design secrets from Intel meddling with it. After all, they'd see preproduction batches and designs of AMD chips and they are in business of making same things as direct competitors. Question is, if AMD would even be willing to go with them because of this very reason.


44 minutes ago, RejZoR said:

It's basically a win-win situation for Intel. Not sure how conflict of interests is resolved or how AMD would keep design secrets from Intel meddling with it. After all, they'd see preproduction batches and designs of AMD chips and they are in business of making same things as direct competitors.

I would think they might be forced to spin off the foundry business as a subsidiary, with so many NDAs and contracts that prevent the foundry side from sharing anything from other customers with the chip-design side.


This might be the best argument for spinning off the fabs I've heard so far. If it happens at all it could be a long process.

 

Intel winning over AMD could be a phased approach. For instance, AMD could use them for less cutting-edge parts. Especially in the chiplet era, this could for example include the IOD, MCD, and V-Cache dies. APUs tend to be released much later than the original chiplet design, so that would cut down any potential time for analysis before a generational architecture release. AMD doesn't produce on the absolute leading edge of process tech anyway; that's more Apple and other mobile-chip territory, and those are less in direct competition with Intel's products.

 

Not limited to AMD either; similar concerns could apply to Nvidia with GPU/AI offerings.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


2 hours ago, RejZoR said:

Not sure how conflict of interests is resolved or how AMD would keep design secrets from Intel meddling with it. After all, they'd see preproduction batches and designs of AMD chips and they are in business of making same things as direct competitors. Question is, if AMD would even be willing to go with them because of this very reason.

Intel would already mostly know anyway, and the directions both companies have taken are so vastly different there isn't anything they could just bring across and use. Tiles vs chiplets vs whatever only sounds similar at face value, but really they are very different, and the internal data connections, as well as how Tiles/Chiplets are interconnected, aren't the same or interchangeable.

 

And anyway, AMD's Infinity Fabric is just an evolution of HyperTransport, which was an open standard used by both IBM and Apple; Intel has all the general knowledge of how it works as it is.

 

Knowing what and how someone is doing something doesn't mean you copy them or even change your own plans; you can think you have, or legitimately have, the better path, or at a minimum one better for your own product.


I see it as especially likely that Intel could manufacture I/O dies for AMD, maybe using the Intel 7 and Intel 4 processes, as those get used less for the high-end dies.

 

Intel still manufactures a lot with TSMC

 

The Intel 3, Intel 18A, and Intel 14A nodes look promising.

 

Overall I like Intel's trajectory of getting its fabs back to being competitive. It's not good to have only one manufacturer of high-end chips; it will end badly if anything goes wrong with it. I'm happy Intel is keeping up investment into fabs (subsidized in no small part by the USA and the EU).

 

Technology-wise it's the comeback I was hoping to see from Intel. RibbonFET, backside power delivery, and finally the tile packaging really come together to mix and match processes. I like that we are moving away from monolithic dies for CPUs.

 

From Intel, I would like them to improve on their low-power nodes to manufacture mobile Intel CPUs that are more competitive with AMD, and I would like them to manufacture Battlemage GPU dies in house for their APUs, and L3 cache dies on the Intel 3 process.

 

Another thing I'd like to see is the development of GPU tiles. I think a GPU with discrete dies for the cores, stacked 3D cache dies, and Apple's approach of gluing dies together on their sides has the potential to make GPU dies more scalable. E.g. a 5050 with one CUDA die, a 5060 with two CUDA dies, a 5070 with three, a 5080 with four, and a 5090 with up to six, with memory channels that scale with the number of CUDA dies and only the base die changing based on the GPU model.
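The scaling scheme above can be sketched as a simple lookup: identical compute dies reused across the stack, with memory channels growing alongside. All model names, die counts, and the channels-per-die ratio here are hypothetical values from the post's example, not real product specifications.

```python
# Hypothetical multi-die GPU lineup: identical compute dies reused across
# SKUs, memory channels scaling with die count, only the base die varying.
# Every number here is illustrative, not a real product spec.
CHANNELS_PER_DIE = 2  # assumed ratio of memory channels to compute dies

LINEUP = {"5050": 1, "5060": 2, "5070": 3, "5080": 4, "5090": 6}

def gpu_config(model: str) -> dict:
    """Return the tile configuration implied by the scaling scheme."""
    dies = LINEUP[model]
    return {
        "base_die": f"base-{model}",  # only the base die differs per model
        "compute_dies": dies,          # same compute die reused across SKUs
        "memory_channels": dies * CHANNELS_PER_DIE,
    }

print(gpu_config("5090"))
```

The appeal is the same as AMD's CCD reuse on CPUs: one compute die design amortized across an entire product stack.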

 

Overall it's good for Intel to decouple its fabs from its processor business; it will help avoid a repeat of the 14nm stagnation that let AMD catch up.

 

Global Foundries has given up on the bleeding edge. The West is doing everything it can to slow down China's domestic fabs; still, we could eventually end up with three bleeding-edge fabs: TSMC, SMIC and Intel.


1 hour ago, 05032-Mendicant-Bias said:

For context, it takes time from when fabs come online to when products using them are out. Zen 4 came out on N5 about the time N3 entered high-volume manufacturing, with N3 available for risk production before that. We still don't have an x86-adjacent product on a 3nm-class node today, with Zen 5 expected to be the first when it is rumoured to release later this year.

 

So Intel will continue to use TSMC for a while yet. Their leading-edge 20A/18A nodes are due to be "done" by the end of this year, with Arrow Lake, I suspect, getting a high-end-only launch using 20A to showcase it, but it won't be a volume part. Maybe we'll see more Intel 3 offerings.

 

1 hour ago, 05032-Mendicant-Bias said:

Another thing I'd like to see is the development of GPU tiles. I think a GPU with discrete dies for the cores, stacked 3D cache dies, and Apple's approach of gluing dies together on their sides has the potential to make GPU dies more scalable.

EMIB and Foveros could enable this, but Intel still needs to get their GPUs polished before aiming for the highest end. I'm not sure on the connection density currently offered by EMIB vs the TSMC packaging Apple uses.

 

1 hour ago, 05032-Mendicant-Bias said:

Global Foundries has given up on the bleeding edge. The West is doing everything it can to slow down China's domestic fabs; still, we could eventually end up with three bleeding-edge fabs: TSMC, SMIC and Intel.

SMIC? I thought Samsung would be the other in top 3.



3 hours ago, leadeater said:

Intel would already mostly know anyway, and the directions both companies have taken are so vastly different there isn't anything they could just bring across and use. Tiles vs chiplets vs whatever only sounds similar at face value, but really they are very different, and the internal data connections, as well as how Tiles/Chiplets are interconnected, aren't the same or interchangeable.

 

And anyway, AMD's Infinity Fabric is just an evolution of HyperTransport, which was an open standard used by both IBM and Apple; Intel has all the general knowledge of how it works as it is.

 

Knowing what and how someone is doing something doesn't mean you copy them or even change your own plans; you can think you have, or legitimately have, the better path, or at a minimum one better for your own product.

I don't think that works so superficially, just because Infinity Fabric is like Hypertransport, it's suddenly irrelevant. I meant more impactful things, things that make Ryzen chips so much more efficient than Intel's while still top performing. Meanwhile Intel is STILL doing the same old MOAR POWER and MOAR CLOCKS. It's basically all they've been doing for generations since like 6th generation. And we're at 14th now. The only somewhat innovative thing was the hybrid core design, but it's still very messy because it has to rely on the Windows scheduler, and they made a huge mess with AVX. Meanwhile AMD has really shattered the industry, first with chiplet design and later with X3D. Both seemingly primitive "features", but they are the features that matter. Especially X3D is just mindblowing for gaming systems. Buying anything other than 5800X3D or 7800X3D is just stupid if you ask me, they are that good.

 

I think if Intel knew what AMD was doing exactly, they wouldn't be playing catchup the way they are still doing since launch of first Ryzen, years ago.


28 minutes ago, RejZoR said:

I don't think that works so superficially, just because Infinity Fabric is like Hypertransport, it's suddenly irrelevant.

When did I say it's irrelevant? All I said was Intel can't just go ahead and use it, even if they have the technical standards on how the protocol works and then actually see how it's implemented in chips, which they are skilled and talented enough to find out/know already.

 

EMIB is actually outright a better technology, but it also covers more than what Infinity Fabric does, so it's not like for like. That's why Intel never tried to copy AMD; they were working on something better that was just taking longer.

 

28 minutes ago, RejZoR said:

I meant more impactful things, things that make Ryzen chips so much more efficient than Intel's while still top performing.

Intel just chooses to run their chips at higher power limits which skews the efficiency. You can limit Intel CPUs to the same power as AMD's PPT and the difference isn't so great, Intel does it to capture that higher performance because that's what really matters on the desktop side in benchmarks, being at the top.
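A toy model of the point above: because performance typically grows sublinearly with power, the same silicon capped at a lower limit looks far more efficient. The exponent and the two wattages below are made up for illustration, not measured data.

```python
# Toy perf-vs-power curve: performance grows sublinearly with power draw,
# so raising the power limit buys peak performance at the cost of perf/W.
# The 0.4 exponent and both limits are illustrative, not measurements.
def perf(power_w: float) -> float:
    return 100 * power_w ** 0.4  # assumed sublinear scaling

for limit in (125, 253):  # e.g. an AMD-style PPT vs an Intel PL2-style cap
    p = perf(limit)
    print(f"{limit:>3} W: perf = {p:.0f}, perf/W = {p / limit:.2f}")
```

Under this (assumed) curve, the 253 W configuration scores higher in absolute terms but is clearly worse in perf/W, which is exactly the skew described above.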

 

For mobile, Intel laptops were actually more efficient in reality up until Ryzen Mobile 6000, and even then AMD wasn't always ahead; Intel's U series still did better for performance at that lower power level.

 

Also all AMD chiplet products have actually quite bad idle/low power efficiency due to the IOD and the interconnect.

 

28 minutes ago, RejZoR said:

Meanwhile Intel is STILL doing the same old MOAR POWER and MOAR CLOCKS

That's what AMD have also been doing to Zen since it came out 😉

 

Zen(x) hasn't gotten lower in power draw over each generation, it's gotten higher.

 

It's really not as black and white as you seem to think, nor has Intel been doing so little. Intel has also had the vastly superior core architecture, highly optimized for performance; you're complaining that a race-spec engine isn't able to gain lots of HP each year, when any gain at all is actually quite a feat in the first place. AMD, on the other hand, was starting from a mid-2000s four-cylinder economy engine, so the potential and ease of improvement were vast.

 

Intel's problem, as we have just seen in the last few months, is that if there is any performance regression at all, in any way, they get criticized for it, so they can't really make their core architecture more power efficient if that means a reduction in performance. So that either means going with a hybrid architecture or waiting for better fab processes where you alter your product targets towards efficiency rather than performance, like they had been doing in the past.

 

Never forget AMD was in such a bad position they effectively started from the beginning, which offers up a lot of freedom of design and little to no prior expectations.

 

Why is it everyone thinks these companies:

  • Don't know what each other is doing
  • Want to steal legally protected, patented technology that requires public documentation of what it is and how it works
  • Want to copy what each other is doing down to these low levels
  • That the above is even possible at all

If it were so simple and effective to rip off what another is doing then Cyrix would still exist.


1 hour ago, RejZoR said:

Meanwhile Intel is STILL doing the same old MOAR POWER and MOAR CLOCKS. It's basically all they've been doing for generations since like 6th generation. And we're at 14th now.

They had problems with their process, and that recovery path is only now coming to an end. There have been gains in that era, but they are generally lagging AMD in process node usage up to now. I did go into this in my previous post. CPU releases at the end of this year and into next year will be the ones to watch.

 

1 hour ago, RejZoR said:

Meanwhile AMD has really shattered the industry, first with chiplet design and later with X3D.

These are more manufacturing than design features. Of course, they have to be designed in to make use of them, but a chiplet CPU doesn't work better than a hypothetical similar monolithic implementation. The benefit at large scales is manufacturability and cost. X3D is a nice idea in concept but arguably needed more by AMD due to how their chips are designed. A more cohesive CPU like Intel's would still benefit from it, but maybe a bit less so.

 

1 hour ago, RejZoR said:

Buying anything other than 5800X3D or 7800X3D is just stupid if you ask me, they are that good.

5800X3D is showing its age. Great if you're stretching out the life of AM4, but makes little sense to get today otherwise. The 6 core 7600X beats it in gaming on average and much cheaper (ignoring platform cost for now).

 

7800X3D is on average the top gaming CPU, but Intel's best are near enough the same on average and will beat it outside of gaming.

 

1 hour ago, RejZoR said:

I think if Intel knew what AMD was doing exactly, they wouldn't be playing catchup the way they are still doing since launch of first Ryzen, years ago.

See previous on process. That's the reason. The designs are there, the manufacturing isn't.

 

It is unlikely for Intel to be able to get a meaningful technical advantage by knowing what AMD are doing in advance at manufacturing stage. Designs are started years in advance and reacting once it reaches manufacturing isn't going to give you time to do much about it. At best, Intel could gain a marketing advantage by targeting their offerings against AMD.



44 minutes ago, porina said:

They had problems with their process, and that recovery path is only now coming to an end. There have been gains in that era, but they are generally lagging AMD in process node usage up to now. I did go into this in my previous post. CPU releases at the end of this year and into next year will be the ones to watch.

 

These are more manufacturing than design features. Of course, they have to be designed in to make use of them, but a chiplet CPU doesn't work better than a hypothetical similar monolithic implementation. The benefit at large scales is manufacturability and cost. X3D is a nice idea in concept but arguably needed more by AMD due to how their chips are designed. A more cohesive CPU like Intel's would still benefit from it, but maybe a bit less so.

 

5800X3D is showing its age. Great if you're stretching out the life of AM4, but makes little sense to get today otherwise. The 6 core 7600X beats it in gaming on average and much cheaper (ignoring platform cost for now).

 

7800X3D is on average the top gaming CPU, but Intel's best are near enough the same on average and will beat it outside of gaming.

 

See previous on process. That's the reason. The designs are there, the manufacturing isn't.

 

It is unlikely for Intel to be able to get a meaningful technical advantage by knowing what AMD are doing in advance at manufacturing stage. Designs are started years in advance and reacting once it reaches manufacturing isn't going to give you time to do much about it. At best, Intel could gain a marketing advantage by targeting their offerings against AMD.

Intel's node has nothing to do with it. Their approach does. Intel's fab node is clearly very competent no matter what people say; how else would they push out such crazy clocks, and have been for a long while now, if it wasn't? The issue is that their processor logic is so far behind that they have to push their CPUs so far beyond the sweet spot that they perform so poorly in efficiency. I never argued Intel's performance; they still deliver it. But at what cost, that's a different story. Imagine if the 14900K's current clocks were the sweet spot and not 30% above it? It would be freaking amazing. Instead it's great, but a furnace.

 

Also, you can't just ignore platform cost to make an argument. That's literally the reason why I decided on the 5800X3D. I mostly play games and whatever general-purpose stuff I do, it's still plenty fast. I just couldn't justify investing in a new motherboard and RAM just to upgrade the CPU. I checked the reviews and, in general, the uplift in games was on par with the latest-gen CPUs from both AMD and Intel. I spent 380€ on a brand-new 5800X3D back then (in 2022!) and sold my 5800X for 250€, so I effectively spent 130€ for an upgrade that leveled my last-gen system with the top-end stuff being sold at the time. For 130€. Doing anything else would be just plain stupid. And one may say upgrading a 5800X to a 5800X3D was stupid, but there was no other upgrade path really; buying a 5900X or 5950X wouldn't really make my games run faster, but the 5800X3D does.


8 minutes ago, RejZoR said:

Intel's node has nothing to do with it.

It is very much the node that is the limiting factor. They've been making great designs on a not so leading node. That doesn't make it a bad node, but when push comes to shove, that difference shows.

 

Unfortunately I don't own either a recent Intel CPU nor Zen 4, so I can't do my own testing like I did in the past. Following is Zen 2 vs Skylake. Intel 14nm vs TSMC N7. Zen 2 takes a slight lead in IPC from being a newer design, but they're not so different. Note I consider Zen 2 as the point when AMD managed to really overtake Intel in the Ryzen era as earlier Zen was more lacking and only offset by throwing cores at the problem.

I can't find it right now, but I separately did testing of Coffee Lake vs Zen 2 at various power limits. Zen 2 was clearly superior there at normal operating points for both. IPC could be considered indicative of the microarchitecture design, and power efficiency from the node. Of course, the two are related so they do factor into each other and are not totally isolated.

 

8 minutes ago, RejZoR said:

Also, you can't just ignore platform cost to make an argument.

I did that because I didn't want to end up looking up prices only for someone else to say it's different in their region. My point remains: I do not feel that buying a new 5800X3D + mobo + DDR4 RAM today makes any sense when you can buy a 7600X + mobo + DDR5 RAM for what is likely not that different a total, assuming the mobo chosen is comparable in quality. The newer platform performs similar or better in gaming while leaving the door open to future upgrades. I did acknowledge that someone upgrading on an existing AM4 platform might see more value in the 5800X3D.

 

The 5800X3D is still a great CPU, for gaming or otherwise. But out of today's offerings it is unremarkable and nothing special.



Honestly, TSMC being as big as it is is the biggest national security issue ever. We are literally on the verge of most of the world's chips being taken by a hostile nation. Not to dive too much into politics, but something needs to be done.


On 2/22/2024 at 5:38 PM, Lunar River said:

I hope Intel doesn't become a fab for AMD. Not because they are rivals, but because I know for a fact that any time AMD releases a product that isn't as good as people hoped, the argument will be "well, clearly Intel sabotaged it at the fab level".

Although this won't solve the issue of people's perceptions of AMD's chips, I hope AMD does have some allowance that says they can come in and inspect Intel's process, or the products being built. As far as I'm concerned, they reserve that right because it's their product being built; Intel would just be contracted to build it.

"It pays to keep an open mind, but not so open your brain falls out." - Carl Sagan.

"I can explain it to you, but I can't understand it for you" - Edward I. Koch


14 hours ago, porina said:

It is very much the node that is the limiting factor. They've been making great designs on a not so leading node. That doesn't make it a bad node, but when push comes to shove, that difference shows.

 

Unfortunately I don't own either a recent Intel CPU nor Zen 4, so I can't do my own testing like I did in the past. Following is Zen 2 vs Skylake. Intel 14nm vs TSMC N7. Zen 2 takes a slight lead in IPC from being a newer design, but they're not so different. Note I consider Zen 2 as the point when AMD managed to really overtake Intel in the Ryzen era as earlier Zen was more lacking and only offset by throwing cores at the problem.

I can't find it right now, but I separately did testing of Coffee Lake vs Zen 2 at various power limits. Zen 2 was clearly superior there at normal operating points for both. IPC could be considered indicative of the microarchitecture design, and power efficiency from the node. Of course, the two are related so they do factor into each other and are not totally isolated.

 

I did that because I didn't want to end up looking up prices only for someone else to say it's different in their region. My point remains: I do not feel that buying a new 5800X3D + mobo + DDR4 RAM today makes any sense when you can buy a 7600X + mobo + DDR5 RAM for what is likely not that different a total, assuming the mobo chosen is comparable in quality. The newer platform performs similar or better in gaming while leaving the door open to future upgrades. I did acknowledge that someone upgrading on an existing AM4 platform might see more value in the 5800X3D.

 

The 5800X3D is still a great CPU, for gaming or otherwise. But out of today's offerings it is unremarkable and nothing special.

No, my point was, when you already have AM4 MOBO+RAM (like I did), buying a 5800X3D makes absolute sense and swapping the entire platform for AM5 MOBO+RAM absolutely didn't; the gains/advantages would just be too small for the investment. It was a slightly smaller upgrade for me since I went from a 5800X to the 5800X3D, but imagine someone running a 3600X and upgrading to a 5800X3D primarily for gaming; the upgrade would be absolutely massive with minimal investment. Plus, this was a rather rare opportunity to actually make a CPU-only upgrade. Usually you just buy the highest-end model and never upgrade it until you swap in an entire new platform.

 

I remember doing actual CPU upgrades on an existing mobo maybe two times in the last 24 years: from a Core 2 Duo E4300 to an E5200, and now from the 5800X to the 5800X3D. The rest were whole-platform upgrades/swaps that usually lasted around 5 years each.


1 minute ago, RejZoR said:

No, my point was, when you already have AM4 MOBO+RAM (like I did), buying a 5800X3D makes absolute sense and swapping the entire platform for AM5 MOBO+RAM absolutely didn't; the gains/advantages would just be too small for the investment.

Then we argued over nothing. I said if you had AM4 then 5800X3D still has value, but a new build today 5800X3D isn't the best choice.



5 hours ago, porina said:

Then we argued over nothing. I said if you had AM4 then 5800X3D still has value, but a new build today 5800X3D isn't the best choice.

I guess we have then.


On 2/23/2024 at 2:38 AM, TrigrH said:

that's one way to completely know what your competitor is doing at all times...

Eh, if the companies are as split as it sounds like they are, it doesn't really.

The foundry is being spun off, but will remain under the same Intel board. This also frees up the product side to choose whatever fab is best for them.


On 2/22/2024 at 10:49 PM, RVRY said:

If Intel's 2nm and 1.8nm nodes are on track for production stateside I could see business decisions like this leading to a renaissance for the company.
Right now it looks like they'll be able to leapfrog TSMC next year when they get their cards in order. 

Intel 20A and 18A shouldn't be called 2 nm and 1.8 nm. It's a marketing name standing for new key technologies and density and efficiency gains, not the actual size. E.g., Intel 20A sees the introduction of RibbonFET and PowerVia.
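The naming point can be made concrete: the "A" suffix denotes angstroms (10 Å = 1 nm), so "20A" maps to the "2 nm" class in TSMC/Samsung-style naming, but in both schemes the number is a label rather than a measured feature size. A minimal conversion sketch:

```python
# Convert an angstrom-style node name ("20A", "18A") into its nm-class
# marketing equivalent. The result is still a label, not a physical
# dimension of any transistor feature.
def angstrom_node_to_nm(name: str) -> float:
    if not name.endswith("A"):
        raise ValueError("expected an angstrom-style name like '20A'")
    return int(name[:-1]) / 10  # 10 angstroms per nanometre

print(angstrom_node_to_nm("20A"))  # 2.0  ("2 nm" class)
print(angstrom_node_to_nm("18A"))  # 1.8  ("1.8 nm" class)
```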


2 hours ago, HenrySalayne said:

It's a marketing name standing for new key technologies and density and efficiency gains, not the actual size.

That applies to all fabs. The numbers have not represented a physical feature size for a long time.



7 hours ago, porina said:

That applies to all fabs. The numbers have not represented a physical feature size for a long time.

Absolutely, but it gets even more confusing if you convert made-up numbers that are just marketing terms into different units. 🤷

