
Why couldn't Intel just go with 12nm?

Solved by Coaxialgamer.

Instead of 10nm, couldn't they just go with 12nm? They're attempting too big a leap. At least they could keep working on 10nm in the meantime, there wouldn't be so many chips clogging up their foundries, and they wouldn't be too far behind. There are plenty of other small benefits too. What do you think?


I think Intel could have mastered 10nm much earlier if they had invested their money into development instead of bribing companies not to sell or use AMD CPUs.


1 minute ago, Pixel5 said:

I think Intel could have mastered 10nm much earlier if they had invested their money into development instead of bribing companies not to sell or use AMD CPUs.

Yeah, that's probably true. That bribing, however, is paying off big time right now. For the first time in a decade, AMD has legitimately competitive offerings in the consumer market, but Intel still holds a massive portion of market share. 

Main PC:

AMD Ryzen 7 5800X • Noctua NH-D15 • MSI MAG B550 Tomahawk • 2x8GB G.skill Trident Z Neo 3600MHz CL16 • MSI VENTUS 3X GeForce RTX 3070 OC • Samsung 970 Evo 1TB • Samsung 860 Evo 1TB • Corsair iCUE 465X RGB • Corsair RMx 750W (White)

 

Peripherals/Other:

ASUS VG27AQ • G PRO K/DA • G502 Hero K/DA • G733 K/DA • G840 K/DA • Oculus Quest 2 • Nintendo Switch (Rev. 2)

 

Laptop (Dell XPS 13):

Intel Core i7-1195G7 • Intel Iris Xe Graphics • 16GB LPDDR4x 4267MHz • 512GB M.2 PCIe NVMe SSD • 13.4" OLED 3.5K InfinityEdge Display (3456x2160, 400nit, touch). 

 

Got any questions about my system or peripherals? Feel free to tag me (@bellabichon) and I'll be happy to give you my two cents. 

 

PSA: Posting a PCPartPicker list with no explanation isn't helpful for first-time builders :)


4 minutes ago, Pixel5 said:

I think Intel could have mastered 10nm much earlier if they had invested their money into development instead of bribing companies not to sell or use AMD CPUs.

Yeah, but with Intel being Intel, from what I can gather they want to be the main company in the market. A bit like the Apple strategy, if I'm honest.


1 minute ago, 1kv said:

Yeah, but with Intel being Intel, from what I can gather they want to be the only ones in the market. A bit like the Apple strategy, if I'm honest.

Well, yes and no. Apple's philosophy is about making products easier for the consumer, at the cost of more expensive offerings, and loss of right-to-repair. Intel seems to really not care what you do with your CPU, as long as you pay them. 



3 minutes ago, Pixel5 said:

I think Intel could have mastered 10nm much earlier if they had invested their money into development instead of bribing companies not to sell or use AMD CPUs.

Yeah, they're a multibillion-dollar company and their strategy looks kind of stupid now: their i7, i5, and i3 chips and their server chips are all clogging up their 14nm++++++ foundries.


3 minutes ago, bellabichon said:

Well, yes and no. Apple's philosophy is about making products easier for the consumer, at the cost of more expensive offerings, and loss of right-to-repair. Intel seems to really not care what you do with your CPU, as long as you pay them. 

That's sorta what I was getting at. I meant that they've tried to heavily promote themselves and block out the competition, with Intel being Apple and AMD being Android in my example. Android's been on the rise with the Note 9 and OP6 recently, and that's why I thought of the comparison.


6 minutes ago, Somekid5 said:

Yeah, they're a multibillion-dollar company and their strategy looks kind of stupid now: their i7, i5, and i3 chips and their server chips are all clogging up their 14nm++++++ foundries.

I think there was a whole lot more involved in their strategy meetings than just "should we go 10nm, 12nm, or 14nm?" These things take YEARS of planning, funding, research, reinvention, etc., etc.

 

They didn't expect AMD to perform as they did (or didn't put enough money into bribing AMD personnel). In fact, I'll wager that five years ago no one was confident AMD would be where it is now (I'm sure AMD employees had plenty of "WTF are we doing?!" scuttlebutt). Normal business stuff.

 

You take educated guesses and a bit of risk (especially when leading an industry); if it works, you're just doing okay, and if it fails... you get random speculation about how stupid your strategy was :P

 

Work in a corporation at the management or C level and you'll understand a lot more than a forum can shed light on.

 

Edit: As the poster after me said, it's just a label. Think about cars: should everyone fit larger engines for more power? Bigger doesn't mean better. In chips, smaller doesn't automatically mean better either. Refinement is a huge part of things.

"Do what makes the experience better" - in regards to PCs and Life itself.

 

Onyx AMD Ryzen 7 7800x3d / MSI 6900xt Gaming X Trio / Gigabyte B650 AORUS Pro AX / G. Skill Flare X5 6000CL36 32GB / Samsung 980 1TB x3 / Super Flower Leadex V Platinum Pro 850 / EK-AIO 360 Basic / Fractal Design North XL (black mesh) / AOC AGON 35" 3440x1440 100Hz / Mackie CR5BT / Corsair Virtuoso SE / Cherry MX Board 3.0 / Logitech G502

 

7800X3D - PBO -30 all cores, 4.90GHz all core, 5.05GHz single core, 18286 C23 multi, 1779 C23 single

 

Emma : i9 9900K @5.1Ghz - Gigabyte AORUS 1080Ti - Gigabyte AORUS Z370 Gaming 5 - G. Skill Ripjaws V 32GB 3200CL16 - 750 EVO 512GB + 2x 860 EVO 1TB (RAID0) - EVGA SuperNova 650 P2 - Thermaltake Water 3.0 Ultimate 360mm - Fractal Design Define R6 - TP-Link AC1900 PCIe Wifi

 

Raven: AMD Ryzen 5 5600x3d - ASRock B550M Pro4 - G. Skill Ripjaws V 16GB 3200Mhz - XFX Radeon RX6650XT - Samsung 980 1TB + Crucial MX500 1TB - TP-Link AC600 USB Wifi - Gigabyte GP-P450B PSU -  Cooler Master MasterBox Q300L -  Samsung 27" 1080p

 

Plex : AMD Ryzen 5 5600 - Gigabyte B550M AORUS Elite AX - G. Skill Ripjaws V 16GB 2400Mhz - MSI 1050Ti 4GB - Crucial P3 Plus 500GB + WD Red NAS 4TBx2 - TP-Link AC1200 PCIe Wifi - EVGA SuperNova 650 P2 - ASUS Prime AP201 - Spectre 24" 1080p

 

Steam Deck 512GB OLED

 

OnePlus: 

OnePlus 11 5G - 16GB RAM, 256GB NAND, Eternal Green

OnePlus Buds Pro 2 - Eternal Green

 

Other Tech:

- 2021 Volvo S60 Recharge T8 Polestar Engineered - 415hp/495tq 2.0L 4cyl. turbocharged, supercharged and electrified.

Lenovo 720S Touch 15.6" - i7 7700HQ, 16GB RAM 2400MHz, 512GB NVMe SSD, 1050Ti, 4K touchscreen

MSI GF62 15.6" - i7 7700HQ, 16GB RAM 2400 MHz, 256GB NVMe SSD + 1TB 7200rpm HDD, 1050Ti

- Ubiquiti Amplifi HD mesh wifi

 


Ultimately, 12nm is just a label at this point.

GlobalFoundries' 12nm tech is not much more than an optimized variant of their 14nm tech.

If you're suggesting Intel go with GF's 12nm, then you should know that Intel's 14nm is ultimately superior to GF 12nm: it's faster and denser (or at least just as dense), not to mention that a shift to that tech would require porting all of Intel's designs to work with GF's process.

But I don't think that's what you mean.

Do you mean "Why doesn't Intel introduce a stopgap process with a less aggressive shrink until 10nm is ready?"

The answer:

- Building a dedicated process from scratch requires a lot of money, which would be foolish for something that is essentially a short-lived stopgap.

- They already are... sort of:

Intel's improved 14nm processes (14nm+ and 14nm++) fit this description in a way. What they did was take the existing 14nm process and add many of the features they had been planning to introduce at 10nm. Single dummy gates and COAG were originally planned to debut at 10nm, but were introduced in these 14nm revisions, which reduced die size and increased performance. They didn't do much for power by themselves, but power was mostly handled by the standard incremental improvements you'd expect of a mature process.
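To put a rough number on the "just a label" point, here's a quick back-of-envelope sketch comparing quoted logic transistor densities. The MTr/mm² figures are approximate public estimates I'm plugging in for illustration, not official vendor numbers, so swap in whatever figures you trust:

```python
# Rough, illustrative comparison of logic transistor density.
# The MTr/mm^2 values are approximate public estimates (assumptions),
# not official vendor numbers -- treat them as placeholders.
approx_density = {
    "GF 12LP":     36.0,   # roughly 14nm-class density with tighter cells
    "Intel 14nm":  37.5,   # comparable to (or slightly above) GF 12LP
    "Intel 10nm": 100.0,   # the actual full-node jump Intel was chasing
}

baseline = approx_density["GF 12LP"]
for node, mtr in sorted(approx_density.items(), key=lambda kv: kv[1]):
    print(f"{node:>11}: ~{mtr:5.1f} MTr/mm^2  ({mtr / baseline:.2f}x GF 12LP)")
```

With figures in that ballpark, a GF-style "12nm" lands at roughly the density Intel's 14nm already has, while 10nm is the real jump, which is exactly why the name alone tells you so little.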

AMD Ryzen 7 1700 (3.8GHz) w/ NH-D14, EVGA RTX 2080 XC (stock), 4x4GB DDR4 3000MT/s RAM, Gigabyte AB350-Gaming-3 MB, CX750M PSU, 1.5TB SSD + 7TB HDD, Phanteks Enthoo Pro case

