IBM unveils world’s first 7nm chip prototype

zMeul

source:http://arstechnica.co.uk/gadgets/2015/07/ibm-unveils-industrys-first-7nm-chip-moving-beyond-silicon/

 


IBM, working with GlobalFoundries, Samsung, SUNY, and various equipment suppliers, has produced the world's first 7nm chip with functional transistors. While it should be stressed that commercial 7nm chips are still at least two years away, this test chip from IBM and its partners is extremely significant for three reasons: it's a working sub-10nm chip (this is pretty significant in itself); it's the first commercially viable sub-10nm FinFET logic chip that uses silicon-germanium as the channel material; and it appears to be the first commercially viable design produced with extreme ultraviolet (EUV) lithography.

This is a 7nm test chip, built at the IBM/SUNY (State University of New York) Polytechnic 300mm research facility in Albany, NY. The transistors are of the FinFET variety, with one significant difference from commercialised FinFETs: the channel of the transistor is a silicon-germanium (SiGe) alloy, rather than just silicon. To reach such tiny geometries, self-aligned quadruple patterning (SAQP) and EUV lithography are used.
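A rough sketch of why quadruple patterning matters here: each self-aligned patterning step halves the minimum pitch the scanner can print directly. The 80nm starting pitch below is an assumed, commonly cited figure for single-exposure 193nm immersion lithography, used purely for illustration.

```python
# Illustrative only: self-aligned multiple patterning divides the
# minimum printable pitch by 2 per pitch-halving step.

def patterned_pitch(litho_pitch_nm: float, halvings: int) -> float:
    """Feature pitch after n self-aligned pitch-halving steps."""
    return litho_pitch_nm / (2 ** halvings)

IMMERSION_PITCH_NM = 80.0  # assumed 193i single-exposure minimum pitch

sadp = patterned_pitch(IMMERSION_PITCH_NM, 1)  # self-aligned double patterning
saqp = patterned_pitch(IMMERSION_PITCH_NM, 2)  # self-aligned quadruple patterning

print(f"SADP: {sadp:.0f} nm pitch")  # SADP: 40 nm pitch
print(f"SAQP: {saqp:.0f} nm pitch")  # SAQP: 20 nm pitch
```

EUV's appeal is that its ~13.5nm wavelength can print comparable pitches in a single exposure, cutting out those extra patterning steps.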

---

Intel has trouble with 10nm and IBM wants to go 7nm...

maybe in the next 10 years


this is awesome, but it will probably still be a few years before we see it in consumer-grade products

CPU: Intel 5930K - GPU: EVGA Nvidia GTX 980Ti SSC - Motherboard: Asus X99-PRO/USB 3.1 - RAM: 32GB HyperX Savage @ 2800MHz CL14 - Case: Phanteks Eclipse P400 Tempered Glass - Cooling: Corsair H100i V2 / Fractal Design Venturi Fans - Storage: PNY XLR8 120GB SSD (OS) + Seagate 2TB HDD (Games)


It'll probably be years before yields are good enough to put this tech into even HPC chips.

Computing enthusiast. 
I used to be able to input a cheat code, now I've got to input a credit card - Total Biscuit
 


IBM, please buy AMD. We need this kind of progress.

IBM should make consumer CPUs (that use the x86 instruction set); I doubt they will buy AMD, though that would be awesome.

mY sYsTeM iS Not pErfoRmInG aS gOOd As I sAW oN yOuTuBe. WhA t IS a GoOd FaN CuRVe??!!? wHat aRe tEh GoOd OvERclok SeTTinGS FoR My CaRd??  HoW CaN I foRcE my GpU to uSe 1o0%? BuT WiLL i HaVE Bo0tllEnEcKs? RyZEN dOeS NoT peRfORm BetTer wItH HiGhER sPEED RaM!!dId i WiN teH SiLiCON LotTerrYyOu ShoUlD dEsHrOuD uR GPUmy SYstEm iS UNDerPerforMiNg iN WarzONEcan mY Pc Run WiNdOwS 11 ?woUld BaKInG MY GRaPHics card fIX it? MultimETeR TeSTiNG!! aMd'S GpU DrIvErS aRe as goOD aS NviDia's YOU SHoUlD oVERCloCk yOUR ramS To 5000C18

 


IBM, please buy AMD. We need this kind of progress.

Well, they said they are skipping 20nm and going straight to FinFET, so that's good.

That's that. If you need to get in touch, chances are you can find someone who knows me who can get in touch.


IBM should make consumer CPUs (that use the x86 instruction set); I doubt they will buy AMD, though that would be awesome.

 

They left the consumer space because it was dragging the company into bankruptcy. Only after they liquidated and sold all those divisions and some of their fabs did they regain control; all because they focused entirely on enterprise again.

 

I doubt they'll go into the consumer space again any time soon. :(

5950X | NH D15S | 64GB 3200MHz | RTX 3090 | ASUS PG348Q+MG278Q

 


IBM, please buy AMD. We need this kind of progress.

 

IBM should make consumer CPUs (that use the x86 instruction set); I doubt they will buy AMD, though that would be awesome.

 

They left the consumer space because it was dragging the company into bankruptcy. Only after they liquidated and sold all those divisions and some of their fabs did they regain control; all because they focused entirely on enterprise again.

 

I doubt they'll go into the consumer space again any time soon. :(

 

With what license? As far as I'm aware they no longer have an x86 or x86-64 license to really work with.



With what license? As far as I'm aware they no longer have an x86 or x86-64 license to really work with.

 

IBM had their own competitive PowerPC line for consumers, along with other business operations. I don't see them buying AMD or negotiating for an x86 license.



IBM had their own competitive PowerPC line for consumers, along with other business operations. I don't see them buying AMD or negotiating for an x86 license.

Yeah, but PowerPC is RISC while x86 is CISC... no new OSes really support it; pretty much every Windows version after 3.1 and 95 was completely CISC-focused. Edit: ARM uses RISC, so Windows does support it.

 

As much as it would be nice, and RISC can arguably be better in the right conditions, without that x86 license it'll be hard to even get a presence. It will still be hard to break into the market regardless.



Yeah, but PowerPC is RISC while x86 is CISC... no new OSes really support it; pretty much every Windows version after 3.1 and 95 was completely CISC-focused.

 

As much as it would be nice, and RISC can arguably be better in the right conditions, without that x86 license it'll be hard to even get a presence.

 

I guess you could say them going back to the consumer market would be rather "RISCy".

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


I guess you could say them going back to the consumer market would be rather "RISCy".

Edited my post above; did more research into it. It's actually a little different, because ARM uses RISC.



14/16nm, at least for desktop CPUs/GPUs, is gonna be here for a long, long time; probably until 2020...


Just letting you guys know: Intel saying they are no longer using silicon at 7nm is basically saying they have already learned what IBM just did (aka a silicon-germanium hybrid transistor). I have no doubts whatsoever that Intel is years ahead of everyone else at this game.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC // Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistix Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


IBM, please buy AMD. We need this kind of progress.

  

IBM should make consumer CPUs (that use the x86 instruction set); I doubt they will buy AMD, though that would be awesome.

The profit margins in the consumer space are way too low for IBM. There's a reason they stick to mainframes and scale-up server designs: the people who need those sorts of computers pay through the nose for them, and it's the one HPC area Intel really doesn't do well in. IBM is never coming back to the consumer market, and it's not going to buy AMD. Paying GloFo to take the foundry which has been building the Power 8 chips allows even higher returns on investment.

As for these 7nm chips, they most likely won't be mass-producible until 2019.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Man, GlobalFoundries is really progressing fast! GloFo has always worked well with IBM, so this makes sense, but with the cooperation with Samsung as well, things are really improving fast. Seems like GloFo will be a strong competitor to Intel, and thus AMD will be able to compete on nodes once again. This is very good news indeed.

 

Although GloFo's 14nm node is not quite as small as Intel's, it's pretty damn close and will have the same benefits. 2016 is the year AMD will be able to have the same node as Intel, and with Zen, maybe be competitive on IPC/core performance again.

Intel seems to have dropped the ball with 10nm, with the introduction of Kaby Lake as well as the delay of Broadwell. TSMC claims to be focusing on 10nm and to be ready in 2017. If GloFo can have 10nm by 2017, or even a functional 7nm, things will start to get very interesting indeed.

 

People are writing off AMD for no good reason, and keep talking mumbo jumbo about it needing to be sold. I don't understand that point, as things will progress fast next year.

Watching Intel have competition is like watching a headless chicken trying to get out of a minefield

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Man, GlobalFoundries is really progressing fast! GloFo has always worked well with IBM, so this makes sense, but with the cooperation with Samsung as well, things are really improving fast. Seems like GloFo will be a strong competitor to Intel, and thus AMD will be able to compete on nodes once again. This is very good news indeed.

 

Although GloFo's 14nm node is not quite as small as Intel's, it's pretty damn close and will have the same benefits. 2016 is the year AMD will be able to have the same node as Intel, and with Zen, maybe be competitive on IPC/core performance again.

Intel seems to have dropped the ball with 10nm, with the introduction of Kaby Lake as well as the delay of Broadwell. TSMC claims to be focusing on 10nm and to be ready in 2017. If GloFo can have 10nm by 2017, or even a functional 7nm, things will start to get very interesting indeed.

 

People are writing off AMD for no good reason, and keep talking mumbo jumbo about it needing to be sold. I don't understand that point, as things will progress fast next year.

The idea that the other foundries will not struggle with high power 14nm/16nm the way Intel did is not grounded in historical trends. GloFo and TSMC both failed to produce a high power 20nm node or transition to FinFET at that node the way Intel and IBM did. Samsung may have gotten a low-power 14 (closer to 19)nm FF node working, but its yields are still garbage. Apple already awarded the first major A9 contract to TSMC weeks ago. Now GloFo is building on that process for high power when it has zero FinFET experience. Trying to do this node AND learn FinFET simultaneously is ambitious. Expecting it to go without delays, to the point where they catch Intel for any significant period of time, is I personally think ludicrous.

And GloFo and TSMC do not have nearly enough EUV scanners for mass production at 10nm, much less 7nm: TSMC has 4, I think GloFo has 2 or 3, and Intel has 15.

I think people should remember why Intel got ahead on nodes in the first place and maintained that lead for so long. It's the leading foundry innovator in both technology and radical design, and IBM was the only one that could keep pace. Intel had a small delay at 22nm FF; 14nm was delayed a full year and change. ASML has said EUV is not very economically viable for mass 10nm production yet, but you think the other foundries won't remotely struggle at 14 and will magically make it to 10 first, under the assumption Intel is going to drop the ball?

Betting against Intel has rarely ever worked out. Let's wait to see what reality brings forth. TSMC and GloFo both have historical problem trends. Let's not toss them halos until the Angels sprout wings.



IBM, please buy AMD. We need this kind of progress.

IBM left the consumer space for a reason: there is not enough money to be made to fuel an R&D giant like IBM. I doubt they'd buy themselves back into it.


When will we bump up against physics and be unable to make the transistors any smaller, though? I have always wondered about this.

Heard somewhere Intel is working on new materials that will bump up the speeds without relying on miniaturising things further, but I'm not exactly holding my breath.

I guess eventually nanotechnology will have to take over here, though I have heard little on that front in regards to computer chips.
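For a rough sense of how close these nodes are to atomic scale, here's an illustrative back-of-the-envelope using silicon's lattice constant (~0.543nm). Node names don't literally measure a single feature, so treat this as an order-of-magnitude sketch, not a precise claim.

```python
# Illustrative arithmetic: how many silicon unit cells span a given
# length? Silicon's lattice constant is about 0.543 nm.

SI_LATTICE_NM = 0.543

def unit_cells(length_nm: float) -> float:
    """Number of silicon unit cells spanning the given length."""
    return length_nm / SI_LATTICE_NM

for node in (22, 14, 10, 7, 5):
    print(f"{node:>2} nm ~ {unit_cells(node):.0f} silicon unit cells across")
```

A 7nm length is only about 13 unit cells across, which is why effects like quantum tunnelling start dominating transistor design well before features shrink to single atoms.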


Lol I'm actually attending the University it was made at.

[Out-of-date] Want to learn how to make your own custom Windows 10 image?

 

Desktop: AMD R9 3900X | ASUS ROG Strix X570-F | Radeon RX 5700 XT | EVGA GTX 1080 SC | 32GB Trident Z Neo 3600MHz | 1TB 970 EVO | 256GB 840 EVO | 960GB Corsair Force LE | EVGA G2 850W | Phanteks P400S

Laptop: Intel M-5Y10c | Intel HD Graphics | 8GB RAM | 250GB Micron SSD | Asus UX305FA

Server 01: Intel Xeon D 1541 | ASRock Rack D1541D4I-2L2T | 32GB Hynix ECC DDR4 | 4x8TB Western Digital HDDs | 32TB Raw 16TB Usable

Server 02: Intel i7 7700K | Gigabyte Z170N Gaming 5 | 16GB Trident Z 3200MHz


Yeah, but PowerPC is RISC while x86 is CISC... no new OSes really support it; pretty much every Windows version after 3.1 and 95 was completely CISC-focused. Edit: ARM uses RISC, so Windows does support it.

 

As much as it would be nice, and RISC can arguably be better in the right conditions, without that x86 license it'll be hard to even get a presence. It will still be hard to break into the market regardless.

Windows RT supported ARM, and it quickly died off.


The idea that the other foundries will not struggle with high power 14nm/16nm the way Intel did is not grounded in historical trends. GloFo and TSMC both failed to produce a high power 20nm node or transition to FinFET at that node the way Intel and IBM did. Samsung may have gotten a low-power 14 (closer to 19)nm FF node working, but its yields are still garbage. Apple already awarded the first major A9 contract to TSMC weeks ago. Now GloFo is building on that process for high power when it has zero FinFET experience. Trying to do this node AND learn FinFET simultaneously is ambitious. Expecting it to go without delays, to the point where they catch Intel for any significant period of time, is I personally think ludicrous.

And GloFo and TSMC do not have nearly enough EUV scanners for mass production at 10nm, much less 7nm: TSMC has 4, I think GloFo has 2 or 3, and Intel has 15.

I think people should remember why Intel got ahead on nodes in the first place and maintained that lead for so long. It's the leading foundry innovator in both technology and radical design, and IBM was the only one that could keep pace. Intel had a small delay at 22nm FF; 14nm was delayed a full year and change. ASML has said EUV is not very economically viable for mass 10nm production yet, but you think the other foundries won't remotely struggle at 14 and will magically make it to 10 first, under the assumption Intel is going to drop the ball?

Betting against Intel has rarely ever worked out. Let's wait to see what reality brings forth. TSMC and GloFo both have historical problem trends. Let's not toss them halos until the Angels sprout wings.

 

Considering you have both GlobalFoundries and Samsung (and perhaps IBM to some extent) working on this, I really don't see this as any serious issue compared to Intel.

 

As for 20nm, no one has ever made a high performance part on it. Intel uses 22nm, so it's not the same. Not sure what Apple is doing?! Their hatred towards Samsung must be very strong to drop GloFo. What node will the A9 be on at TSMC? Their 20nm sucks, and their 16nm FF+ won't be usable for the rest of this year. Very odd indeed.

 

How do you define "garbage"? GloFo themselves say their 14nm LPE has been certified for mass production, and their LPP should be certified in the fall. Furthermore, all Samsung Galaxy S6 chips (Exynos 7420) are on 14nm FinFET, so it's simply not correct to say they have no experience with FinFET. They do, and it's fully operational.

 

As for 10nm, no one in the world is mass producing anything on that node, so I'm not sure I get your point? Could you elaborate? Intel has just postponed their 10nm production and, for the first time (afaik), skipped a step in their tick/tock strategy.

 

Pretty sure GloFo will not struggle with 14nm FF, as they are already mass producing it for phones. Of course, if we look at the high performance parts for A/C/GPUs, things might look different, but based on GloFo's own official statements, the LPP node should be certified in the fall.

Right now GloFo is turning into a giant with the help of Samsung and IBM. You really think Intel is magically going to be better than those three companies combined? I think Intel will finally get a serious run for their money, and I don't think they will be able to keep up in the long run (pun intended).

 

But nothing is conclusive; that much we can agree on. GloFo has to prove they can do it, but so does Intel. Intel's 14nm has been delayed and has produced few CPU models, at prices that are abysmal, which sounds like big yield issues.



IBM, please buy AMD. We need this kind of progress.

I would love to see IBM coming back to the consumer market again...

... Life is a game and the checkpoints are your birthday , you will face challenges where you may not get rewarded afterwords but those are the challenges that help you improve yourself . Always live for tomorrow because you may never know when your game will be over ... I'm totally not going insane in anyway , shape or form ... I just have broken English and an open mind ... 


Considering you have both GlobalFoundries and Samsung (and perhaps IBM to some extent) working on this, I really don't see this as any serious issue compared to Intel.

 

As for 20nm, no one has ever made a high performance part on it. Intel uses 22nm, so it's not the same. Not sure what Apple is doing?! Their hatred towards Samsung must be very strong to drop GloFo. What node will the A9 be on at TSMC? Their 20nm sucks, and their 16nm FF+ won't be usable for the rest of this year. Very odd indeed.

 

How do you define "garbage"? GloFo themselves say their 14nm LPE has been certified for mass production, and their LPP should be certified in the fall. Furthermore, all Samsung Galaxy S6 chips (Exynos 7420) are on 14nm FinFET, so it's simply not correct to say they have no experience with FinFET. They do, and it's fully operational.

 

As for 10nm, no one in the world is mass producing anything on that node, so I'm not sure I get your point? Could you elaborate? Intel has just postponed their 10nm production and, for the first time (afaik), skipped a step in their tick/tock strategy.

 

Pretty sure GloFo will not struggle with 14nm FF, as they are already mass producing it for phones. Of course, if we look at the high performance parts for A/C/GPUs, things might look different, but based on GloFo's own official statements, the LPP node should be certified in the fall.

Right now GloFo is turning into a giant with the help of Samsung and IBM. You really think Intel is magically going to be better than those three companies combined? I think Intel will finally get a serious run for their money, and I don't think they will be able to keep up in the long run (pun intended).

 

But nothing is conclusive; that much we can agree on. GloFo has to prove they can do it, but so does Intel. Intel's 14nm has been delayed and has produced few CPU models, at prices that are abysmal, which sounds like big yield issues.

Intel's 22nm is still more dense than TSMC's 20nm planar process, so you know what? It is equivalent, and it's high power 20nm. Intel and IBM succeeded where every other foundry failed, and it still took them extra time to learn FinFETs. If you think GloFo and TSMC aren't susceptible to this, you're deluded, especially at 14nm. There's a reason Samsung only focused on low power and tossed the high power to GloFo: Samsung knows that is going to be a huge money and time pit. It's better for a competitor to absorb those costs while appearing as the collaborative paragon. It's a PR stunt and nothing more.

 

IBM would be helpful if GloFo were willing to push FD-SOI. IBM's 22nm process was in fact FDFFSOI (Fully Depleted FinFET Silicon On Insulator), but FinFET on pure silicon is a very different beast electrically and thermally. That's one reason the Power 8s can sustain such high clocks for 12-core chips (4.3GHz) at a 155-175W TDP. I think you greatly overestimate the effects of Samsung's collaboration. Low power is always much easier; even TSMC got 20nm low power working.

 

16nmFF will be used for the A9. There are 2 16nm nodes. 16nmFF is roughly equivalent in density to the 20nm node (FinFETs peel about half a node of density off if we look at Intel's model) and is meant for low to medium power applications, but it has much better thermal and electrical properties. 16nmFF+ is meant for the high power applications and will actually be roughly equivalent to "16nm" density. 16nm will be due for mass production by the end of September.
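To put rough numbers on these density comparisons: if a node shrink scaled every dimension by the ratio of the node names, area per transistor would scale by the square of that ratio. Real processes rarely achieve this, and node names are partly marketing (as the 22nm-vs-20nm point above shows), so the figures below are idealised upper bounds, not measurements.

```python
# Naive ideal scaling: area per feature goes as (new/old)^2 when both
# dimensions shrink by new/old. Treat as an upper bound on real gains.

def ideal_area_scaling(old_nm: float, new_nm: float) -> float:
    """Ideal relative area after shrinking old_nm -> new_nm."""
    return (new_nm / old_nm) ** 2

print(f"20nm -> 16nm: {ideal_area_scaling(20, 16):.2f}x area")  # 0.64x
print(f"14nm -> 10nm: {ideal_area_scaling(14, 10):.2f}x area")  # 0.51x
print(f"10nm ->  7nm: {ideal_area_scaling(10, 7):.2f}x area")   # 0.49x
```

This is why a "16nm" FinFET node built on 20nm metal pitches ends up with roughly 20nm-class density: the FinFET changes the transistor's electrical behaviour, not the wiring pitch that dominates area.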

 

No, GloFo is using Samsung's exact specifications for the LPE and LPP processes. GloFo has no experience of their own doing FinFET research and innovations.

 

LPP is not the node to look at. SHP is the server chip and GPU node, and SHP is going to go very badly in all likelihood.

 

GloFo has less than 1/3 the production scale of TSMC and less than half the R&D budget. Even with IBM's and Samsung's IP, GloFo has a horrible track record. We'll see, but I have very good reason to doubt them.

 

Intel has been untouchable for more than a decade in foundry tech. The likelihood that changes is the same as the likelihood that money fails to buy success in the U.S.



I would love to see IBM coming back to the consumer market again...

It's way too low margin for them.


