
AMD suggests 16nm FinFET chips aren't coming in 2015

XTankSlayerX

Your perception of HDL is completely off. It's not used to speed up the design process; it's designed to reduce both area and power consumption. AMD even officially states it's equivalent to a full node improvement in one of their older slides. While the concept is rumored to hinder high frequencies, you should still gain the same IPC improvements a node shrink would bring (shorter circuits).

 

AMD has already officially stated that Excavator cores were shrunk by 23% from Steamroller with a 40% reduction in power consumption. To throw even more into our previous discussion (since Carrizo is being mentioned), Carrizo is 244.62 mm² with 3.1 billion transistors on 28 nm. That's a whopping 12.6 million transistors per mm². AMD is quite good at maximizing utilization of die space, which proves you don't need to follow the trend of moving to smaller nodes to reduce power consumption. Even if AMD does stay on 28 nm with Zen, it's not going to hold them back from competing with Intel on a performance-per-watt basis, which could prove disastrous for Intel if AMD manages to get Zen to consume less power than Skylake on a much larger node.
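A quick sanity check of that density figure, sketched in Python using only the die size and transistor count quoted above (nothing else is assumed):

```python
# Rough check of the Carrizo density claim: 3.1 billion transistors on a 244.62 mm^2 die.
transistors = 3.1e9
die_area_mm2 = 244.62

density_per_mm2 = transistors / die_area_mm2
print(f"{density_per_mm2 / 1e6:.2f} million transistors per mm^2")
# Prints ~12.67 million/mm^2, which lines up with the "12.6 million" figure once rounded.
```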

http://en.wikipedia.org/wiki/Hardware_description_language

 

You do know there are two different "HDLs" in play here, right? High-density libraries were used in the shrink from 32 nm to 28 nm; Carrizo doesn't get a second dose of those since it remains on the 28 nm node. Carrizo was designed with the assistance of a hardware description language, which sacrifices some of the IPC a hand-refined human design would get for the benefit of getting a product to market very quickly. That is what I was referring to, and what AMD has referred to in its briefs on Excavator.

 

Also, Intel is already proving it can beat ARM in perf/watt if it tries. Once the Skylake-based Atom comes out (the chip going into Google Glass 2), ARM is going to be toast. And what's to stop Intel from using a high-density library? It's just one more ace Intel has up its sleeve down the road if AMD becomes a serious threat.

 

Furthermore, that transistor count is from the rumor mill.




Standard cell libraries are like a custom framework for a hardware description language; you can look around and find quite a few variations of them. AMD is using HDL (high-density libraries) to optimize its architecture for reduced area and power consumption. It's something only their GPU architectures had been using until it was carried over to the CPU side, and I would assume the cell library AMD is using was developed in-house. Steamroller didn't see any HDL love, though most people like to believe it did; Kaveri's die shrink was more than likely due to switching from VLIW4 to GCN, which was already optimized for the 28 nm node. Excavator will be the first AMD CPU architecture to use HDL, hence the massive reduction in size (23%) and power consumption (40%).

The use of hardware description languages is nothing new to the market; Intel has been using standard cell libraries since the late 1980s for its architectures. AMD is stepping up and using high-density libraries for its CPU architecture, and it's showing promising improvements (ultra-high-density, UHD, libraries are a possibility as well). Using them isn't going to impact IPC at all; the architecture is still hand drawn and then run through the cell library afterwards to reduce area and power consumption (which is outlined on the previous slide).
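To make the cell-library idea concrete, here's a minimal toy sketch in Python. The cell names and area/power numbers are made up for illustration, not taken from any real foundry library; the point is simply that the same synthesized netlist gets mapped to denser cells, so area and leakage drop without the logic changing.

```python
# Toy illustration only: invented cells and figures, not real library data.
# Each library maps a logic cell to (area in um^2, leakage in nW).
standard_lib     = {"NAND2": (1.00, 10.0), "NOR2": (1.10, 11.0), "DFF": (4.50, 40.0)}
high_density_lib = {"NAND2": (0.70,  6.5), "NOR2": (0.80,  7.0), "DFF": (3.20, 26.0)}

# A hypothetical synthesized netlist: instance counts per cell type.
netlist = {"NAND2": 120_000, "NOR2": 80_000, "DFF": 30_000}

def estimate(netlist, lib):
    """Sum up area and leakage for every cell instance in the netlist."""
    area  = sum(count * lib[cell][0] for cell, count in netlist.items())
    power = sum(count * lib[cell][1] for cell, count in netlist.items())
    return area, power

std_area, std_power = estimate(netlist, standard_lib)
hd_area,  hd_power  = estimate(netlist, high_density_lib)

print(f"area saved:  {1 - hd_area / std_area:.0%}")
print(f"power saved: {1 - hd_power / std_power:.0%}")
```

The real flow is obviously far more involved (timing closure, routing, drive strengths), but the ballpark effect, smaller cells cutting area and leakage for the same hand-designed logic, is the kind of thing the 23% area and 40% power figures for Excavator are describing.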

 

You also have to keep in mind that AMD is leveraging ARM now; there are other competitors in the market besides ARM for Intel to fear. In fact, AMD is rolling out K12 in 2016 on FinFET, so it's going to be extremely hard for Intel to just grab the low-power market and run with it. Right now Intel doesn't have much legroom in the low-power market, as architectures like the A8X and Project Denver completely obliterate the Intel Atom. And Intel could already be using high-density libraries or not; we don't know.

 

If you're referring to the transistor count of Carrizo, then no, that's official word directly from AMD.



Source? Also, last I checked, Atom still crushes the K1 in all CPU-related tasks even if the iGPU is crap, and Intel is pushing into the low-power market faster and more easily than ARM could have ever imagined. ARM's designs have also required larger power and heat envelopes to improve throughput over the last three years, and that's not going to stop.

As for Atom's iGPU, that was based on Gen 7, which had problems. Broadwell is on Gen 8 under a process shrink, which fits 20% more EUs into the same space while improving throughput per EU by 40%, partly by reducing the number managed by each subslice from 10 to 8. So far, the benches of Core M support this too. Despite a 30% decrease in clock speeds and a super-tight thermal envelope (which Lenovo still didn't cool properly, even though Intel was shown to have delivered a 3.5 W chip exactly as specified in the face of Lenovo's accusations to the contrary), Intel made massive improvements, to the point that its iGPU is now choked by low-bandwidth LPDDR.
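Taking the figures in that post at face value, the compounding works out like this; a quick Python sketch where the 20%, 40%, and 30% numbers are simply the ones claimed above, not independently verified, and throughput is assumed to scale roughly linearly with clock:

```python
# Quick arithmetic on the Gen 7 -> Gen 8 claims above (figures as quoted, not verified).
more_eus_same_area = 1.20   # "20% more EUs in the same space"
per_eu_throughput  = 1.40   # "40% more throughput per EU"
clock_factor       = 0.70   # "30% decrease in clock speeds" on Core M

iso_clock_gain = more_eus_same_area * per_eu_throughput
print(f"same area, same clock: ~{iso_clock_gain:.2f}x")     # ~1.68x

# Assuming throughput scales roughly linearly with clock:
core_m_gain = iso_clock_gain * clock_factor
print(f"after the 30% clock cut: ~{core_m_gain:.2f}x")      # ~1.18x, still a net gain
```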

Betting against Intel is still foolhardy until Zen lands. There's also a very limited supply of software compiled for ARM, and the existing compilers for ARM are still meh unless you buy the $800 one from ARM Holdings.


