AMD Greenland GPU 2016 to feature 14nm node

CoolaxGaming

I don't think you know how it works...

When Fiji XT comes out, it will replace/succeed the 290X. The 290X will step down from being the flagship and the 390X will come in. When Greenland comes out, it will succeed/replace the 390X.

And all the articles say HBM2 and 14nm, which is a step up from HBM1 and 28nm.

How is this false info?

I know full well what I'm talking about; nothing of what you mentioned above is false. What is false, however, is your statement that "Greenland gpu will be a Fiji XT with memory and node improvements".

Greenland isn't Fiji; they're two different GPUs, just like Hawaii (290 series) and Tahiti (280 series) are two different GPUs.

Interesting that the GPU named after the largest island in the world is just a shrink.

Right? :D

And this is good news for customers: 2016 is gonna be a great year.

Why is SpongeBob the main character when Patrick is the star?

It wouldn't remotely surprise me if both companies did a tick-tock to smooth over the yield problems and risks associated with a new node. You'd think after years of Intel and IBM doing that, others would take a hint. It would be better for AMD's bottom line and better for consumers, since more resources can fit into the same die area, giving you more performance.

 

It's hard to coordinate a tick-tock schedule when you aren't integrated top to bottom. AMD and Nvidia have limited control over what happens at GloFo, TSMC etc.

It's hard to coordinate a tick-tock schedule when you aren't integrated top to bottom. AMD and Nvidia have limited control over what happens at GloFo, TSMC etc.

It's not difficult at all. Always plan new architectures for the current node and plan to tweak the architecture for a process shrink. Since GPUs are far more brute force processors than CPUs, it's actually far easier.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

It's not difficult at all. Always plan new architectures for the current node and plan to tweak the architecture for a process shrink. Since GPUs are far more brute force processors than CPUs, it's actually far easier.

 

AMD and Nvidia are not the ones running the fabs, so they cannot control when the new process nodes are ready. So they can't easily do a regular tick-tock cadence like Intel does.

I know full well what I'm talking about; nothing of what you mentioned above is false. What is false, however, is your statement that "Greenland gpu will be a Fiji XT with memory and node improvements".

Greenland isn't Fiji; they're two different GPUs, just like Hawaii (290 series) and Tahiti (280 series) are two different GPUs.

+1, the OP is misleading; it makes it appear as though Greenland is just a die shrink of Fiji when it's a brand new GPU featuring an entirely new graphics architecture, codenamed Arctic Islands.

I don't think you know how it works...

When Fiji XT comes out, it will replace/succeed the 290X. The 290X will step down from being the flagship and the 390X will come in. When Greenland comes out, it will succeed/replace the 390X.

And all the articles say HBM2 and 14nm, which is a step up from HBM1 and 28nm.

How is this false info?

No dude, when you say "Greenland gpu will be a Fiji XT with memory and node improvements", it's like saying Pascal is a die shrink of Maxwell GM204, when they're two different GPU cores featuring two different GPU architectures.

Please fix that.

CPU : i5 3570K @ 4.5Ghz. GPU : MSI Lightning GTX 770 @ 1300mhz. 16GB 1600mhz RAM

It won't make any difference, because Nvidia will almost certainly have access to the same process node at the same time.

 

AMD doesn't need to be ahead to stay alive, they just need to not be behind

Are you sure?

Nvidia and Samsung aren't the best of buddies right now...

I'd love it if Samsung floated AMD to victory, and left nvidia behind for a little while

Specs: 4790k | Asus Z-97 Pro Wifi | MX100 512GB SSD | NZXT H440 Plastidipped Black | Dark Rock 3 CPU Cooler | MSI 290x Lightning | EVGA 850 G2 | 3x Noctua Industrial NF-F12's

Bought a powermac G5, expect a mod log sometime in 2015

Corsair is overrated, and Anime is ruined by the people who watch it

28nm to 14nm is a massive jump

 

This may save AMD

While it's a massive jump, it won't change their situation, as Nvidia is also going 16/14nm next year with Pascal.

And Nvidia is not just doing the shrink but also adding Unified Virtual Memory, 3D VRAM, NVLink, and Mixed Precision.

The only thing that can save AMD is changing their marketing and putting more effort into their drivers, because needing to download beta drivers for the newest games is not OK, especially when the competition has WHQL drivers ready on release (the GTA V drivers that are out right now, for example).

And they really need to redesign the stock coolers that mess up the reviews and give people a bad experience.

 

RTX2070OC 

AMD and Nvidia are not the ones running the fabs, so they cannot control when the new process nodes are ready. So they can't easily do a regular tick-tock cadence like Intel does.

Yes they can. Plan new architectures for the current node, and if a new one comes along, plan to simply shrink the current architecture. Planning new architectures for the next node has gotten AMD and Nvidia burned many times now. It's perfectly possible and very reasonable to implement a tick-tock cadence even if it's not a true metronome. You may get a few tocks before a tick, but that tick will be far less expensive than before.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

Interesting. I doubt 14nm will provide much performance increase over 28nm in 2016, considering FinFET (aka tri-gate) transistors and the power requirements to feed transistors at such a density. My money is on the 2nd generation of 14nm GPUs in 2017 being the next big step in performance for both AMD and Nvidia.

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 

Sadly, that doesn't mean much. If they were to actually profit off those GPUs with a regular margin like Nvidia, they would all be 15% more expensive than they currently are.

 

Doesn't mean that they aren't making money off of it.

 

The margins are always thin on consoles.  But sales are really strong.

4K // R5 3600 // RTX2080Ti

AMD doesn't need to be ahead to stay alive, they just need to not be behind

I'd love it if Samsung floated AMD to victory, and left nvidia behind for a little while

 

AMD is not behind Nvidia on the process node. They are both using 28nm TSMC, and AMD actually got to that process node before Nvidia.

 

Yes they can. Plan new architectures for the current node, and if a new one comes along, plan to simply shrink the current architecture. Planning new architectures for the next node has gotten AMD and Nvidia burned many times now. It's perfectly possible and very reasonable to implement a tick-tock cadence even if it's not a true metronome. You may get a few tocks before a tick, but that tick will be far less expensive than before.

 

You can't just change those plans around like that. You can't just design a GPU that would be 600 mm2 with the current process node but 150 mm2 with the new process node.
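The die-size arithmetic in that 600 mm²/150 mm² example follows from die area scaling with the square of the linear feature size. A quick sketch (idealized scaling only; this is my illustration, not from the thread, and real shrinks never achieve the full ratio implied by the node names):

```python
def ideal_shrink_area(area_mm2: float, old_node_nm: float, new_node_nm: float) -> float:
    """Idealized die area after a process shrink.

    Area scales with the square of the linear feature-size ratio,
    so halving the node name quarters the (ideal) area.
    """
    return area_mm2 * (new_node_nm / old_node_nm) ** 2

# A 600 mm^2 design on 28nm, shrunk ideally to 14nm:
print(ideal_shrink_area(600, 28, 14))  # 150.0 -- matching the figures above
```

In practice the "14nm" and "16nm" foundry names don't correspond to any single drawn dimension, so actual density gains fall short of this square-law ideal.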

AMD is not behind Nvidia on the process node. They are both using 28nm TSMC, and AMD actually got to that process node before Nvidia.

 

 

You can't just change those plans around like that. You can't just design a GPU that would be 600 mm2 with the current process node but 150 mm2 with the new process node.

Yes you can. AMD did exactly that from 32 to 28nm. It can be done and should be. It doesn't matter when the process shrink comes, because any architecture can be scaled down. The only difference arrival time will make is which architecture ends up being implemented twice.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

Yes you can. AMD did exactly that from 32 to 28nm. It can be done and should be. It doesn't matter when the process shrink comes, because any architecture can be scaled down. The only difference arrival time will make is which architecture ends up being implemented twice.

 

No it can't. It took AMD too long to make that half-node jump (it's not a full die shrink, that would have been 32nm to 22nm like Intel with Sandy to Ivy Bridge). They launched 32nm APUs in 2011, now it's 2015 and they are on 28nm APUs, with no new process in sight (Carrizo is 28nm). Half a node in 4 years. That's one node in 8 years. Have fun with that 8-year tick-tock schedule.
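The half-node vs. full-node distinction above can be made concrete with the same square-law area scaling (an idealized sketch of my own; node names are marketing labels and real density gains vary by foundry):

```python
def area_scale_factor(old_node_nm: float, new_node_nm: float) -> float:
    """Idealized die-area factor for a shrink: square of the linear ratio."""
    return (new_node_nm / old_node_nm) ** 2

# The half-node jump AMD actually made:
print(round(area_scale_factor(32, 28), 2))  # 0.77 -- only ~23% smaller
# The full-node jump Intel made (Sandy Bridge 32nm -> Ivy Bridge 22nm):
print(round(area_scale_factor(32, 22), 2))  # 0.47 -- roughly half the area
```

This is why 32nm to 28nm counts as only half a step: it buys about a quarter of the area back, versus half for a true full-node shrink.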

No it can't. It took AMD too long to make that half-node jump (it's not a full die shrink, that would have been 32nm to 22nm like Intel with Sandy to Ivy Bridge). They launched 32nm APUs in 2011, now it's 2015 and they are on 28nm APUs, with no new process in sight (Carrizo is 28nm). Half a node in 4 years. That's one node in 8 years. Have fun with that 8-year tick-tock schedule.

They plan on moving to 14nm FinFET with Zen. AMD doesn't have their own foundry anymore, so expecting a tick-tock cycle from them is simply too much. I personally don't have a problem with it, as they are maximizing their architecture design to conserve power before jumping to a lower node. A prime example would be Carrizo; you know how stupidly impressive it would be on 14nm FinFET. The FX-8800P is already a 15W chip that performs around the mobile Haswell i3, except with a much stronger iGPU. We'd be looking at A10-7850K performance out of a ~10W chip if Carrizo made that jump.

Well and some memory improvements.

That may explain why it's taking so long. 14nm was not ready last year, but could be ready before September when they release it, putting them ahead of Nvidia.

Hello This is my "signature". DO YOU LIKE BORIS????? http://strawpoll.me/4669614
