AMD confirms 4GB limit for first HBM graphics card

Samfisher

That's next year. Nvidia decided to use HBM a lot later than AMD. Basically, AMD backed HBM and Nvidia bet on HMC; it turns out HBM is the more viable tech, so they are behind the 8-ball here.

HBM is not remotely more viable. HMC is already in production and deployment. Intel just has such a huge volume order that Nvidia would be stuck waiting a long while. With 8 HMC modules per Knights Landing package and 16 per PCIe riser card, along with actual system memory modules being ordered for next-gen supercomputers, Nvidia had to switch to get a reasonable release date. HMC has superior specs to HBM as well.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


If this is true, an $850 USD MSRP is going to be hard to justify. Yes, it will probably still perform insanely well, but for what? VR is going to be pushing 5 million pixels per frame, which is more than 1440p. Current AAA games, and even some indie games, chew through VRAM at 1440p.

 

If true, then Fiji = stopgap until 14nm.

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


HBM is not remotely more viable. HMC is already in production and deployment. Intel just has such a huge volume order that Nvidia would be stuck waiting a long while. With 8 HMC modules per Knights Landing package and 16 per PCIe riser card, along with actual system memory modules being ordered for next-gen supercomputers, Nvidia had to switch to get a reasonable release date. HMC has superior specs to HBM as well.

 

My comment still stands. I didn't know about Intel, but if Nvidia had decided earlier they might have had HBM sooner, because waiting till 2017 for HMC would hurt them too much financially. Although, HMC is more expensive, isn't it?

Processor: Intel core i7 930 @3.6  Mobo: Asus P6TSE  GPU: EVGA GTX 680 SC  RAM:12 GB G-skill Ripjaws 2133@1333  SSD: Intel 335 240gb  HDD: Seagate 500gb


Monitors: 2x Samsung 245B  Keyboard: Blackwidow Ultimate   Mouse: Zowie EC1 Evo   Mousepad: Goliathus Alpha  Headphones: MMX300  Case: Antec DF-85


HBM is not remotely more viable. HMC is already in production and deployment. Intel just has such a huge volume order that Nvidia would be stuck waiting a long while. With 8 HMC modules per Knights Landing package and 16 per PCIe riser card, along with actual system memory modules being ordered for next-gen supercomputers, Nvidia had to switch to get a reasonable release date. HMC has superior specs to HBM as well.

 

I don't know if it's a coincidence or if you are doing it on purpose, but on every post related to AMD or their technology you are there to trash them in some way and praise Nvidia and Intel.

 

Viable = "capable of working successfully; feasible" (per Google). Going by that definition, one could really say that HBM, memory designed for GPUs and available now in 2015, is more viable at this time than HMC, which is more expensive, doesn't start production until 2016 (so it won't be here any time soon), and isn't that much faster than HBM. Looking at some slides, it also appears that HMC draws a bit more power than HBM, but nothing drastic.

 

I'm not here to put down Nvidia; they have some great products out there and deserve respect for them. But this time it didn't work out for them with HMC, while AMD bet on the right thing, and they also deserve some respect.

Here is a good article on the subject:

http://www.extremetech.com/computing/197720-beyond-ddr4-understand-the-differences-between-wide-io-hbm-and-hybrid-memory-cube

 

Fixed the source page.


I don't know if it's a coincidence or if you are doing it on purpose, but on every post related to AMD or their technology you are there to trash them in some way and praise Nvidia and Intel.

 

Viable = "capable of working successfully; feasible" (per Google). Going by that definition, one could really say that HBM, memory designed for GPUs and available now in 2015, is more viable at this time than HMC, which is more expensive, doesn't start production until 2016 (so it won't be here any time soon), and isn't that much faster than HBM. Looking at some slides, it also appears that HMC draws a bit more power than HBM, but nothing drastic.

 

I'm not here to put down Nvidia; they have some great products out there and deserve respect for them. But this time it didn't work out for them with HMC, while AMD bet on the right thing, and they also deserve some respect.

Here is a good article on the subject:

http://www.extremetech.com/computing/197720-beyond-ddr4-understand-the-differences-between-wide-io-hbm-and-hybrid-memory-cube/2 

This isn't trashing AMD. This is me saying a user's claim about viability of tech is absolutely baseless.

 

HMC is already in production AND deployment in the HPC space. Its first integration on an accelerator is Intel's Knights Landing, but it's already being integrated into server memory stacks at higher bandwidth and lower latency than HBM.

 

AMD was also late to the party. HMC was an idea in planning for two quarters before AMD even began talks with Hynix. All I'm saying here is that AMD deserves no praise. I'm not saying they deserve ridicule, but this is getting out of hand. Some people on this forum worship the ground their favorite companies walk on.

 

The source page does not exist (you have a hidden character in your URL). These specs aren't remotely up to date; in fact, they're the preliminary figures from early 2013 testing. HMC is 2.4x the bandwidth of HBM 1.0, and it's being deployed for exotic new motherboards where up to 8 CPUs share a single memory bank using a transactional system for very low latency and very high bandwidth, so yes, power usage is a bit higher. Micron also decided to forgo 2.5D and went straight for 3D with no compromises. That of course led to difficulty getting matured TSV tech and to working with IBM on the cooling solution (inter-chip non-conducting metalloid phase change), leading to a later time to market but a much more polished product.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I don't know if it's a coincidence or if you are doing it on purpose, but on every post related to AMD or their technology you are there to trash them in some way and praise Nvidia and Intel.

 

Viable = "capable of working successfully; feasible" (per Google). Going by that definition, one could really say that HBM, memory designed for GPUs and available now in 2015, is more viable at this time than HMC, which is more expensive, doesn't start production until 2016 (so it won't be here any time soon), and isn't that much faster than HBM. Looking at some slides, it also appears that HMC draws a bit more power than HBM, but nothing drastic.

 

I'm not here to put down Nvidia; they have some great products out there and deserve respect for them. But this time it didn't work out for them with HMC, while AMD bet on the right thing, and they also deserve some respect.

Here is a good article on the subject:

http://www.extremetech.com/computing/197720-beyond-ddr4-understand-the-differences-between-wide-io-hbm-and-hybrid-memory-cube/2 

 

Don't mind him; he criticizes everything AMD and praises everything Intel, calling it the blue dragon and weird stuff like that.

 

Either way, based on that article, HBM definitely looks to be better than HMC. Maybe that is why Nvidia left it for HBM instead. HBM is co-developed by AMD, so they will have exclusive rights to it for a period (I heard one generation of graphics cards). Either way, we will get HBM in a month or two; HMC is nowhere to be seen for any consumer anytime soon.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


My comment still stands. I didn't know about Intel, but if Nvidia had decided earlier they might have had HBM sooner, because waiting till 2017 for HMC would hurt them too much financially. Although, HMC is more expensive, isn't it?

HMC is more expensive, but it's the more advanced solution and a far more robust technology. Micron decided to go 3D outright, with no 2.5D first gen. It's also a fully transactional memory built for both extremely low latency (CAS latency equivalent of 3 running at 1000 MHz) and extremely high bandwidth (2.4x HBM 1.0 per pin), which Micron intends to deploy as embedded memory on a motherboard, as a JEDEC DRAM replacement (already starting to gain a bit of traction in the HPC space), and as accelerator (GPU, FPGA, Xeon Phi) memory. It's also 2x as dense per stack relative to HBM 1.0 at the outset.
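To put that latency claim in perspective, here is a quick sketch converting CAS cycles to nanoseconds. The "CAS of 3 at 1000 MHz" figure is taken from the post above; the DDR3-1600 CL11 comparison point is an assumption added for illustration, not something stated in the thread.

```python
# Convert the quoted HMC latency claim into absolute time:
# 3 CAS cycles at a 1000 MHz interface clock.
hmc_cas_cycles = 3
hmc_clock_mhz = 1000
hmc_cas_ns = hmc_cas_cycles * 1000 / hmc_clock_mhz  # 3 cycles / 1 GHz = 3.0 ns

# Assumed comparison point: a common DDR3-1600 CL11 module,
# whose I/O clock runs at 800 MHz.
ddr3_cas_cycles = 11
ddr3_clock_mhz = 800
ddr3_cas_ns = ddr3_cas_cycles * 1000 / ddr3_clock_mhz  # 13.75 ns

print(hmc_cas_ns, ddr3_cas_ns)
```

So the claimed figure works out to roughly a quarter of a typical DDR3 module's CAS delay in absolute terms.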

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Don't mind him; he criticizes everything AMD and praises everything Intel, calling it the blue dragon and weird stuff like that.

 

Either way, based on that article, HBM definitely looks to be better than HMC. Maybe that is why Nvidia left it for HBM instead. HBM is co-developed by AMD, so they will have exclusive rights to it for a period (I heard one generation of graphics cards). Either way, we will get HBM in a month or two; HMC is nowhere to be seen for any consumer anytime soon.

You didn't even read the article. ExtremeTech says the page cannot be found. You also haven't read any of Micron's specifications for it. It's far more powerful than Hynix's HBM.

 

Edit: there's a hidden character in the forum link. Delete everything up until the /2

 

This article is also way outdated, using preliminary specs from the HMC Consortium's 2013 testing. It's not worth quoting.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Don't know how I feel about this; some games just require more than 4GB when running at higher resolutions. I'm pretty sure that running GTA V at 4K and max settings, as you'd expect a card like this to do, would go over 4GB. Could it be due to the cost? I remember seeing somewhere that it costs considerably more than GDDR5.

 

Yeah, GTA V can use up to 3.3 GB of my 970's VRAM with no MSAA and most advanced settings turned off.


Even though it's only 4GB, would it be better than a regular 4GB of GDDR5? Kind of like how 1GB of GDDR5 was better than 2GB of GDDR3?

cpu:i7-4770k    gpu: msi reference r9 290x  liquid cooled with h55 and hg10 a1     motherboard:z97x gaming 5   ram:gskill sniper 8 gb


Even though it's only 4GB, would it be better than a regular 4GB of GDDR5? Kind of like how 1GB of GDDR5 was better than 2GB of GDDR3?

GDDR3 actually did choke bandwidth for games; GDDR5 does not. Without the same bottleneck to alleviate, AMD won't get the sort of gains needed for such a venture to work the way you imply.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


You didn't even read the article. ExtremeTech says the page cannot be found. You also haven't read any of Micron's specifications for it. It's far more powerful than Hynix's HBM.

 

I would think a programmer and self-proclaimed genius like you would easily be able to delete the extra characters in the link that make it not work.

 

But the link says HMC is up to 240 GBps, whereas HBM is 256 GBps. True, HMC is more power efficient, but it is also more expensive.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


I would think a programmer and self-proclaimed genius like you would easily be able to delete the extra characters in the link that make it not work.

 

But the link says HMC is up to 240 GBps, whereas HBM is 256 GBps. True, HMC is more power efficient, but it is also more expensive.

I just discovered that, and the article is quoting preliminary performance data from 2013. The finished product is 2.6x that, or 624 GBps, as deployed right now. It's also a much more advanced product all the way around, with much lower latency (CAS of 3), a fully transactional interface, and the ability to be independently accessed by 4 processors simultaneously without buffering. It's a truly enterprise solution meant for deployment across a range of products, from accelerators to embedded memory to DRAM sticks of up to 256GB each with Gen 1, to compete directly with JEDEC and DDR4.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


If the bandwidth of HBM is so ridiculously high, 4GB may not be so low given the potential for a high turnover rate for texture loading and stuff.

CPU -AMD R5 2600X @ 4.15 GHz / RAM - 2x8Gb GSkill Ripjaws 3000 MHz/ MB- Asus Crosshair VII Hero X470/  GPU- MSI Gaming X GTX 1080/ CPU Cooler - Be Quiet! Dark Rock 3/ PSU - Seasonic G-series 550W/ Case - NZXT H440 (Black/Red)/ SSD - Crucial MX300 500GB/ Storage - WD Caviar Blue 1TB/ Keyboard - Corsair Vengeance K70 w/ Red switches/ Mouse - Logitech g900/ Display - 27" Benq GW2765 1440p display/ Audio - Sennheiser HD 558 and Logitech z323 speakers


If the bandwidth of HBM is so ridiculously high, 4GB may not be so low given the potential for a high turnover rate for texture loading and stuff.

If you're using more than 4GB of memory, that means you have to go back to system RAM if it isn't in your HBM. Doesn't matter how fast your HBM is if your system memory sucks balls by comparison.
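The point above can be made concrete with a rough model: if some fraction of the traffic has to come over the PCIe link instead of HBM, the effective rate is the time-weighted (harmonic) mean of the two paths. The 512 GBps HBM and 16 GBps PCIe 3.0 x16 numbers below are assumptions for the sketch, not figures from the thread.

```python
# Illustrative only: why spilling past VRAM hurts even with very fast HBM.
def effective_bandwidth(vram_gbps, link_gbps, spill_fraction):
    """Time-weighted effective bandwidth when `spill_fraction` of the
    bytes must be fetched over the slower link instead of VRAM."""
    hit_fraction = 1.0 - spill_fraction
    return 1.0 / (hit_fraction / vram_gbps + spill_fraction / link_gbps)

# All data resident in VRAM: full HBM speed.
print(effective_bandwidth(512, 16, 0.00))  # 512.0

# Just 5% of traffic spilling to system memory over PCIe
# already cuts the effective rate by more than half (~200 GBps).
print(effective_bandwidth(512, 16, 0.05))
```

The model ignores caching and latency hiding, but it shows why a small overflow past the 4GB pool can dominate the memory system's behavior.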

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


And here I thought stacking memory was going to make it possible to connect more modules to the GPU and improve speeds and bus width, and that AMD would capitalize on this. So, a faster-than-Titan X 4GB GPU for $850? Crossfire 290s are still going to be attractive.


Corsair 400C- Intel i7 6700- Gigabyte Gaming 6- GTX 1080 Founders Ed. - Intel 530 120GB + 2xWD 1TB + Adata 610 256GB- 16GB 2400MHz G.Skill- Evga G2 650 PSU- Corsair H110- ASUS PB278Q- Dell u2412m- Logitech G710+ - Logitech g700 - Sennheiser PC350 SE/598se


Is it just me, or is grammar slowly becoming extinct on LTT?

 


-snip

 

Look, I'm saying that at the moment HBM is a more viable option for GPUs because of the things I've stated earlier. You can't say that HMC would be a better path when you can't even get it at the moment, and even if you could, the price would be too much for regular consumers. Knights Landing is going to cost an arm and a leg, I would guess, so having more expensive memory is not a problem because its customers are not your regular Joe who wants to do some gaming. And if Nvidia and AMD were to wait for it, we would not see it before 2017 for sure; by that time, who is to say HBM won't improve? If you could share a link comparing the bandwidth of the two, I would like to read it, as the ExtremeTech article is from January this year.

 

I think HMC and HBM have very different places in the industry at this time, so comparing them by spec is not the only thing that matters.

 

Also, I'm talking about memory and its availability here, not praising or trashing any company. 


And here I thought stacking memory was going to make it possible to connect more modules to the GPU and improve speeds and bus width, and that AMD would capitalize on this. So, a faster-than-Titan X 4GB GPU for $850? Crossfire 290s are still going to be attractive.

Having a large (giant) all-silicon interposer for the GPU die and memory to sit on, all connected, was going to cost money. How people could not see this coming baffles me. Also, AMD needs money, and a lot of it, before the end of 2018 and through 2019, when the $2.4 billion left of its debt comes due over just 5 quarters. AMD needs margins so it can engage in a price war, but it needs to start high enough to have margins to live on.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


This is bad, really bad. :( If the Fiji HBM card is going to be around $850 with only 4GB, many will turn away from it to the 980 Ti (probably 6GB). Unless there's an 8GB GDDR5 option or an OEM makes an 8GB card... Please no.

- snip-


Look, I'm saying that at the moment HBM is a more viable option for GPUs because of the things I've stated earlier. You can't say that HMC would be a better path when you can't even get it at the moment, and even if you could, the price would be too much for regular consumers. Knights Landing is going to cost an arm and a leg, I would guess, so having more expensive memory is not a problem because its customers are not your regular Joe who wants to do some gaming. And if Nvidia and AMD were to wait for it, we would not see it before 2017 for sure; by that time, who is to say HBM won't improve? If you could share a link comparing the bandwidth of the two, I would like to read it, as the ExtremeTech article is from January this year.

 

I think HMC and HBM have very different places in the industry at this time, so comparing them by spec is not the only thing that matters.

 

Also, I'm talking about memory and its availability here, not praising or trashing any company. 

HMC isn't that much more expensive, but its availability (like TSMC 16nm FF) is dictated by one other company, the biggest client: Intel (Apple, in TSMC's case). This Wikipedia article is old, but preliminary specs put HMC at 320 GBps bandwidth per stack with an 8-high stack (8 links x 16 lanes x 10 Gbps, full-duplex signaling). http://en.wikipedia.org/wiki/Hybrid_Memory_Cube
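That per-stack figure checks out with simple arithmetic. This is only a back-of-envelope sketch using the link spec quoted above, not official Micron numbers:

```python
# HMC per-stack bandwidth from the quoted link spec:
# 8 links x 16 lanes x 10 Gbps per lane, full duplex.
links = 8
lanes_per_link = 16
gbps_per_lane = 10

# Aggregate signaling rate in one direction, converted from gigabits to gigabytes:
one_way_gbps = links * lanes_per_link * gbps_per_lane  # 1280 Gbps
one_way_gbytes = one_way_gbps / 8                      # 160 GBps

# Full duplex counts both directions, which recovers the ~320 GBps figure:
full_duplex_gbytes = 2 * one_way_gbytes
print(full_duplex_gbytes)  # 320.0
```

Note this is raw signaling rate; usable bandwidth is lower once protocol overhead is subtracted.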

 

The weaker stuff already started shipping and being deployed long ago too. http://www.extremetech.com/computing/167368-hybrid-memory-cube-160gbsec-ram-starts-shipping-is-this-the-technology-that-finally-kills-ddr-ram

 

And for Knights Landing, a more energy-efficient 2-high stack will be used (8 stacks), each between 50 and 60 GBps, multiplying out to 400+ GBps. Of course, in this instance HMC (MCDRAM) is being used for a dynamic platform that can treat it as system memory, cache, or a hybrid, and the bandwidth and other properties will change depending on the mode you run in. http://www.theplatform.net/2015/04/28/thoughts-and-conjecture-on-knights-landing-near-memory/

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


A few problems with this article.

 

1) At no point in the article does AMD say they are limited to 4GB of VRAM with HBM.

 

2) The slide itself doesn't say anything about being limited to 4GB of HBM.

 

3) The article they link to as the basis for claiming it will be limited to 4GB contains information Hynix has already presented multiple times. They even cite a phone call with the CTO, who says outright, "You're not limited in this world to any number of stacks, but from a capacity point of view, this generation-one HBM, each DRAM is a two-gigabit DRAM, so yeah, if you have four stacks you're limited to four gigabytes. You could build things with more stacks, you could build things with less stacks."

 

4) There was a confirmation not too long ago that Fiji will indeed have 8GB of HBM:

 

"AMD manages to integrate 8GB of HBM memory at its peak thanks to a series of improvements to the interposer TSV developed with SK Hynix: the technology in question is called "Dual Link Interposer" (from 4-Hi to 8-Hi). With a Dual Link Interposer design, SK Hynix will be able to stack 4x (dual 1GB HBM modules) via an interposer (2.5D stacking). This will allow AMD to take advantage of a larger amount of memory without having to wait for the arrival of 2nd-generation HBM."

 

http://linustechtips.com/main/topic/355737-r9-390x-achieves-8gb-hbm-with-dual-link-interposer-not-hbm2/

 

Also, if the Hawaii refresh is going to have 8GB of VRAM, how would it be possible for Fiji to have only 4GB? I'll tell you: it wouldn't make any sense. Why would AMD's Titan-like card (Fiji) sit at the top tier with only 4GB of VRAM while the card a tier under it (the Hawaii refresh) has 8GB? Makes no sense. Something isn't adding up.


[Image: HBM-6.png, AMD HBM presentation slide]

 

Currently, those chips can only be stacked four dies high. With each die limited to 2Gbit, each stack is one gigabyte. AMD's current design only allows for four stacks around the GPU, limiting the system to four gigabytes.


Interesting, and quashes all rumours of a Titan X-class amount of VRAM.

 

With some quick napkin math, that picture shows 4 stacks of memory. Each stack is limited to 1 GB, true, so 4 GB total. But with the size and power consumption drops, what is stopping them from changing the design to add 2 or 4 more stacks to differentiate a 390X from a 390, or however they decide to do it?
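That napkin math can be written out explicitly. This is only a sketch of the capacity arithmetic using the figures quoted in this thread (2 Gbit dies, 4-Hi stacks, 4 stacks on the interposer); the six-stack variant is hypothetical, since nothing in the article says the interposer could accommodate it.

```python
# First-generation HBM capacity, per the figures in the thread.
gbit_per_die = 2    # each DRAM die is 2 gigabits
dies_per_stack = 4  # stacks are 4-Hi
stacks = 4          # four stacks around the GPU

gb_per_stack = gbit_per_die * dies_per_stack / 8  # 8 Gbit = 1.0 GB per stack
total_gb = gb_per_stack * stacks                  # 4.0 GB

# Hypothetical: two extra stacks would scale capacity linearly,
# if the interposer and memory controller allowed it.
total_with_6_stacks = gb_per_stack * 6            # 6.0 GB
print(total_gb, total_with_6_stacks)
```

The same arithmetic is why the dual-link interposer discussed later in the thread doubles capacity: it pairs the stacks rather than adding more of them.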


"With a Dual Link Interposer design, SK Hynix will be able to stack 4x (dual 1GB HBM modules) via an interposer (2.5D stacking). This will allow AMD to take advantage of a larger amount of memory without having to wait for the arrival of 2nd-generation HBM."

 

http://linustechtips...poser-not-hbm2/

Like the linked article says in the quoted post.

 

Dual 1GB = 2GB per stack

 

4 x (dual 1GB) = 8GB effective


Don't know how I feel about this; some games just require more than 4GB when running at higher resolutions. I'm pretty sure that running GTA V at 4K and max settings, as you'd expect a card like this to do, would go over 4GB. Could it be due to the cost? I remember seeing somewhere that it costs considerably more than GDDR5.

 

If I set everything to very high/ultra and turn off TXAA or MSAA, I use all of my 4GB and get hangs and pop-ins in GTA V...

 

In The Witcher 3, if I disable HairWorks at 4K I'm only using 2.6GB of VRAM; with HairWorks on it's about 3.2GB. That's with settings ranging from medium to ultra, depending on the setting, but I'm only getting 30 fps with HairWorks on.

Sim Rig:  Valve Index - Acer XV273KP - 5950x - GTX 2080ti - B550 Master - 32 GB ddr4 @ 3800c14 - DG-85 - HX1200 - 360mm AIO


Long Live VR. Pancake gaming is dead.


I get near 3.5GB in GTA V at 1080p (triple monitor or single) with everything turned up, so yeah, 4K is probably going to be more than this can handle.

I feel like GTA V is a bit of an aberration; The Witcher 3 doesn't use anywhere near as much VRAM.

I'm talking <2GB at 1080p max settings, and that game is unquestionably better looking than GTA V.

Specs: 4790k | Asus Z-97 Pro Wifi | MX100 512GB SSD | NZXT H440 Plastidipped Black | Dark Rock 3 CPU Cooler | MSI 290x Lightning | EVGA 850 G2 | 3x Noctua Industrial NF-F12's

Bought a powermac G5, expect a mod log sometime in 2015

Corsair is overrated, and Anime is ruined by the people who watch it

