Introducing GDDR5X: Higher Density & Faster Data Rates

HKZeroFive

With the adoption of 4K monitors on the rise and the release of consumer-grade virtual reality just around the corner, the need for larger amounts of faster memory has never been higher. Graphics cards have been released with ever-increasing frame buffers year over year to accommodate these higher resolutions; cards today come equipped with as much as 8 GB, and in some cases even 12 GB, of memory.

With today's announcement, Micron has released memory chips with 8 Gb density, effectively doubling the maximum memory capacity of upcoming graphics cards and other products that use GDDR5 memory, such as game consoles.
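As a rough illustration of what the density bump means for capacity, here is a minimal sketch; the 12-chip board is a hypothetical example (typical of a 384-bit card), not a figure from Micron's announcement:

```python
# Back-of-the-envelope: per-chip density (in gigabits) -> total frame buffer (in GB).
# The 12-chip count below is an illustrative assumption, not a Micron figure.

def frame_buffer_gb(chips: int, density_gbit: int) -> float:
    """Total capacity in GB for `chips` memory packages of `density_gbit` gigabits each."""
    return chips * density_gbit / 8  # 8 bits per byte

print(frame_buffer_gb(12, 4))  # 4 Gb chips ->  6.0 GB
print(frame_buffer_gb(12, 8))  # 8 Gb chips -> 12.0 GB, double the capacity
```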

In addition to launching higher-density GDDR5 modules, Micron also hinted at what's to come in 2016. 7 Gbps GDDR5 is what we see on many graphics cards today. Micron said that 8 Gbps modules are currently in production, but it sees this as the absolute peak for GDDR5 in its current form. The company observed that the command/address protocol and the array speed were the two limiting factors, while the interface itself had plenty of additional headroom.

 

[Image: gddr5x_w_600.png]

 

In order to surpass the 8 Gbps barrier of GDDR5 without building a completely new memory technology from the ground up, Micron doubled the prefetch of GDDR5 from eight data words per memory access to 16 data words per access. Doing this yields 10 to 12 Gbps in the first generation, and the company expects to surpass 14 Gbps, and perhaps reach 16 Gbps and beyond, as the technology is refined.
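The arithmetic behind that claim can be sketched as follows: the per-pin data rate is roughly the memory array's access rate multiplied by the prefetch depth. The array rates used here are illustrative assumptions chosen to reproduce the quoted figures, not numbers from Micron:

```python
# Sketch of the prefetch math: per-pin data rate (Gbps) is roughly the
# array access rate (GHz) times the prefetch depth (data words per access).
# The array rates below are assumptions, picked to match the quoted figures.

def per_pin_gbps(array_rate_ghz: float, prefetch_words: int) -> float:
    return array_rate_ghz * prefetch_words

print(per_pin_gbps(1.00, 8))   # GDDR5, 8n prefetch:    8.0 Gbps (the stated ceiling)
print(per_pin_gbps(0.75, 16))  # GDDR5X, 16n prefetch: 12.0 Gbps (first generation)
print(per_pin_gbps(1.00, 16))  # GDDR5X, 16n prefetch: 16.0 Gbps (the longer-term goal)
```

In other words, doubling the prefetch lets the data rate double without the array itself having to run any faster.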

Micron will make a formal announcement about this new memory technology sometime in 2016, but what we know so far is that the company is calling it "GDDR5X," and it will be significantly faster than current offerings. The company wanted to make adoption as simple as possible, so it retained as many of the GDDR5 commands and protocols as possible. Additionally, Micron is not keeping this as a proprietary option; it has approached JEDEC to make GDDR5X a widely available standard.

8 Gb GDDR5 modules are available now to hardware manufacturers. Micron expects products using GDDR5X to start hitting the market in the second half of 2016.

Don't expect this to overtake or challenge HBM; it's more suited to keeping performance up in lower- to mid-end cards, so most likely in the range of the x60 cards from NVIDIA and the x70 cards from AMD, and in re-released consoles (a slim PS4, for example). The main advantages of this memory are its smaller footprint and lower power consumption. The question is, where will they draw the line when deciding which products get GDDR5X?

I'd like to know your thoughts.

Sauce

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


It's not something I really care about - GTX *60 and R9 *70 cards don't really need much more from their memory except capacity. Those cards aren't really that power-hungry or meant for heavy-duty use like 4K or anything, so it's pointless.

Archangel (Desktop) CPU: i5 4590 GPU: Asus R9 280 3GB RAM: HyperX Beast 2x4GB PSU: SeaSonic S12G 750W Mobo: GA-H97m-HD3 Case: CM Silencio 650 Storage: 1 TB WD Red
Celestial (Laptop 1) CPU: i7 4720HQ GPU: GTX 860M 4GB RAM: 2x4GB SK Hynix DDR3 Storage: 250GB 850 EVO Model: Lenovo Y50-70
Seraph (Laptop 2) CPU: i7 6700HQ GPU: GTX 970M 3GB RAM: 2x8GB DDR4 Storage: 256GB Samsung 951 + 1TB Toshiba HDD Model: Asus GL502VT

Windows 10 is now MSX! - http://linustechtips.com/main/topic/440190-can-we-start-calling-windows-10/page-6


It's not something I really care about - GTX *60 and R9 *70 cards don't really need much more from their memory except capacity. Those cards aren't really that power-hungry or meant for heavy-duty use like 4K or anything, so it's pointless.

... "New tech" just like 4K is going to be a "New standard"... It's gonna need to be possible to play 4K on *60 and *80/*70 cards soon...

Just remember: Random people on the internet ALWAYS know more than professionals, when someone's lying, AND can predict the future.

i7 9700K (5.2Ghz @1.2V); MSI Z390 Gaming Edge AC; Corsair Vengeance RGB Pro 16GB 3200 CAS 16; H100i RGB Platinum; Samsung 970 Evo 1TB; Samsung 850 Evo 500GB; WD Black 3 TB; Phanteks 350x; Corsair RM19750w.

 

Laptop: Dell XPS 15 4K 9750H GTX 1650 16GB Ram 256GB SSD


sex hahaha


It's not something I really care about - GTX *60 and R9 *70 cards don't really need much more from their memory except capacity. Those cards aren't really that power-hungry or meant for heavy-duty use like 4K or anything, so it's pointless.

Well... AMD can use this to breathe a little more life into the 7970 and Hawaii, which they will rebadge once again in the 400 series.

Meh? I don't see why a GPU manufacturer would opt for this over HBM. Rebrands are going to stick with GDDR5 to keep production costs down, and new flagship GPUs will all feature HBM from now on. Pointless standard is pointless; it should have been introduced about a year ago if they wanted it to get anywhere.

Ignis (Primary rig)

CPU: i7-4770K | MB: ASRock Z87M Extreme4 | RAM: G.Skill Ripjaws X 16GB | GPU: XFX RX 5700XT | PSU: Lepa G1600 | Case: Corsair 350D | Cooling: Corsair H90 | Storage: PNY CS900 120GB (OS) + WD Blue 1TB

Displays: Dell U2312HM + 2x Asus VH236H | Keyboard: Rosewill K85 RGB BR | Mouse: Razer DeathAdder | Headset: V-Moda Crossfade LP2

Server 01Alpha: CPU: Xeon X5650 | MB: Asus P6T WS Pro | RAM: Kingston unbuffered ECC 24GB | GPU: XFX R5 220 | PSU: Corsair CX430M | Case: Norco RPC-450B 4U | Cooling: Noctua NH-U9S | Storage: PNY CS900 120GB (OS) + 8x WD Red 1TB in RAID 6 + WD Green 2TB

Server 01Beta: CPU: 2x Xeon E5520 | MB: EVGA SR-2 | RAM: G.Skill Ripjaws 16GB | GPU: EVGA GTX 580 SC | PSU: Corsair AX1200 | Case: Rosewill RSV-L4000C | Cooling: 2x CM Hyper 212 Evo | Storage: null

Chaos Box (Loaner Rig): CPU: Xeon E3-1240V2 | MB: ASRock H61MV-ITX | RAM: Random Ebay RAM 12GB | GPU: Gigabyte R9 295x2 | PSU: Corsair GS700 | Case: Modified Bitfenix Prodigy | Cooling: EVGA CLC 120mm | Storage: PNY CS900 120GB (OS) + WD Black 1TB

Router (pfSense): CPU: Xeon E3-1246V3 | MB: ASRock H81 Pro BTC | RAM: G.Skill Ripjaws 8GB | GPU: integrated | PSU: Antec EA-380D | Case: Norco RPC-250 2U | Cooling: stock | Storage: Fujitsu 150GB HDD

 


Well... AMD can use this to breathe a little more life into the 7970 and Hawaii, which they will rebadge once again in the 400 series.

The 7970 isn't even in the 300 series. The 380 is a rebadged 285, which uses the Tonga chip (which, aside from Fiji, is the newest).

They won't rebadge Hawaii either. It's much too power-hungry and much too big for the mid-level slot it'll need to fill next gen. A 16nm card makes so much more sense.

Daily Driver:

Case: Red Prodigy CPU: i5 3570K @ 4.3 GHZ GPU: Powercolor PCS+ 290x @1100 mhz MOBO: Asus P8Z77-I CPU Cooler: NZXT x40 RAM: 8GB 2133mhz AMD Gamer series Storage: A 1TB WD Blue, a 500GB WD Blue, a Samsung 840 EVO 250GB


The 7970 isn't even in the 300 series. The 380 is a rebadged 285, which uses the Tonga chip (which, aside from Fiji, is the newest).

They won't rebadge Hawaii either. It's much too power-hungry and much too big for the mid-level slot it'll need to fill next gen. A 16nm card makes so much more sense.

I was not serious. You don't have to prove anything.

...........stupid question.

Why not just develop a GDDR6? I assume there is an actual difference between GDDR4 and GDDR5...if GDDR4 was ever actually a thing.

If not......reasons?

Ketchup is better than mustard.

GUI is better than Command Line Interface.

Dubs are better than subs


...........stupid question.

Why not just develop a GDDR6? I assume there is an actual difference between GDDR4 and GDDR5...if GDDR4 was ever actually a thing.

If not......reasons?

Don't take my word for it, but as HBM becomes more developed and more widely produced, traditional GDDR memory will become increasingly obsolete, to the point where it may simply become another level of cache.

The problem with GDDR5 (and RAM in general) is that power gating is taking up a larger share of the die, because it doesn't scale with node shrinks. I think lower power consumption is really the name of the game at the moment.

Just my take though.

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


Don't take my word for it, but as HBM becomes more developed and more widely produced, traditional GDDR memory will become increasingly obsolete, to the point where it may simply become another level of cache.

The problem with GDDR5 (and RAM in general) is that power gating is taking up a larger share of the die, because it doesn't scale with node shrinks.

I don't really know what power gating is, but I assume it's regulating the power to the chips themselves.

 

Make the card slightly bigger?

Ketchup is better than mustard.

GUI is better than Command Line Interface.

Dubs are better than subs


...........stupid question.

Why not just develop a GDDR6? I assume there is an actual difference between GDDR4 and GDDR5...if GDDR4 was ever actually a thing.

If not......reasons?

Probably cheaper, since facilities are already set up to mass-produce GDDR5.

 

High-end cards are transitioning to HBM. Any evolution of GDDR will then be relegated to mid- to low-end cards, or to devices like consoles, so the main focus will be on improving things while having the least impact on production cost.

 

As to why they don't just call it GDDR6, I couldn't say. If it's only a modified version of GDDR5, though, it makes more technical sense to call it a variation of that, since that's what it is.


Could be a good stopgap until yields with HBM improve (unless they already have and I missed the memo).


I don't understand why X is always supposed to mean "more power" or something?

Blue Jay

CPU: Intel Core i7 6700k (OC'd 4.4GHz) Cooler: CM Hyper 212 Evo Mobo: MSI Z170A Gaming Pro Carbon GPU: EVGA GTX 950 SSC RAM: Crucial Ballistix Sport 8GB (1x8GB) SSD: Samsung 850 EVO 250 GB HDD: Seagate Barracuda 1TB Case: NZXT S340 Black/Blue PSU: Corsair CX430M

 

Other Stuff

Monitor: Acer H236HL BID Mouse: Logitech G502 Proteus Spectrum Keyboard: I don't even know Mouse Pad: SteelSeries QcK Headset: Turtle Beach X12

 

GitHub


How does HBM stack up against this? Why would they make a "new" GDDR5 when they could even just use HBM 2.0?

Just because you don't care doesn't mean others don't. Don't be a self-centered asshole. -Thank you, a PSA from the people who do not say random shit on the internet.

 


Good improvement for workstation cards; they need all the memory they can get in some cases. It will also help AMD and Nvidia with virtual workstation environments, where more than one user is accessing the GPU.

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


How does HBM stack up against this? Why would they make a "new" GDDR5 when they could even just use HBM 2.0?

 

Might be the fact that HBM isn't in large supply. If GDDR5X is just enhanced GDDR5, then they can mass-produce it much cheaper/easier/faster, maybe.


I don't understand why X is always supposed to mean "more power" or something?

 

It's a traditional shorthand in the industry for eXtended. Due to that, the letter X nowadays has just become naturally associated with a newer or better version of something, whether that's something based on the same technology but refined, or simply something that's meant as a direct replacement/successor to the older tech, even if it's completely different from a technical standpoint. In this case, it seems to mean a better revision of GDDR5 technology. An extension, if you will ;)


How does HBM stack up against this? Why would they make a "new" GDDR5 when they could even just use HBM 2.0?

We're not at a point where HBM is the accepted standard. Hell, HBM2 is only coming out for Pascal and Arctic Islands next year; it wouldn't make sense to put that sort of new tech into low-end cards. What GDDR5X is doing is raising that accepted standard bit by bit, and until we have developed something vastly superior to HBM2 that makes it effectively 'old tech', don't expect HBM2 to be put onto low-end cards yet (and by yet, I mean not for a very long time).

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


This is great. If only they would do the same for DDR3 and DDR4, then everything would be good.

Honestly though, lower-end cards would benefit a lot from this. Their ability to process stuff is limited, but keeping them fed isn't hard... I bet there is a lot of overhead in those low-end GPUs, mostly from waiting on data fetches. Higher transfer speeds should help a lot with this, as the quick sketch below illustrates.
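For a rough sense of scale, here is a minimal sketch of the jump on a narrow bus; the 128-bit width is an illustrative assumption typical of low-end cards, not something from the article:

```python
# Sketch: aggregate memory bandwidth at GDDR5 vs GDDR5X per-pin rates.
# The 128-bit bus width is an assumption typical of low-end cards.

def bandwidth_gb_s(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Total bandwidth in GB/s: per-pin rate times bus width, converted to bytes."""
    return per_pin_gbps * bus_width_bits / 8

print(bandwidth_gb_s(7, 128))   # GDDR5 at 7 Gbps   -> 112.0 GB/s
print(bandwidth_gb_s(10, 128))  # GDDR5X at 10 Gbps -> 160.0 GB/s
```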


Hmmm, since all new high-end cards will start using HBM or HBM-like memory, my guess is these will be on midrange cards and normal GDDR5 will be on low-end cards.

If your grave doesn't say "rest in peace" on it, you are automatically drafted into the skeleton war.


If this replaces GDDR5 (since I'm sure HBM won't replace it in all cards), can GDDR5 (if not GDDR5X) replace GDDR3, so we can finally let GDDR3 fade into history?

The ability to google properly is a skill of its own. 

