
GDDR6 Clockspeed Disparity - Nvidia v. AMD

So I've been fairly confused by the vastly different rated memory speeds for GDDR6 on Nvidia and AMD cards, despite both brands having their memory rated for the same bandwidth. For example, the RTX 2070 Super has a memory clock of 1750 MHz and a memory bandwidth of 448 GB/s; in contrast, the RX 5700 XT has a memory clock of 875 MHz and the same memory bandwidth of 448 GB/s. The AMD spec appears to just be Nvidia's spec cut in half, so is this just a difference in reporting or marketing, or are they actually using the memory differently in some meaningful way? I could imagine this being similar to how DDR4 is advertised at 3200 MHz but, being "double data rate," actually runs at 1600 MHz and transfers data twice per cycle.

If there is no meaningful difference, why do the companies report it the way they do? It seems like a situation similar to GPU temperatures, where AMD openly discloses GPU junction temperatures (which look much worse) in addition to GPU edge temperatures, while Nvidia exclusively reports GPU edge temperatures: AMD is being more honest but takes a hit in marketing appeal. The one potential advantage to Nvidia's method is that it allows finer-grained memory overclocking, but the overclocking disparity appears far greater than a simple difference in data-rate reporting, as Nvidia's GDDR6 actually overclocks while AMD's fights the user tooth and nail for even a 25-50MHz bump. Let me know your thoughts or insights, thanks!
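For what it's worth, the numbers are consistent with that DDR4-style explanation. Here's a quick Python sanity check; the per-clock multipliers (x16, x8) are my assumption about each vendor's reporting convention, not anything either vendor documents:

```python
# Sanity check: if AMD's 875 MHz and Nvidia's 1750 MHz both describe the
# same 14 Gbps GDDR6, they should differ only by the assumed number of
# transfers per reported clock. The multipliers below are an assumption
# about reporting conventions, not an official spec.
EFFECTIVE_MTS = 14_000  # 14 Gbps per pin, as marketed for both cards

amd_reported_mhz = 875
nvidia_reported_mhz = 1750

assert amd_reported_mhz * 16 == nvidia_reported_mhz * 8 == EFFECTIVE_MTS
print("Same effective data rate, different reporting conventions")
```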

Ryzen 5 1600 @ 3.8GHz w/ Arctic Freezer 33 Tower Cooler | MSI B450 Tomahawk | 32GB Crucial Ballistix DDR4 3200MHz CAS 14

Sapphire RX 5700XT Pulse | EVGA 650W GQ 80+ Gold Semi-Modular | XPG SX6000 512GB NVMe SSD | NZXT H500

Acer XF270HU - 1440p 144Hz FreeSync IPS | Corsair Strafe - Cherry MX Red | Logitech G502


The 5700XT has 1750MHz "real" clock (14,000MHz effective) memory.

Main rig on profile

VAULT - File Server


Intel Core i5 11400 w/ Shadow Rock LP, 2x16GB SP GAMING 3200MHz CL16, ASUS PRIME Z590-A, 2x LSI 9211-8i, Fractal Define 7, 256GB Team MP33, 3x 6TB WD Red Pro (general storage), 3x 1TB Seagate Barracuda (dumping ground), 3x 8TB WD White-Label (Plex) (all 3 arrays in their respective Windows Parity storage spaces), Corsair RM750x, Windows 11 Education

Sleeper HP Pavilion A6137C


Intel Core i7 6700K @ 4.4GHz, 4x8GB G.SKILL Ares 1800MHz CL10, ASUS Z170M-E D3, 128GB Team MP33, 1TB Seagate Barracuda, 320GB Samsung Spinpoint (for video capture), MSI GTX 970 100ME, EVGA 650G1, Windows 10 Pro

Mac Mini (Late 2020)


Apple M1, 8GB RAM, 256GB, macOS Sonoma

Consoles: Softmodded 1.4 Xbox w/ 500GB HDD, Xbox 360 Elite 120GB Falcon, XB1X w/2TB MX500, Xbox Series X, PS1 1001, PS2 Slim 70000 w/ FreeMcBoot, PS4 Pro 7015B 1TB (retired), PS5 Digital, Nintendo Switch OLED, Nintendo Wii RVL-001 (black)


It just depends on what software you use for reading the clocks: some show the clock rate of a single VRAM chip, some show the clock rate of all the chips combined, and some show the doubled rate for each of those. So you could see either the 5700 XT or the 2070 Super advertised in any of the formats below:

  • 875MHz
  • 1,750MHz
  • 7,000MHz
  • 14,000MHz

 

And then to figure out bandwidth it's just: 256 (memory interface width for those two cards) / 8 (to convert bits to bytes) * 14,000 (effective transfer rate of all the VRAM on the card, in MT/s) = 448,000MB/s, i.e. 448GB/s.
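Or, as a minimal sketch of that same formula in Python (the function name and structure are mine):

```python
def total_bandwidth_gbs(bus_width_bits: int, effective_rate_mts: float) -> float:
    """Bus width in bytes times the effective transfer rate, scaled to GB/s."""
    return bus_width_bits / 8 * effective_rate_mts / 1000  # MB/s -> GB/s

# 256-bit bus at 14,000 MT/s effective -> 448.0 GB/s for both cards
print(total_bandwidth_gbs(256, 14_000))
```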

 

You may see NVIDIA/AMD advertise something like 14Gbps GDDR6. We can get the clock rate simply by dividing the bit rate by 8: 14 / 8 = 1.75GHz. We can also calculate the per-chip bandwidth, since the memory interface of a GDDR6 chip is 32 bits wide: do the same as before, but this time multiply by the interface width, 14 / 8 * 32 = 56GB/s. So if an NVIDIA card had 6 of these chips totalling 6GB of VRAM, it would have 56 * 6 = 336GB/s of bandwidth; with 8 chips it would be 448GB/s.
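The same per-chip arithmetic as a short Python sketch (the helper name and the 6-/8-chip examples just follow the paragraph above; nothing here is vendor-official):

```python
def chip_bandwidth_gbs(data_rate_gbps: float, chip_bus_bits: int = 32) -> float:
    """Per-chip bandwidth: per-pin data rate (Gbps) times pin count, in GB/s."""
    return data_rate_gbps / 8 * chip_bus_bits

per_chip = chip_bandwidth_gbs(14)  # 56.0 GB/s for a 32-bit 14 Gbps chip
print(per_chip * 6)                # 6 chips / 6GB  -> 336.0 GB/s
print(per_chip * 8)                # 8 chips / 8GB  -> 448.0 GB/s
```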

