Next NVIDIA Line will be named "Hopper"

Nicnac

This leak is a few days old by now, but I didn't see a forum post yet, so I hope this hasn't been posted already ^^

 

NVIDIA has registered a new Trademark:

Quote

NVIDIA's architectures are always based on computer pioneers and this one appears to be no different. Nvidia's Hopper architecture is based on Grace Hopper, who was one of the pioneers of computer science, one of the first programmers of the Harvard Mark I, and the inventor of one of the first linkers. She also popularized the idea of machine-independent programming languages, which led to the development of COBOL, an early high-level programming language still in use today. She enlisted in the Navy and helped the American war effort during World War II.

[Image: photo of Grace Hopper]

 

They also registered the name "Aerial"; for what, we don't know.

 

Speculation says it is possibly going to utilize an MCM (multi-chip module) philosophy. That would be NVIDIA Glue™ then, I guess...

(hey, gluing stuff together wasn't such a bad idea after all, huh?)

 

Be aware that this is still mostly speculation, and Hopper might be a line that never even comes to consumer hardware.

 

I think an MCM design is the logical next step, and there are many parallels to how AMD is handling its CPU lines these days.

 

Sources: Videocardz and the Salt Mine

 

Edit:

There are a few more fun names on the list ^^

[Image: NVIDIA's trademark filing list]

 

OmNiVeRsE might be my favorite :P

Folding stats

Vigilo Confido

 

There may already be a thread on this in News. I recall seeing that name and picture previously. There may be new information in this post, though. Not sure.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.

Just now, Bombastinator said:

There may already be a thread on this in News. I recall seeing that name and picture previously. There may be new information in this post, though. Not sure.

I checked back until December 4th. If it's earlier than that, it might just be the rumour and not the registered trademark?

I guess mods will decide ^^


 

3 minutes ago, Nicnac said:

I checked back until December 4th. If it's earlier than that, it might just be the rumour and not the registered trademark?

I guess mods will decide ^^

It would have been before that. Late November possibly.


35 minutes ago, Nicnac said:

hey, gluing stuff together wasn't such a bad idea after all, huh?

In terms of AMD CPU cores, hell yes

 

In terms of the FE cooler, hell no

I WILL find your ITX build thread, and I WILL recommend the Silverstone Sugo SG13B

 

Primary PC:

i7 8086k - EVGA Z370 Classified K - G.Skill Trident Z RGB - WD SN750 - Jedi Order Titan Xp - Hyper 212 Black (with RGB Riing flair) - EVGA G3 650W - dual booting Windows 10 and Linux - Black and green theme, Razer brainwashed me.

Draws 400 watts under max load, for reference.

 

How many watts do I need | ATX 3.0 & PCIe 5.0 spec, PSU misconceptions, protections explained | group reg is bad

nVidia OMNIVERSE.... okay that one made me giggle xD

Personal Desktop":

CPU: Intel Core i7 10700K @ 5GHz |~| Cooling: bq! Dark Rock Pro 4 |~| MOBO: Gigabyte Z490UD ATX |~| RAM: 16GB DDR4 3333MHz CL16 G.Skill Trident Z |~| GPU: RX 6900XT Sapphire Nitro+ |~| PSU: Corsair TX650M 80Plus Gold |~| Boot: SSD WD Green M.2 2280 240GB |~| Storage: 1x 3TB HDD 7200rpm Seagate Barracuda + SanDisk Ultra 3D 1TB |~| Case: Fractal Design Meshify C Mini |~| Display: Toshiba UL7A 4K/60Hz |~| OS: Windows 10 Pro.

Luna, the temporary Desktop:

CPU: AMD R9 7950XT  |~| Cooling: bq! Dark Rock 4 Pro |~| MOBO: Gigabyte Aorus Master |~| RAM: 32G Kingston HyperX |~| GPU: AMD Radeon RX 7900XTX (Reference) |~| PSU: Corsair HX1000 80+ Platinum |~| Windows Boot Drive: 2x 512GB (1TB total) Plextor SATA SSD (RAID0 volume) |~| Linux Boot Drive: 500GB Kingston A2000 |~| Storage: 4TB WD Black HDD |~| Case: Cooler Master Silencio S600 |~| Display 1 (leftmost): Eizo (unknown model) 1920x1080 IPS @ 60Hz|~| Display 2 (center): BenQ ZOWIE XL2540 1920x1080 TN @ 240Hz |~| Display 3 (rightmost): Wacom Cintiq Pro 24 3840x2160 IPS @ 60Hz 10-bit |~| OS: Windows 10 Pro (games / art) + Linux (distro: NixOS; programming and daily driver)
1 hour ago, LeSheen said:

She looks very happy. Would feel right at home with the modern IT crowd.

Have you tried turning it off and back on again?

PLEASE QUOTE ME IF YOU ARE REPLYING TO ME

Desktop Build: Ryzen 7 2700X @ 4.0GHz, AsRock Fatal1ty X370 Professional Gaming, 48GB Corsair DDR4 @ 3000MHz, RX5700 XT 8GB Sapphire Nitro+, Benq XL2730 1440p 144Hz FS

Retro Build: Intel Pentium III @ 500 MHz, Dell Optiplex G1 Full AT Tower, 768MB SDRAM @ 133MHz, Integrated Graphics, Generic 1024x768 60Hz Monitor


 

1 hour ago, Princess Luna said:

nVidia OMNIVERSE.... okay that one made me giggle xD

I'd prefer a Multiverse that includes AMD :P


 

From https://www.tweaktown.com/news/69186/nvidia-next-gen-hopper-gpu-arrives-ampere-smash-intel-amd/index.html

 

[Image: NVIDIA Hopper mock-up from the TweakTown article]

 

This to me looks more like "SLI on a package" than say what AMD is doing with its recent processors.

12 minutes ago, Mira Yurizaki said:

This to me looks more like "SLI on a package" than say what AMD is doing with its recent processors.

I've long wondered, what would it take to make multiple GPUs look and feel like a single one? Probably some combination of bandwidth and latency.

 

PCIe 3.0 is roughly 1 GB/s per lane, so up to 16 GB/s of bandwidth on HEDT platforms with sufficient lanes; half that on consumer platforms, which more likely run 8x+8x with two cards. Double those figures for PCIe 4.0.

Nvidia's SLI bridge depends on configuration but is ballpark a few GB/s, so less than PCIe, though it is dedicated to the cards. PCIe also has to deal with loading stuff from system RAM, for example.

Nvidia's NVLink, for RTX cards, is from memory 50 GB/s. A good step up in bandwidth but, put in context, still far below VRAM bandwidth; higher-end cards are >500 GB/s.
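The tiers above can be lined up in a quick sketch. The PCIe per-lane figure is the ~0.985 GB/s that "roughly 1 GB/s" rounds from; the NVLink and VRAM numbers are just the ballpark values from this post, not official specs:

```python
# Ballpark bandwidth comparison for the interconnects discussed above.
# Figures marked "post" are the rough numbers quoted in this thread.

PCIE3_PER_LANE = 0.985  # GB/s per lane: 8 GT/s with 128b/130b encoding

def pcie_bandwidth(gen: int, lanes: int) -> float:
    """Per-direction PCIe bandwidth in GB/s; each generation doubles gen 3."""
    return PCIE3_PER_LANE * (2 ** (gen - 3)) * lanes

links = {
    "PCIe 3.0 x8 (consumer, 8x+8x)": pcie_bandwidth(3, 8),
    "PCIe 3.0 x16 (HEDT)":           pcie_bandwidth(3, 16),
    "PCIe 4.0 x16":                  pcie_bandwidth(4, 16),
    "NVLink, RTX cards (post)":      50.0,
    "GDDR6 VRAM, high end (post)":   500.0,
}

for name, gbps in sorted(links.items(), key=lambda kv: kv[1]):
    print(f"{name:>32}: {gbps:6.1f} GB/s")
```

The gap between the last two rows is the point: even NVLink is an order of magnitude short of local VRAM.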

 

If the GPUlets are on the same package and have some kind of high-bandwidth, low-latency connection between them, maybe you can get it to look more like a single GPU.

 

BTW the first version of Intel's new GPU is also a chiplet-like arrangement. https://www.anandtech.com/show/15119/intels-xe-for-hpc-ponte-vecchio-with-chiplets-emib-and-foveros-on-7nm-coming-2021

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible

18 minutes ago, porina said:

I've long wondered, what would it take to make multiple GPUs look and feel like a single one? Probably some combination of bandwidth and latency.

Something that looks like what AMD is doing with Zen 2's Threadripper and EPYC, then do something like checkerboard rendering such that each GPU chiplet renders, say, every nth pixel or block of the screen to avoid the problem of uneven workloads.
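As a toy sketch of that idea (everything here is hypothetical: the tile size, chiplet count and assignment scheme are made up for illustration), distributing fixed-size screen tiles across chiplets in a checkerboard pattern gives each one an even slice of the frame no matter where the expensive pixels land:

```python
def assign_tiles(width: int, height: int, tile: int, n_chiplets: int) -> dict:
    """Map each (tile_x, tile_y) screen tile to a chiplet index.

    (tx + ty) % n gives a checkerboard-like pattern: neighbouring tiles,
    horizontally and vertically, always land on different chiplets.
    """
    tiles_x = -(-width // tile)   # ceil division
    tiles_y = -(-height // tile)
    return {(tx, ty): (tx + ty) % n_chiplets
            for ty in range(tiles_y) for tx in range(tiles_x)}

# Hypothetical example: 1080p frame, 120-px tiles, 4 chiplets.
layout = assign_tiles(1920, 1080, tile=120, n_chiplets=4)
counts = [list(layout.values()).count(c) for c in range(4)]
print(counts)  # each chiplet owns 36 of the 144 tiles
```

The even split is the whole appeal: no chiplet can get stuck with the one screen region where all the geometry is.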

 

The lingering concern I had was that memory access can be indeterminate, but I think that's not really the case with GPUs, considering their workload can be determined ahead of time. So as long as the drivers can schedule memory access such that the GPU chiplets have what they need by the time they're ready to work on the next thing, the system should work.

 

The issue to overcome here is that current multi-GPU setups contain the same dataset in each of their VRAM pools. If you eliminate that, then an MCM-based GPU becomes more plausible. For example, if we take an RTX 2080 Ti with its 6 GPCs and make each GPC a chiplet, it won't necessarily need more bandwidth than it normally uses, provided you can find a way to make them all use the same VRAM pool.

13 minutes ago, Mira Yurizaki said:

Something that looks like what AMD is doing with Zen 2's Threadripper and EPYC, then do something like checkerboard rendering such that each GPU chiplet renders, say, every nth pixel or block of the screen to avoid the problem of uneven workloads.

A chiplet + IO die arrangement would be particularly challenging for a GPU though. IF (Infinity Fabric) bandwidth is very low in comparison to GPU bandwidths, so there would be major challenges in moving that much data around. A coherent NUMA-like approach is probably technically a lot easier to manage, providing inter-node bandwidth is high enough.

 

 

13 minutes ago, Mira Yurizaki said:

The issue to overcome here is that current multi-GPU setups contain the same dataset in each of their VRAM pools.

I'm not sure how it works, but when RTX was launched, the claims made by Jensen in many presentations implied you can connect two cards by NVLink and get effective use of the combined VRAM. As said, the consumer NVLink is I think around 50 GB/s of bandwidth (comparable to dual-channel 3200 RAM). I think the professional NVLink is even higher.


4 minutes ago, porina said:

A chiplet + IO die arrangement would be particularly challenging for a GPU though. IF (Infinity Fabric) bandwidth is very low in comparison to GPU bandwidths, so there would be major challenges in moving that much data around. A coherent NUMA-like approach is probably technically a lot easier to manage, providing inter-node bandwidth is high enough.

My assumption is that a smaller GPU needs less bandwidth to feed it. Something like an RTX 2080 Ti only needs 600+ GB/s of bandwidth because of all the execution units it has to feed. If you break them up and keep them separated, presumably you don't need anywhere near that much per chiplet, though combined, yes.
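Back-of-envelope on that claim, using the 2080 Ti's ~616 GB/s memory bandwidth and its 6 GPCs (the one-chiplet-per-GPC split is this thread's hypothetical, not anything announced):

```python
# If a 2080 Ti-class GPU were split into one chiplet per GPC, each
# chiplet would only need its share of the memory bandwidth, but the
# package as a whole still needs the combined figure.
TOTAL_BW_GBPS = 616  # RTX 2080 Ti memory bandwidth
N_CHIPLETS = 6       # hypothetical: one chiplet per GPC

per_chiplet = TOTAL_BW_GBPS / N_CHIPLETS
print(f"per chiplet: ~{per_chiplet:.0f} GB/s; combined: {TOTAL_BW_GBPS} GB/s")
```

So each chiplet only needs roughly 100 GB/s, which is NVLink territory; it's the aggregate that the package interconnect has to carry.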

 

EPYC has to feed what is essentially 8 Ryzen 7 dies with its I/O chip. You can do the same technique with GPUs to get a combined VRAM pool rather than try to make separate GPU modules that work together in an internal SLI fashion. AMD solved the NUMA issue by making memory go through one source.

7 minutes ago, Mira Yurizaki said:

EPYC has to feed what is essentially 8 Ryzen 7 dies with its I/O chip. You can do the same technique with GPUs to get a combined VRAM pool rather than try to make separate GPU modules that work together in an internal SLI fashion. AMD solved the NUMA issue by making memory go through one source.

I don't know what RAM speeds EPYC supports, but let's assume it's running 8-channel RAM at 3200. I make that ballpark 200 GB/s of RAM bandwidth going through the IOD... so I guess in that sense, the 600 GB/s+ isn't much further of a stretch.
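That ballpark checks out; peak DRAM bandwidth is just transfer rate times bytes per transfer times channels:

```python
def dram_bandwidth_gbps(mt_per_s: int, channels: int, bus_bits: int = 64) -> float:
    """Peak DRAM bandwidth in GB/s: MT/s x bytes per transfer x channels."""
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

# 8-channel DDR4-3200, as assumed for EPYC above:
print(dram_bandwidth_gbps(3200, channels=8))  # → 204.8
```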

 

Still, from what's been shown so far, it doesn't look like either Nvidia or Intel is going this route with their modular GPUs.


@porina

NVLink 2.0 has a transmission rate of 25 GT/s per lane, and each sub-link has 8 lanes. With single and dual sub-links on the RTX line, that's up to 800 GT/s, or 100 GB/s bidirectional, for dual sub-links. The V100 mezzanine version has 6 sub-links, totaling 2400 GT/s or 300 GB/s.
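The GT/s-to-GB/s conversion is just one bit per lane per transfer, so 8 lanes at 25 GT/s move 25 GB/s per direction per sub-link. A quick sketch (figures per direction; double them for the bidirectional totals above):

```python
def nvlink_gbps(sublinks: int, gt_per_lane: float = 25.0, lanes: int = 8) -> float:
    """Per-direction NVLink 2.0 bandwidth in GB/s for n sub-links."""
    return sublinks * gt_per_lane * lanes / 8  # 8 bits -> 1 byte

print(nvlink_gbps(2))  # RTX dual sub-links: 50.0 per direction (100 GB/s bidirectional)
print(nvlink_gbps(6))  # V100 mezzanine: 150.0 per direction (300 GB/s bidirectional)
```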

 

Don't get me started on the NVswitch.

So much terminology. There's "gigabyte", "gigabit" and "GT", whatever that is. They're all different and don't have a whole lot to do with each other. I personally find "gigabyte" to be the least useful, since people haven't really used 8-bit bytes since CP/M.


9 hours ago, Nicnac said:

OmNiVeRsE

This is what comes to mind immediately: Omnibot!

[Image: Omnibot]

PC: Alienware 15 R3  CPU: 7700HQ  GPU: 1070 OC   Display: 1080p IPS G-Sync panel 60Hz  Storage: 970 Evo 250GB / 970 Evo Plus 500GB

Audio: Sennheiser HD 6xx  DAC: Schiit Modi 3E Amp: Schiit Magni Heresy

9 minutes ago, Bombastinator said:

and “gt” whatever that is.

Gigatexels. From Wiki:

Quote

in case of texture fillrate, the number of texture map elements (texels) a GPU can map to pixels in a second.

 

 

5 hours ago, porina said:

I've long wondered, what would it take to make multiple GPUs look and feel like a single one? Probably some combination of bandwidth and latency.

 

Didn't AMD have dual GPUs on a single board some time ago? Something that pops into my head is the R9 or Fury dual cards... That might be me remembering incorrectly, though.

 

And I'm guessing Ampere will still run on PCIe 3.0? So then Hopper will theoretically run on 4.0?

4 hours ago, Mira Yurizaki said:

You can do the same technique with GPUs to get a combined VRAM pool rather than try to make separate GPU modules that work together in an internal SLI fashion.

Following this train of thought, does this mean that it would then be easier to use multiple cards (in a system that would benefit) for an even greater VRAM pool?

Spoiler

CPU: Intel i7 6850K

GPU: nVidia GTX 1080Ti (ZoTaC AMP! Extreme)

Motherboard: Gigabyte X99-UltraGaming

RAM: 16GB (2x 8GB) 3000Mhz EVGA SuperSC DDR4

Case: RaidMax Delta I

PSU: ThermalTake DPS-G 750W 80+ Gold

Monitor: Samsung 32" UJ590 UHD

Keyboard: Corsair K70

Mouse: Corsair Scimitar

Audio: Logitech Z200 (desktop); Roland RH-300 (headphones)

 

6 minutes ago, The1Dickens said:

Gigatexels. From Wiki:

 

Didn't AMD have dual GPUs on a single board some time ago? Something that pops into my head is the R9 or Fury dual cards... That might be me remembering incorrectly, though.

 

And I'm guessing Ampere will still run on PCIe 3.0? So then Hopper will theoretically run on 4.0?

Following this train of thought, does this mean that it would then be easier to use multiple cards (in a system that would benefit) for an even greater VRAM pool?

https://en.m.wikipedia.org/wiki/Texel_(graphics)

So it's a semi-BS term when describing data rates, because the number of 1s and 0s can vary by how the texels are made. It goes in the pile along with "gigabyte", then.
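For what it's worth, a gigatexel-per-second figure is just texture units times clock. A sketch using the RTX 2080 Ti's numbers as an assumed example (272 TMUs, ~1545 MHz boost):

```python
def texture_fillrate_gt(tmus: int, clock_mhz: float) -> float:
    """Peak texture fillrate in gigatexels/s: one texel per TMU per clock."""
    return tmus * clock_mhz / 1000.0

# RTX 2080 Ti-ish figures, used here as an assumed example:
print(texture_fillrate_gt(272, 1545))  # ≈ 420 GT/s
```

Which is also why it says nothing about data rates: it counts texels sampled, not bits moved.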


"NVIDIA's architectures are always based on computer pioneers and this one appears to be no different."

Didn't they use to name them after scientists? Like Johannes Kepler, Nikola Tesla, Blaise Pascal and Enrico Fermi?

13 minutes ago, The1Dickens said:

Didn't AMD have dual GPUs on a single board some time ago?

Yup, and they’re making custom ones for Apple now too.

Nvidia also made dual GPU cards years ago.

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.

29 minutes ago, Bombastinator said:

I personally find “gigabyte” to be the least useful since people haven’t really used 8 bit bytes since cp/m

8 bits have made a byte for as long as I've been into computing. Are you thinking of words? https://en.wikipedia.org/wiki/Word_(computer_architecture)

 

5 minutes ago, The1Dickens said:

Gigatexels.

GigaTransfers sprang to mind without looking it up. Maybe I should click that link...

 

5 minutes ago, The1Dickens said:

Didn't AMD have dual GPUs on a single board some time ago? Some information that seems to pop out in my head is R9 or Fury dual cards... That might be me remembering incorrectly, though.

They did, but I believe they were effectively crossfire on a card.

 

5 minutes ago, The1Dickens said:

And I'm guessing Ampere will still run on PCIe 3.0? So then Hopper will theoretically run on 4.0?

Following this thought train, does this mean that it would then be easier to use multiple cards in a system (in a system that would benefit) for even greater VRAM pool?

IMO, even with both cards on 16x PCIe 4.0, the bandwidth is on the low side for two cards to share: 32 GB/s is less than NVLink, which is in addition to PCIe.


14 minutes ago, porina said:

8 bits have made a byte for as long as I've been into computing. Are you thinking of words? https://en.wikipedia.org/wiki/Word_(computer_architecture)

 

GigaTransfers sprang to mind without looking it up. Maybe I should click that link...

 

They did, but I believe they were effectively crossfire on a card.

 

IMO, even with both cards on 16x PCIe 4.0, the bandwidth is on the low side for two cards to share: 32 GB/s is less than NVLink, which is in addition to PCIe.

Exactly. A quasi-arbitrary number only used in computers for a brief time. It's more accurate than gigatexel, which appears to be only very specifically useful, just as bytes were.

 

I have the suspicion that gigatexel came out of marketing from one company or another, used when they started to have less bandwidth than their competitors and wanted a way to point out "yeah, maybe, but our texels are smaller than yours!" If they actually wanted to convey information, it would have been defined as a larger-than-one modifier for each instance in each brand.


5 hours ago, Mira Yurizaki said:

then do something like checkerboard rendering such that each GPU chiplet renders, say, every nth pixel or block of the screen to avoid the problem of uneven workloads

The problem with that, which also affects SLI/Crossfire, is that it breaks post-processing effects, certain lighting techniques and a few other things. That's why for a long time now SLI/Crossfire have used AFR: it's the most compatible and also the simplest to implement.
