Nicnac

Next NVIDIA Line will be named "Hopper"

Recommended Posts

Posted · Original Poster

This leak is a few days old by now, but I didn't see a forum post yet, so I hope this hasn't been posted already ^^

 

NVIDIA has registered a new Trademark:

Quote

NVIDIA's architectures are always based on computer pioneers, and this one appears to be no different. NVIDIA's Hopper architecture is named after Grace Hopper, who was one of the pioneers of computer science, one of the first programmers of the Harvard Mark I, and the inventor of one of the first linkers. She also popularized the idea of machine-independent programming languages, which led to the development of COBOL, an early high-level programming language still in use today. She enlisted in the Navy and helped the American war effort during World War II.

grace-hopper.jpg

 

They also registered the name "Aerial"; what for, we don't know.

 

Speculation says it is possibly going to use an MCM (multi-chip module) approach. That would be NVIDIA Glue™ then, I guess...

(hey, gluing stuff together wasn't such a bad idea after all, huh?)

 

Be aware that this is still largely speculation, and Hopper might be a line that never even comes to consumer hardware.

 

I think an MCM design is the logical next step and there are many analogies to how AMD is handling their CPU lines these days.

 

Sources: Videocardz and the Salt Mine

 

Edit:

There are a few more fun names on the list ^^

NVIDIA-Hopper-GPU-Trademark.png

 

OmNiVeRsE might be my favorite :P


New Rig yay

My Folding Stats

amen brother, yi yi !

X  Vigilo Confido  X

 

 


There may already be a thread in this in news.  I recall seeing that name and picture previously.  There may be new information in this post though.  Not sure.


Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.

Posted · Original Poster
Just now, Bombastinator said:

There may already be a thread in this in news.  I recall seeing that name and picture previously.  There may be new information in this post though.  Not sure.

I checked back until December 4th. If it's earlier than that, it might just be the rumour and not the registered trademark?

I guess mods will decide ^^


3 minutes ago, Nicnac said:

I checked until december 4th. if its earlier than that it might just be the rumour and not the registered trademark?

I guess mods will decide ^^

It would have been before that. Late November possibly.


35 minutes ago, Nicnac said:

hey, gluing stuff together wasn't such a bad idea after all, huh?

In terms of AMD CPU cores, hell yes

 

In terms of the FE cooler, hell no


I WILL find your ITX build thread, and I WILL recommend the SIlverstone Sugo SG13B

 

Primary PC:

i7 8086k (won) - EVGA Z370 Classified K - G.Skill Trident Z RGB - Force MP500 - Jedi Order Titan Xp - The venerated Hyper 212 Evo (with RGB Riing flair) - EVGA G2 650W - Black and green theme, Razer brainwashed me.

Draws 400 watts under max load, for reference.

 

Linux Proliant ML150 G6:

Dual Xeon X5560 - 24GB ECC DDR3 - GTX 750 TI - old Seagate 1.5TB HDD - Dark moded Ubuntu (and Win7, cuz why not)

 

EVGA G3 thread | Seasonic Focus thread | Userbenchmark (et al.) is trash explained | PSU misconceptions, protections explained | group reg is bad


nVidia OMNIVERSE.... okay, that one made me giggle xD


Workstation Rig:
CPU:  Intel Core i9 9900K @5.0ghz  |~| Cooling: beQuiet! Dark Rock 4 |~|  MOBO: Asus Z390M ROG Maximus XI GENE |~| RAM: 32gb 3333mhz CL15 G.Skill Trident Z RGB |~| GPU: nVidia TITAN V  |~| PSU: beQuiet! Dark Power Pro 11 80Plus Platinum  |~| Boot: Intel 660p 2TB NVMe |~| Storage: 2X4TB HDD 7200rpm Seagate Iron Wolf + 2X2TB SSD SanDisk Ultra |~| Case: Cooler Master Case Pro 3 |~| Display: Acer Predator X34 3440x1440p100hz |~| OS: Windows 10 Pro.
 
Personal Use Rig:
CPU: Intel Core i9 9900 @4.75ghz |~| Cooling: beQuiet! Shadow Rock Slim |~| MOBO: Gigabyte Z390M Gaming mATX|~| RAM: 16gb DDR4 3400mhzCL15 Viper Steel |~| GPU: nVidia Founders Edition RTX 2080 Ti |~| PSU: beQuiet! Straight Power 11 80Plus Gold  |~|  Boot:  Intel 660p 2TB NVMe |~| Storage: 2x2TB SanDisk SSD Ultra 3D |~| Case: Cooler Master Case Pro 3 |~| Display: Viotek GN34CB 3440x1440p100hz |~| OS: Windows 10 Pro.


HTPC / "Console of the house":

CPU: Intel Core i7 8700 @4.45ghz |~| Cooling: Cooler Master Hyper 212X |~| MOBO: Gigabyte Z370M D3H mATX|~| RAM: 16gb DDR4 3333mhzCL16 G.Skill Trident Z |~| GPU: nVidia Founders Edition GTX 1080 Ti |~| PSU: Corsair TX650M 80Plus Gold |~| Boot:  SSD WD Green M.2 2280 240GB |~| Storage: 1x3TB HDD 7200rpm Seagate Barracuda + SanDisk Ultra 3D 1TB |~| Case: Fractal Design Meshify C Mini |~| Display: Toshiba UL7A 4K/60hz |~| OS: Windows 10 Pro.
1 hour ago, LeSheen said:

She looks very happy. Would feel right at home with the modern it-crowd 

Have you tried turning it off and back on again?


PLEASE QUOTE ME IF YOU ARE REPLYING TO ME
LinusWare Dev | NotCPUCores Dev

Desktop Build: Ryzen 7 1800X @ 4.0GHz, AsRock Fatal1ty X370 Professional Gaming, 32GB Corsair DDR4 @ 3000MHz, RX480 8GB OC, Benq XL2730 1440p 144Hz FS

Retro Build: Intel Pentium III @ 500 MHz, Dell Optiplex G1 Full AT Tower, 768MB SDRAM @ 133MHz, Integrated Graphics, Generic 1024x768 60Hz Monitor


 


From https://www.tweaktown.com/news/69186/nvidia-next-gen-hopper-gpu-arrives-ampere-smash-intel-amd/index.html

 

69186_22_nvidia-next-gen-hopper-gpu-arrives-ampere-smash-intel-amd_full.jpg

 

This to me looks more like "SLI on a package" than say what AMD is doing with its recent processors.

12 minutes ago, Mira Yurizaki said:

This to me looks more like "SLI on a package" than say what AMD is doing with its recent processors.

I've long wondered, what would it take to make multiple GPUs look and feel like a single one? Probably some combination of bandwidth and latency.

 

PCIe 3.0 is roughly 1 GB/s per lane, so up to 16 GB/s of bandwidth on HEDT platforms with sufficient lanes; on consumer platforms it's more likely half that, at 8x+8x. Double those figures for PCIe 4.0.

Nvidia's SLI bridge depends on the configuration, but is ballpark single GB/s, so less than PCIe; on the other hand, it is dedicated to the cards, whereas PCIe also has to handle things like loading data from system RAM.

Nvidia's NVLink, for RTX cards, is from memory 50 GB/s. A good step up in bandwidth, but to put it in context, still far below VRAM bandwidth. Higher-end versions are >500 GB/s.
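For anyone wanting to sanity-check those figures, the rough math is simple. This is a quick illustrative calculation, not official numbers (the helper function is my own; PCIe uses 128b/130b encoding from gen 3 onward):

```python
def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Approximate one-direction PCIe bandwidth in GB/s."""
    gt_per_lane = {3: 8.0, 4: 16.0}[gen]         # transfer rate per lane, GT/s
    efficiency = 128 / 130                       # 128b/130b encoding overhead
    return gt_per_lane * efficiency * lanes / 8  # 8 bits per byte

print(round(pcie_bandwidth_gbs(3, 16), 1))  # 15.8 -> "roughly 1 GB/s per lane"
print(round(pcie_bandwidth_gbs(4, 16), 1))  # 31.5 -> double that on PCIe 4.0
```

So a full x16 slot on PCIe 3.0 lands just under the "up to 16 GB/s" figure, and PCIe 4.0 doubles it.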

 

If the GPUlets are on the same package, and have some kind of high bandwidth, low latency connection between them, maybe you can get it to look more like a single GPU. 

 

BTW the first version of Intel's new GPU is also a chiplet-like arrangement. https://www.anandtech.com/show/15119/intels-xe-for-hpc-ponte-vecchio-with-chiplets-emib-and-foveros-on-7nm-coming-2021


Main rig: Asus Maximus VIII Hero, i7-6700k stock, Noctua D14, G.Skill Ripjaws V 3200 2x8GB, Gigabyte GTX 1650, Corsair HX750i, In Win 303 NVIDIA, Samsung SM951 512GB, WD Blue 1TB, HP LP2475W 1200p wide gamut

Gaming system: Asrock Z370 Pro4, i7-8086k stock, Noctua D15, Corsair Vengeance LPX RGB 3000 2x8GB, Gigabyte RTX 2070, Fractal Edison 550W PSU, Corsair 600C, Optane 900p 280GB, Crucial MX200 1TB, Sandisk 960GB, Acer Predator XB241YU 1440p 144Hz G-sync

Ryzen rig: Asrock B450 ITX, R5 3600, Noctua D9L, G.SKill TridentZ 3000C14 2x8GB, Gigabyte RTX 2070, Corsair CX450M, NZXT Manta, WD Green 240GB SSD, LG OLED55B9PLA

VR rig: Asus Z170I Pro Gaming, i7-6700T stock, Scythe Kozuti, Kingston Hyper-X 2666 2x8GB, Zotac 1070 FE, Corsair CX450M, Silverstone SG13, Samsung PM951 256GB, Crucial BX500 1TB, HTC Vive

Gaming laptop: Asus FX503VD, i5-7300HQ, 2x8GB DDR4, GTX 1050, Sandisk 256GB + 480GB SSD

Total CPU heating: i7-8086k, i3-8350k, i7-7920X, 2x i7-6700k, i7-6700T, i5-6600k, i3-6100, i7-5930k, i7-5820k, i7-5775C, i5-5675C, 2x i7-4590, i5-4570S, 2x i3-4150T, E5-2683v3, 2x E5-2650, E5-2667, R7 3700X, R5 3600, R5 2600, R7 1700

18 minutes ago, porina said:

I've long wondered, what would it take to make multiple GPUs look and feel like a single one? Probably some combination of bandwidth and latency.

Something that looks like what AMD is doing with Zen 2's Threadripper and EPYC, then do something like checkerboard rendering, such that each GPU chiplet renders, say, every nth pixel or block of the screen, to avoid the problem of uneven workloads.

 

The lingering concern I had was that memory access can be indeterminate, but I think that's not really the case with GPUs, considering their workload can be determined ahead of time. So as long as the drivers can schedule memory access such that the GPU chiplets have what they need by the time they're ready to work on the next thing, the system should work.

 

The issue to overcome here is that current multi-GPU setups contain the same dataset in each of their VRAM pools. If you eliminate that, then an MCM-based GPU becomes more plausible. For example, if we take an RTX 2080 Ti with its 6 GPCs and make each GPC a chiplet, it won't necessarily need more bandwidth than it normally uses, as long as you can find a way to make them all use the same VRAM pool.
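The checkerboard idea above can be sketched in a few lines. Tile size, resolution, and chiplet count here are arbitrary assumptions, just to show the round-robin assignment:

```python
def assign_tiles(width, height, tile, n_chiplets):
    """Assign screen tiles to GPU chiplets in a checkerboard pattern,
    so each chiplet gets an even spread of the frame."""
    tiles_x = (width + tile - 1) // tile   # tiles per row (round up)
    tiles_y = (height + tile - 1) // tile  # tiles per column
    assignment = {}
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            # Offsetting by the row index alternates chiplets between
            # neighbouring tiles, like the colours of a checkerboard.
            assignment[(tx, ty)] = (tx + ty) % n_chiplets
    return assignment

tiles = assign_tiles(1920, 1080, tile=64, n_chiplets=4)
print(tiles[(0, 0)], tiles[(1, 0)], tiles[(0, 1)])  # 0 1 1 -- neighbours differ
```

Because the work is interleaved at tile granularity, no single chiplet gets stuck with the expensive part of the frame, which is the uneven-workload problem mentioned above.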

Link to post
Share on other sites
13 minutes ago, Mira Yurizaki said:

Something that looks like what AMD is doing with Zen 2's Threadripper and EPYC, then do something like checkerboard rendering such that each GPU chiplet renders say every nth pixel or block of the screen to avoid the problem of uneven workloads.

A chiplet + I/O die arrangement would be particularly challenging for a GPU though. Infinity Fabric bandwidth is very low in comparison to GPU bandwidths, so there would be major challenges in moving that much data around. A coherent NUMA-like approach is probably technically a lot easier to manage, provided inter-node bandwidth is high enough.

 

 

13 minutes ago, Mira Yurizaki said:

The issue to overcome here is that current multi-GPU contain the same dataset in each of their VRAM pools.

I'm not sure how it works, but when RTX was launched, the claims Jensen made in many presentations implied you can connect two cards by NVLink and get effective use of the combined VRAM. As said, the consumer NVLink is I think around 50 GB/s of bandwidth (comparable to dual-channel 3200 RAM). I think the professional NVLink is even higher.


4 minutes ago, porina said:

A chiplet + I/O die arrangement would be particularly challenging for a GPU though. Infinity Fabric bandwidth is very low in comparison to GPU bandwidths, so there would be major challenges in moving that much data around. A coherent NUMA-like approach is probably technically a lot easier to manage, provided inter-node bandwidth is high enough.

My assumption is a smaller GPU means less bandwidth is needed to feed it. Something like an RTX 2080 Ti only needs 600+ GB/s of bandwidth because of all the execution units it has to feed. If you break them up and have them separated, presumably you don't need anywhere near that much per chiplet, but combined yes.

 

EPYC has to feed what is essentially 8 Ryzen 7 dies with its I/O chip. You can do the same technique with GPUs to get a combined VRAM pool rather than try to make separate GPU modules that work together in an internal SLI fashion. AMD solved the NUMA issue by making memory go through one source.
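The difference between SLI-style mirrored VRAM and a combined pool behind one I/O hub boils down to simple accounting. This is a toy model with made-up numbers, not any real card:

```python
def effective_vram(gb_per_module: int, modules: int, unified: bool) -> int:
    """Usable VRAM across GPU modules. Classic SLI/Crossfire mirrors the
    dataset into every pool, so capacity doesn't scale with module count;
    a unified pool behind one memory hub (the EPYC I/O-die analogy) does."""
    return gb_per_module * modules if unified else gb_per_module

print(effective_vram(8, 4, unified=False))  # 8  -- SLI-style mirrored pools
print(effective_vram(8, 4, unified=True))   # 32 -- one combined pool
```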

7 minutes ago, Mira Yurizaki said:

EPYC has to feed what is essentially 8 Ryzen 7 dies with its I/O chip. You can do the same technique with GPUs to get a combined VRAM pool rather than try to make separate GPU modules that work together in an internal SLI fashion. AMD solved the NUMA issue by making memory go through one source.

I don't know what RAM speeds EPYC supports, but let's assume it's running 8-channel RAM at 3200. I make that ballpark 200 GB/s of RAM bandwidth going through the IOD... so I guess in that sense, the 600+ GB/s isn't much further of a stretch.
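The ballpark figure checks out: each DDR4 channel is 64 bits (8 bytes) wide, so peak bandwidth is transfer rate × 8 bytes × channels. A rough theoretical peak, ignoring efficiency losses:

```python
def dram_bandwidth_gbs(transfer_rate_mts: int, channels: int) -> float:
    """Peak theoretical DRAM bandwidth in GB/s for 64-bit (8-byte) channels."""
    bytes_per_transfer = 8  # each DDR4 channel is 64 bits wide
    return transfer_rate_mts * bytes_per_transfer * channels / 1000

print(dram_bandwidth_gbs(3200, 8))  # 204.8 -- 8-channel at 3200, "ballpark 200 GB/s"
print(dram_bandwidth_gbs(3200, 2))  # 51.2  -- dual-channel desktop, for comparison
```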

 

Still, from what's been shown so far, it doesn't look like either nvidia or Intel are going this route with their modular GPUs.



@porina

NVLink 2.0 has a transmission rate of 25 GT/s per lane. Each sub-link has 8 lanes; the RTX line has single or dual sub-links, with dual sub-links totaling 800 GT/s, or 100 GB/s. The V100 mezzanine version has 6 sub-links, totaling 2400 GT/s, or 300 GB/s.

 

Don't get me started on the NVswitch.


So much terminology.  There's "gigabyte", "gigabit", and "GT", whatever that is.  They're all different and don't have a whole lot to do with each other.  I personally find "gigabyte" to be the least useful, since people haven't really used 8-bit bytes since CP/M.


9 hours ago, Nicnac said:

OmNiVeRsE

This is what comes to mind immediately: Omnibot!

Image result for omnibot


PC: Alienware 15 R3  Cpu: 7700hq  GPu : 1070 OC   Display: 1080p IPS Gsync panel 60hz  Storage: 970 evo 250 gb / 970 evo plus 500gb 

9 minutes ago, Bombastinator said:

and “gt” whatever that is.

Gigatexels. From Wiki:

Quote


in the case of texture fill rate, the number of texture map elements (texels) a GPU can map to pixels in a second.

 

 

5 hours ago, porina said:

I've long wondered, what would it take to make multiple GPUs look and feel like a single one? Probably some combination of bandwidth and latency.

 

Didn't AMD have dual GPUs on a single board some time ago? Some information that seems to pop out in my head is R9 or Fury dual cards... That might be me remembering incorrectly, though.

 

And I'm guessing Ampere will still run on PCIe 3.0? So then Hopper will theoretically run on 4.0?

4 hours ago, Mira Yurizaki said:

You can do the same technique with GPUs to get a combined VRAM pool rather than try to make separate GPU modules that work together in an internal SLI fashion.

Following this thought train, does this mean that it would then be easier to use multiple cards in a system (in a system that would benefit) for even greater VRAM pool?


Spoiler

CPU: Intel i7 6850K

GPU: nVidia GTX 1080Ti (ZoTaC AMP! Extreme)

Motherboard: Gigabyte X99-UltraGaming

RAM: 16GB (2x 8GB) 3000Mhz EVGA SuperSC DDR4

Case: RaidMax Delta I

PSU: ThermalTake DPS-G 750W 80+ Gold

Monitor: Samsung 32" UJ590 UHD

Keyboard: Corsair K70

Mouse: Corsair Scimitar

Audio: Logitech Z200 (desktop); Roland RH-300 (headphones)

 

6 minutes ago, The1Dickens said:

Gigatexels. From Wiki:

 

Didn't AMD have dual GPUs on a single board some time ago? Some information that seems to pop out in my head is R9 or Fury dual cards... That might be me remembering incorrectly, though.

 

And I'm guessing Ampere will still run on PCIe 3.0? So then Hopper will theoretically run on 4.0?

Following this thought train, does this mean that it would then be easier to use multiple cards in a system (in a system that would benefit) for even greater VRAM pool?

https://en.m.wikipedia.org/wiki/Texel_(graphics)

So it's a semi-BS term when describing data rates, because the number of 1s and 0s can vary by how the texels are made.  It goes in the pile along with "gigabyte", then.



"NVIDIA's architectures are always based on computer pioneers and this one appears to be no different."

Didn't they use to name them after scientists? Like Johannes Kepler, Nikola Tesla, Blaise Pascal and Enrico Fermi?


Mechanical keyboard aficionado, professional fox

Mechanical Keyboard Club | Don't buy "gaming" keyboards, yo

Please quote me so I can see that you replied.

 

Be proud of who you are.

13 minutes ago, The1Dickens said:

Didn't AMD have dual GPUs on a single board some time ago?

Yup, and they’re making custom ones for Apple now too.

Nvidia also made dual GPU cards years ago.


Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.

29 minutes ago, Bombastinator said:

I personally find “gigabyte” to be the least useful since people haven’t really used 8 bit bytes since cp/m

8 bits have made a byte for as long as I've been into computing. Are you thinking of words? https://en.wikipedia.org/wiki/Word_(computer_architecture)

 

5 minutes ago, The1Dickens said:

Gigatexels.

GigaTransfers sprang to mind without looking it up. Maybe I should click that link...

 

5 minutes ago, The1Dickens said:

Didn't AMD have dual GPUs on a single board some time ago? Some information that seems to pop out in my head is R9 or Fury dual cards... That might be me remembering incorrectly, though.

They did, but I believe they were effectively crossfire on a card.

 

5 minutes ago, The1Dickens said:

And I'm guessing Ampere will still run on PCIe 3.0? So then Hopper will theoretically run on 4.0?

Following this thought train, does this mean that it would then be easier to use multiple cards in a system (in a system that would benefit) for even greater VRAM pool?

IMO even with both cards on x16 PCIe 4.0, the bandwidth is on the low side for two cards to share. 32 GB/s is less than NVLink, which comes in addition to PCIe.


14 minutes ago, porina said:

8 bits make a byte for as long as I've been into computing. You thinking of words? https://en.wikipedia.org/wiki/Word_(computer_architecture)

 

GigaTransfers sprang to mind without looking it up. Maybe I should click that link...

 

They did, but I believe they were effectively crossfire on a card.

 

IMO even with both cards on 16x 4.0 the bandwidth is on the low side for two cards to share. 32GB/s is less than nvlink, which is in addition to PCIe.

Exactly.  A quasi-arbitrary number only used in computers for a brief time.  It's more accurate than "gigatexel", which appears to be only very specifically useful, just as bytes were.

 

I have the suspicion that "gigatexel" came out of marketing from one company or another, and was used when they started to have less bandwidth than their competitors and wanted a way to point out "yeah, maybe, but our texels are smaller than yours!"  If they actually wanted to convey information, it would have been defined as a larger-than-one modifier for each instance in each brand.


5 hours ago, Mira Yurizaki said:

then do something like checkerboard rendering such that each GPU chiplet renders say every nth pixel or block of the screen to avoid the problem of uneven workloads

The problem with that, which also affects SLI/Crossfire, is that it breaks post-processing effects, certain lighting techniques, and a few other things. That's why SLI/Crossfire have long used AFR: it is the most compatible and also the simplest to implement.

