Chiplets! Chiplets Everywhere! - Intel Patent "Confirms" Work On Multi-Chip-Module GPUs

Lightwreather

Summary

A recent patent published by Intel (via Underfox) may be the keystone for its future graphics accelerator designs - and it utilizes the Multi-Chip Module (MCM) approach.

 

Quotes

Quote

Intel describes a series of graphics processors working in tandem to deliver a single frame. Intel's design points towards a hierarchy in workloads: a primary graphics processor coordinates the entire workload. And the company frames the MCM approach as a whole as a necessary step to steer silicon designers away from the manufacturability, scalability, and power delivery problems that arise from ever-increasing die sizes in the eternal search for performance.

According to Intel's patent, several graphics draw calls (instructions) travel to "a plurality" of graphics processors. Then, the first graphics processor essentially runs an initial draw pass of the entire scene. At this point, the graphics processor is merely creating visibility (and occlusion) data; it's deciding what to render, which is a fast operation on modern graphics processors. Then, a number of the tiles generated during this first pass go to the other available graphics processors. Guided by that initial visibility pass, which indicates which primitives fall in each tile (or that a tile has nothing to render at all), those processors are responsible for fully rendering the portion of the scene corresponding to their tiles.

It thus seems that Intel is looking at integrating tile-based checkerboard rendering (a feature used in today's GPUs) alongside distributed vertex position calculation (from the initial frame pass). Finally, when all graphics processors have rendered their piece of the puzzle that is a single frame (including shading, lighting, and raytracing), their contributions are stitched together to present the final image on-screen. Ideally, this process would occur 60, 120, or even 500 times per second.

The method applies to "a single processor desktop system, a multiprocessor workstation system, a server system," as well as within a system-on-chip (SoC) design for mobile. These graphics processors, or embodiments as Intel calls them, are even described as accepting RISC, CISC, or VLIW instructions. But Intel seems to be taking a page straight out of AMD's playbook, explaining that their MCM design's "hub" nature could include a single die aggregating the memory and I/O controllers.

Diagrams from Intel's patent


 

My thoughts

Ayy, apparently Intel's GPUs are also getting into multi-chip modules. And apparently, they're touting SLI/Crossfire-like gains in performance, but more. (To be clear, it's more like older GPUs in SLI, according to their graphs.) It's worth noting that not everything detailed in patents sees the light of day. But regardless, this is rather interesting (at least to me). Maybe we'll see some of this in the upcoming Arc Alchemist, though I highly doubt it; I feel like it would more likely be introduced in Battlemage or Celestial. Well, we're gonna have to wait and see what comes out of this.
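For fun, here's a rough sketch of how I read the patent's two-pass flow: a primary GPU runs a cheap visibility-only pass over the whole frame, then hands the non-empty tiles to the other GPUs for full rendering, and the results get stitched back together. All names and the round-robin scheduling are my own illustration, not anything the patent specifies.

```python
from dataclasses import dataclass

@dataclass
class Tile:
    tile_id: int
    visible_primitives: list  # primitive IDs the visibility pass found here

def visibility_pass(primitives, num_tiles):
    """Primary GPU: cheap pass that only decides what lands in which tile."""
    tiles = [Tile(i, []) for i in range(num_tiles)]
    for prim_id, tile_idx in primitives:
        tiles[tile_idx].visible_primitives.append(prim_id)
    # Tiles with nothing visible need no rendering work at all.
    return [t for t in tiles if t.visible_primitives]

def render_tile(tile):
    """Secondary GPU: full shading/lighting for one tile (stubbed out here)."""
    return (tile.tile_id, f"rendered {len(tile.visible_primitives)} prims")

def render_frame(primitives, num_tiles, num_gpus):
    tiles = visibility_pass(primitives, num_tiles)
    # Round-robin the non-empty tiles across the secondary GPUs.
    assignments = {gpu: [] for gpu in range(num_gpus)}
    for i, tile in enumerate(tiles):
        assignments[i % num_gpus].append(tile)
    # "Stitch" the per-GPU results back into a single ordered frame.
    results = [render_tile(t) for gpu in assignments.values() for t in gpu]
    return sorted(results)
```

The key point the patent seems to make is that the visibility pass lets empty tiles be skipped entirely, so the expensive work is only distributed where there's actually something to draw.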

 

Sources

Tom's Hardware

FreePatents

Underfox

"A high ideal missed by a little is far better than a low ideal that is achievable, yet far less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way, tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; being wrong helps you learn what's right.


Screw chiplets, just bring back SLI/Crossfire in general and start crapping out MXM adapter cards


Get them scaling well and a bunch of mobile gpus could prove to be a viable solution for cost effective production and upgrades.

If the AIBs don’t have to make anything more than various types of host cards, the original manufacturers don’t have to make anything more than mxm cards

the future could be perfect but nobody listened to Asus in 2007


15 minutes ago, 8tg said:

Screw chiplets, just bring back SLI/Crossfire in general and start crapping out MXM adapter cards

Nvidia and AMD would have to unlock the usage of full-width NVLink or GPU-to-GPU Infinity Fabric for this to ever work, and neither wants to do it. PCIe simply does not have the bandwidth or the built-in protocol support to make it all work properly.

 

With all 12 links enabled and active, NVLink 3.0 has 600 GB/s of bandwidth, which should actually be plenty to get a transparent multi-GPU setup working as a single logical GPU across multiple cards. Even so, it's quite difficult, which is why they are targeting single-package MCM rather than multiple packages or entire cards.
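The 600 GB/s figure is just the per-link math, for anyone curious (numbers are for NVLink 3.0 as on the A100):

```python
# NVLink 3.0 (A100 generation): each link carries 25 GB/s per direction,
# i.e. 50 GB/s bidirectional; 12 links give the quoted 600 GB/s aggregate.
links = 12
per_link_bidirectional_gbs = 50
total_gbs = links * per_link_bidirectional_gbs
print(total_gbs)  # 600

# For comparison, a PCIe 4.0 x16 slot is roughly 32 GB/s per direction
# (~64 GB/s bidirectional), so NVLink is nearly an order of magnitude ahead.
pcie4_x16_bidirectional_gbs = 64
print(total_gbs / pcie4_x16_bidirectional_gbs)  # 9.375
```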

 

Also replace MXM with SXM

[Image: Nvidia Tesla P100 SXM2 16GB module with NVLink]

 



12 minutes ago, 8tg said:

Screw chiplets, just bring back SLI/Crossfire in general and start crapping out MXM adapter cards


Get them scaling well and a bunch of mobile gpus could prove to be a viable solution for cost effective production and upgrades.

If the AIBs don’t have to make anything more than various types of host cards, the original manufacturers don’t have to make anything more than mxm cards

the future could be perfect but nobody listened to Asus in 2007

Interesting concept, but cooling is gonna be an issue. That dinky-looking cooler would not hold up by today's GPU power draw standards, and if it's a cooler on the side of the card instead of on top of the card, that'd make any beefy air-cooled card too long for cases to fit.

 

Plausible, but only with a watercooling solution, cause air ain't gonna cut it. And what about extreme OCers with their hardmodding and LN2 pots? If the die is facing downwards, that'd make cooling painful.

 

Maybe having them facing up rather than down would solve most of the issues


I feel like we knew this like a year ago?



3 minutes ago, Arika S said:

I feel like we knew this like a year ago?

Maybe, or it was the story about AMD's or Nvidia's; they're all doing it, and they all have patents


Just now, leadeater said:

Maybe, or it was the story about AMD's or Nvidia's; they're all doing it, and they all have patents

Well, there was one about Intel using MCM for their server chips (I think, not too sure), then there was AMD with their GPUs, and Nvidia also for their datacenter GPUs. Tho, I think there was also a die shot of Xe-HPG that sparked something about them using chiplets for that too. So yea.

Again, not too sure



15 minutes ago, Somerandomtechyboi said:

Interesting concept but cooling is gonna be an issue and that dinky looking cooler would not hold up by todays gpu power draw standards, not to mention if its a cooler on the side of the card instead of ontop of the card then thatd make any beefy aircooling cards too long for cases to fit them

 

Plausible but only with a watercooling solution cause air aint gonna cut it, not to mention what about extreme ocers with their hardmodding and ln2 pots? If the die is facing downwards then thatd make cooling painful

 

Maybe having them facing up rather than down would solve most of the issues

Ok, I’m gonna break this down because this entire reply caused me physical pain.

It was/is a concept. This product never saw commercial success because at the time there really wasn't much point. However, today, with fabrication costs becoming a larger issue, having scalable smaller GPUs may be enough of a cost-saving measure to make things like this viable.


The cooler and board design are relics of their time and are completely irrelevant to any modern iteration of a card like this.

"What about extreme OC'ers?" What about them?

Why should any product ever cater to like 20 people across the world? That's dumb as hell.


Three mobile RTX 3060s or whatever would be just shy of 300 watts, and could be handled individually by your average 95 W 120 mm downdraft cooler, or by combined solutions like basically any triple-fan GPU cooler ever made

the idea of bringing something like this to the mass market, or attempting to anyway, rests on the same reasoning that's making chiplet-style designs for processors and GPUs more and more viable

That design keeps yields high, makes scalable designs much easier to implement, and is overall a lot more cost-effective than, say, gigantic Fermi-tier dies with yield rates so low that a tenth of the entire wafer stock would be useless and turned into GTS 450s.

 

Combine a scalable die design with a scalable host hardware system and suddenly you have a really cost effective, easy to manufacture, and modular system for computer hardware that’s consumer and manufacturer friendly.
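The "keeps yields high" point is easy to see with a rough Poisson yield model, where the chance a die is defect-free falls off exponentially with its area. The defect density below is purely illustrative, not any foundry's real number:

```python
import math

def die_yield(defect_density, area_mm2):
    """Rough Poisson yield model: P(die is good) = exp(-D * A),
    with D in defects per mm^2 and A the die area in mm^2."""
    return math.exp(-defect_density * area_mm2)

D = 0.002  # 0.2 defects per cm^2: an illustrative, plausible-ish figure
big = die_yield(D, 600)    # one monolithic ~600 mm^2 die
small = die_yield(D, 150)  # one ~150 mm^2 chiplet

print(f"600 mm2 monolithic yield: {big:.0%}")  # ~30%
print(f"150 mm2 chiplet yield:    {small:.0%}")  # ~74%
```

So at the same defect density, cutting one big die into four small ones roughly doubles the fraction of good silicon per wafer, which is the whole economic argument for chiplets in a nutshell.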


6 minutes ago, J-from-Nucleon said:

Well, there was one about Intel using MCM for their Server chips (I think, not too sure)

Yea, Sapphire Rapids. However, these designs aren't really directly transferable between CPUs and GPUs.


29 minutes ago, 8tg said:

It was/is a concept

 

51 minutes ago, Somerandomtechyboi said:

Interesting concept

Did you think I didn't know it was a concept? Maybe it'd be viable if the GPUs were facing upwards, but then again, who the hell will even support multi-GPU anyway? Maybe good for server or workstation applications, where the programs can actually utilize more than one GPU.

 

If the concept were to be updated, the main thing would probably be either the use of a watercooler and/or flipping the GPUs to face upward, since that would eradicate most cooling problems. XOCers will find their way to mod them anyway, so pretty sure they can be ignored.


13 minutes ago, Somerandomtechyboi said:

 

Did you think I didn't know it was a concept? Maybe it'd be viable if the GPUs were facing upwards, but then again, who the hell will even support multi-GPU anyway? Maybe good for server or workstation applications, where the programs can actually utilize more than one GPU.

 

If the concept were to be updated, the main thing would probably be either the use of a watercooler and/or flipping the GPUs to face upward, since that would eradicate most cooling problems. XOCers will find their way to mod them anyway, so pretty sure they can be ignored.

You are missing the entire point and are instead dwelling on the cooling system of a prototype video card from 2007.

 


Chiplets are the future: high yields, scalability, and high surface area without intense hot spots.

 

The biggest issue today is huge monolithic chips with low yields; to scale them, you need to make a whole new chip with more compute units, and worst of all, you have all the heat concentrated in a single tiny spot. Imagine 8 chiplets spread across an 8x10 cm area of PCB instead of a single big chip in, let's say, a 3x3 cm area.
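The hot-spot point is just area arithmetic; for the same total power, spreading the silicon out cuts the heat density by nearly an order of magnitude (numbers purely illustrative, using the footprints above):

```python
# Same total power, different footprints: heat flux over the cooled region
# drops sharply when the silicon is spread out across the board.
power_w = 300
monolithic_area_cm2 = 3 * 3   # one 3x3 cm chip
spread_area_cm2 = 8 * 10      # 8 chiplets spread over an 8x10 cm PCB region

print(round(power_w / monolithic_area_cm2, 2))  # 33.33 W/cm^2
print(round(power_w / spread_area_cm2, 2))      # 3.75 W/cm^2
```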


12 hours ago, leadeater said:

PCIe simply does not have the bandwidth or the protocol support inbuilt to make it all work properly

The standard is 19 years old now. We need an upgrade /s



You might also want to hear about Foveros Direct before the new year.

MCM, tiling, or whatever else there is out there.


15 hours ago, 8tg said:

Screw chiplets, just bring back SLI/Crossfire in general

Or just be like me: be 7 years behind on GPUs and still use Crossfire on two R9 290s XD

RAM 32 GB Corsair DDR4 3200 MHz            MOTHERBOARD ASUS ROG Crosshair VIII Dark Hero
CPU Ryzen 9 5950X             GPU dual R9 290s        COOLING custom water loop using EKWB blocks
STORAGE Samsung 970 EVO Plus 2 TB NVMe, Samsung 850 EVO 512 GB, WD Red 1 TB, Seagate 4 TB, and Seagate Exos X18 18 TB

PSU Corsair AX1200i
MICROPHONE RODE NT1-A          HEADPHONES Massdrop & Sennheiser HD6XX
MIXER Inkel MX-1100   PERIPHERALS Corsair K95 (the OG 18 G-keys one) and a Corsair Scimitar


52 minutes ago, StephanTW said:

Or just be like me: be 7 years behind on GPUs and still use Crossfire on two R9 290s XD

Nice, dual 290s are still great; I was also running 2x 290X until like 3-4 months ago

