Ryzen Threadripper Delidding

4 minutes ago, zMeul said:

AMD "asked" him to take the video down

 

 

why? simple: Intel was 100% right when they said Threadripper was a glued together CPU

Are you referring to the literal glue? Or the MCM design? Because Intel has used MCMs as recently as Broadwell. Where were you when that came out? I don't recall you calling the Iris Pro "glued" together. You are starting to put less effort into this, and it worries me.


2 minutes ago, MageTank said:

Are you referring to the literal glue? Or the MCM design? Because Intel has used MCMs as recently as Broadwell. Where were you when that came out? I don't recall you calling the Iris Pro "glued" together. You are starting to put less effort into this, and it worries me.

except I'm not the one freaking out of their skin

you know what's funny? the "other" two dies are literally junk


Just now, zMeul said:

except I'm not the one freaking out of their skin

you know what's funny? the "other" two dies are literally junk

Who is freaking out? Also, it's only junk if it serves no purpose. It was said that they serve to improve the rigidity of the substrate itself. With a package that large, it's probably a good idea for even mounting pressure to have four contact points instead of two (rather than leaving half of the substrate with no dies under it). 

 

The good news is, we have an open dialogue again :P 


3 minutes ago, MageTank said:

Who is freaking out? Also, it's only junk if it serves no purpose. It was said that they serve to improve the rigidity of the substrate itself. With a package that large, it's probably a good idea for even mounting pressure to have four contact points instead of two (rather than leaving half of the substrate with no dies under it). 

 

The good news is, we have an open dialogue again :P 

to be read: AMD and TSMC are incapable of producing large dies, so they have to wing it on their knees xD


Just now, zMeul said:

to be read: AMD and TSMC are incapable of producing large dies, so they have to wing it on their knees xD

Or, it simply makes sense from a financial standpoint to use multiple, higher yield dies to produce a higher core count. AMD has the superior interconnect, why not put it to work? Granted, AMD has always had the superior interconnect until NVLink dethroned them, but the IF's peak bandwidth is still more than 3x that of NVLink 2.0, coming in at a whopping 512GB/s peak. 

 

Insult that glue all you want, but it certainly makes for a cheaper alternative without compromising on performance that much, if any. Unless you have evidence pointing to the contrary. 


7 minutes ago, zMeul said:

to be read: AMD and TSMC are incapable of producing large dies, so they have to wing it on their knees xD

TSMC incapable? They're making an 815mm² die for NVIDIA...


10 hours ago, Taf the Ghost said:

RAM controllers would be an issue with any further-up TR parts beyond 16c. It's possible the core count choice is more because it'd break their microcode programmers trying to deal with the issues that would crop up.

Interesting, I know a bit about Ryzen's memory controller, but I didn't know that.

What about EPYC though? Those chips clearly come with more than 16 cores.

 

Edit:

 

Der8auer: Tell me your secrets Threadripper...

AMD: NO bad Der8auer!


8 hours ago, zMeul said:

why? simple: Intel was 100% right when they said Threadripper was a glued together CPU

Wow you're dull. 

 

It is not by ANY stretch just a bunch of dies glued together. It's so much more than just putting dies on a board. If it was that easy, why hasn't Intel done it first? Oh wait. They can't. 

 

AMD could have done a large, power-hungry die, but they didn't. They went the smart way. Funny how AMD's "glued together" solution meets and beats Intel's offerings. 

 

But sure, go ahead and keep spewing that AMD hate. We love when you bash a superior product. 


9 minutes ago, Liltrekkie said:

Wow you're dull. 

 

It is not by ANY stretch just a bunch of dies glued together. It's so much more than just putting dies on a board. If it was that easy, why hasn't Intel done it first? Oh wait. They can't. 

 

AMD could have done a large, power-hungry die, but they didn't. They went the smart way. Funny how AMD's "glued together" solution meets and beats Intel's offerings. 

 

But sure, go ahead and keep spewing that AMD hate. We love when you bash a superior product. 

Don't worry, Intel has been sniffing glue for so long everything smells like glue ;)


10 minutes ago, Liltrekkie said:

Wow you're dull. 

 

It is not by ANY stretch just a bunch of dies glued together. It's so much more than just putting dies on a board. If it was that easy, why hasn't Intel done it first? Oh wait. They can't.

actually, if i recall, i believe it was Linus who mentioned that "Glued Together" was an actual technical term that was 100% accurate to describe what amd did.

not bashing amd, but it is technically accurate. it just sounds a whole lot worse than it really is cuz Intel is dumb


2 minutes ago, leadeater said:

Don't worry, Intel has been sniffing glue for so long everything smells like glue ;)

 

Going back to their Pentium D days ;)


Just now, Tsuki said:

actually, if i recall, i believe it was Linus who mentioned that "Glued Together" was an actual technical term that was 100% accurate to describe what amd did.

not bashing amd, but it is technically accurate. it just sounds a whole lot worse than it really is cuz Intel is dumb

Linus also said that, but it's clear to me what they meant: it was not being used in the technical sense of the word, but as a direct, derogatory jab at AMD. Those are not technical slides, they were marketing slides; they're designed to cover some technical aspects, but only in terms of how to compare products and the terminology that Intel, the company, wants its employees to use.

 

I don't talk to these people, but my managers do, and they are there to get a sale; actual technical aspects rarely come up in those discussions. I talk to sales engineers and product engineers and we talk specifics; nobody would ever use "glued together" in those conversations, even in the technical meaning of the word.



19 minutes ago, Tsuki said:

actually, if i recall, i believe it was Linus who mentioned that "Glued Together" was an actual technical term that was 100% accurate to describe what amd did.

not bashing amd, but it is technically accurate. it just sounds a whole lot worse than it really is cuz Intel is dumb

Glue Logic is a technical term. It also isn't correct for Ryzen, because that term refers to off-the-shelf components. Last I checked, you couldn't buy a single Ryzen CCX.


9 hours ago, MageTank said:

Or, it simply makes sense from a financial standpoint to use multiple, higher yield dies to produce a higher core count. AMD has the superior interconnect, why not put it to work? Granted, AMD has always had the superior interconnect until NVLink dethroned them, but the IF's peak bandwidth is still more than 3x that of NVLink 2.0, coming in at a whopping 512GB/s peak. 

 

Insult that glue all you want, but it certainly makes for a cheaper alternative without compromising on performance that much, if any. Unless you have evidence pointing to the contrary. 

 

1 hour ago, Liltrekkie said:

Wow you're dull. 

 

It is not by ANY stretch just a bunch of dies glued together. It's so much more than just putting dies on a board. If it was that easy, why hasn't Intel done it first? Oh wait. They can't. 

 

AMD could have done a large, power-hungry die, but they didn't. They went the smart way. Funny how AMD's "glued together" solution meets and beats Intel's offerings. 

 

But sure, go ahead and keep spewing that AMD hate. We love when you bash a superior product. 

 

 

Every time you move from one manufacturing process to another, there are a lot of unknowns and a big risk: you just don't know how well the new process will work initially.

Once the fab is up and running, they produce a few hundred wafers (the round silicon discs from which chips are cut) and, after a couple of months, they cut the chips from those wafers and see how well the process worked and how many actually working chips can be cut out of each wafer.

Then they study the flaws and optimize and tweak the process to reduce the number of defects to some acceptable limit, which can take months...

 

Anyway, both AMD and NVIDIA moved to new processes: AMD went with 14nm (which GlobalFoundries licensed from Samsung) and NVIDIA went with TSMC's 16nm, so both had to take risks with completely new processes.

IMHO, AMD wasn't as confident about GlobalFoundries' ability to get the process right from the start, because they had a bad history with transitions to new processes: they were forced to stick with 32nm (GlobalFoundries) and 28nm (TSMC) for longer than they wanted, and people started to see that (the lack of efficiency of the FX series).

If I remember correctly, AMD designed some 20nm processors but never made them (I think those designs were eventually produced on 28nm at TSMC), because GlobalFoundries spent years trying to get that process working, eventually gave up, and decided to license 14nm from Samsung just to have something working sooner.

Anyway, going off topic... Because AMD wasn't so sure GF could produce wafers with a small number of flaws, they decided to "test the waters" and reduce their risk by producing smaller chips first, like Polaris 10, Polaris 11 and Ryzen. This way, even if there are a lot of flaws on each wafer, they can still recover some dies by disabling stream processors, cores or level 3 cache and actually get something on the market. 
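To put rough numbers on that "smaller dies first" logic, here's a quick back-of-the-envelope sketch using a simple Poisson yield model. The defect density and die areas below are made-up illustrative values, not actual GlobalFoundries or TSMC figures:

```python
import math

def die_yield(die_area_mm2, defects_per_cm2):
    """Fraction of fully working dies under a simple Poisson defect model:
    yield = exp(-defect_density * die_area)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

# Illustrative assumptions only: an immature process at 0.4 defects/cm^2,
# a ~200 mm^2 Ryzen/Polaris-class die vs a hypothetical ~600 mm^2 monolithic die.
d0 = 0.4
print(f"small die (~200 mm^2): ~{die_yield(200, d0):.0%} fully working")
print(f"big die   (~600 mm^2): ~{die_yield(600, d0):.0%} fully working")
# Prints roughly 45% vs 9%. Four small dies on a substrate match the big die's
# core count, and a flawed small die can often still be salvaged by fusing off
# cores, CUs or L3 cache instead of being thrown away.
```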

 

So Ryzen was designed from the start to be used as a "module" in processors like Threadripper or EPYC, with the Infinity Fabric being a big deal, but you can be sure they were also influenced by the desire to reduce their risk and be more flexible: it's easier to adapt a smaller chip's design from, let's say, 14nm to TSMC's 16nm in case GF fucked up and needed more time to get things working. AMD absolutely needed to produce some new processors, as the FX series and socket FM2 chips lagged far behind Intel, had no PCI-E 3.0 support, and so on...

 

NVIDIA faced the same risks going with TSMC's 16nm, but my guess is that they didn't care as much: they were better placed financially, and the people buying their cards don't mind rebranded models as much and are more "fanboys". Also, they could have simply not released some new stuff for a few more months if there was really a problem... they weren't in as bad a place as AMD was.

 

So they decided to go the opposite way and risk it by starting with the bigger dies (GTX 1080, 1070), then moving to the smaller, more profitable dies as the fab optimized the process.

 

9 hours ago, Cinnabar Sonar said:

Interesting, I know a bit about Ryzen's memory controller, but I didn't know that.

What about EPYC though? Those chips clearly come with more than 16 cores.

 

Edit:

 

Der8auer: Tell me your secrets Threadripper...

AMD: NO bad Der8auer!

 

Each die has 2 core complexes (CCX) and each CCX has 4 cores, 8 threads and a single-channel memory controller, so a Ryzen die has up to 8 cores, 16 threads, 2 memory channels and 32 pci-e lanes but socket AM4 only exposes 24 (16 for graphics, 4 for m.2 slot and 4 to the chipset)

 

Threadripper puts two Ryzen dies on a substrate, and therefore you get up to 16 cores, 32 threads, 4 memory channels and 64 pci-e lanes (60 + 4 for chipset)

 

EPYC has four dies on a substrate, and therefore you get up to 32 cores, 64 threads, 8 memory channels and 128 pci-e lanes (124+4 going to chipset).

 

In dual-processor systems, Infinity Fabric repurposes an x16 pci-e link from each Ryzen die to connect directly to the equivalent Ryzen die in the other processor, so between the dies of the two processors it's basically like there are two x64 pci-e links, one for each direction.

Because of this, with dual-CPU EPYC you get up to 64 cores, 128 threads and up to 16 memory channels (2 TB of memory per CPU), but the number of usable pci-e lanes remains 128, because the other 128 are used for the interconnect between the sockets.
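Just as a sanity check of the arithmetic above, here's a tiny sketch that scales those per-die figures up to the different packages (the per-die numbers are taken straight from this post, not from an official spec sheet):

```python
# Per-die figures for a first-gen "Zeppelin" die, as described above.
CORES, THREADS, DRAM_CHANNELS, PCIE_LANES = 8, 16, 2, 32

def package(dies):
    """Totals for a multi-die package; 4 of the PCIe lanes feed the chipset."""
    return (dies * CORES, dies * THREADS, dies * DRAM_CHANNELS, dies * PCIE_LANES)

print("Ryzen AM4 (1 die):", package(1), "- AM4 only exposes 24 of the 32 lanes")
print("Threadripper (2): ", package(2))   # 16c/32t, 4 channels, 64 lanes (60 + 4 chipset)
print("EPYC 1P (4 dies): ", package(4))   # 32c/64t, 8 channels, 128 lanes (124 + 4 chipset)

# 2P EPYC: each of the 4 dies per socket gives up an x16 link to reach its
# twin in the other socket, so usable lanes stay at 128 system-wide.
print("EPYC 2P usable lanes:", 2 * 4 * (PCIE_LANES - 16))
```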

 

A bit off topic, but what I find even cooler is that each x16 can be split across up to 8 devices, so you can have 2x8, or 1x8 and 2x4 (for graphics and two M.2 drives), or 4x4, or 1x8 + 1x4 + 4x1... basically it's designed to be very flexible.
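And a quick, purely hypothetical check that those example splits actually fit in one x16 root port (the allowed widths and the 8-device limit are taken from the description above; real boards only expose whatever bifurcation options the BIOS allows):

```python
ALLOWED_WIDTHS = {1, 2, 4, 8, 16}   # usual PCIe link widths

def fits_in_x16(widths):
    """True if a proposed split fits one x16 port: at most 8 devices,
    standard widths only, and no more than 16 lanes in total."""
    return len(widths) <= 8 and sum(widths) <= 16 and all(w in ALLOWED_WIDTHS for w in widths)

for split in ([8, 8], [8, 4, 4], [4, 4, 4, 4], [8, 4, 1, 1, 1, 1]):
    print(split, "->", "ok" if fits_in_x16(split) else "doesn't fit")
```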

 

Someone could make (and I think Supermicro even did) a motherboard with something like 16 SATA3 ports plus 24+ M.2/NVMe "slots" to build super-fast storage servers. Stick a $300-400 8-12 core EPYC CPU in such a board and you've got yourself a "balls to the wall" NAS motherboard.

 


2 hours ago, ravenshrike said:

Glue Logic is a technical term.

A damn shame too; I would have much preferred Intel state that AMD's CPUs were duct-taped together.


20 minutes ago, mariushm said:

Each die has 2 core complexes (CCX) and each CCX has 4 cores, 8 threads and a single-channel memory controller, so a Ryzen die has up to 8 cores, 16 threads, 2 memory channels and 32 pci-e lanes but socket AM4 only exposes 24 (16 for graphics, 4 for m.2 slot and 4 to the chipset)

If I'm not mistaken, I'm pretty sure each CCX has access to both memory channels, so it's more accurate to say that each die has two channels, not that each CCX has one.


16 minutes ago, mariushm said:

Because of this, with dual-CPU EPYC you get up to 64 cores, 128 threads and up to 16 memory channels (2 TB of memory per CPU), but the number of usable pci-e lanes remains 128, because the other 128 are used for the interconnect between the sockets.

Interesting, while 128 PCIe lanes will be plenty for many, that might be a limitation for some.

17 minutes ago, mariushm said:

and 32 pci-e lanes but socket AM4 only exposes 24 (16 for graphics, 4 for m.2 slot and 4 to the chipset)

32 lanes?  Does this mean that AMD's later chip sets (AM4+?) would be able to take advantage of more lanes?

22 minutes ago, mariushm said:

In dual-processor systems, Infinity Fabric repurposes an x16 pci-e link from each Ryzen die to connect directly to the equivalent Ryzen die in the other processor, so between the dies of the two processors it's basically like there are two x64 pci-e links, one for each direction.

Is this the reason why RAM controllers would be an issue with any further-up TR parts beyond 16c?


4 minutes ago, Jito463 said:

If I'm not mistaken, I'm pretty sure each CCX has access to both memory channels, so it's more accurate to say that each die has two channels, not that each CCX has one.

Yeah... you can sort of picture the memory controller sitting above both core complexes, almost connected directly to the L3 cache, which is split into 2x8MB chunks.

Theoretically, it would be very easy for AMD to just cut out one core complex when making the Zen APUs in 2018.

 

6 minutes ago, Cinnabar Sonar said:

Interesting, while 128 PCIe lanes will be plenty for many, that might be a limitation for some.

I don't think it's an issue. I think Xeons top out at 44 pci-e lanes or something like that, but there are 4-socket systems, so you may have more than 128 lanes.

Anyway, you can use expensive PLX chips to split an x16 into 2x16 or other configurations if you really lack lanes.

 

7 minutes ago, Cinnabar Sonar said:

 

32 lanes?  Does this mean that AMD's later chip sets (AM4+?) would be able to take advantage of more lanes?

 

The socket doesn't have extra pins for 32 pci-e lanes. An AM4+ would need more pins, so it would not be backwards compatible. No, I think we're pretty much stuck at 24 pci-e lanes.

 

7 minutes ago, Cinnabar Sonar said:

Is this the reason why RAM controllers would be an issue with any further-up TR parts beyond 16c?

TR is max 2 dies, so max 16 cores/32 threads and 2x2 memory channels, max 8 DDR4 sticks and 128(?) GB of memory. I don't think the socket supports more DDR channels (no pins).

I suppose they could make dies with more than 2 CCXs in the future, but I doubt it.

If anything, they'll probably add a 10-core/20-thread Threadripper in a few months. Right now they'd want to push people into buying the 12/16-core models and not steal sales from either the higher end or the cheaper 8-core Ryzen, and the dies are probably just too good to kill 3 cores from each die to make a 10-core TR now.


7 hours ago, mariushm said:

Someone could make (and I think Supermicro even did) a motherboard with something like 16 SATA3 ports plus 24+ M.2/NVMe "slots" to build super-fast storage servers. Stick a $300-400 8-12 core EPYC CPU in such a board and you've got yourself a "balls to the wall" NAS motherboard.

This is what Microsoft and AMD have partnered to do for Azure: very high-IOPS, low-cost storage nodes. Some of Azure's highest-IOPS VMs are RAM disks, and all data is wiped after a reboot of the VM, which isn't nice if you don't have a way to work around that. Even a shutdown process wouldn't be acceptable, because what if the VM gets force-reset? Data gone/old.

 

EPYC is actually a pretty big deal for storage servers in my view; you don't even need dual socket!


Guys, EPYC is not glued together, it's stitched together (infinity fabric)

 

:D

Spoiler

In case you couldn't tell: /s

 


7 hours ago, mariushm said:

I don't think it's an issue. I think Xeons top out at 44 pci-e lanes or something like that, but there are 4-socket systems, so you may have more than 128 lanes.

Anyway, you can use expensive PLX chips to split an x16 into 2x16 or other configurations if you really lack lanes.

Quad-socket systems have a limited number of PCIe slots and are never used for storage nodes, so the increased PCIe lane count is moot. They are used to get more memory per system and, to a slightly lesser degree, more cores nowadays. A while ago, quad-socket or higher systems used to be about getting more cores, but the core count per socket is so high nowadays that it has shifted to total memory and memory bandwidth.

 

There are a lot of quad-socket systems on the market that only have 4 PCIe slots; 7- and 9-slot ones are available too, but they either have a reduced number of RAM slots or only x8 PCIe slots and no x16.

 

Edit:

Comparing at the same U height, 2P vs 4P.


I'm not too surprised really, seeing the socket they used; not making a new one keeps costs down and all. And the dies aren't wasted, but are rather dummy dies there to provide structural support for the IHS, so yeah.


On 7/27/2017 at 7:57 PM, Cinnabar Sonar said:

First off, damn! That CPU did not want its heat spreader removed.

Glad to see that Threadripper uses solder; I'm not surprised though.

What was surprising, was the fact that Threadripper has four dies.

That would mean that Threadripper is probably a reused EPYC CPU.  

Does this mean that we could see up to 32 cores on Threadripper if AMD wanted to?

Who cares? We all know it's just 4 dies glued together

