Ryzen 2700X OC'd to 4.3GHz (1.4V) across all cores, performance numbers included.

Master Disaster

That Cinebench score doesn't seem real to me... I was expecting more from 4.3GHz. My 1700, when I had it OC'd to 3.9GHz across all cores, scored 1723; I was expecting more from a 400MHz bump per core.
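As a rough sanity check on that expectation: Cinebench multi-core scores scale close to linearly with all-core clock at a fixed core count and IPC (a simplification; memory speed and thermals usually pull real scaling a bit below this):

```python
# Naive linear-scaling estimate for Cinebench R15 multi-core:
# a 1700 scoring 1723 cb at a 3.9GHz all-core OC, projected to the
# 4.3GHz all-core clock discussed in this thread.
# Assumes identical core count and IPC; real scaling is usually lower.
baseline_score = 1723   # cb, 1700 @ 3.9GHz all-core
baseline_clock = 3.9    # GHz
target_clock = 4.3      # GHz

expected = baseline_score * (target_clock / baseline_clock)
print(f"Linear-scaling estimate at {target_clock}GHz: {expected:.0f} cb")
```

That puts the naive ceiling around 1900 cb before any Zen+ IPC or memory improvements, which is the gap being questioned here.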

Please quote my post, or put @paddy-stone if you want me to respond to you.

Spoiler
  • PCs:- 
  • Main PC build  https://uk.pcpartpicker.com/list/2K6Q7X
  • ASUS X53E - i7 2670QM / Sony BD writer x8 / Win 10, Elementary OS, Ubuntu / Samsung 830 SSD
  • Lenovo G50 - 8Gb RAM - Samsung 860 Evo 250GB SSD - DVD writer
  •  
  • Displays:-
  • Philips 55 OLED 754 model
  • Panasonic 55" 4k TV
  • LG 29" Ultrawide
  • Philips 24" 1080p monitor as backup
  •  
  • Storage/NAS/Servers:-
  • ESXI/test build  https://uk.pcpartpicker.com/list/4wyR9G
  • Main Server https://uk.pcpartpicker.com/list/3Qftyk
  • Backup server - HP Proliant Gen 8 4 bay NAS running FreeNAS ZFS striped 3x3TiB WD reds
  • HP ProLiant G6 Server SE316M1 Twin Hex Core Intel Xeon E5645 2.40GHz 48GB RAM
  •  
  • Gaming/Tablets etc:-
  • Xbox One S 500GB + 2TB HDD
  • PS4
  • Nvidia Shield TV
  • Xiaomi/Pocafone F2 pro 8GB/256GB
  • Xiaomi Redmi Note 4

 

  • Unused Hardware currently :-
  • 4670K MSI mobo 16GB ram
  • i7 6700K  b250 mobo
  • Zotac GTX 1060 6GB Amp! edition
  • Zotac GTX 1050 mini

 

 


All I care about is Canard PC's claim that whatever technology supposedly let them hit 5GHz on air with a Zen 1 engineering sample/board is actually enabled on Zen+. If that's legit, of course.

https://mobile.twitter.com/d0cTB/status/979433595528974346

MOAR COARS: 5GHz "Confirmed" Black Edition™ The Build
AMD 5950X 4.7/4.6GHz All Core Dynamic OC + 1900MHz FCLK | 5GHz+ PBO | ASUS X570 Dark Hero | 32 GB 3800MHz 14-15-15-30-48-1T GDM 8GBx4 |  PowerColor AMD Radeon 6900 XT Liquid Devil @ 2700MHz Core + 2130MHz Mem | 2x 480mm Rad | 8x Blacknoise Noiseblocker NB-eLoop B12-PS Black Edition 120mm PWM | Thermaltake Core P5 TG Ti + Additional 3D Printed Rad Mount

 


14 minutes ago, Swatson said:

All I care about is Canard PC's claim that whatever technology supposedly let them hit 5GHz on air with Zen 1 is actually enabled on Zen 2. If that's legit, of course.

The 5GHz part very likely exists, as there are HP libraries for 14nm at GloFo, though they're from the IBM side of things. The issues would have been die size, yield, and heat. What's actually surprising is that one of the working samples ever got outside of AMD's labs.

 

Given some of the rumors about AMD that have come out, mostly from the European side of things, I think someone at one of AMD's R&D facilities is a bit chatty these days. Actually, it's probably a former IBMer who works for GloFo now, considering when the more interesting leaks started.

 

Things like the 5-die rumor for future Epyc designs are still in the R&D phase. We know "chiplets" are coming pretty soon, but it could still be a few generations before we see them at mass-production scale. Advances like that take upwards of a decade to go from initial design to mass availability, so there are a lot of interesting ideas we're never really going to hear about unless they become viable.


I'm really wondering if it's worth upgrading from the 1800X now... only 108 more points in Cinebench...

Capture22.JPG


2 minutes ago, Evanair said:

I'm really wondering if it's worth upgrading from the 1800X now... only 108 more points in Cinebench...

 

I'd say wait till they show the 2800X (I'm assuming it will be called that) and then see.

Use this guide to fix text problems in your post. Go here and here for all your power supply needs.

 

New Build Currently Under Construction! See here!!!! -----> 

 

Spoiler

Deathwatch:[CPU I7 4790K @ 4.5GHz][RAM TEAM VULCAN 16 GB 1600][MB ASRock Z97 Anniversary][GPU XFX Radeon RX 480 8GB][STORAGE 250GB SAMSUNG EVO SSD Samsung 2TB HDD 2TB WD External Drive][COOLER Cooler Master Hyper 212 Evo][PSU Cooler Master 650M][Case Thermaltake Core V31]

Spoiler

Cupid:[CPU Core 2 Duo E8600 3.33GHz][RAM 3 GB DDR2][750GB Samsung 2.5" HDD/HDD Seagate 80GB SATA/Samsung 80GB IDE/WD 325GB IDE][MB Acer M1641][CASE Antec][PSU Altec 425 Watt][GPU Radeon HD 4890 1GB][TP-Link 54MBps Wireless Card]

Spoiler

Carlile: [CPU 2x Pentium 3 1.4GHz][MB ASUS TR-DLS][RAM 2x 512MB DDR ECC Registered][GPU Nvidia TNT2 Pro][PSU Enermax][HDD 1 IDE 160GB, 4 SCSI 70GB][RAID CARD Dell Perc 3]

Spoiler

Zeonnight [CPU AMD Athlon x2 4400][GPU Sapphire Radeon 4650 1GB][RAM 2GB DDR2]

Spoiler

Server [CPU 2x Xeon L5630][PSU Dell Poweredge 850w][HDD 1 SATA 160GB, 3 SAS 146GB][RAID CARD Dell Perc 6i]

Spoiler

Kero [CPU Pentium 1 133Mhz] [GPU Cirrus Logic LCD 1MB Graphics Controller] [Ram 48MB ][HDD 1.4GB Hitachi IDE]

Spoiler

Mining Rig: [CPU Athlon 64 X2 4400+][GPUS 9 RX 560s, 2 RX 570][HDD 160GB something][RAM 8GBs DDR3][PSUs 1 Thermaltake 700w, 2 Delta 900w 120v Server modded]

RAINBOWS!!!

 

 QUOTE ME SO I CAN SEE YOUR REPLIES!!!!


17 minutes ago, Evanair said:

I'm really wondering if it's worth upgrading from the 1800X now... only 108 more points in Cinebench...

Capture22.JPG

Do you have a 1080 Ti or Titan? If not, you're not going to gain enough to be worth the cost, unless you just like to upgrade constantly.


24 minutes ago, Taf the Ghost said:

Do you have a 1080 Ti or Titan? If not, you're not going to gain enough to be worth the cost, unless you just like to upgrade constantly.

I'd say do a 1.5V overclock to extend its "lifespan" in terms of gaming usage by killing it with voltage. Should last a year.


1 hour ago, Evanair said:

I'm really wondering if it's worth upgrading from the 1800x now...

Why would you want to upgrade if you have an 1800X?

 

In the CPU market, people generally don't upgrade every generation. Performance gains just aren't that fast...


2 hours ago, Humbug said:

Why would you want to upgrade if you have an 1800X?

 

In the CPU market, people generally don't upgrade every generation. Performance gains just aren't that fast...

Now, if Apple made a processor, maybe everyone would upgrade every 4 months... /s

 

But agreed. I have a 1700X and would upgrade if there were significant gains; otherwise I'm happy with what I've got. Upgrades should provide a decent step up in performance to make the financial investment worth it. This CPU looks good for someone still rocking older hardware, but probably not enough to get someone to upgrade from first-gen Ryzen.


2 hours ago, Humbug said:

Why would you want to upgrade if you have an 1800X?

 

In the CPU market, people generally don't upgrade every generation. Performance gains just aren't that fast...

It's the first revision of the Zen architecture, so it offers the chance of a larger increase than most generational improvements. He specifically has a good chip (4.0GHz at 1.33V with decent RAM timings), but if he had a weaker one, there might actually be an upgrade path.

 

Also, my point about the 1080 Ti/Titan Xp applies. All of the early testing is getting much higher-speed memory with good timings without a lot of work. If you have a ~$1000 USD GPU, there's actually room for an upgrade by going from the 1800X to the 2700X, from all of the early information. Driving L3 cache and memory latency down by a fairly good percentage (upwards of 25% looks possible when you compare weaker Zen dies to normal Zen+ dies) has a much larger impact on gaming than on almost any other task.

 

If one has high-end parts already and feels a desire to upgrade, there's something useful there. Otherwise, it's probably "wait for Zen 2".


10 hours ago, Taf the Ghost said:

The 5GHz part very likely exists, as there are HP libraries for 14nm at GloFo, though they're from the IBM side of things. The issues would have been die size, yield, and heat. What's actually surprising is that one of the working samples ever got outside of AMD's labs.

 

Given some of the rumors about AMD that have come out, mostly from the European side of things, I think someone at one of AMD's R&D facilities is a bit chatty these days. Actually, it's probably a former IBMer who works for GloFo now, considering when the more interesting leaks started.

 

Things like the 5-die rumor for future Epyc designs are still in the R&D phase. We know "chiplets" are coming pretty soon, but it could still be a few generations before we see them at mass-production scale. Advances like that take upwards of a decade to go from initial design to mass availability, so there are a lot of interesting ideas we're never really going to hear about unless they become viable.

 

 

They wouldn't have used two different sets of node types; taping out on a different node (HP or LP) is a $10 million or more effort. That's $10 million if it's an automated layout; with custom layouts like what AMD uses in CPUs, you're looking at 3 to 5 times that cost (as of 10 years ago), maybe more now because of the complexity increases.


45 minutes ago, Razor01 said:

 

 

They wouldn't have used two different sets of node types; taping out on a different node (HP or LP) is a $10 million or more effort. That's $10 million if it's an automated layout; with custom layouts like what AMD uses in CPUs, you're looking at 3 to 5 times that cost (as of 10 years ago), maybe more now because of the complexity increases.

Well, the problem is we don't know anything about the supposed 5GHz part. Was it on the GloFo 14nm HP libraries or on the IBM-spec node? I don't think it'd be the IBM node, but GloFo offered 3 different library sets for 14nm, and AMD would have taped out Zeppelin on all 3. Given the nature of the 14nm node, even on HP libraries, 5GHz would have meant a lot of heat.

 

As for the 5-chip design, "chiplets" are coming pretty soon. Chiplet approaches are just that: chunks of dies "glued" together to form a full SoC. In theory, the L3 cache and the I/O don't need to be on the same node as the cores, but in reality connecting the different parts is an engineering nightmare. Yet we know everyone is working on it, because it drastically improves yields. In AMD's specific case, the first generation of the approach is likely to be in the server SKUs. What they'd want to do is separate the cores + L3 from the rest of the connections and tie those into a central controller die (which would have all of the DDR controllers), functionally rendering a 4-"super-CCX" SoC. Even with the Zen communication approach, this would allow them to produce 4-8 CPU server parts for the huge workloads.

 

I really doubt we're seeing that with Zen 2; that's more likely Zen 5. It's going to take a couple of years of R&D to work out the kinks, of which there would be many. As it stands, the Epyc approach uses about 15% more die space, but it increases the yields massively. On 7nm, they could push 16 cores + L3 cache at probably around 120 mm². When a node is in bad shape, that's about double the yield of a 220 mm² die. That's really why everyone is looking at chiplets.
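The "about double the yield" comparison between a ~120 mm² chiplet and a ~220 mm² monolithic die can be sketched with the classic Poisson yield model (a textbook simplification, not a real fab model; the defect density below is an assumed illustrative value for a node in bad shape, not a published figure):

```python
import math

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: fraction of dies that land with zero defects."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

d0 = 0.7  # defects/cm^2 -- assumed value for a struggling node, not a real figure

y_chiplet = poisson_yield(1.20, d0)  # ~120 mm^2 16c + L3 chiplet
y_mono    = poisson_yield(2.20, d0)  # ~220 mm^2 monolithic die

print(f"120 mm^2 yield: {y_chiplet:.0%}")       # ~43%
print(f"220 mm^2 yield: {y_mono:.0%}")          # ~21%
print(f"advantage: {y_chiplet / y_mono:.2f}x")  # ~2x
```

In this model the advantage is exp(D·ΔA), so it grows with defect density: a node in bad shape favors chiplets heavily, while a mature node with few defects narrows the gap.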


44 minutes ago, Taf the Ghost said:

Well, the problem is we don't know anything about the supposed 5GHz part. Was it on the GloFo 14nm HP libraries or on the IBM-spec node? I don't think it'd be the IBM node, but GloFo offered 3 different library sets for 14nm, and AMD would have taped out Zeppelin on all 3. Given the nature of the 14nm node, even on HP libraries, 5GHz would have meant a lot of heat.

 

I don't think that 5GHz rumor is real, lol. Not for Zen.

 

Quote

As for the 5-chip design, "chiplets" are coming pretty soon. Chiplet approaches are just that: chunks of dies "glued" together to form a full SoC. In theory, the L3 cache and the I/O don't need to be on the same node as the cores, but in reality connecting the different parts is an engineering nightmare. Yet we know everyone is working on it, because it drastically improves yields. In AMD's specific case, the first generation of the approach is likely to be in the server SKUs. What they'd want to do is separate the cores + L3 from the rest of the connections and tie those into a central controller die (which would have all of the DDR controllers), functionally rendering a 4-"super-CCX" SoC. Even with the Zen communication approach, this would allow them to produce 4-8 CPU server parts for the huge workloads.

 

There need to be software changes for this to even be remotely usable. Y'know, the only person who came up with this "theory", if you even want to call it that, is a YouTuber who is famously known for pulling things out of his arse. Just go back and look at what the low-end B boards are doing with Coffee Lake now; does it add up?

 

You can't break out control silicon from a CPU and expect the same latency increase we see with Ryzen; you need to expect it to increase considerably more. Right now, Infinity Fabric with CCX cross-talk increases latency in the neighborhood of 100 to 300%, depending on which cores are involved and the distance of the cross-talk. Now imagine adding the latency of mundane control silicon on top of that. It doesn't make much sense. Right now we can't hide the latency Ryzen's cross-talk gives us, lol; imagine adding another layer on top of that.

 

Quote

I really doubt we're seeing that with Zen 2; that's more likely Zen 5. It's going to take a couple of years of R&D to work out the kinks, of which there would be many. As it stands, the Epyc approach uses about 15% more die space, but it increases the yields massively. On 7nm, they could push 16 cores + L3 cache at probably around 120 mm². When a node is in bad shape, that's about double the yield of a 220 mm² die. That's really why everyone is looking at chiplets.

 

Only if the node can't support manufacturable yields for single large dies to begin with. Look, we have much larger GPUs with higher transistor counts on GloFo 14nm than the current CCX modules of Ryzen, so the CCX-modules-for-yield argument makes no sense, lol. There were other, more pertinent reasons for AMD to make such a design change. You can't look at yields dropping off a cliff because of die size in this context. Vega is what, 2 times the die size of a CCX module? Are we thinking it's getting only 50% of the yields of a CCX module? If a CCX module is getting 100% yields, does that mean Vega is at 50%? That would be unmanufacturable for Vega, and this is a mass-produced product. At 50% yields, with, let's say, around $6k per wafer, they would only be getting around 100 chips per wafer (that's both Vega 56 and 64): $60 per chip + interposer + RAM, so we're looking at a package cost of $250 for AMD. That's too much, because AMD needs to make at least 30% profit when selling these, right? (That's the bare minimum; they're doing that for Polaris. For Vega, I hope they're looking at 50%.)
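The wafer math above can be checked with the common gross dies-per-wafer approximation (a sketch: the ~486 mm² Vega 10 die area is an assumed figure, the 50% yield and ~$6k wafer cost are the post's own numbers, and the formula ignores edge exclusion and scribe lanes):

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> float:
    """Gross dies per wafer: wafer area over die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2
    return (math.pi * radius ** 2) / die_area_mm2 \
        - (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)

gross = dies_per_wafer(300, 486)   # 300 mm wafer, ~486 mm^2 die (assumed)
good = gross * 0.50                # the post's assumed 50% yield
cost_per_chip = 6000 / good        # the post's ~$6k wafer cost

print(f"gross: {gross:.0f}, good: {good:.0f}, cost/chip: ${cost_per_chip:.0f}")
```

Under these inputs it lands at roughly 115 gross and ~58 good dies (about $100 per chip), a bit worse than the ~100 good chips assumed above, which mainly shows how sensitive per-chip cost is to the yield and die-size guesses.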

 

Had a discussion about Volta and the Titan V where people said nV might just be getting Volta out and taking a loss because of poor yields on a 12nm process, lol, which made no sense. I stated they could roll out a second-tier Quadro at higher prices if that were the case; the Quadro Volta was released a week after my comments. Yeah, now we can say yields of fully functional V100 parts are much higher than 60%; they have to be. And this chip is how much bigger than any current-generation chip? Double the size. But Volta, the V100, is made with redundancy in the parts where most of the errors will fall: the SM units. It has 4 extra SMs that are redundant specifically to increase chip yields. If an error falls in a part of the chip that can't be fixed, like the I/O, they shut that part off, as we saw with the Titan V, where one memory bank was disabled. If it falls in control silicon and isn't fixable, the chip is gone. But we have to look at what takes up most of the die space in a chip. For a GPU it's the SM units; the SMs in Volta are going to take up around 70% of the die (judging from Pascal's die shot; it could be a higher percentage for Volta), with about 15% for control silicon and 15% for the bus. Since 12nm is an offshoot of 16nm with fewer layers, its yields are going to be better than 16nm's for a chip of similar size. nV was anticipating at least 70% yields at Volta's die size, and they got it.

 

Back to CPUs and yields: after risk production they know what the yields are going to be; actually, even before risk production they have a good idea based on target die size and node.


15 minutes ago, Razor01 said:

Only if the node can't support manufacturable yields for single large dies to begin with. Look, we have much larger GPUs with higher transistor counts on GloFo 14nm than the current CCX modules of Ryzen.

Remind me, what clocks do GPUs run at? Now, what clocks do CPUs run at?

 

Titan V vs i9 extreme, similar power usage and die size but vastly different clock rates and process.

 

Ryzen is getting double the clocks out of the same process used for GPUs.  While this primarily comes down to design, it was also likely required that the dies be smaller for better yields.


30 minutes ago, KarathKasun said:

Remind me, what clocks do GPUs run at? Now, what clocks do CPUs run at?

 

Titan V vs i9 extreme, similar power usage and die size but vastly different clock rates and process.

 

Ryzen is getting double the clocks out of the same process used for GPUs.  While this primarily comes down to design, it was also likely required that the dies be smaller for better yields.

 

 

Clocks have nothing to do with error propagation in the silicon due to the node. I guess we can always rewrite how chips are designed and manufactured as we talk about this, though, lol. *sarcasm*

 

You did have one thing right, though: clocks are based on design. But even that is only 50% correct; clocks are also based on voltage and leakage, both of which come down to design and node.

 

So which do you want to talk about: error propagation, or chip design for higher clocks? Because those are two different things.


2 minutes ago, Razor01 said:

 

 

Clocks have nothing to do with error propagation in the silicon due to the node. I guess we can always rewrite how chips are designed and manufactured as we talk about this, though, lol. *sarcasm*

Failure modes are different given a different frequency and voltage range.


400MHz better than my 1700 @ 1.4V under water. That's still pretty good for a minor node change.


17 minutes ago, KarathKasun said:

Failure modes are different given a different frequency and voltage range.

 

 

LOL, so you want to talk about binning and yield curves based on bins? Sorry, that isn't what we are talking about. Yield curves are a bell curve; you then bin those chips to get the different variants out of the total lot, and that has nothing to do with base yields. Bins are there to separate base yields into different product lines. Hence, take Ryzen for example: the CCX modules in an R5 will be bins that come in at lower frequencies, while the R7 takes the chips that hit the highest frequencies.

 

Now let's take it a step further: chips that reach higher frequencies must stay within a certain TDP, and that is also looked at. If the voltage needed for a certain frequency is too high for a given binned chip, that chip is demoted to a lower part even if it can hit the frequency number.


A clear example of this is Intel's S, T, and other suffix chips (the suffixes have changed from generation to generation). Those chips are binned for voltage, frequency, and power draw. You can get the same chip in a K SKU, but when overclocking, the K SKU should be able to do more than the others. Intel uses those other SKUs in different form factors for this specific reason.

 

Before binning was ever done, chip manufacturers just validated frequencies, power draw, and voltages across all their chips as one lot. That was 20 or more years ago, but it isn't how it's done today; as explained above, this is how it's done now for CPUs, and for GPUs too, though to a much lesser extent.
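The binning flow described above (frequency bins, with demotion when the voltage needed for a bin would blow the TDP budget) can be sketched roughly like this; the thresholds and sample chips are made-up illustrative values, not real AMD or Intel criteria:

```python
from dataclasses import dataclass

@dataclass
class Chip:
    max_freq_ghz: float  # highest stable frequency found in test
    volts_at_max: float  # voltage needed to hold that frequency

def assign_bin(chip: Chip) -> str:
    """Bin by frequency, demoting chips whose voltage at that frequency
    would push them past the power budget, as described above."""
    if chip.max_freq_ghz >= 4.0 and chip.volts_at_max <= 1.35:
        return "top SKU"
    if chip.max_freq_ghz >= 3.6:
        return "mid SKU"  # includes fast chips demoted for needing too much voltage
    return "low SKU"

lot = [
    Chip(4.1, 1.30),  # fast and efficient -> top SKU
    Chip(4.1, 1.45),  # hits the clock but needs too much voltage -> demoted
    Chip(3.7, 1.30),  # mid SKU
    Chip(3.4, 1.25),  # low SKU
]
print([assign_bin(c) for c in lot])
```

The second chip is the interesting case: it hits the top frequency bin but is demoted anyway because the voltage (and hence power draw) required is too high, exactly the TDP demotion described above.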

 

Now, if you are going to try to allude to chip size having something to do with clock speed (before we even go there), that is incorrect, because chip size by itself does nothing to clock speed. However, it does impact chip power usage, so at times, yeah, clocks have to be dialed down if the TDP gets pushed too high. That rarely happens; most of the time the chip designer already knows what the TDP is going to be for all their parts before manufacturing.


It's sad to see so much hate ryzing over Ryzen.

It's a very competent processor, and it's really doing well for almost anyone's use.

 

I run a 120Hz screen and my Xeon can feed it all those frames in CS:GO at all times, so people dissing Ryzen's single-thread IPC should educate themselves first about 60/120/144/240Hz screens.

 

You think a person buying a 240Hz screen will buy AMD? Well, they might, but they're throwing almost $1k at a monitor, so they have no qualms buying Intel's latest offering. A better performer, but at what cost?

 

Shhh.

Intel Xeon E5640 4510mhz 1.10v-1.42v (offset) - C states on (◣_◢) 16GB 2x4 1x8 1296mhz CL7 (◣_◢) ASUS P6X58DE (◣_◢) Radeon R9 Fury Sapphire Nitro (◣_◢) 500GB HDD x2 1TB HDD x2 (RAID) Intel 480GB SSD (◣_◢) NZXT S340 (◣_◢) 130hz VS VX2268WM

9 minutes ago, Razor01 said:

 

 

LOL, so you want to talk about binning and yield curves based on bins? Sorry, that isn't what we are talking about. Yield curves are a bell curve; you then bin those chips to get the different variants out of the total lot, and that has nothing to do with base yields. Bins are there to separate base yields into different product lines. Hence, take Ryzen for example: the CCX modules in an R5 will be bins that come in at lower frequencies, while the R7 takes the chips that hit the highest frequencies.

Now let's take it a step further: chips that reach higher frequencies must stay within a certain TDP, and that is also looked at. If the voltage needed for a certain frequency is too high for a given binned chip, that chip is demoted to a lower part even if it can hit the frequency number.

Now, if you are trying to allude to chip size having something to do with clock speed, that is utter BS, because chip size by itself does nothing to clock speed. However, it does impact chip power usage, so at times, yeah, clocks have to be dialed down if the TDP gets pushed too high. That rarely happens; most of the time the chip designer already knows what the TDP is going to be for all their parts before manufacturing.

You generally need more voltage to maintain a large clock mesh because of resistive losses, or you implement multiple clock meshes for different parts, which also drives up power consumption. There are other problems with scaling up die size as well. It's not just a case of "let's max out all the sliders".


41 minutes ago, Razor01 said:

Only if the node can't support manufacturable yields for single large dies to begin with. Look, we have much larger GPUs with higher transistor counts on GloFo 14nm than the current CCX modules of Ryzen, so the CCX-modules-for-yield argument makes no sense, lol. There were other, more pertinent reasons for AMD to make such a design change. You can't look at yields dropping off a cliff because of die size in this context. Vega is what, 2 times the die size of a CCX module? Are we thinking it's getting only 50% of the yields of a CCX module? If a CCX module is getting 100% yields, does that mean Vega is at 50%? That would be unmanufacturable for Vega, and this is a mass-produced product. At 50% yields, with, let's say, around $6k per wafer, they would only be getting around 100 chips per wafer (that's both Vega 56 and 64): $60 per chip + interposer + RAM, so we're looking at a package cost of $250 for AMD. That's too much, because AMD needs to make at least 30% profit when selling these, right? (That's the bare minimum; they're doing that for Polaris. For Vega, I hope they're looking at 50%.)

Had a discussion about Volta and the Titan V where people said nV might just be getting Volta out and taking a loss because of poor yields on a 12nm process, lol, which made no sense. I stated they could roll out a second-tier Quadro at higher prices if that were the case; the Quadro Volta was released a week after my comments. Yeah, now we can say yields of fully functional V100 parts are much higher than 60%; they have to be. And this chip is how much bigger than any current-generation chip? Double the size. But Volta, the V100, is made with redundancy in the parts where most of the errors will fall: the SM units. It has 4 extra SMs that are redundant specifically to increase chip yields. If an error falls in a part of the chip that can't be fixed, like the I/O, they shut that part off, as we saw with the Titan V, where one memory bank was disabled. If it falls in control silicon and isn't fixable, the chip is gone. But we have to look at what takes up most of the die space in a chip. For a GPU it's the SM units; the SMs in Volta are going to take up around 70% of the die (judging from Pascal's die shot; it could be a higher percentage for Volta), with about 15% for control silicon and 15% for the bus. Since 12nm is an offshoot of 16nm with fewer layers, its yields are going to be better than 16nm's for a chip of similar size. nV was anticipating at least 70% yields at Volta's die size, and they got it.

Back to CPUs and yields: after risk production they know what the yields are going to be; actually, even before risk production they have a good idea based on target die size and node.

Nodes are getting harder and harder, but you're looking too much at current tech and missing my point about the R&D side of the equation for AMD. Chiplet approaches are coming because yields are going to be more and more trouble with each passing generation, while we're also reaching the point where certain parts of SoCs really don't need to be shrunk to a new node. For future designs, they'd be in the R&D phase of "what can we get away with splitting off from the central SoC?".

 

While GloFo's 14nm node appears to be one of the best-yielding nodes of the last decade, AMD can't assume that will always be the case. AMD, as a fabless manufacturer of both CPUs and GPUs while not dominant in either market, takes on massive amounts of risk from the nodes themselves. Smaller designs drastically reduce their exposure to another bad node. (Where's GloFo's 20nm? :) ) Further, the current Epyc approach, which yields much better than a monolithic die, does use extra die space as a result. That means there is currently a trade-off of yield for die space, which is better at the front end of a node but worse as time goes on and yields improve. AMD has an interest in removing as much of that overlap as possible.

 

As for Nvidia, I believe the Titan V was yielding 2 dies per wafer during the early stages in 2017? Their customized node at TSMC has improved a lot, but it still took a while.


Just now, KarathKasun said:

You generally need more voltage to maintain a large clock mesh because of resistive losses, or you implement multiple clock meshes for different parts which also drives up power consumption.  There are other problems with scaling up die size as well.  Its not just a case of "lets max out all the sliders".

 

Today's chips don't use the same frequency across the entire silicon anyway. That is all looked into in design; these designers aren't monkeys, they know what they are doing.


Just now, Taf the Ghost said:

Nodes are getting harder and harder, but you're looking too much at current tech and missing my point about the R&D side of the equation for AMD. Chiplet approaches are coming because yields are going to be more and more trouble with each passing generation, while we're also reaching the point where certain parts of SoCs really don't need to be shrunk to a new node. For future designs, they'd be in the R&D phase of "what can we get away with splitting off from the central SoC?".

 

While GloFo's 14nm node appears to be one of the best-yielding nodes of the last decade, AMD can't assume that will always be the case. AMD, as a fabless manufacturer of both CPUs and GPUs while not dominant in either market, takes on massive amounts of risk from the nodes themselves. Smaller designs drastically reduce their exposure to another bad node. (Where's GloFo's 20nm? :) ) Further, the current Epyc approach, which yields much better than a monolithic die, does use extra die space as a result. That means there is currently a trade-off of yield for die space, which is better at the front end of a node but worse as time goes on and yields improve. AMD has an interest in removing as much of that overlap as possible.

 

As for Nvidia, I believe the Titan V was yielding 2 dies per wafer during the early stages in 2017? Their customized node at TSMC has improved a lot, but it still took a while.

I'm not going to argue with you when I'm getting info straight from people who make chips, man. If you want to believe crazy-theory YouTubers, that's up to you, but that is not how things work in the real world ;)

 

On Volta, check Tom's Hardware; they did an interview with the lead Tesla guy at nV, and they were making many more than 2 chips per wafer at the start ;). What Jensen stated on stage was a fully working all-SM chip, and 4 of the SMs were never planned to be used in the full product.

