CPU binning question

So as I understand it, Intel (and I'm sure other chip makers as well) essentially makes "one" chip. That is, all socket 1150 chips come from the same silicon wafers; the ones near the outermost edge are inherently broken because they aren't even fully formed (circles and squares don't really mix). The dies that perform the best become i7s, while the middle of the pack is turned into i5s by turning off cache, disabling certain features, deactivating cores, and so on. If this is indeed true, would anyone know how to go about figuring out how Intel bins them, and whether that is determined by statistics as well as market demand?

 

I have a class project to do for a statistics class I'm in. It doesn't really need to be statistical in nature, but if this is indeed how Intel makes chips, I am sure statistics is being employed, and I'm a bit curious to see if I can figure it out to some small extent. If not, I'll just talk about market demand and so on; not too bad.

 

My main questions: is this actually how it happens, and if so, any idea where I should be looking to find documentation from Intel or others that discusses this process?

 

This is a pretty interesting CPU question; hopefully the LTT community can come through!

There is no way to determine exactly how a given chip was binned. The closest you can get is large-scale sampling, or buying a known golden chip from someone else.

You could instead try to express it as a probability: silicon quality generally varies with position on the wafer, and the further a die is from the center, the worse it tends to be.
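
If you want to play with that for the project, here's a quick Monte Carlo sketch of the idea. All the numbers (die size, defect rates, band cut-offs) are made up just to show the shape of the model; real defect densities are proprietary.

import math
import random
# Toy model: square dies on a 300 mm wafer, defect probability rising toward the edge.
# Every number here (die size, defect rates, band cut-offs) is invented for illustration.
WAFER_RADIUS_MM = 150.0
DIE_SIZE_MM = 15.0
BASE_DEFECT_PROB = 0.05   # chance a die near the centre is defective
EDGE_DEFECT_PROB = 0.40   # chance a die at the very edge is defective
random.seed(42)
bands = {"inner": [0, 0], "middle": [0, 0], "outer": [0, 0]}   # [good, defective]
steps = int(WAFER_RADIUS_MM // DIE_SIZE_MM)
for i in range(-steps, steps + 1):
    for j in range(-steps, steps + 1):
        x, y = i * DIE_SIZE_MM, j * DIE_SIZE_MM
        r = math.hypot(x, y)
        if r + DIE_SIZE_MM > WAFER_RADIUS_MM:
            continue   # die hangs off the wafer (circles and squares don't mix)
        p_defect = BASE_DEFECT_PROB + (EDGE_DEFECT_PROB - BASE_DEFECT_PROB) * (r / WAFER_RADIUS_MM)
        band = "inner" if r < 50 else "middle" if r < 100 else "outer"
        bands[band][1 if random.random() < p_defect else 0] += 1
for band, (good, bad) in bands.items():
    total = good + bad
    print(f"{band:6s}: {good}/{total} dies good ({100 * good / total:.0f}%)")

Even a toy model like that gives you counts per region that you could run a proportion test or chi-square on.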

AMD's 8-core parts are all the same die; I'm not sure whether the i7, i5, and E3 lot are physically different. Sometimes chips are binned based on demand; I know AMD at least has done this when demand for a lower SKU was higher and they soft-disabled cores.

Yeah, that is my current question. I guess even if Intel isn't binning like this, AMD is, right? Like you said, all the 8-core parts are the same. Even the old Phenom X4s and X3s were, since some motherboards could turn an X3 back into an X4, so the fourth core is obviously there, just turned off.

Yes, lower-binned chips are put into the lower-clocked tier, and defective cores are now lasered off (I believe) on AMD's side.

Intel might bin i5s into i3s in a similar way, and it still bins frequencies across chips the same way AMD does.

I think that the extreme editions are from the core 

Intel, AMD, NVIDIA... every microprocessor company does this. Modern chips have billions of transistors; there is no way Intel could design 80 different CPUs every single year.

 

Basically, yes, all chips are made as i7s. Depending on their performance and capabilities, Intel chooses a clock speed and voltage to set them all to as their default, balancing performance against yields. The higher the speed they choose, the higher the performance, but fewer chips will be able to achieve it, and with that scarcity comes a higher price, since they would have to go through many more samples to find chips of that quality. The lower the speed they choose, the greater the percentage of chips that can hit it, so they don't have to sell them for as much, but of course they will be lower performing. So they balance performance and yields as best they can. The CPUs that don't make the cut are the ones that become all the other SKUs. As you can see, there are a ton of different i5 chips that are basically all the same, except one is 2.8GHz, the next 3.0GHz, the next 3.1GHz, the next 3.3GHz, etc. Those ones couldn't achieve 3.4GHz at the voltage Intel chose for its i5-4670 spec.
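
To put rough numbers on that trade-off, you could model each chip's maximum stable clock as a random draw and watch how the split moves as the spec changes. The distribution and the cutoffs below are invented, not anything from Intel:

import random
random.seed(0)
# Hypothetical: each chip's max stable clock (GHz) at a fixed voltage is normally
# distributed. The mean, spread, and candidate specs are all made-up numbers.
chips = [random.gauss(3.6, 0.25) for _ in range(100_000)]
for spec in (3.4, 3.6, 3.8, 4.0):
    share = sum(c >= spec for c in chips) / len(chips)
    print(f"spec {spec:.1f} GHz: {share:6.1%} of chips make the cut")

Push the spec up and the share that qualifies collapses, which is exactly the scarcity/price effect described above.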

 

The classing is also based on defects: essentially, defective i7s (where a core or two doesn't function properly) are sold off at a discount as i5s and i3s. That's where all the odd SKUs come from, like the one random i5 with no integrated graphics, and things like that.
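
The usual way to reason about the defect side is a Poisson yield model, where the chance a die has zero defects is exp(-D*A) for defect density D and die area A. A minimal sketch with invented numbers (and the simplification that every defect lands inside one of the cores):

import math
# Poisson yield model: P(zero defects on a die) = exp(-D * A).
# D and A below are illustrative guesses, not real process data.
D = 0.2                                  # defects per cm^2
A = 1.6                                  # die area in cm^2 (roughly quad-core-class)
CORES = 4
p_die_clean = math.exp(-D * A)           # fully working die -> top-bin candidate
p_core_clean = math.exp(-D * A / CORES)  # per-core survival, assuming defects hit core area
p_one_bad_core = CORES * (1 - p_core_clean) * p_core_clean ** (CORES - 1)
print(f"fully clean dies:     {p_die_clean:.1%}")
print(f"exactly one bad core: {p_one_bad_core:.1%}  (salvageable as a lower SKU)")

Not how Intel actually models it, obviously, but it shows why a decent fraction of dies are worth salvaging instead of scrapping.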

 

If the binning process didn't take place, there would be nothing but expensive i7 chips, and those chips would be even more expensive, because every chip that didn't make the cut would be tossed. If only 1 out of 25 were capable of the i7 spec, each of those chips would have to be priced to pay not only for itself but for the other 24 chips that didn't make it. With binning, Intel can still sell most of those other 24 chips at a discounted price, helping to pay for the silicon so the i7 prices don't have to carry the full weight, while also filling the budget segments of the market. Everybody wins.
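
As a back-of-the-envelope version of that argument, here is the break-even price the top bin would have to carry with and without selling the down-binned dies. The wafer cost, yields, and salvage price are pure guesses:

# Toy economics: one wafer yields 100 dies, and only some hit the top spec.
# Every number here is invented purely for illustration.
WAFER_COST = 10_000.0
DIES_PER_WAFER = 100
TOP_BIN_FRACTION = 0.20     # dies good enough for the "i7" spec
SALVAGE_FRACTION = 0.60     # dies sellable as lower SKUs
SALVAGE_PRICE = 100.0       # average price of a down-binned chip
top_dies = DIES_PER_WAFER * TOP_BIN_FRACTION
salvage_revenue = DIES_PER_WAFER * SALVAGE_FRACTION * SALVAGE_PRICE
# Without binning, the top-bin chips must pay for the whole wafer by themselves;
# with binning, the lower bins chip in and the top bin only covers the remainder.
price_no_binning = WAFER_COST / top_dies
price_with_binning = (WAFER_COST - salvage_revenue) / top_dies
print(f"break-even top-bin price without binning: ${price_no_binning:,.0f}")
print(f"break-even top-bin price with binning:    ${price_with_binning:,.0f}")

Same wafer, same yields; the only thing that changes is whether the dies that missed the top spec bring in anything at all.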

 

Unfortunately, Intel doesn't release information on the results of its die fabrication: how many chips out of each batch are capable of this or that, how many are defective, how many become i5s and i3s, or how many were deliberately cut down to satisfy demand. I highly doubt the ratio of chips that come out of the binning process as i7s versus i5s happens to exactly match how many i7s people buy compared to how many i5s.

That's very interesting, but what about Xeon processors? They are geared more towards professional, workstation, and server use, so they need to handle 24/7 usage and are expected to work perfectly 100% of the time, right? So are they made separately, or do they come from the same i7 dies?

Intel does two main designs each year for the desktop market: the LGA 1150 design and the LGA 2011 design. There are Xeons for both platforms, and they also come out of the same process as the consumer SKUs. The main difference is pretty much ECC memory support (which is simply disabled on chips that end up as consumer models); there's nothing really special about them that makes them good for 24/7 use, and CPUs in general don't really fail from "wear" or overuse. But they are designed to work in tandem with other server-grade hardware, such as ECC memory and motherboards, server power supplies, and reliable, redundant storage, to ensure 100% uptime, redundancy, and error protection.

Alright got it. Thanks for the clarification.

Dude, thanks! This is exactly what I thought, especially everything in your previous post, but I never really had anyone back me up word for word, and you did a pretty sweet job. I'm a bit sad Intel doesn't release any of the numbers, as I was going to try to focus on that a bit, but honestly, it's not like I was going to do hours' worth of statistical analysis on it anyway...

 

But thanks for the great info! I will definitely be using a lot of that for my paper.

 

Also, to answer chirag.rh about the server stuff: I would assume the Xeons are just the "better" i7s, for instance. I assume (I might be wrong) that they will run at the nominal Xeon speed but do it at a lower voltage, thus lasting longer with less strain on the chip? Might not be right; this is probably easy to find out though, just look at what voltages Xeons run at. I guess I will have to look into that and put it in my paper as well :).

What I can't understand is how come there are 4770Ks that can't go beyond 4.2GHz but there are 4670Ks that can go up to 4.8GHz. Shouldn't those i5s be i7s?

In some cases the internal thermal interface material is poorly applied; other times it's just the silicon lottery.

Intel doesn't guarantee overclocking headroom on a typical retail chip. As long as the CPU reaches its rated parameters and is stable with those parameters met, it can be passed at that level. The limiting parameter for i5 vs. i7 is usually Hyper-Threading and the stress it generates. This is where the silicon lottery comes into play: you can get really good chips or really horrible chips for overclocking.
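
Here's a purely hypothetical sketch of that: give every die a random maximum clock, knock a chip-specific amount off when Hyper-Threading is loading it, and bin on the HT-on number. The spec and both distributions are invented; the point is just that some dies which fail the HT test can still clock higher with HT off than weaker dies that passed:

import random
random.seed(1)
I7_SPEC_GHZ = 4.0  # made-up qualification clock with Hyper-Threading enabled
i5_bin, i7_bin = [], []
for _ in range(50_000):
    max_clock_ht_off = random.gauss(4.5, 0.25)   # the silicon lottery
    ht_penalty = abs(random.gauss(0.3, 0.2))     # chip-specific headroom lost under HT load
    max_clock_ht_on = max_clock_ht_off - ht_penalty
    (i7_bin if max_clock_ht_on >= I7_SPEC_GHZ else i5_bin).append(max_clock_ht_off)
print(f"best chip that failed the HT test ('i5'):  {max(i5_bin):.2f} GHz with HT off")
print(f"worst chip that passed the HT test ('i7'): {min(i7_bin):.2f} GHz with HT off")

In a toy model like this, the 4670K-beats-4770K situation falls straight out of binning on the HT-on number rather than on raw clock headroom.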

Back in AMD's Athlon days, any quad-core Athlon that failed on the third or fourth core had those cores disabled and was sold as an Athlon dual core. That's why some people with dual-core Athlons are lucky enough to actually unlock the other two cores.

Isn't the i7 die size bigger than the i5's and i3's (and the Pentium's and Celeron's)? I saw that on some website.

I believe it's the same with GPUs. The ASIC quality score reflects the quality of the chip, and it can be checked with GPU-Z.

Some GPUs can unlock more CUDA cores, etc.

Not that I know of, except for the LGA 2011 i7 chips.  It's possible though that some of the lower end chips like the i3s and pentiums are scaled up from their mobile design and adapted for LGA 1150 or something, I haven't looked into it.  But, I don't think so.

What I can't understand is how come there are 4770Ks that can't go beyond 4.2GHz but there are 4670Ks that can go up to 4.8GHz. Shouldn't those i5s be i7s?

If you have an i7 and turn HT off, you can usually get higher clocks, which is related to what you're asking about. A particular chip might run great at the i5 spec, but for whatever reason it might have had issues with HT turned on, so it can't be an i7... It might be a good chip that just had one issue keeping it from being i7-good.

Not that I know of, except for the LGA 2011 i7 chips.  It's possible though that some of the lower end chips like the i3s and pentiums are scaled up from their mobile design and adapted for LGA 1150 or something, I haven't looked into it.  But, I don't think so.

The Ivy Bridge i5s are 133 mm² or 160 mm², but the i7 (socket H, Ivy Bridge) is only 160 mm². The i3 sizes aren't stated.

Also, to answer chirag.rh about the server stuff: I would assume the Xeons are just the "better" i7s, for instance. I assume (I might be wrong) that they will run at the nominal Xeon speed but do it at a lower voltage, thus lasting longer with less strain on the chip? Might not be right; this is probably easy to find out though, just look at what voltages Xeons run at. I guess I will have to look into that and put it in my paper as well :).

Cool, makes sense I guess. Thanks for the clarification!
