
Core i3 is now a Core i7: Intel increases core counts

NumLock21
On 10/12/2019 at 3:44 PM, Opencircuit74 said:

*Looks at my i7-6700k with a measly 4 cores and 8 threads*  "You got this buddy, I don't have the budget to replace you."

I have two 6700K builds and I feel the same way. 

"You're my dearest friend & my love. You lit my path through darkness & I'll stand with you...to whatever end." -Leliana (DAO).


12 minutes ago, Anomnomnomaly said:

Why add HT? Because AMD has HT on ALL CPUs, and all of theirs are unlocked too. Intel has no real choice but to do this to stay competitive, and they also need to seriously consider dropping the xxxxK varieties.

No, yeah, no, I get that. What I meant was that I was under the impression the low-priced SKUs didn't have HT due to some cost-saving measures on their end. But from the looks of it, only the i5 lacks Hyper-Threading, if I'm understanding correctly? Which just sounds ludicrous.

Spoiler

CPU: Intel i7 6850K

GPU: nVidia GTX 1080Ti (ZoTaC AMP! Extreme)

Motherboard: Gigabyte X99-UltraGaming

RAM: 16GB (2x 8GB) 3000MHz EVGA SuperSC DDR4

Case: RaidMax Delta I

PSU: ThermalTake DPS-G 750W 80+ Gold

Monitor: Samsung 32" UJ590 UHD

Keyboard: Corsair K70

Mouse: Corsair Scimitar

Audio: Logitech Z200 (desktop); Roland RH-300 (headphones)

 


On 10/12/2019 at 3:13 PM, The1Dickens said:

I feel like Intel is just losing control of their own product lines. If the i3 has HT, does that mean the i5 will? Why add HT to all the CPUs suddenly, when just last generation they removed it from all but the new i9 series? What happens to the 4c i5 when you can get a 4c i3, if both have HT? They spent the last few generations flooding the market with products for the sole purpose of charging obscene prices, and now their own product lines are likely going to cannibalize each other.

You do realize the i3 has had HT forever, right? The i5 is the one with HT cut from the otherwise-equal i7.

 

https://ark.intel.com/content/www/us/en/ark/products/97455/intel-core-i3-7100-processor-3m-cache-3-90-ghz.html

 

Only in the 9th generation did Intel, for some reason, decide to introduce the i9 and bump everything around.

 

What is likely happening is that the "i3" is being upgraded to what is otherwise the baseline enterprise configuration. 

 

https://www.intel.ca/content/www/ca/en/computer-upgrades/pc-upgrades/sipp-intel-stable-image-platform-program.html

Note there are no i3's on that page.

 

The stable platform more or less just means there won't be any changes (CPU configuration, video, network, etc.) for 12 months, so businesses can bulk-order machines and not get 20 slightly different versions. As it is (and I'm not complaining that much), every Dell ordered by the office ends up coming in two configurations: one sometimes comes with the fingerprint and smartcard features, the other lacks them. On previous models, sometimes it's the camera that's missing.

 

Anyway, my guess is that if the data is true, Intel is probably axing the feature differentiation between the i3/i5/i7 and instead binning the same chip for all segments. So the thing to look for will be what the i3 is missing that the i5 and i7 have. Maybe the i9 can run all cores with HT at maximum speed while the i3 can only run one core at maximum speed, or maybe the iGPU is better on the i3s (where it's more likely to be used).


20 minutes ago, Kisai said:

--Lots of info--

I was not aware, no. I picked that up from the previous comments, though.

 

The last bit is interesting, though. So, if I'm understanding correctly, you're saying that your theory is a 4c8t@3.3GHz will be the same at either i3, i5 or i7 levels, but the differential will be i3 lacks some feature, i5 has a low-tier feature, and i7 will have the low-tier, plus a high-tier feature, and i9 will just be all the features, all the cores, all the hyper threads (but only support 1 socket)?


Dang, Intel Core i7-10700(K): must-buy.

I hoped the Ryzen 3600X would be 8-core earlier this year, and that was a bit of a disappointment. Hoping Intel doesn't go the same way.

I edit my posts more often than not


33 minutes ago, The1Dickens said:

I was not aware, no. I picked that up from the previous comments, though.

 

The last bit is interesting, though. So, if I'm understanding correctly, you're saying that your theory is a 4c8t@3.3GHz will be the same at either i3, i5 or i7 levels, but the differential will be i3 lacks some feature, i5 has a low-tier feature, and i7 will have the low-tier, plus a high-tier feature, and i9 will just be all the features, all the cores, all the hyper threads (but only support 1 socket)?

I think Intel will just bin the same die for each tier: the i9s will be the premium product, the i7 the standard desktop configuration, the i5 the SFF configuration, and the i3 the ITX/NUC configuration. They will bin dies with bad GPUs as i9s and i7s, since the iGPU is rarely used there, and bin CPU dies that don't run at full speed as i5s and i3s.

At least that's how I speculate this will go. Until recently, the i3 has always been "half an i7," except in the mobile space, where the i7s were sometimes dual-cores too.

 


On 10/13/2019 at 12:19 AM, mr moose said:

if it ain't broke, don't fix it.  Even though they have made what appears to be very little revision on the surface (for more than 4 years), they're still releasing CPUs that are incrementally better and keeping up.

 

 

I think the point is that the motherboards could have been the same (or at least inter-compatible) for more than 4 years.


Just now, thechinchinsong said:

I think the point is that the motherboards could have been the same (or at least inter-compatible) for more than 4 years.

Except where power delivery is concerned.  Not all motherboards have the same size and quality of power delivery, so allowing CPUs to work on older boards (which is quite possible) would require either gimping them or going over every board manufactured and certifying it as good enough.

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


3 hours ago, Kisai said:

I think Intel will just bin the same die for each tier: the i9s will be the premium product, the i7 the standard desktop configuration, the i5 the SFF configuration, and the i3 the ITX/NUC configuration. They will bin dies with bad GPUs as i9s and i7s, since the iGPU is rarely used there, and bin CPU dies that don't run at full speed as i5s and i3s.

At least that's how I speculate this will go. Until recently, the i3 has always been "half an i7," except in the mobile space, where the i7s were sometimes dual-cores too.

 

Ah, got it. Thanks, that makes a lot of sense. That would be a good direction.


5 hours ago, thechinchinsong said:

I think the point is that the motherboards could have been the same (or at least inter-compatible) for more than 4 years.

It hasn't been possible to do this ever since the memory controller moved onto the CPU die. Now, theoretically, Intel could have made a single memory-controller die that gets swapped out with every CPU upgrade, allowing faster or even different memory to be used. But that turns it back into a northbridge.

 

The reason all the older CPUs with northbridge chips could support multiple generations of CPUs was that fundamentally nothing changed between the northbridge and the rest of the board. But if you recall, you typically had to futz with clock ratios to get it to even work right, so in most cases it ultimately didn't matter: 99% of motherboards only ever used the CPU they were installed with, and if it weren't for the HEDT/WS and HTPC/ITX/NUC segments, we'd likely have all CPUs soldered to the motherboard at a single speed. Heck, the only reason the south bridge (PCH) even still exists is that SATA and USB were not originally on the PCIe bus. Now that Thunderbolt (USB4) and NVMe drives are on the PCIe bus, they can literally drop the entire PCH, put TB ports on the motherboard, and drop all the SATA ports. If a motherboard manufacturer still wants USB 1-3 or SATA AHCI ports, those would move to a separate PCIe-connected chip, like the typical ASMedia chips presently. We've gone down this road before with the Super I/O (serial, parallel, and IDE/ATA).

 

So when that happens (and mark my words, it's likely going to happen), Intel will put more lanes on the CPU and leave motherboard manufacturers to decide whether they need the PCH. This has been happening since the 5th-generation Core chips. I believe all the PCH actually does now is provide the iGPU's I/O ports and arbitrate the USB and SATA ports.


19 hours ago, The1Dickens said:

So then they have no reason not to have Hyper-Threading on the i5? Maybe it's just me, but I'm having a hard time thinking of a person who would buy an i5 over a lower-priced i3 if this turns out to be true. Or who wouldn't spend the extra few bucks on the i7 and get Hyper-Threading on all the cores they're paying for. Their pricing tree must look incredibly narrow, considering their top i9 is under $1K this generation.

Well, the i5 will always have the physical-core advantage, which some people can utilise more (at least in theory).

The argument has always been that the i5 is awkward: you can step down to the i3 and still have the same number of "cores," or just spend a little extra on the i7. If the core-count predictions are right, this just gets exaggerated.

You have to think from Intel's perspective: every CPU they make is designed to be an i7, but because of imperfections some end up as i3s, i5s, or even lower SKUs.


6 minutes ago, ImNotThere said:

You have to think from Intel's perspective: every CPU they make is designed to be an i7, but because of imperfections some end up as i3s, i5s, or even lower SKUs.

They have different dies for the different core counts. For example, the 8350K with 4 cores is 126 mm², while the 8086K with 6 cores is 149 mm² (per WikiChip).

AMD's approach is a bit different, in that they only make 8-core CCDs now, with some cut down to 6 cores. Maybe with the large L3 cache, the area savings from a dedicated 6-core CCD aren't worth it in their view.
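Back-of-the-envelope, those die sizes can be turned into dies-per-wafer numbers. This is a rough sketch using the standard first-order estimate, assuming a 300 mm wafer and ignoring scribe lines and defects:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    # First-order estimate: gross dies by area, minus an edge-loss term
    # for the partial dies around the wafer's circumference.
    r = wafer_diameter_mm / 2.0
    gross = math.pi * r * r / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
    return int(gross - edge_loss)

quad = dies_per_wafer(126.0)  # ~8350K-sized die (4 cores)
hexa = dies_per_wafer(149.0)  # ~8086K-sized die (6 cores)
print(quad, hexa, round(quad / hexa, 2))  # → 501 419 1.2
```

Which lines up with the ~18% figure from area alone, plus a bit extra from edge effects.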

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


33 minutes ago, porina said:

They have different dies for the different core counts. For example, the 8350K with 4 cores is 126 mm², while the 8086K with 6 cores is 149 mm² (per WikiChip).

AMD's approach is a bit different, in that they only make 8-core CCDs now, with some cut down to 6 cores. Maybe with the large L3 cache, the area savings from a dedicated 6-core CCD aren't worth it in their view.

Using a whole CCX actually helps with thermals when you have disabled logic inside it, as opposed to a chip cut to exactly the size it needs to be. An 8-core CCX with only 6 functioning cores will run cooler, since the larger chip surface makes more contact with the IHS and thus actually cools better. And it's cheaper for AMD to just churn out identical CCX units and slot them into the model range depending on quality than to make dedicated monolithic chips. Personally, I think AMD's decision to go all-in on the chiplet design has been one of the smartest moves in ages. They can scale it almost infinitely, and it's much cheaper to maintain, manage, and produce.


1 minute ago, RejZoR said:

Using a whole CCX actually helps with thermals when you have disabled logic inside it, as opposed to a chip cut to exactly the size it needs to be. An 8-core CCX with only 6 functioning cores will run cooler, since the larger chip surface makes more contact with the IHS and thus actually cools better. And it's cheaper for AMD to just churn out identical CCX units and slot them into the model range depending on quality than to make dedicated monolithic chips. Personally, I think AMD's decision to go all-in on the chiplet design has been one of the smartest moves in ages. They can scale it almost infinitely, and it's much cheaper to maintain, manage, and produce.

From a management point of view, a single CCD design is a lot easier to handle. You don't need to guess how many 6 vs 8 cores you have to make, and can allocate between them accordingly. Actually, looking at how the 8 cores are laid out, AMD would have trouble making an efficient dedicated 6 core version because of the CCX structure. At the least it would have to be very different from the current 8 core version.

 

Intel's approach seems to work for them. By making each die only as big as it needs to be, they can make about 18% more quad-cores than 6-cores per wafer.


3 minutes ago, porina said:

From a management point of view, a single CCD design is a lot easier to handle. You don't need to guess how many 6 vs 8 cores you have to make, and can allocate between them accordingly. Actually, looking at how the 8 cores are laid out, AMD would have trouble making an efficient dedicated 6 core version because of the CCX structure. At the least it would have to be very different from the current 8 core version.

 

Intel's approach seems to work for them. By making each die only as big as it needs to be, they can make about 18% more quad-cores than 6-cores per wafer.

Assuming they get 100% yield on the wafer, which is never the case. Plus, the foundry needs to run different mask sets for the chips in production. AMD just churns out all the same ones and only later "surgically" adapts the ones that don't qualify for top-of-the-line use. That raises yields dramatically and drives costs down.


And it only took 10 years!

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


39 minutes ago, RejZoR said:

Assuming they get 100% yield on the wafer, which is never the case. Plus, the foundry needs to run different mask sets for the chips in production. AMD just churns out all the same ones and only later "surgically" adapts the ones that don't qualify for top-of-the-line use. That raises yields dramatically and drives costs down.

That it isn't 100% doesn't matter; assuming the same process, yield would be similar between them, and would swing further in favour of the smaller chip anyway. You still get around 18% (plus a bit more from the smaller die) more quad-cores than 6-cores per wafer.

I wonder how many 6-core Ryzens (or multiples thereof) might actually be fully working 8-cores, cut down just to meet demand?
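To put rough numbers on the yield point, here is a sketch using the simple Poisson defect model. The 0.2 defects/cm² density is an assumed illustrative value, not a published figure for any Intel process, and the candidate die counts come from a standard first-order dies-per-wafer estimate for a 300 mm wafer:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    # Poisson yield model: fraction of good dies = exp(-D * A),
    # with die area converted from mm^2 to cm^2.
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

D = 0.2  # assumed defect density in defects/cm^2 (illustrative only)
# (area in mm^2, candidate dies per 300 mm wafer from a first-order estimate)
for area, candidates in ((126.0, 501), (149.0, 419)):
    good = int(candidates * poisson_yield(area, D))
    print(f"{area:.0f} mm^2: {good} good dies per wafer")
```

Under these assumptions the smaller die's advantage grows a little beyond the raw area ratio, because a given defect density kills a larger fraction of big dies.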


34 minutes ago, porina said:

That it isn't 100% doesn't matter; assuming the same process, yield would be similar between them, and would swing further in favour of the smaller chip anyway. You still get around 18% (plus a bit more from the smaller die) more quad-cores than 6-cores per wafer.

I wonder how many 6-core Ryzens (or multiples thereof) might actually be fully working 8-cores, cut down just to meet demand?

But you're still churning out exactly the same chips through your entire product stack, from Ryzen 3 up to the most badass Ryzen 9, whereas Intel needs to decide in advance which it will be making. Sure, they can cut theirs down too to a degree, but it's harder with a monolithic design than with AMD's modular one.


20 minutes ago, RejZoR said:

But you're still churning out exactly the same chips through your entire product stack, from Ryzen 3 up to the most badass Ryzen 9, whereas Intel needs to decide in advance which it will be making. Sure, they can cut theirs down too to a degree, but it's harder with a monolithic design than with AMD's modular one.

I acknowledged earlier that AMD's approach leads to manufacturing and planning simplicity, but it is a tradeoff against silicon area efficiency. AMD's approach works best for very high core counts, and is forward looking. But this doesn't negate the need for lower core count parts, and Intel's approach is more silicon efficient within the limits of the process they're stuck on.


10 minutes ago, porina said:

But this doesn't negate the need for lower core count parts, and Intel's approach is more silicon efficient within the limits of the process they're stuck on.

Indeed. The lowest-margin parts for AMD are the 200GE or the 1200, where half or more of the chip is cut off.

Wouldn't be surprising if they started making two different dies once single chiplets get large enough.

Why make an 8-core, or a theoretical 12/16-core, chiplet when you can make two different ones?

It makes lower SKUs more profitable.


1 hour ago, GoldenLag said:

Indeed. The lowest-margin parts for AMD are the 200GE or the 1200, where half or more of the chip is cut off.

Wouldn't be surprising if they started making two different dies once single chiplets get large enough.

Why make an 8-core, or a theoretical 12/16-core, chiplet when you can make two different ones?

It makes lower SKUs more profitable.

Or you keep the low end at a high enough level to still use those 16-core chiplets. I know that's not what Intel would do; they'd keep selling HT-powered dual cores for two more decades if they could, but AMD would keep pushing. It's only natural that Ryzen 3 will eventually become an 8-core, then a 12-core, then a 16-core in time. If timed well, it'll progress along with the larger chiplets.


21 hours ago, mr moose said:

Except where power delivery is concerned.  Not all motherboards have the same size and quality of power delivery, so allowing CPUs to work on older boards (which is quite possible) would require either gimping them or going over every board manufactured and certifying it as good enough.

 

 

I don't really agree with the power-delivery argument except for the 8th- and 9th-gen i7/i9 processors. For the 8700K and 9900K I can see there being power-delivery problems, but for all the 4c/8t processors since 6th-gen Core, and even the 6c/6t 8th/9th-gen chips, it wouldn't be a problem. TDP levels within Intel's lineup have been very clear and consistent for years now, and if a motherboard can handle a 95W 6700K, then it should handle a 95W 7700K and every processor below that TDP level. Intel also rates the 8700K and 9900K at 95W TDP, but we all know those processors demand better power delivery than 7th and 6th gen.


15 hours ago, Kisai said:

It hasn't been possible to do this ever since the memory controller moved onto the CPU die. Now, theoretically, Intel could have made a single memory-controller die that gets swapped out with every CPU upgrade, allowing faster or even different memory to be used. But that turns it back into a northbridge.

 

The reason all the older CPUs with northbridge chips could support multiple generations of CPUs was that fundamentally nothing changed between the northbridge and the rest of the board. But if you recall, you typically had to futz with clock ratios to get it to even work right, so in most cases it ultimately didn't matter: 99% of motherboards only ever used the CPU they were installed with, and if it weren't for the HEDT/WS and HTPC/ITX/NUC segments, we'd likely have all CPUs soldered to the motherboard at a single speed. Heck, the only reason the south bridge (PCH) even still exists is that SATA and USB were not originally on the PCIe bus. Now that Thunderbolt (USB4) and NVMe drives are on the PCIe bus, they can literally drop the entire PCH, put TB ports on the motherboard, and drop all the SATA ports. If a motherboard manufacturer still wants USB 1-3 or SATA AHCI ports, those would move to a separate PCIe-connected chip, like the typical ASMedia chips presently. We've gone down this road before with the Super I/O (serial, parallel, and IDE/ATA).

 

So when that happens (and mark my words, it's likely going to happen), Intel will put more lanes on the CPU and leave motherboard manufacturers to decide whether they need the PCH. This has been happening since the 5th-generation Core chips. I believe all the PCH actually does now is provide the iGPU's I/O ports and arbitrate the USB and SATA ports.

Sorry, I'm not exactly sure how any of that means it's not possible. There have literally been mods to support 7th- and 8th-gen (not sure about 9th-gen) Intel CPUs on the Z170 chipset. It's clear that Intel could easily have made support official.

 

