Why aren't CPUs without integrated graphics much, much faster than CPUs with integrated graphics?

Integrated graphics takes up a huge chunk of die space on a CPU. For example, on Skylake...

 

[Image: 77a.jpg - a Skylake die shot with the iGPU region highlighted]

 

So why on earth do chips without graphics not have (a) a lot more cache, (b) more cores, (c) a lower TDP, or (d) a much, much lower price (assuming base prices, binning aside, roughly track how many dies fit on one large silicon wafer)?
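For what it's worth, that dies-per-wafer assumption can be sanity-checked with a quick back-of-the-envelope calculation. This sketch uses the standard dies-per-wafer approximation with made-up numbers (a 300 mm wafer, a ~122 mm² die, and a guess that the iGPU is ~40% of it), so treat it as illustration only:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Standard approximation: gross dies on a round wafer,
    with a correction term for partial dies lost at the edge."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

FULL_DIE_MM2 = 122.0   # hypothetical quad-core die, iGPU included
IGPU_FRACTION = 0.4    # hypothetical share of the die taken by the iGPU

with_igpu = dies_per_wafer(300, FULL_DIE_MM2)
without_igpu = dies_per_wafer(300, FULL_DIE_MM2 * (1 - IGPU_FRACTION))

print(f"dies per wafer with iGPU:    {with_igpu}")
print(f"dies per wafer without iGPU: {without_igpu}")
```

Under these invented numbers, dropping the iGPU fits noticeably more dies on each wafer, which is the intuition behind the pricing question.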

 

Also, why the hell does Intel include graphics on the higher-end chips? I know they're really all the same die and just binned, but honestly, you'd think they'd try to increase performance on the i9s by removing the graphics and adding more... useful things. I don't think anyone would object to a better CPU at the cost of integrated graphics, even if it meant buying a cheap $30 GPU.

 

EDIT: I'm aware that the modern extreme chips don't have integrated graphics, but still, who actually buys those things?

To add more cache would make the CPU a lot more expensive. You're not paying for the space on the substrate your microprocessor takes up; you're paying for the guts. The iGPU is just a nice bonus.

I WILL find your ITX build thread, and I WILL recommend the SilverStone Sugo SG13B

 

Primary PC:

i7 8086k (won) - EVGA Z370 Classified K - G.Skill Trident Z RGB - WD SN750 - Jedi Order Titan Xp - Hyper 212 Black (with RGB Riing flair) - EVGA G3 650W - dual booting Windows 10 and Linux - Black and green theme, Razer brainwashed me.

Draws 400 watts under max load, for reference.

 

Linux Proliant ML150 G6:

Dual Xeon X5560 - 24GB ECC DDR3 - GTX 750 TI - old Seagate 1.5TB HDD - dark mode Ubuntu (and Win7, cuz why not)

 

How many watts do I need? · Seasonic Focus thread · PSU misconceptions · protections explained · group reg is bad

1 minute ago, fasauceome said:

To add more cache would make the CPU a lot more expensive. You're not paying for the space on the substrate your microprocessor takes up; you're paying for the guts. The iGPU is just a nice bonus.

How so? Everything on the chip is silicon. It doesn't matter what's put on the silicon, it all gets made in the same way. There's no special handling of cache sections that I know of.

If you're not using the iGPU, it's not producing any heat, so removing it won't lower the TDP. It'll probably produce a negligible amount of heat compared to the CPU cores anyway. You also can't just add more cores, memory, etc. That takes time, R&D and money, especially money. Your CPU will not become any cheaper by doing that; on the contrary.

 

Also, iGPUs are pretty useful for testing and troubleshooting a system without a discrete GPU. One less component to worry about.

Crystal: CPU: i7 7700K | Motherboard: Asus ROG Strix Z270F | RAM: GSkill 16 GB@3200MHz | GPU: Nvidia GTX 1080 Ti FE | Case: Corsair Crystal 570X (black) | PSU: EVGA Supernova G2 1000W | Monitor: Asus VG248QE 24"

Laptop: Dell XPS 13 9370 | CPU: i5 10510U | RAM: 16 GB

Server: CPU: i5 4690k | RAM: 16 GB | Case: Corsair Graphite 760T White | Storage: 19 TB

5 minutes ago, tikker said:

If you're not using the iGPU, it's not producing any heat, so removing it won't lower the TDP. It'll probably produce a negligible amount of heat compared to the CPU cores anyway. You also can't just add more cores, memory, etc. That takes time, R&D and money, especially money. Your CPU will not become any cheaper by doing that; on the contrary.

 

Also, iGPUs are pretty useful for testing and troubleshooting a system without a discrete GPU. One less component to worry about.

I mean, their cores look pretty modular (they're all identical). It doesn't seem like adding another set on the other side of that strip of cache would be very difficult. In fact, that's essentially what they did for 8th gen... and 9th gen...

8 minutes ago, corrado33 said:

How so? Everything on the chip is silicon. It doesn't matter what's put on the silicon, it all gets made in the same way. There's no special handling of cache sections that I know of.

More cache means

  1. More infrastructure to power it.
  2. More infrastructure to allow it to communicate with the rest of the chip.
  3. More stuff that can break.

All of this complicates the design of the chip. Even the physical distance between the cache and the cores matters to a certain extent and impacts speed.

That's a weird question - there ARE GPU-less processors with more cores and cache.

 

If you're asking why there are identical SKUs with and without an iGPU at the low end, that's because increasing the specs of some of them would require developing a different chip, as well as separating the chips with an iGPU from those without one at the wafer level. Instead, Intel prefers printing an iGPU on all chips and burning/disabling it on the ones that will become Xeons.

4 minutes ago, corrado33 said:

I mean, their cores look pretty modular (they're all identical). It doesn't seem like adding another set on the other side of that strip of cache would be very difficult. In fact, that's essentially what they did for 8th gen... and 9th gen...

It's not that easy. The cores themselves are all the same, but getting them to work together properly is another matter entirely. Furthermore, the more cores you add to a single chip, the more you risk having to throw away the whole thing because of a defect. This is the reason AMD can deliver a lot more cores at a significantly lower price, especially on the high end: they use 8-core modules, and if one of them breaks, they only need to throw away (or downgrade) that module and not the whole CPU.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*

What is scaling and how does it work? · Asus PB287Q unboxing! · Console alternatives :D · Watch Netflix with Kodi on Arch Linux · Sharing folders over the internet using SSH · Beginner's Guide To LTT (by iamdarkyoshi)

Sauron's™ Product Scores:

Spoiler

Just a list of my personal scores for some products, in no particular order, with brief comments. I just got the idea to do them so they aren't many for now :)

Don't take these as complete reviews or final truths - they are just my personal impressions on products I may or may not have used, summed up in a couple of sentences and a rough score. All scores take into account the unit's price and time of release, heavily so, therefore don't expect absolute performance to be reflected here.

 

-Lenovo Thinkpad X220 - [8/10]

Spoiler

A durable and reliable machine that is relatively lightweight, has all the hardware it needs to never feel sluggish and has a great IPS matte screen. Downsides are mostly due to its age, most notably the screen resolution of 1366x768 and usb 2.0 ports.

 

-Apple Macbook (2015) - [Garbage -/10]

Spoiler

From my perspective, this product has no redeeming factors given its price and the competition. It is underpowered, overpriced, impractical due to its single port and is made redundant even by Apple's own iPad pro line.

 

-OnePlus X - [7/10]

Spoiler

A good phone for the price. It does everything I (and most people) need without being sluggish and has no particularly bad flaws. The lack of recent software updates and relatively barebones feature kit (most notably the lack of 5GHz wifi, biometric sensors and backlight for the capacitive buttons) prevent it from being exceptional.

 

-Microsoft Surface Book 2 - [Garbage - -/10]

Spoiler

Overpriced and rushed, offers nothing notable compared to the competition, doesn't come with an adequate charger despite the premium price. Worse than the Macbook for not even offering the small plus sides of having macOS. Buy a Razer Blade if you want high performance in a (relatively) light package.

 

-Intel Core i7 2600/k - [9/10]

Spoiler

Quite possibly Intel's best product launch ever. It had all the bleeding edge features of the time, it came with a very significant performance improvement over its predecessor and it had a soldered heatspreader, allowing for efficient cooling and great overclocking. Even the "locked" version could be overclocked through the multiplier within (quite reasonable) limits.

 

-Apple iPad Pro - [5/10]

Spoiler

A pretty good product, sunk by its price (plus the extra cost of the physical keyboard and the pencil). Buy it if you don't mind the Apple tax and are looking for a very light office machine with an excellent digitizer. Particularly good for rich students. Bad for cheap tinkerers like myself.

 

 

2 minutes ago, Sauron said:

That's a weird question - there ARE gpu-less processors with more cores and cache.

-snip-

Can you give me some examples? Is it basically just Xeon vs. non-Xeon?

5 minutes ago, Sauron said:

It's not that easy. The cores themselves are all the same, but getting them to work together properly is another matter entirely. Furthermore, the more cores you add to a single chip, the more you risk having to throw away the whole thing because of a defect. This is the reason AMD can deliver a lot more cores at a significantly lower price, especially on the high end: they use 8-core modules, and if one of them breaks, they only need to throw away (or downgrade) that module and not the whole CPU.

Question: why doesn't Intel use a modular system, say one that extends in one direction per the number of cores? If you get 8 working cores in a row, you cut that section out to be an 8-core chip; if you only get 2 working cores in a row before one fails, you cut those 2 out and make a lower-end chip. Would that be possible? Is that what you're referencing about AMD?

13 minutes ago, corrado33 said:

Question: why doesn't Intel use a modular system, say one that extends in one direction per the number of cores? If you get 8 working cores in a row, you cut that section out to be an 8-core chip; if you only get 2 working cores in a row before one fails, you cut those 2 out and make a lower-end chip. Would that be possible? Is that what you're referencing about AMD?

That's basically what's happening now, AFAIK. Take a 6-core Coffee Lake chip that would become an i7 with 6C/12T. Hyper-Threading isn't up to par? Downgrade it to an i5 with 6C/6T. One or two faulty cores? Downgrade it to an i3 with 4C/4T. This is all a single die.

Now consider you're making a 32-core CPU. Creating a monolithic processor with 32 cores, i.e. on a single die, will be extremely expensive in the first place. Now let's say one core is broken: you've lost a 32-core CPU. What AMD is doing is combining 4 8-core dies into a 32-core package. This is more cost effective, because 8-core dies are inherently simpler to make and will likely have a higher yield than 32-core dies. If one core breaks, you'll have a 7 core die which can probably be used as a lower core count part of another chip.

Now, you may say that a 32-core die with one broken core can still be sold as a 31- or 30-core CPU, but the point is that you're much more likely to get a malfunctioning or otherwise slightly broken 32-core die than an 8-core die.
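The yield argument above can be sketched with a simple Poisson defect model (the probability that a die of a given area has zero defects). The defect density and per-core area below are invented purely for illustration:

```python
import math

DEFECTS_PER_MM2 = 0.002    # hypothetical defect density
CORE_AREA_MM2 = 10.0       # hypothetical area per core, cache share included

def die_yield(area_mm2: float) -> float:
    """Poisson model: probability a die of this area has zero defects."""
    return math.exp(-DEFECTS_PER_MM2 * area_mm2)

mono = die_yield(32 * CORE_AREA_MM2)    # one monolithic 32-core die
chiplet = die_yield(8 * CORE_AREA_MM2)  # one 8-core chiplet
four_good = chiplet ** 4                # four flawless chiplets for a 32-core package

print(f"monolithic 32-core yield: {mono:.1%}")
print(f"single 8-core chiplet:    {chiplet:.1%}")
print(f"four flawless chiplets:   {four_good:.1%}")
```

Note that getting four flawless chiplets is, under this model, exactly as likely as getting one flawless monolithic die; the real win is what happens on a miss. A defect on the monolithic die jeopardizes the whole 32-core part, while a defect on a chiplet scraps (or downgrades) only one small, cheap die.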

1 minute ago, tikker said:

That's basically what's happening now, AFAIK. Take a 6-core Coffee Lake chip that would become an i7 with 6C/12T. Hyper-Threading isn't up to par? Downgrade it to an i5 with 6C/6T. One or two faulty cores? Downgrade it to an i3 with 4C/4T. This is all a single die.

Now consider you're making a 32-core CPU. Creating a monolithic processor with 32 cores, i.e. on a single die, will be extremely expensive in the first place. Now let's say one core is broken: you've lost a 32-core CPU. What AMD is doing is combining 4 8-core dies into a 32-core package. This is more cost effective, because 8-core dies are inherently simpler to make and will likely have a higher yield than 32-core dies. If one core breaks, you'll have a 7 core die which can probably be used as a 6-core part of another chip.

Now, you may say that a die with a broken core can still be sold as a 30-core CPU or something, but the point is that you're much more likely to get a malfunctioning or otherwise slightly broken 32-core die than an 8-core die.

Does that mean AMD's high-core-count chips are inherently slower than a purpose-built high-core-count chip? I can't imagine a modular 8-core section being as fast when talking to other modular 8-core sections as a single, purpose-built 32-core chip.

 

Are Intel's new massive-core-count chips modular as well?

1 minute ago, corrado33 said:

Are Intel's new massive-core-count chips modular as well?

I don't think so.

 

1 minute ago, corrado33 said:

Does that mean AMD's high-core-count chips are inherently slower than a purpose-built high-core-count chip? I can't imagine a modular 8-core section being as fast when talking to other modular 8-core sections as a single, purpose-built 32-core chip.

I haven't really kept up to date with AMD's design, but I do recall there was some performance impact when, e.g., a multi-core workload with fewer than 8 cores used cores from different dies. Also, their Infinity Fabric links run at your RAM's speed, hence why Zen likes fast memory.

42 minutes ago, corrado33 said:

Can you give me some examples? Is it basically just Xeon vs. non-Xeon?

https://ark.intel.com/products/189126/Intel-Core-i9-9980XE-Extreme-Edition-Processor-24-75M-Cache-up-to-4-50-GHz-

 

Pretty much everything outside the consumer space doesn't have an iGPU.

40 minutes ago, corrado33 said:

Question: why doesn't Intel use a modular system, say one that extends in one direction per the number of cores? If you get 8 working cores in a row, you cut that section out to be an 8-core chip; if you only get 2 working cores in a row before one fails, you cut those 2 out and make a lower-end chip. Would that be possible? Is that what you're referencing about AMD?

That's sort of what AMD does; Intel is most likely working on it but doesn't have the technology yet. You can't cut chips apart like that, but you can make smaller chips and link them together.

26 minutes ago, corrado33 said:

Does that mean AMD's high-core-count chips are inherently slower than a purpose-built high-core-count chip?

Not at all - that would be the case if they didn't have a design that accommodates it, but they do. There are performance differences when you compare AMD chips to Intel core for core, but they don't come from the modular design.
