
Adding more cores doesn't improve performance - Intel exec

parthm

Just leave any modern browser open in the background while gaming, perhaps with a YouTube video playing on a second display, or just for music while hidden. Boom, cores used.


12 hours ago, krakkpott said:

He's being paid by Intel to say pro-Intel stuff... ?

It's just hilarious. Intel literally hired him so they wouldn't be stuck inside their own echo chamber, and behold, he's doing the exact same dumb thing Intel was doing without him: being convinced they're the best there is even though they clearly aren't anymore. Then again, that's the typical problem with big companies. They refuse to accept problems until it's too late, and then everyone is shitting bricks instead of listening to the guy who said, "Hey, maybe we should change our approach" but got thrown out the window a few months ago, because saying things like that means talking badly about people above your pay grade, and that's a no-no. Everyday corporate bullshittery 101.


On 11/3/2019 at 7:26 AM, VegetableStu said:

"our 8 core stuff is on par with their 12-core stuff" <- okay fair if quantifiable

"no one uses more than x cores" <- easily carbon dateable, LOL

No one needs more than 640KB of RAM.


Okay, I know this has been said a million times, but it is HILARIOUS to hear Intel calling eight cores "the sweet spot".

i7 2600k @ 5GHz 1.49v - EVGA GTX 1070 ACX 3.0 - 16GB DDR3 2000MHz Corsair Vengeance

Asus p8z77-v lk - 480GB Samsung 870 EVO w/ W10 LTSC - 2x1TB HDD storage - 240GB SATA SSD w/ W7 - EVGA 650w 80+G G2

3x 1080p 60hz Viewsonic LCDs, 1 glorious Dell CRT running at anywhere from 60hz to 120hz

Model M w/ Soarer's adapter - Logitech G502 - Audio-Technica M20X - Cambridge SoundWorks speakers w/ woofer

 


1 hour ago, comander said:

2010 for 6 core parts. 

 

i7 970 and 980.

 

They actually outperformed the Sandy Bridge CPUs in MT, though SB had ~30% better per-core performance and was cheaper.

We were talking about 8c/16t

Workstation Laptop: Dell Precision 7540, Xeon E-2276M, 32gb DDR4, Quadro T2000 GPU, 4k display

Wife's Rig: ASRock B550m Riptide, Ryzen 5 5600X, Sapphire Nitro+ RX 6700 XT, 16gb (2x8) 3600mhz V-Color Skywalker RAM, ARESGAME AGS 850w PSU, 1tb WD Black SN750, 500gb Crucial m.2, DIYPC MA01-G case

My Rig: ASRock B450m Pro4, Ryzen 5 3600, ARESGAME River 5 CPU cooler, EVGA RTX 2060 KO, 16gb (2x8) 3600mhz TeamGroup T-Force RAM, ARESGAME AGV750w PSU, 1tb WD Black SN750 NVMe Win 10 boot drive, 3tb Hitachi 7200 RPM HDD, Fractal Design Focus G Mini custom painted.  

NVIDIA GeForce RTX 2060 video card benchmark result - AMD Ryzen 5 3600,ASRock B450M Pro4 (3dmark.com)

Daughter 1 Rig: ASrock B450 Pro4, Ryzen 7 1700 @ 4.2ghz all core 1.4vCore, AMD R9 Fury X w/ Swiftech KOMODO waterblock, Custom Loop 2x240mm + 1x120mm radiators in push/pull 16gb (2x8) Patriot Viper CL14 2666mhz RAM, Corsair HX850 PSU, 250gb Samsung 960 EVO NVMe Win 10 boot drive, 500gb Samsung 840 EVO SSD, 512GB TeamGroup MP30 M.2 SATA III SSD, SuperTalent 512gb SATA III SSD, CoolerMaster HAF XM Case. 

https://www.3dmark.com/3dm/37004594?

Daughter 2 Rig: ASUS B350-PRIME ATX, Ryzen 7 1700, Sapphire Nitro+ R9 Fury Tri-X, 16gb (2x8) 3200mhz V-Color Skywalker, ANTEC Earthwatts 750w PSU, MasterLiquid Lite 120 AIO cooler in Push/Pull config as rear exhaust, 250gb Samsung 850 Evo SSD, Patriot Burst 240gb SSD, Cougar MX330-X Case

 


I had 8 threads with the i7 920 back in 2010. I've had 12 threads on the i7 5820K since 2015, and I'm still on the same CPU today. When I upgrade, I'll double the core count or not bother at all. But I'd really prefer to jump to 16 cores, resulting in 32 threads. Doing any less would just be a meme.


What? I thought Intel only benchmarked Windows Media Player? What are games?

 

Of course core count doesn't mean better performance in this scenario. Most games aren't coded to utilize more than 4 cores yet.

 

Unfortunately for Intel, there's more to performance than just games, especially on a workstation or the HEDT platform.

 

Shrout with more bullshit, as per usual since joining Intel. Nothing really to see here.


On 11/3/2019 at 1:53 PM, WereCatf said:

Sure, 8 cores could certainly be the sweet spot now, but take, say, 4-5 years from now? It could well be that there are games that can use all 16 cores. Pushing for more cores lets devs go wild, it adds headroom for them to play with.

IIRC, I heard from a developer working on a thread-optimized MMO engine that performance scaling above 16 threads isn't good. Some things just have to be done in sequence, not in parallel. So the statement about 8 cores (16 threads with HT) being the sweet spot seems somewhat valid.
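The diminishing returns that developer describes are what Amdahl's law predicts: if some fraction of the work is inherently sequential, speedup flattens no matter how many threads you add. A minimal sketch, assuming a made-up 10% serial fraction purely for illustration:

```python
def amdahl_speedup(serial_fraction, threads):
    """Theoretical speedup when only the parallel portion scales with threads."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / threads)

# Assume 10% of each frame's work must run in sequence.
for threads in (1, 2, 4, 8, 16, 32, 64):
    print(f"{threads:2d} threads -> {amdahl_speedup(0.10, threads):.2f}x")
```

With a 10% serial fraction, 16 threads gives only ~6.4x, and even infinite threads can never exceed 10x, which is why scaling "above 16 threads isn't good" for that kind of engine.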

MSI GX660 + i7 920XM @ 2.8GHz + GTX 970M + Samsung SSD 830 256GB


After looking at the actual article in question, the only statement I found that's truly dubious is:

Quote

The data backs up what Intel has been saying since its release: The Core i9–9900K is the world’s best gaming processor and the new Core i9–9900KS will be even faster.

He didn't compare anything against AMD's offerings. But if you limit yourself to an Intel-only bubble, then yes, the 18-core i9-9980XE is a pretty terrible choice for a PC built for gaming.

 

However, the statement after that is true regardless of what you want to believe:

Quote

Adding core count just because you can, without a corresponding increase in sustained frequency and architectural design decisions necessary to feed these cores (like low latency memory systems), doesn’t result in better performance.

The problem with adding more cores is that those cores eat into a fixed power budget, and you still need to feed them with data. I don't think consumers want 250W CPUs, and you can only do so much within a 95-120W power envelope. Regarding feeding the processor with data, think about it this way: take DDR4-2133's bandwidth of 17.066 GB/sec[1]. Say we have a task that feeds the processor 1 GB/sec per core, and each core can easily crunch this and output 2 GB/sec. A 4-core processor then uses a total of 12 GB/sec (1 GB/sec of input per core x4, plus 2 GB/sec of output per core x4). If we put this task on an 8-core processor, we now need a total of 24 GB/sec, which DDR4-2133 cannot deliver. In addition, a higher-latency memory system[2] will lower performance.
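The arithmetic in that example can be checked directly; all of the figures below come from the paragraph above, not from any real workload:

```python
DDR4_2133_BW = 17.066  # GB/s single-channel peak, as cited above

def required_bandwidth(cores, in_gb_per_core=1.0, out_gb_per_core=2.0):
    """Total bus traffic: DDR is half-duplex, so input and output
    per core both count against the same bandwidth figure."""
    return cores * (in_gb_per_core + out_gb_per_core)

for cores in (4, 8):
    need = required_bandwidth(cores)
    verdict = "fits" if need <= DDR4_2133_BW else "exceeds the bus"
    print(f"{cores} cores need {need:.0f} GB/s -> {verdict}")
```

The 4-core case (12 GB/s) fits under the 17 GB/s ceiling; the 8-core case (24 GB/s) does not, so the extra cores would simply stall waiting on memory.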

 

To dig further, adding another core only helps with certain classes of problems. Architectural improvements generally improve the performance of most problems. Clock speed improves the performance of all problems. But, to avoid falling into the trap the article warns about, in the overall scheme of things those improvements by themselves don't simply mean better performance. If clock speed is king but your architecture is lacking, performance will suck. If your architecture is great but it can't run very fast, performance will suffer.
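For single-threaded work, that interplay reduces to the classic identity: performance is roughly IPC times clock speed, so neither factor alone decides the winner. A toy comparison (the IPC and clock figures are invented for illustration, not taken from any real CPU):

```python
def relative_perf(ipc, ghz):
    # Throughput ~ instructions/cycle * cycles/second.
    return ipc * ghz

chip_a = relative_perf(ipc=1.0, ghz=5.0)  # high clock, weak architecture
chip_b = relative_perf(ipc=2.0, ghz=3.0)  # strong architecture, lower clock
print(chip_a, chip_b)  # the "slower-clocked" chip wins
```

Here the 3 GHz part beats the 5 GHz part because its architecture retires twice the instructions per cycle, which is exactly the "clock speed is king but your architecture is lacking" failure mode.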

 

Then Ryan says this afterwards:

Quote

The software engines that power games across the PC ecosystem scale best with frequency and IPC

In the case of games, this is generally true. Games are not highly scalable, constantly fed workloads unlike, say, POV-Ray, Cinebench, or crunching a video file. Games do not compute the next state of the game world the moment the current one is finished. They crunch what they need, and whatever time is left over until the next time slice is used for rendering video. In addition, the workload games carry is highly variable, unlike the other examples, which have little variation. So while it may seem like games are becoming more multi-core friendly, there are really only so many threads available to run before performance begins to flat-line, and no amount of optimization will make them scale across more cores. While this would probably be better suited to another topic, I encourage people to read: https://www.gamasutra.com/view/feature/130296/the_top_10_myths_of_video_game_.php
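That "crunch what you need, spend the leftover on rendering, then wait for the next time slice" pattern is essentially a fixed-timestep game loop. A bare-bones sketch, where the two workload functions are placeholders standing in for real simulation and draw calls, not any actual engine API:

```python
import time

FRAME_BUDGET = 1.0 / 60.0  # one 60 Hz time slice

def update_game_state():   # placeholder: simulation work for one tick
    time.sleep(0.002)

def render_frame():        # placeholder: drawing with the remaining time
    time.sleep(0.001)

def run(frames=3):
    durations = []
    for _ in range(frames):
        start = time.perf_counter()
        update_game_state()            # must finish every frame
        render_frame()
        leftover = FRAME_BUDGET - (time.perf_counter() - start)
        if leftover > 0:
            time.sleep(leftover)       # idle until the next time slice
        durations.append(time.perf_counter() - start)
    return durations

print(run())
```

Because the loop idles once its work is done, adding cores only helps until update plus render fit inside the budget; after that, extra cores just mean more idle time, which is why game performance flat-lines with core count.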

 

Anyway, the whole point of the article, when you strip out the Intel marketing, is that you shouldn't believe "more of a single aspect of a processor is always better."

 

[1] DDR DRAM is half-duplex, so both incoming and outgoing traffic count against the same bandwidth figure.

[2] "Higher latency" means a longer time from data request to data reception, not the latency spec of the RAM itself.


9 minutes ago, Mira Yurizaki said:

Anyway, the whole point of the article, when you strip out the Intel marketing, is that you shouldn't believe "more of a single aspect of a processor is always better."

It's almost like a professional in a professional environment has made some statements that hold water when analyzed. But the rest of the world, being internet plebs with zero experience or education on the matter, just want to rip shreds off said professional.

 

As always, be skeptical of someone who makes claims about the product his company sells (it's the golden rule, right?), but by the same token, if people are only interested in making dumb-arse posts calling that person a liar and full of shit, then the reality is they are less educated than the target of their insults.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


Still using an E8400 to browse this thread.

This old pal is hanging tough (no issues whatsoever).

This is madness, though; CPUs are aging faster than they used to.

46 minutes ago, mr moose said:

It's almost like a professional in a professional environment has made some statements that hold water when analyzed. But the rest of the world, being internet plebs with zero experience or education on the matter, just want to rip shreds off said professional.

 

As always, be skeptical of someone who makes claims about the product his company sells (it's the golden rule, right?), but by the same token, if people are only interested in making dumb-arse posts calling that person a liar and full of shit, then the reality is they are less educated than the target of their insults.

Well, now I am lost.

People all around me always claim Intel is better and make fun of AMD (overheating, can't run Minecraft, blah blah blah).

Some people, however, mock Intel's statements now that AMD finally used their brain once after Sandy Bridge.

IMHO ,

I can't blame Intel at all; they are in a tough situation and can do nothing about it. And I bet when AMD was in the same position, they were forced to say similar things.

People should get out of this Intel vs AMD madness. 

This WAS EXPECTED TO HAPPEN.

Do you think Intel will just give up?

No way; no company does that. Even in the Nvidia vs AMD scene, whenever one manages to beat the other, we see similar statements. (Even VIA didn't give up, with the Zhaoxin announcement in 2018.)

Saying this from a completely objective standpoint.

Now from my standpoint :

The market needs a third CPU manufacturer! Damn, either Nvidia, VIA, or Qualcomm, to stop this madness.

-From the dark and indefinite void  

 

Please quote or tag me @Void Master,so i can see your reply.

 

Everyone was a noob at the beginning; don't be discouraged by toxic trolls even if you lose 15 times in a row. Keep training and pushing yourself further and further, so you can show those sorry lots how it's done!

Be a supportive player, and make sure to reflect a good image of the game community you are a part of. 

Don't kick a player unless they willingly want to ruin your experience.

We are the gamer community, we should take care of each other !


11 minutes ago, Void Master said:

Well, now I am lost.

People all around me always claim Intel is better and make fun of AMD (overheating, can't run Minecraft, blah blah blah).

These would be the internet plebs I referred to earlier. Don't listen to them. Anyone who makes fun of another company in an effort to denigrate it or dissuade a user from buying its products hasn't really got a legitimate reason to give advice.

 

Quote

Some people, however, mock Intel's statements now that AMD finally used their brain once after Sandy Bridge.

Again, they are internet plebs. When someone from any company makes a statement and releases information on their products, we should be wary of that report's relevance to us individually, but not quick to make stupid comments outright condemning their entire existence.

 

Quote

IMHO, I can't blame Intel at all; they are in a tough situation and can do nothing about it. [...] People should get out of this Intel vs AMD madness. [...] The market needs a third CPU manufacturer!

 

 

The thing is, as you have essentially pointed out, all companies do this and all companies are right to a degree. My post was more aimed at the seemingly automated responses outright claiming (in this case) that Ryan was lying and spreading BS, as the evidence is fairly far from that.

 

I just want to reiterate that healthy skepticism toward such information is exactly that, healthy. What is not healthy is this incessant, out-of-control jumping up and down looking for a lynch mob. All it does is add fuel to the naive rhetoric we see all the time about company A being hot and slow, or company B being anti-consumer and all lies, or company C being the only choice, etc.

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


6 hours ago, Void Master said:

This is madness, though; CPUs are aging faster than they used to.

Actually, the inverse is true. Compared to the '90s (and even the early-to-mid 2000s), you can hold onto processors for far longer periods of time. As you already pointed out, it's still viable, albeit at reduced performance, to use 10+ year old processors today, and that's even true in gaming systems. Back in the '90s (especially the mid-to-late '90s), if you were running a 5-year-old processor, it was woefully outdated and couldn't hope to keep up. That's no longer the case.

 

All you're seeing right now is a slight resurgence in the CPU "arms race".  Ironically, I don't believe core count would be as significant if it weren't for the technology sharing agreement between AMD and Intel.  Otherwise - ignoring for the moment that we likely wouldn't have an AMD vs Intel landscape without it - I envision the competition between them being more about differing features than about speed and cores.


The real answer here is always, and has always been, "it depends".

 

If you're running something that can benefit from more parallel threads, throwing more cores at it will always help.  It may not help as much as it could if it is also limited in another way (such as the memory bandwidth/latency example) when under a specific workload that is constrained by that, but it will still help overall.

 

That being said, not everything takes advantage of this, games especially. However, having extra cores both helps "future proof" the system and allows for more background tasks (such as listening to music or playing a YouTube video on a second display) at the same time without much, if any, performance penalty.

 

So, the Intel statement is true in the sense that more cores won't always help a specific performance situation and just be better for you. But it is also untrue in that more cores aren't always better. They are; you just might not be able to make appropriate use of them today in your specific (or every) situation and workload.

