Same cores, same cache, same multithreading, why are the benchmarking scores different?

Hello LTT community,

I'm currently looking at the specs of the best CPUs out there right now. One thing I've noticed is that some CPUs with the same headline specs (for example, the Ryzen 7 2700X and the i9-9900K, both 8 cores / 16 threads) have different benchmark scores. Can somebody explain why?

Sincerely,

ErebusGamer16


IPC differences and clock speed differences.

Ryzen 5 1600 @ 3.9 GHz  | Gigabyte AB350M Gaming 3 |  Palit GTX 1050 Ti  |  8GB Kingston HyperX Fury @ 2933 MHz  |  Corsair CX550M  |  1TB WD Blue HDD


Inside some old case I found lying around.

 


Because they process the same information differently.

🌲🌲🌲

Judge the product by its own merits, not by the Company that created it.


Just because they are compatible doesn't mean they are the same. The actual silicon, and how it executes those instructions, is completely different.

 

GHz is only a useful measure between identical chips, because a CPU is not just a single instruction and instruction set any more. Even on the same CPU there are multiple ways to do the same maths: it might take ten instructions, five instructions, or one instruction, depending on how you actually execute it. So there can be a huge difference in actual performance.
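To make the "ten instructions versus one" idea concrete, here is a toy sketch (in Python rather than machine code): the same sum computed with one operation per element, and with a constant-cost closed-form formula. The answer is identical; the amount of work is not.

```python
# Same maths, two code paths with very different operation counts.

def sum_loop(n):
    """n additions: one operation per element."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    """Gauss's closed form: a handful of operations regardless of n."""
    return n * (n + 1) // 2

# Identical result, wildly different amount of work as n grows.
assert sum_loop(1000) == sum_formula(1000) == 500500
```

Real CPUs do the same thing at the instruction level: a SIMD or specialised instruction can replace a whole loop's worth of scalar operations, so counting clocks alone tells you very little.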

Router:  Quotom-Q555G6-S05 running pfSense WiFi: Zyxel NWA210AX (~940Mbit peak)

Switches: Netgear MS510TXUP, Netgear MS510TXPP, Netgear GS110EMX
ISPs: Zen VDSL (~74Mbit) + VOXI 4G [Vodafone] (~120Mbit) + Three 5G (~500Mbit average)


Well, since AMD and Intel are competitors, they build their CPUs differently from one another. The first difference is the clock speed, which you can tell from the spec sheet. The second is the "instructions per clock" (IPC), which varies because of those construction differences.

I WILL find your ITX build thread, and I WILL recommend the Silverstone Sugo SG13B

 

Primary PC:

i7 8086k (won) - EVGA Z370 Classified K - G.Skill Trident Z RGB - WD SN750 - Jedi Order Titan Xp - Hyper 212 Black (with RGB Riing flair) - EVGA G3 650W - dual booting Windows 10 and Linux - Black and green theme, Razer brainwashed me.

Draws 400 watts under max load, for reference.

 

Linux Proliant ML150 G6:

Dual Xeon X5560 - 24GB ECC DDR3 - GTX 750 TI - old Seagate 1.5TB HDD - dark mode Ubuntu (and Win7, cuz why not)

 

How many watts do I need? Seasonic Focus thread, PSU misconceptions, protections explained, group reg is bad


5 minutes ago, ErebusGamer16 said:

Hello LTT community,

I'm currently looking at the specs of the best CPUs out there right now. One thing I've noticed is that some CPUs with the same headline specs (for example, the Ryzen 7 2700X and the i9-9900K, both 8 cores / 16 threads) have different benchmark scores. Can somebody explain why?

Sincerely,

ErebusGamer16

The main differences between the i9-9900K and the R7 2700X are the instructions per clock (IPC) and the clock speed of each CPU. The i9-9900K has a higher boost clock and better IPC than the R7 2700X, so it is faster.
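A rough way to see how those two factors combine is a score ~ IPC x clock model. The IPC figures below are invented for illustration, not measured values:

```python
# Toy single-thread score: IPC x boost clock. IPC numbers are made up.
cpus = {
    "i9-9900K": {"ipc": 1.1, "boost_ghz": 5.0},
    "R7 2700X": {"ipc": 1.0, "boost_ghz": 4.3},
}

scores = {name: spec["ipc"] * spec["boost_ghz"] for name, spec in cpus.items()}
print(scores)  # the chip ahead on both IPC and clock wins either way
```

If one chip led on IPC and the other on clock, the model shows how the lead could flip depending on the exact numbers, which is why benchmarks between different architectures are never a clock-for-clock comparison.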

Quote or tag me @Lemtea so I can see your reply. 

PSU Tier List


DAYBREAK: R5 5600X | SAPPHIRE PULSE RX 6700XT | 32GB RAM | 1TB 970 EVO PLUS | CRUCIAL MX200 1TB SSD | 4TB HDD | CORSAIR TX650M | PURE BASE 500DX | Win 10
FIRESTARTER: I5 760 @ 4.0GHZ | XFX R9 280X DD | 8GB RAM | CRUCIAL MX500 250GB SSD | OCZ ZX 1000W | CM 690 III | Win 10
KEYBOARD & MOUSE: CORSAIR STRAFE RGB (MX RED) | GLORIOUS MODEL D | STEELSERIES QCK XXL
LAPTOP: DELL XPS 15 9570 i7 8750H | GTX 1050TI MAX Q | 16GB RAM | 500GB PCIE SSD | 4K TOUCHSCREEN | Win 10 PRO

11 hours ago, fasauceome said:

Well, since AMD and Intel are competitors, they build their CPUs differently from one another. The first difference is the clock speed, which you can tell from the spec sheet. The second is the "instructions per clock" (IPC), which varies because of those construction differences.

The problem is understanding what instructions per clock actually means. It's insanely complicated, because which instructions you use can change how fast the chip performs.

So you could have a really high IPC on one chip and a lower value on another, but if the "slower" one is using instructions that process twice as much data per instruction, it can still finish faster.

A good example of this is AES acceleration: doing the encryption normally takes a whole bunch of instructions, but it can be reduced to a specialised instruction set that is much faster at that job. The IPC may be the same on two CPUs, but if one needs fewer instructions to do the same thing, it performs faster. This is why, in the past, we got a lot of new instruction sets on top of the increases in IPC. Now that that improvement is slowing down, we see smaller gains.
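A back-of-envelope way to put numbers on that is an Amdahl's-law style sketch. All figures below are invented, and the real AES-NI gain depends on the workload:

```python
# Overall speedup when only a fraction of the work gets a faster instruction.
def speedup(accelerated_fraction, per_op_speedup):
    """Amdahl's law: the unaccelerated part limits the total gain."""
    return 1 / ((1 - accelerated_fraction) + accelerated_fraction / per_op_speedup)

# Say 40% of runtime is encryption and a specialised instruction
# (e.g. AES-NI) makes that part 10x faster:
print(round(speedup(0.4, 10), 2))  # ~1.56x overall, not 10x
```

This is also why a new instruction set only shows up in benchmarks that actually spend time in the code it accelerates.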

This is also why two CPUs that appear comparable on the surface can differ dramatically between pieces of software. Some software will be optimised for instructions that run best on Intel, some for AMD, and some might just have made really poor choices and happen to run faster on one than the other; it's very complicated. It gets even worse when you consider that CPUs also "simulate" old instruction sets using microcode, rather than including the silicon to support the older instructions natively.

I literally just watched this video which reminded me of some of this stuff.

 



Clock speed is a big factor, but so is IPC. CPUs aren't just black boxes that you feed stuff into and stuff comes out. There are different types of operations, different CPUs may split up the instructions in different ways, and some CPUs may have more FPUs (floating point units, which basically do maths with floats), for example. Or those units may be more or less efficient. Likewise, the same cache size doesn't account for other differences: latency, memory bandwidth, and the fact that there are L1 and L2 caches as well as the L3 you probably looked at. The caches may also be handled differently; the 9900K's L3 is a write-back cache versus the victim cache of the 2700X. On top of that, the 2700X has the CCX layout, so core-to-core latency between cores in different CCXs is MUCH higher; accessing cache attached to a different CCX can have nearly as much latency as accessing DRAM!
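One way to see why "same cache size" tells you so little is an average memory access time (AMAT) sketch. The hit rates and latencies below are invented, but the shape is realistic: pricing a remote-CCX L3 hit near DRAM latency noticeably raises the average.

```python
# AMAT sketch: each level's latency is paid by the accesses that reach it.
def amat(levels):
    """levels: (hit_rate, latency_cycles) pairs; last level must catch all misses."""
    time, reach = 0.0, 1.0
    for hit_rate, latency in levels:
        time += reach * latency   # every access reaching this level pays its latency
        reach *= 1 - hit_rate     # misses fall through to the next level
    return time

#               L1        L2         L3           DRAM
local  = amat([(0.9, 4), (0.8, 12), (0.7, 40),  (1.0, 200)])
remote = amat([(0.9, 4), (0.8, 12), (0.7, 180), (1.0, 200)])  # cross-CCX L3 priced near DRAM
print(local, remote)  # the "same size" cache is effectively much slower
```

So two CPUs can list an identical 16MB L3 on the spec sheet and still have very different effective memory latency.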

 

Make sure to quote me or tag me when responding to me, or I might not know you replied! Examples:

 

Do this:

Quote

And make sure you do it by hitting the quote button at the bottom left of my post, and not the one inside the editor!

Or this:

@DocSwag

 

Buy whatever product is best for you, not what product is "best" for the market.

 

Interested in computer architecture? Still in middle or high school? P.M. me!

 

I love computer hardware and feel free to ask me anything about that (or phones). I especially like SSDs. But please do not ask me anything about Networking, programming, command line stuff, or any relatively hard software stuff. I know next to nothing about that.

 

Compooters:

Spoiler

Desktop:

Spoiler

CPU: i7 6700k, CPU Cooler: be quiet! Dark Rock Pro 3, Motherboard: MSI Z170a KRAIT GAMING, RAM: G.Skill Ripjaws 4 Series 4x4gb DDR4-2666 MHz, Storage: SanDisk SSD Plus 240gb + OCZ Vertex 180 480 GB + Western Digital Caviar Blue 1 TB 7200 RPM, Video Card: EVGA GTX 970 SSC, Case: Fractal Design Define S, Power Supply: Seasonic Focus+ Gold 650w Yay, Keyboard: Logitech G710+, Mouse: Logitech G502 Proteus Spectrum, Headphones: B&O H9i, Monitor: LG 29um67 (2560x1080 75hz freesync)

Home Server:

Spoiler

CPU: Pentium G4400, CPU Cooler: Stock, Motherboard: MSI h110l Pro Mini AC, RAM: Hyper X Fury DDR4 1x8gb 2133 MHz, Storage: PNY CS1311 120gb SSD + two Seagate 4tb HDDs in RAID 1, Video Card: Does Intel Integrated Graphics count?, Case: Fractal Design Node 304, Power Supply: Seasonic 360w 80+ Gold, Keyboard+Mouse+Monitor: Does it matter?

Laptop (I use it for school):

Spoiler

Surface book 2 13" with an i7 8650u, 8gb RAM, 256 GB storage, and a GTX 1050

And if you're curious (or a stalker) I have a Just Black Pixel 2 XL 64gb

 


Yeah, back in the day cheap CPUs had awful FPUs, which caused horrific performance at the same GHz in some games and applications. That is the danger of promoting GHz as a measure of performance, which it used to be in the early days.

 

I mean, just look at phones: a 3GHz phone isn't going to come close to a 3GHz Intel or AMD desktop CPU. Phones run a much simpler instruction set, so effectively you have to use more instructions to do the same tasks, at least for generic functions. It's why phones were pretty crap until they added video acceleration.

Today the only real measure of performance is to actually try it, which is why benchmarks are such a big thing. But there's a risk there too: a benchmark can only test a certain workload, which can hide performance issues with certain instructions. That, again, is why we now have in-game and in-application benchmarks: to test the real-world code you will actually be running and see which chip works best for your use case.
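In that spirit, measuring is cheap even at a small scale. A minimal sketch using Python's timeit, comparing two implementations of the same task (the absolute numbers vary per machine; only the comparison matters):

```python
import timeit

# Same task, two code paths: time them instead of guessing.
setup = "data = list(range(10_000))"
loop_way = timeit.timeit("s = 0\nfor x in data: s += x", setup=setup, number=100)
builtin_way = timeit.timeit("sum(data)", setup=setup, number=100)

print(f"manual loop: {loop_way:.4f}s  built-in sum: {builtin_way:.4f}s")
```

The same logic scales up to full application benchmarks: run the code you care about, on the hardware you care about.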



Because their architectures are different, basically.

Personal Desktop":

CPU: Intel Core i7 8700 @4.45ghz |~| Cooling: Cooler Master Hyper 212X |~| MOBO: Gigabyte Z370M D3H mATX |~| RAM: 16GB DDR4 3333MHz CL16 G.Skill Trident Z |~| GPU: nVidia Founders Edition GTX 1080 Ti |~| PSU: Corsair TX650M 80Plus Gold |~| Boot: SSD WD Green M.2 2280 240GB |~| Storage: 1x3TB HDD 7200rpm Seagate Barracuda + SanDisk Ultra 3D 1TB |~| Case: Fractal Design Meshify C Mini |~| Display: Toshiba UL7A 4K/60hz |~| OS: Windows 10 Pro.

Luna, the temporary Desktop:

CPU: Intel Core i7 10700KF @ 5.0Ghz (5.1Ghz 4-core) |~| Cooling: bq! Dark Rock 4 Pro |~| MOBO: Gigabyte Z490 UD |~| RAM: 32G Kingston HyperX @ 2666Mhz CL13 |~| GPU: AMD Radeon RX 6800 (Reference) |~| PSU: Corsair HX1000 80+ Platinum |~| Windows Boot Drive: 2x 512GB (1TB total) Plextor SATA SSD (RAID0 volume) |~| Linux Boot Drive: 500GB Kingston A2000 |~| Storage: 4TB WD Black HDD |~| Case: Cooler Master Silencio S600 |~| Display 1 (leftmost): Eizo (unknown model) 1920x1080 IPS @ 60Hz|~| Display 2 (center): BenQ ZOWIE XL2540 1920x1080 TN @ 240Hz |~| Display 3 (rightmost): Wacom Cintiq Pro 24 3840x2160 IPS @ 60Hz 10-bit |~| OS: Windows 10 Pro (games / art) + Linux (distro: NixOS; programming and daily driver)

Because one is built one way by one company and the other, another...essentially. 

ლ(ಠ益ಠ)ლ
(ノಠ益ಠ)╯︵ /(.□ . \)


A parallel to this is the new RTX GPUs. They actually bring more than just ray tracing and DLSS; they have new instructions too, which will allow rendering a scene faster than the GTX cards can. But until software actually uses them, we only see the IPC performance increase.

 

As adding new instructions to CPUs and GPUs is less common these days, people focus only on the speed boost from IPC improvements. New instructions are not to be sneezed at; it's just a bitter pill to swallow when you are paying for those improvements with real money and not seeing the benefits yet. Those of us who have been using PCs since the days when new instructions came thick and fast are naturally more accepting of it, hence the disconnect between the outraged people who think RTX is a rip-off and those of us who can see the potential for the future.

We are hitting a wall on how much faster we can make chips. The first step was to add more cores so you can run more of the same instructions at the same time, but now we are hitting that limit too. So it's natural that we have gone back to finding new instructions to do things faster; people have just forgotten (or are too young to remember) how long it takes for those improvements to turn into real-world benefits.


