
Why CPU GHz Doesn’t Matter!

Plouffe

There is a lot more to IPC than just clock speed, so we took two AMD CPUs with the same core count and similar speeds and pitted them against each other. How well will they perform when underclocked? The results might surprise you!

 

Buy AMD Ryzen 5 3600XT CPU
On Amazon (PAID LINK): https://geni.us/Lhrsi7j
On Best Buy (PAID LINK): https://geni.us/IU3BAnh
On Newegg (PAID LINK): https://geni.us/EfFbBAi

 

Buy AMD Ryzen 5 5600X
On Amazon (PAID LINK): https://geni.us/ZFWZpe
On Best Buy (PAID LINK): https://geni.us/PaiYTMz
On Newegg (PAID LINK): https://geni.us/rNHh6CI


On Amazon (PAID LINK): https://geni.us/EWsV
On Best Buy (PAID LINK): https://geni.us/rcWus
On B&H (PAID LINK): https://geni.us/yGIB

 

As per usual, affiliate links may provide compensation to LMG

 


1 minute ago, Plouffe said:

There is a lot more to IPC than just clock speed, so we took two AMD CPUs with the same core count and similar speeds and pitted them against each other. How well will they perform when underclocked? The results might surprise you!

 


 

can someone just tell me how it went

 

can't watch videos bc school restrictions


Just now, Finnegan1616 said:

can someone just tell me how it went

 

can't watch videos bc school restrictions

familiar?

Screenshot 2021-09-27 1.03.25 PM.png

| If someone's post is helpful or solves your problem please mark it as a solution 🙂 |

I am a human that makes mistakes! If I'm wrong please correct me and tell me where I made the mistake. I try my best to be helpful.

System Specs

<Ryzen 5 3600 3.5-4.2Ghz> <Noctua NH-U12S chromax.Black> <ZOTAC RTX 2070 SUPER 8GB> <16gb 3200Mhz Crucial CL16> <DarkFlash DLM21 Mesh> <650w Corsair RMx 2018 80+ Gold> <Samsung 970 EVO 500gb NVMe> <WD blue 500gb SSD> <MSI MAG b550m Mortar> <5 Noctua P12 case fans>

Peripherals

<Lepow Portable Monitor + AOC 144hz 1080p monitor> 

<Keymove Snowfox 61m>

<Razer Mini>


1 minute ago, SignatureSigner said:

familiar?

Screenshot 2021-09-27 1.03.25 PM.png

🥺

image.thumb.png.a1ee628de66012e9f33c1b72ddf3ec92.png


1 minute ago, Finnegan1616 said:

can someone just tell me how it went

 

can't watch videos bc school restrictions

GHz only tells part of the story; IPC makes a big difference, even with the same core count and the same frequency. (They tested a Ryzen 3600XT vs. a 5600X.) Even at identical speeds, the 5600X is always faster.


2 minutes ago, tkitch said:

GHz only tells part of the story; IPC makes a big difference, even with the same core count and the same frequency. (They tested a Ryzen 3600XT vs. a 5600X.) Even at identical speeds, the 5600X is always faster.

IPC?


13 minutes ago, Plouffe said:

There is a lot more to IPC than just clock speed

Not to be pedantic, but I see this all the time. IPC is Instructions Per Cycle, or Instructions Per Clock. Increasing or decreasing the clock speed has no effect on IPC at all, because it is defined as the number of instructions per clock regardless of how fast or slow that clock is. These are two independent numbers.

 

A correct way of saying what I think you mean is, "There is a lot more to CPU performance than just clock speed" or, "IPC is just as important as clock speed for CPU performance."

BabyBlu (Primary): 

  • CPU: Intel Core i9 9900K @ up to 5.3GHz, 5.0GHz all-core, delidded
  • Motherboard: Asus Maximus XI Hero
  • RAM: G.Skill Trident Z RGB 4x8GB DDR4-3200 @ 4000MHz 16-18-18-34
  • GPU: MSI RTX 2080 Sea Hawk EK X, 2070MHz core, 8000MHz mem
  • Case: Phanteks Evolv X
  • Storage: XPG SX8200 Pro 2TB, 3x ADATASU800 1TB (RAID 0), Samsung 970 EVO Plus 500GB
  • PSU: Corsair HX1000i
  • Display: MSI MPG341CQR 34" 3440x1440 144Hz Freesync, Dell S2417DG 24" 2560x1440 165Hz Gsync
  • Cooling: Custom water loop (CPU & GPU), Radiators: 1x140mm(Back), 1x280mm(Top), 1x420mm(Front)
  • Keyboard: Corsair Strafe RGB (Cherry MX Brown)
  • Mouse: MasterMouse MM710
  • Headset: Corsair Void Pro RGB
  • OS: Windows 10 Pro

Roxanne (Wife Build):

  • CPU: Intel Core i7 4790K @ up to 5.0GHz, 4.8Ghz all-core, relidded w/ LM
  • Motherboard: Asus Z97A
  • RAM: G.Skill Sniper 4x8GB DDR3-2400 @ 10-12-12-24
  • GPU: EVGA GTX 1080 FTW2 w/ LM
  • Case: Corsair Vengeance C70, w/ Custom Side-Panel Window
  • Storage: Samsung 850 EVO 250GB, Samsung 860 EVO 1TB, Silicon Power A80 2TB NVME
  • PSU: Corsair AX760
  • Display: Samsung C27JG56 27" 2560x1440 144Hz Freesync
  • Cooling: Corsair H115i RGB
  • Keyboard: GMMK TKL(Kailh Box White)
  • Mouse: Glorious Model O-
  • Headset: SteelSeries Arctis 7
  • OS: Windows 10 Pro

BigBox (HTPC):

  • CPU: Ryzen 5800X3D
  • Motherboard: Gigabyte B550i Aorus Pro AX
  • RAM: Corsair Vengeance LPX 2x8GB DDR4-3600 @ 3600MHz 14-14-14-28
  • GPU: MSI RTX 3080 Ventus 3X Plus OC, de-shrouded, LM TIM, replaced mem therm pads
  • Case: Fractal Design Node 202
  • Storage: SP A80 1TB, WD Black SN770 2TB
  • PSU: Corsair SF600 Gold w/ NF-A9x14
  • Display: Samsung QN90A 65" (QLED, 4K, 120Hz, HDR, VRR)
  • Cooling: Thermalright AXP-100 Copper w/ NF-A12x15
  • Keyboard/Mouse: Rii i4
  • Controllers: 4X Xbox One & 2X N64 (with USB)
  • Sound: Denon AVR S760H with 5.1.2 Atmos setup.
  • OS: Windows 10 Pro

Harmonic (NAS/Game/Plex/Other Server):

  • CPU: Intel Core i7 6700
  • Motherboard: ASRock FATAL1TY H270M
  • RAM: 64GB DDR4-2133
  • GPU: Intel HD Graphics 530
  • Case: Fractal Design Define 7
  • HDD: 3X Seagate Exos X16 14TB in RAID 5
  • SSD: Inland Premium 512GB NVME, Sabrent 1TB NVME
  • Optical: BDXL WH14NS40 flashed to WH16NS60
  • PSU: Corsair CX450
  • Display: None
  • Cooling: Noctua NH-U14S
  • Keyboard/Mouse: None
  • OS: Windows 10 Pro

NAS:

  • Synology DS216J
  • 2x8TB WD Red NAS HDDs in RAID 1. 8TB usable space

7 minutes ago, tkitch said:

Instructions Per Clock

I see

 

 

but does GHz make a difference once it's set?

 

i.e., overclocking your CPU from 4.0 GHz to 5.3 GHz


9 minutes ago, HairlessMonkeyBoy said:

Not to be pedantic, but I see this all the time. IPC is Instructions Per Cycle, or Instructions Per Clock. Increasing or decreasing the clock speed has no effect on IPC at all, because it is defined as the number of instructions per clock regardless of how fast or slow that clock is. These are two independent numbers.

 

A correct way of saying what I think you mean is, "There is a lot more to CPU performance than just clock speed" or, "IPC is just as important as clock speed for CPU performance."

 

9 minutes ago, Finnegan1616 said:

I see

 

 

but does GHz make a difference once it's set?

 

(11900K @ 4.0 GHz vs 5.3 GHz)

This is exactly the kind of confusion that the inaccurate phrasing can cause.

 

To answer your question, @Finnegan1616, OCing your CPU will increase its performance proportionally (more or less) to the OC, but it cannot affect IPC. IPC can be thought of (more or less) as a property of the CPU and its specific architecture, and is independent of clock speed. IPC is useful when comparing different CPUs at the same clock speed, which it sounds like is what they did in the video with different generations of Ryzen CPUs (but I haven't watched it yet, so I could be wrong on that).
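To put rough numbers on that "more or less proportional" part, here's a minimal sketch with made-up values; the IPC figure is purely an assumption, and the point is that an overclock only changes the clock term:

```c
#include <stdio.h>

int main(void) {
    double ipc        = 4.0;     /* assumed; set by the architecture, an OC can't touch it */
    double base_clock = 4.0e9;   /* 4.0 GHz */
    double oc_clock   = 5.3e9;   /* 5.3 GHz */

    double base_ips = ipc * base_clock;   /* instructions per second = IPC x clock */
    double oc_ips   = ipc * oc_clock;

    printf("stock: %.1f Ginstr/s, overclocked: %.1f Ginstr/s (+%.1f%%)\n",
           base_ips / 1e9, oc_ips / 1e9, (oc_ips / base_ips - 1.0) * 100.0);
    return 0;
}
```

So the 4.0 to 5.3 GHz jump buys you roughly a third more throughput at best, and real workloads usually scale a bit less than that.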


How the hell can a video like this even be needed in this day and age?

 

Thought the world realized GHz wasn’t THE important factor back when Athlon XP launched. 


16 minutes ago, Finnegan1616 said:

i.e., overclocking your CPU from 4.0 GHz to 5.3 GHz

Clock speed is a measure of how many cycles a CPU can process per second. IPC is a measure of how many instructions it can process per cycle. Both values combined give you an idea how fast the CPU is. So higher clock speed or higher IPC would both increase a CPU's overall performance.

 

However IPC is essentially a fixed value that depends on a CPU's architecture. This is something that can only be improved by the manufacturer with a newer/better architecture. Clock speed on the other hand is something you can influence by overclocking.

 

To make things more complicated, IPC can depend on the particular workload. For example, a CPU may be able to process two 32-bit integer additions per clock cycle, but only one 64-bit float multiplication. So the value you get for IPC varies depending on the software/benchmark being used and what instructions it uses.

 

Different CPU architectures can have different strengths and weaknesses regarding their IPC (e.g. one is better at float, the other better at integer math). So the speed difference between different architectures can vary depending on the particular workload that's used to benchmark them.
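To illustrate that last point with a toy model (all the IPC numbers here are made up), here's a sketch where two hypothetical CPUs run at the same clock and the "winner" flips depending on the instruction mix:

```c
#include <stdio.h>

/* Effective IPC for a given mix: time per instruction is the share-weighted sum
   of 1/IPC for each instruction class, so the effective rate is a harmonic mix. */
static double effective_ipc(double int_share, double ipc_int, double ipc_fp) {
    double fp_share = 1.0 - int_share;
    return 1.0 / (int_share / ipc_int + fp_share / ipc_fp);
}

int main(void) {
    double clock_ghz = 4.0;              /* same clock for both CPUs */
    double a_int = 4.0, a_fp = 1.0;      /* CPU A: strong integer, weak float (assumed) */
    double b_int = 3.0, b_fp = 2.0;      /* CPU B: more balanced (assumed) */

    double mixes[] = { 0.9, 0.5 };       /* 90% integer vs. 50% integer workloads */
    for (int i = 0; i < 2; i++) {
        double m = mixes[i];
        printf("%2.0f%% integer mix: CPU A %.2f Ginstr/s, CPU B %.2f Ginstr/s\n",
               m * 100.0,
               clock_ghz * effective_ipc(m, a_int, a_fp),
               clock_ghz * effective_ipc(m, b_int, b_fp));
    }
    return 0;
}
```

With a 90% integer mix, CPU A comes out ahead; at 50/50 the order reverses, which is why a single "IPC" number always depends on the benchmark.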

Remember to either quote or @mention others, so they are notified of your reply


I’m not sure why, but I think the miner analogy kind of made it harder to follow the explanations.
 

I feel like this is the classic type of video where the script reads well in Linus’s office, but doesn’t come out as well when read (fast) to camera.
 

To be clear, it’s not a bad video, but I do think videos like this need some improvement. Any thoughts?


Core count × clock speed × IPC × TDP. In the server world, it's very often the TDP that determines what you can put in your socket as well. Perhaps it would be worthwhile to look at how a dual-socket system running a pair of 3.9 GHz 8-core Xeons compares to something like a single-socket Threadripper Pro with 16 cores at the same frequency. Does dual versus single socket improve or worsen performance? Is there even still a use case for a quad-socket 8-core configuration? Having 64 cores on a single socket doesn't always work, not when some programs still crave frequency over cores and TDP constraints limit you.


While this is simple and attempts to explain what makes a performance difference between CPU architectures, it ignores the broad topic of Instruction Set Architectures and misses how the evolution of CISC and RISC platforms has driven these performance increases.

 

I think this could have been better covered using Quake and its implementation of Fast Inverse Square Root. This is an example of computer code that went from a software routine to an actual hardware instruction. The other modern approach is Single Instruction, Multiple Data (SIMD).

 

I would invite you to check out Dave's Garage; he has a great breakdown of the science behind Quake's Fast Inverse Square Root.


I will try to keep this short... but having studied computer architecture design for over a decade makes one opinionated when it comes to oversimplified videos like these.

 

First off, IPC is a bit of an awkward term at the moment, since it collides with the other IPC, i.e. Inter-Process Communication, a topic that is fairly central to multi-threaded workloads. The instruction rate per cycle could use a different name than Instructions Per Cycle.

 

I think IR, for Instruction Rate, works better, since it doesn't conflict with anything else in computer architecture design, or at least nothing as generally applicable when discussing the overall performance of a given implementation. (A "rate" is a measure taken against some other measure, like the average number of instructions in a typical cycle.) We should also consider the difference between the peak theoretical instruction rate and the rates actually observed. There is also the peak theoretical serial instruction rate of a given implementation, which is fairly important as far as single-threaded applications go. (To explain this we would need to dive into the intricacies of out-of-order execution. In short, out-of-order execution is short-term parallelism between instruction calls that aren't strictly dependent on each other. Instruction calls that are fully dependent on prior calls won't see any benefit from out-of-order execution and will in general be slowed down by the additional control logic needed for that function; i.e. out of order is a fine balance between serial overhead and parallelism. And I shouldn't get into how branches, stalls, and Simultaneous Multi-Threading make out-of-order systems even more fun in some architecture implementations. Suffice it to say, out of order can sidestep the whole branching problem if the decoder can keep up, and the inevitable stalls can be covered by a higher degree of SMT, since statistically our other threads are unlikely to stall at the same time. Also, more than 2x SMT makes PortSmash a semi non-issue, mainly since no thread is truly random in its behavior and can therefore be nearly impossible to filter out from our target, but this is more true if we have a lot of threads per core.)
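To make the dependency point concrete, here is a minimal sketch of my own (constants are made up, timings are crude): the same number of floating-point additions runs first as one dependent chain, then as four independent chains that out-of-order hardware can overlap. Build with optimizations but without -ffast-math so the compiler can't reassociate the additions; exact results will vary by CPU and compiler.

```c
#include <stdio.h>
#include <time.h>

#define N 200000000L

int main(void) {
    clock_t t0 = clock();
    double chain = 0.0;
    for (long i = 0; i < N; i++)
        chain += 1.000000001;                /* every add waits for the previous one */
    double t_chain = (double)(clock() - t0) / CLOCKS_PER_SEC;

    t0 = clock();
    double a = 0.0, b = 0.0, c = 0.0, d = 0.0;
    for (long i = 0; i < N; i += 4) {        /* same add count, four independent chains */
        a += 1.000000001;
        b += 1.000000001;
        c += 1.000000001;
        d += 1.000000001;
    }
    double t_indep = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("dependent chain: %.2f s, independent chains: %.2f s (sums %.0f vs %.0f)\n",
           t_chain, t_indep, chain, a + b + c + d);
    return 0;
}
```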

 

Then we have the question of why clock speeds have leveled off over the last decade and a half.

The answer is simple: it's partly thermals and the challenge of heat sink design keeping up with increasing power consumption, but it is also due to realizations surrounding parallelism. For a while, CPUs were running at far higher clock speeds than what was actually efficient. Switching to multi-core designs had its own side effects, primarily that the chips were harder to cool, but the increased area used by the cores made life for heat sink designers at least a bit less insane. (It isn't a coincidence that clocks stopped increasing when multi-core designs came to market. The same phenomenon can be observed earlier in other architectures, and improvements in out-of-order systems also made chasing higher clock speeds less attractive. Although clock speeds do still increase, just a lot more slowly.)

 

But it isn't always the internals of the core that define peak performance. There are potential bottlenecks everywhere, be it the bandwidth of the caching system, how it is structured, or the latency budget that the various queues (prefetch, decode, out of order) offer. For some applications, main memory bandwidth can be the real bottleneck, and cache can rarely fix that for more memory-intensive applications. (For example, if we randomly jump around in a dataset that dwarfs our cache 10:1, then we only have a 10% chance that our data is in cache, and that's if our thread has all of the cache to itself, which realistically is never going to happen. And these memory-intensive workloads aren't rare: everything from photo and video editing, particle physics, 3D rendering, and neural networks, to other large datasets like a Dwarf Fortress world being simulated.)
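And the 10:1 cache example as back-of-envelope arithmetic, with assumed round-number latencies rather than measurements:

```c
#include <stdio.h>

int main(void) {
    double cache_ns  = 1.0;     /* assumed cache-hit latency */
    double memory_ns = 80.0;    /* assumed DRAM latency */

    /* average access time = hit_rate * cache_latency + miss_rate * memory_latency */
    double random_access   = 0.10 * cache_ns + 0.90 * memory_ns;  /* dataset ~10x the cache */
    double friendly_access = 0.90 * cache_ns + 0.10 * memory_ns;  /* cache-friendly pattern */

    printf("10%% hit rate: %.1f ns per access, 90%% hit rate: %.1f ns per access\n",
           random_access, friendly_access);
    return 0;
}
```

With numbers like these, the memory-bound case spends almost all of its time waiting on DRAM, and the core's peak instruction rate barely matters.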

 

But at least the video didn't venture down the rabbit hole that is RISC vs. CISC, since that debate is frankly a bit silly here on the internet. (The typical claim people make, that "RISC runs fewer instructions to execute the same workload and therefore finishes faster," is completely incorrect... RISC follows the philosophy that one should only have the tools one actually needs; some niche tool that is really good for some odd application can be left out. CISC, meanwhile, doesn't care about that and includes everything but the kitchen sink. (I am highly oversimplifying here! There are more defining differences between the terms.) Though for an application to use said instructions, one does need to compile the code with them in mind. I.e., the older the CPUs one wants to support, the fewer new features the compiler will use, and CPUs aren't magic; they don't typically figure out that some other instruction could replace one's 100-instruction-long piece of code.)

 

 

Though I should get around to making a prototype of my own CPU some day and send one over. I think LTT would find it perplexing, to say the least. (It is an odd architecture; it started life as a 48-bit RISC architecture but is now more CISC in terms of its instruction portfolio, while remaining RISC in its mechanism for doing instruction calls. It is optimized for a monolithic caching system, so no L1, L2, and L3, and it does fractional math instead of floating point. Also, it is only 48-bit in its address space, to save on memory bandwidth requirements and get denser code overall thanks to smaller pointers. By the time one needs more than 256 TB of RAM, one won't be particularly interested in direct memory access anyway, since other data management systems have some major advantages by then. Not that this architecture focuses on HPC applications regardless.)


7 minutes ago, Nystemy said:

I will try to keep this short... but having studied computer architecture design for over a decade makes one opinionated when it comes to oversimplified videos like these.

 

First off, IPC is a bit of an awkward term at the moment, since it collides with the other IPC, i.e. Inter-Process Communication, a topic that is fairly central to multi-threaded workloads. The instruction rate per cycle could use a different name than Instructions Per Cycle.

 

It would be nice if the LTT team actually disassembled their test applications to see what specialized instructions they rely upon, classing them by "workload" type. There are always lots of changes to how CPUs handle special instructions over time. Look at the AVX family, for example: there are three variants and AMD only supports two of them.


15 minutes ago, echos said:

It would be nice if the LTT team actually disassembled their test applications to see what specialized instructions they rely upon, classing them by "workload" type. There are always lots of changes to how CPUs handle special instructions over time. Look at the AVX family, for example: there are three variants and AMD only supports two of them.

There is far more than just AVX, to be fair.
Though a given application can be compiled for CPUs that consider even SSE1 a fancy new extension. This is something a lot of CPU manufacturers have to contend with: when a new feature gets made, it rarely gets used, due to the default settings in most compilers and such.

So running benchmarks on new CPUs rarely gives a good picture of the true uptick in performance; only as the software gets updated to make use of the new instructions will it actually impact performance the way the manufacturer had planned.
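As a small illustration of that compiler-default point (the compile commands in the comment are just examples): the same C source produces different machine code depending on the target flags, and you can see which instructions were actually emitted by disassembling the binary (e.g. with objdump -d). An old binary doesn't magically pick up new instructions.

```c
/* Example builds of the same source:
 *   gcc -O3 -march=x86-64  mul.c -o mul_baseline   (conservative SSE2-era baseline)
 *   gcc -O3 -march=native  mul.c -o mul_native     (AVX/AVX2/... if this CPU supports them)
 * Compare with: objdump -d mul_baseline  vs  objdump -d mul_native
 */
#include <stdio.h>

#define N 4096

float a[N], b[N], c[N];

int main(void) {
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 0.5f; }

    /* elementwise multiply: a loop the compiler is free to auto-vectorize,
       using whichever SIMD width the -march target allows */
    for (int i = 0; i < N; i++)
        c[i] = a[i] * b[i];

    printf("c[1] = %f, c[%d] = %f\n", c[1], N - 1, c[N - 1]);
    return 0;
}
```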


3 hours ago, echos said:

While this is simple and attempts to explain what makes a performance difference between CPU architectures, it ignores the broad topic of Instruction Set Architectures and misses how the evolution of CISC and RISC platforms has driven these performance increases.

 

I think this could have been better covered using Quake and its implementation of Fast Inverse Square Root. This is an example of computer code that went from a software routine to an actual hardware instruction. The other modern approach is Single Instruction, Multiple Data (SIMD).

 

I would invite you to check out Dave's Garage; he has a great breakdown of the science behind Quake's Fast Inverse Square Root.

ISA alone no longer seems to make a meaningful difference, as techniques used to improve throughput (superscalar, pipelining, OoO execution, SIMD, etc) are quite similar between even CISC and RISC architectures. And quite frankly, the line has heavily blurred as x86 has adopted internal ops more similar to RISC, while ARM has become more complex. 
 

At this point in time, I feel RISC vs CISC to be entirely moot. Design the ISA for the task to be done. Whatever it gets categorized as is irrelevant. 

My eyes see the past…

My camera lens sees the present…


20 hours ago, HairlessMonkeyBoy said:

 

This is exactly the kind of confusion that the inaccurate phrasing can cause.

 

To answer your question, @Finnegan1616, OCing your CPU will increase its performance proportionally (more or less) to the OC, but it cannot affect IPC. IPC can be thought of (more or less) as a property of the CPU and its specific architecture, and is independent of clock speed. IPC is useful when comparing different CPUs at the same clock speed, which it sounds like is what they did in the video with different generations of Ryzen CPUs (but I haven't watched it yet, so I could be wrong on that).

 

20 hours ago, Eigenvektor said:

Clock speed is a measure of how many cycles a CPU can process per second. IPC is a measure of how many instructions it can process per cycle. Both values combined give you an idea how fast the CPU is. So higher clock speed or higher IPC would both increase a CPU's overall performance.

 

However IPC is essentially a fixed value that depends on a CPU's architecture. This is something that can only be improved by the manufacturer with a newer/better architecture. Clock speed on the other hand is something you can influence by overclocking.

 

To make things more complicated, IPC can depend on the particular workload. For example, a CPU may be able to process two 32-bit integer additions per clock cycle, but only one 64-bit float multiplication. So the value you get for IPC varies depending on the software/benchmark being used and what instructions it uses.

 

Different CPU architectures can have different strengths and weaknesses regarding their IPC (e.g. one is better at float, the other better at integer math). So the speed difference between different architectures can vary depending on the particular workload that's used to benchmark them.

 

20 hours ago, Spindel said:

How the hell can a video like this even be needed in this day and age?

 

Thought the world realized GHz wasn’t THE important factor back when Athlon XP launched. 

I watched it when I got home

 

 

seems like performance = clock speed × IPC

 

 

 

in other words, saying one is more important is kind of flexing your ignorance, but these days almost all CPUs can clock to over 4 GHz, so you can sort of rely on IPC, assuming it's from a known company like Intel or AMD


First of all, I talked about this on the forums a year ago:

On 1/14/2021 at 3:15 AM, Vishera said:
On 1/13/2021 at 11:55 PM, porina said:

Architecture IPC doesn't change with clock,

I know that, it's a common misconception.

I know it well, I even went to the length of calculating the instructions per second for someone here on the forums:


 

On 12/3/2020 at 10:11 PM, Vishera said:
On 12/3/2020 at 9:59 PM, minibois said:

There is no one answer.

Let's assume a world where these two products can actually exist and let's assume the IPC (instruction per clock, 'what it can do per Hz') is the same:

 

it will still depend on the program.

Is the program heavily parallelized or not?

 

I think most programs would run faster on a 8Ghz 1 core CPU, just because it can do the tasks one after another. But some tasks don't require a single task to go one after another, but may need multiple tasks done at the same time.


Well, it's possible to answer it if you know the related equations.


IPC = X

8 GHz = 8,000 MHz
0.8 GHz = 800 MHz
1 MHz = 1 million hertz

80 cores @ 0.8 GHz:
800,000,000X × 80 = 64,000,000,000X instructions per second

1 core @ 8 GHz:
8,000,000,000X instructions per second


 


 

 

 

My comment was on the basis that in heavy instruction sequences like AVX, the instructions per clock are lower than for most other instructions.

On 1/13/2021 at 11:55 PM, porina said:

Power consumption is irrelevant to IPC.

True, I mixed up IPC and IPS (for a single core) there.

 

 

Second:

21 hours ago, Plouffe said:

There is a lot more to IPC than just clock speed

That's not accurate, or even outright wrong.

IPC is not single-core performance. That's a common misconception, and the root of the problem, yet it wasn't mentioned in the video at all!

 

IPC x Clock Speed = Instructions Per Second (for a single core)

And:

Instructions Per Second for a single core = Single Core Performance.
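Plugging the numbers from the quoted calculation above into that formula, as a quick sketch with the unknown IPC left as an assumed X = 1:

```c
#include <stdio.h>

int main(void) {
    double X = 1.0;                  /* IPC, left symbolic in the quoted calculation */

    double many = X * 0.8e9 * 80;    /* 80 cores @ 0.8 GHz */
    double one  = X * 8.0e9 * 1;     /*  1 core  @ 8.0 GHz */

    printf("80 x 0.8 GHz: %.0f billion instructions/s\n", many / 1e9);  /* 64 */
    printf(" 1 x 8.0 GHz: %.0f billion instructions/s\n", one / 1e9);   /*  8 */
    return 0;
}
```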

 

It's important to note that gaming is a mixed workload that benefits from both multi-threaded performance (to some degree) and single core performance.

 

Also, how can you make a video about the topic without mentioning multi-threaded performance, single-core performance, and instructions per second!?

A PC Enthusiast since 2011
AMD Ryzen 7 5700X@4.65GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2085MHz Memory 5000MHz
Cinebench R23: 15669cb | Unigine Superposition 1080p Extreme: 3566

18 hours ago, Zodiark1593 said:

ISA alone no longer seems to make a meaningful difference, as techniques used to improve throughput (superscalar, pipelining, OoO execution, SIMD, etc) are quite similar between even CISC and RISC architectures. And quite frankly, the line has heavily blurred as x86 has adopted internal ops more similar to RISC, while ARM has become more complex. 
 

At this point in time, I feel RISC vs CISC to be entirely moot. Design the ISA for the task to be done. Whatever it gets categorized as is irrelevant. 

It's also worth noting that implementations are much more important than the ISA itself, according to Jim Keller:

https://chipsandcheese.com/2021/07/13/arm-or-x86-isa-doesnt-matter/


15 hours ago, Finnegan1616 said:

 

in other words, saying one is more important is kind of flexing your ignorance, but these days almost all CPUs can clock to over 4 GHz, so you can sort of rely on IPC, assuming it's from a known company like Intel or AMD

My CPU only goes to 3.2 GHz and still beats most 4 GHz systems in single-threaded performance 😛


9 hours ago, Spindel said:

My CPU only goes to 3.2 GHz and still beats most 4 GHz systems in single-threaded performance 😛

what about most 5GHz systems...

 

 

also I don't see how saying that was relevant

