
Intel's TDP ratings for higher-end CPUs make no sense anymore

Ash_Kechummm

You wanna know why?

 

If MCE is enabled, disable MCE and watch how rigidly it stays at PL1 under extended workloads...

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1


4 hours ago, AndreiArgeanu said:

But benchmarks aren't done at base clock; most if not all are done where the CPU is boosting to the best of its ability and is not held back by the motherboard.

Boost is always held back by the motherboard; the boost settings are a system configuration, not a CPU configuration. There is a default boost configuration loaded into the CPU microcode, but this is overwritten as soon as you put the CPU into a motherboard that has a different default configuration, i.e. any Z-series motherboard after the 200 series, and possibly even that generation itself.

 

An i7 10700 in an HP EliteDesk will boost to 224W for 28 seconds, then drop down to 65W, under any benchmark, every single time. The same is true for Dell, Acer, or any other OEM PC targeted at general and office use. That means the vast majority (we're talking over 90% here) of i7 10700 CPUs will be operating with Intel's default boost specifications.

 

If you were to put an i7 10700K into these exact same systems, it would boost to 229W for 56 seconds, then drop down to 125W.
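The two default profiles described above can be sketched as a small helper (the wattages and Tau values are the ones quoted in this post, not guaranteed spec-sheet figures, and a real board can override all of them):

```python
# Sketch of Intel's two-level power limiting, using the figures quoted above:
# i7-10700 at 224 W PL2 for 28 s, then 65 W PL1; i7-10700K at 229 W for 56 s,
# then 125 W. These numbers come from the post, not an official datasheet.

def package_power_limit(t_seconds, pl1, pl2, tau):
    """Active package power limit t seconds into a sustained all-core workload."""
    return pl2 if t_seconds < tau else pl1

print(package_power_limit(10, pl1=65, pl2=224, tau=28))   # 224: still in PL2
print(package_power_limit(60, pl1=65, pl2=224, tau=28))   # 65: dropped to PL1
print(package_power_limit(60, pl1=125, pl2=229, tau=56))  # 125: the K keeps a higher floor
```

Same workload, same silicon family; only the configured PL1/PL2/Tau values differ between the two chips.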

 

Now, if you were to buy a cheap low-end gaming B- or H-series motherboard from the brands gamers know, those will most likely, and should, also use the Intel default boost configuration; if they do not, then those brands are being stupid and reckless, not Intel. Intel has no way to prevent this other than taking away the ability to configure boost settings, but my question to you is: do you really want that? Do you only ever want stock boost settings? You realize that would end all overclocking, since you would be power limited and the OC would only last for 56 seconds anyway.

 

So I put it to you and everyone else that we actually want to keep the ability to change boost settings, and you should be thanking Intel for allowing it, K-series CPU or not.


On 1/26/2021 at 9:29 AM, D13H4RD said:

You wanna know why?

 

If MCE is enabled, disable MCE and watch how rigidly it stays at PL1 under extended workloads...

I have an i7-4700 non-K, with MCE turned off (ASRock board). I've never had a problem with it. BUT...

 

Turn on XMP and it will.

 

I'm not sure if this is still the case, oh, 7 generations later and with an entirely different memory type, but what I often found was that trying to min-max performance was more likely to break the system in very specific situations (e.g. power management flips out, system won't POST without a hard power cycle) that weren't encountered often, but were very annoying when they were.

 

My thought is really that nobody should buy 14nm/10nm CPUs. I don't care if it's equivalent in performance to Samsung's/TSMC's 7nm; it clearly isn't in terms of TDP. When you're selling chips that need to be rock stable, and businesses like to save money by buying cheaper, power-efficient systems for employees who don't need a workstation-class CPU, these are just not viable options except to replace out-of-warranty desktops. Most good businesses switched completely over to laptops by late 2019, even before the pandemic started. Anyone still using a desktop has a business reason for it.


3 hours ago, Kisai said:

My thought is really that nobody should buy 14nm/10nm CPUs. I don't care if it's equivalent in performance to Samsung's/TSMC's 7nm; it clearly isn't in terms of TDP. When you're selling chips that need to be rock stable, and businesses like to save money by buying cheaper, power-efficient systems for employees who don't need a workstation-class CPU, these are just not viable options except to replace out-of-warranty desktops. Most good businesses switched completely over to laptops by late 2019, even before the pandemic started. Anyone still using a desktop has a business reason for it.

You say 14/10nm CPUs; you mean Intel, right? Just say it if you mean it. At this point I'd have to once again re-hash that TDP is not equal to power consumption. Both AMD and Intel CPUs can continuously exceed their TDP under stock (non-overclocked) operating conditions.

 

Is Intel 14nm at an efficiency (under load) disadvantage compared to AMD/TSMC 7nm? For sure, assuming the tests are kept like for like. I do wonder how much power difference they would really show over a day's use. The vast majority of the time the system will be near idle, so platform idle power probably outweighs any load usage. And focusing on the CPU doesn't really help when the monitor on the desk likely uses much more power. Let's not forget you can choose how efficiently a CPU runs. OEM systems like those businesses buy will usually run at lower sustained power and therefore much higher efficiency. Enthusiasts like many here don't care, and even overclock, making efficiency even worse.

 

It is harder to do a direct comparison on 10nm, since it is only really available in laptop models, and there you don't have the flexibility of choosing to run at higher power. I think it'll be interesting to keep watching this space; given Intel's announcement of 8-core models, at some point there will be head-to-head comparisons against the also recently announced Zen 3 models. Just gotta wait a bit for both to be purchasable.

 

Also, power usage and stability are not directly related. I'd go further: I have eliminated AMD Ryzen from my house. No doubt they're better in IPC and perf-per-watt terms, but the combination of software quirks and platform-level stability is much worse than Intel. Not to say I never have problems with Intel systems, but AMD is an order of magnitude worse (on both the CPU and GPU sides). Even with their process advantage for now, they're pushing way too close to the limit. Running sustained compute workloads on consumer-grade AMD hardware is a recipe for pain.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


7 minutes ago, porina said:

You say 14/10nm CPUs; you mean Intel, right? Just say it if you mean it. At this point I'd have to once again re-hash that TDP is not equal to power consumption. Both AMD and Intel CPUs can continuously exceed their TDP under stock (non-overclocked) operating conditions.

After Intel made clear it was going to outsource some CPUs to TSMC, that should have been the nail in the coffin of that process node for CPUs.

 

There is no reason to release CPUs, label them 65W, and then have them not operate like it.


1 minute ago, Kisai said:

After Intel made clear it was going to outsource some CPUs to TSMC, that should have been the nail in the coffin of that process node for CPUs.

They said something was going to TSMC; they haven't said what. I'm sure Intel wants to move on from 14nm, and we might finally see 10SF/10SFE become mainstream for desktop CPUs at the end of the year. Still, 14nm isn't going away until they've transitioned their manufacturing to 10 or 7nm. I think rumours implied lower-end CPUs might stick around on 14nm for a while longer, with only higher-performance CPUs going to the newer processes. We'll have to see how that works out.

 

1 minute ago, Kisai said:

There is no reason to release CPUs, label them 65W, and then have them not operate like it.

It doesn't matter how often you or others like you try to make TDP mean that, it doesn't. Not for Intel, not for AMD.



That 300W CPU should be illegal

The direction tells you... the direction

-Scott Manley, 2021

 

Softwares used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


1 hour ago, Kisai said:

There is no reason to release CPUs, label them 65W, and then have them not operate like it.

Except on a fully-stock Intel system running without enhancements like MCE, that's what it does...

 

If it has a thermal and power budget and the workload calls for it, it will boost to PL2 for a pre-determined amount of time (Tau), then drop down to PL1 and pretty much stay there...

 

Quote

PL1 is the effective long-term expected steady state power consumption of a processor. For all intents and purposes, the PL1 is usually defined as the TDP of a processor. So if the TDP is 80W, then PL1 is 80W.

PL2 is the short-term maximum power draw for a processor. This number is higher than PL1, and the processor goes into this state when a workload is applied, allowing the processor to use its turbo modes up to the maximum PL2 value. This means that if Intel has defined a processor with a series of turbo modes, they will only work when PL2 is the driving variable for maximum power consumption. Turbo does not work in PL1 mode.

Tau is a timing variable. It dictates how long a processor should stay in PL2 mode before hitting a PL1 mode. Note that Tau is not dependent on power consumption, nor is it dependent on the temperature of the processor (it is expected that if the processor hits a thermal limit, then a different set of super low voltage/frequency values are used and PL1/PL2 is discarded).

 

So let us go on a journey where a large workload is applied to a processor.

Firstly, it starts in PL2 mode. If a single-threaded workload is used, then we should hit the top turbo value as listed in the spec sheet. Normally the power consumption of a single core will be nowhere near the PL2 value of the entire chip. As we load up the cores, the processor reacts by reducing the turbo frequency in line with the per-core turbo values dictated by Intel. If the power consumption of the chip hits the PL2 value, then the frequency is adjusted so PL2 is never exceeded.

When the system has a substantial workload applied for a length of time, in this case ‘tau’ seconds, the firmware should immediately invoke PL1 as the new power limit. The turbo tables no longer apply, as those are PL2 only.

If the workload applied results in power consumption levels above PL1, then the frequency and voltages are adjusted such that the overall power consumption of the chip is within the PL1 value. This means that the whole processor reduces in frequency from its PL2 state to its PL1 state for the duration of the workload. This means that temperatures on the processor should decrease, increasing the longevity of the processor.

PL1 stays in place until the workload is removed and a CPU core hits an idle state for a fixed amount of time (usually sub 5-seconds). After this, the system can re-enable PL2 again if another workload is applied.

https://www.anandtech.com/show/13544/why-intel-processors-draw-more-power-than-expected-tdp-turbo
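The quoted behaviour can be modelled as a tiny state machine. This is a simplification under the assumptions in the quote (a fixed Tau, an idle re-arm window of roughly 5 seconds, and no thermal throttling or per-core turbo tables), purely to illustrate the transitions:

```python
# Minimal state-machine sketch of the PL1/PL2/Tau behaviour described in the
# AnandTech quote above. Thermal limits and per-core turbo bins are omitted;
# this only models the power-limit transitions over time.

class TurboState:
    def __init__(self, pl1, pl2, tau, idle_reset=5.0):
        self.pl1, self.pl2, self.tau = pl1, pl2, tau
        self.idle_reset = idle_reset   # seconds of idle needed to re-arm PL2
        self.busy_time = 0.0
        self.idle_time = 0.0

    def step(self, loaded, dt=1.0):
        """Advance the model by dt seconds; return the active power limit."""
        if loaded:
            self.busy_time += dt
            self.idle_time = 0.0
        else:
            self.idle_time += dt
            if self.idle_time >= self.idle_reset:
                self.busy_time = 0.0   # PL2 budget re-armed after idle
        return self.pl2 if self.busy_time <= self.tau else self.pl1

# 40 s of sustained load on a 65 W-TDP part at the defaults quoted earlier:
cpu = TurboState(pl1=65, pl2=224, tau=28)
limits = [cpu.step(loaded=True) for _ in range(40)]
print(limits[0], limits[-1])   # 224 65: PL2 for Tau seconds, then PL1
```

After the load is removed for the idle window, another call with `loaded=True` would start back at PL2, matching the "re-enable PL2 again" step in the quote.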



7 hours ago, porina said:

It doesn't matter how often you or others like you try to make TDP mean that, it doesn't. Not for Intel, not for AMD.

Last I checked, the laws of thermodynamics are still true, so power consumption ≈ heat generation, and that heat will eventually be dissipated

 

I only made this post because I like my PCs to be efficient and quiet, and I wasn't expecting Intel's chips to output literally 3.3x more heat than the number on the box labeled "thermal design power" would suggest. I didn't want to start a pointless argument

 calm down y'all


43 minutes ago, Ash_Kechummm said:

Last I checked, the laws of thermodynamics are still true, so power consumption ≈ heat generation, and that heat will eventually be dissipated

That's not the issue. The TDP spec on CPUs does not represent power consumption, and that's not what it is used for. Thermal Design Power (TDP) is no more than what it literally stands for; TDP != power usage.

 

So of course, if you buy a motherboard that changes the long-term boost configuration of the CPU, power draw is not even going to be close to the TDP on the CPU spec sheet, but that has little to do with Intel, or the figures being incorrect, or any other such thing.

 

It's really no better than complaining that the CPU runs hot while you're also overclocking it: you've changed the running parameters that were used to define the TDP of the CPU, so there's no way it could be the same. Put the parameters back to those that were used and you will find long-term power usage is the same as, or similar to, the TDP. But then you also have to remember the CPU will boost higher than that as well, though only short-term (again, if left at Intel's default configuration).

 

I don't think anyone is really arguing; it is what it is, and it's worth explaining.


On 1/26/2021 at 7:59 PM, leadeater said:

Boost is always held back by the motherboard, the boost settings are a system configuration not a CPU configuration. There is a default boost configuration loaded in to the CPU microcode but this is overwritten as soon as you put in to a motherboard

Is this an Intel thing? Because my Ryzen CPU pretty much adheres to the 75W (or 65?) limit... 

 

I understand that you can override these limits by overclocking, but generally, shouldn't a chip adhere to these numbers at default settings? It's, after all, also used to calculate what PSU you need; if the chip would just willy-nilly use more power, well, that's not good... 

 



32 minutes ago, Mark Kaine said:

Is this an Intel thing? Because my Ryzen CPU pretty much adheres to the 75W (or 65?) limit... 

You can change it on AMD as well; no motherboards I am aware of change it by default, however. I think the reason is that AMD considers a change to the power states of the CPU an overclock, which technically invalidates the warranty, whereas Intel does not consider such a change an OC, so it does not invalidate the warranty.

 

32 minutes ago, Mark Kaine said:

I understand that you can overwrite these limits by overclocking

For Intel, changing these is not an overclock. Say, for example, I want to change Tau from 28 seconds to 3 minutes: I would not have touched anything to do with clocks or core multipliers at all; all I've done is extend the length of time allowed in the PL2 power state.

 

32 minutes ago, Mark Kaine said:

It's, after all, also used to calculate what PSU you need; if the chip would just willy-nilly use more power, well, that's not good... 

TDP is not used to calculate the PSU you need. Under default Intel power settings, a 10700 non-K will boost to 224W, which is far higher than 65W. If you make a PSU selection based on 65W, then you'll be in danger of tripping over-power or over-current protection when the CPU boosts to 224W for 28 seconds. This is the very same reason a lot of people had problems with RTX 30 series cards and computers crashing: the dynamic boost and dynamic power of modern CPUs and GPUs mean there is no fixed amount of power they use, and they are capable of peaking much higher than any stated TDP or TGP, as those only bear relation to cooling design, not power-circuitry requirements.

 

To correctly size a PSU you need to total up the maximum peak power and current allowed for every component in the system, and buy a PSU rated above this total, or you're at risk of a protection shutdown event when you exceed the capabilities of the PSU.
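As a rough sketch of that sizing rule: sum the peak draw each component is allowed, not the TDP/TGP on the box. The component figures below are illustrative assumptions (the CPU number is the PL2 quoted earlier in the thread; the GPU spike is the sort of transient discussed for an RTX 3090), not measurements:

```python
# Hedged PSU-sizing sketch: total the *peak* power budget of each component.
# All figures are illustrative examples taken from or inspired by the thread.

peak_draw_watts = {
    "cpu_pl2":     224,   # i7-10700 at Intel default PL2, per the post
    "gpu_spike":   450,   # transient spikes reported on high-end GPUs
    "motherboard":  60,   # assumed budget for VRM losses, chipset, USB, RAM
    "drives_fans":  30,   # assumed budget for storage, fans, RGB
}

total_peak = sum(peak_draw_watts.values())
print(total_peak)   # 764: pick a PSU comfortably above this, not above the TDPs
```

Sizing against the 65 W + 350 W box numbers instead would suggest a much smaller unit, which is exactly how transient spikes end up tripping over-current protection.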

 

If you delve into the more technical power aspects of GPUs and CPUs, they have maximum allowed current parameters which are used for device protection and which bound the maximum peak power they can draw. These can often be very high; you'll see power spikes on something like an RTX 3090 up around the 450W+ mark (even higher on more extreme OC designs), and that's actually far lower than the configured maximum current for device protection. As a reminder, the TDP of an RTX 3090 is 350W.

 

To give an AMD AM4 example read below:

Quote

Package Power Tracking (“PPT”): The PPT threshold is the allowed socket power consumption permitted across the voltage rails supplying the socket. Applications with high thread counts, and/or “heavy” threads, can encounter PPT limits that can be alleviated with a raised PPT limit.

  1. Default for Socket AM4 is at least 142W on motherboards rated for 105W TDP processors.
  2. Default for Socket AM4 is at least 88W on motherboards rated for 65W TDP processors.

Thermal Design Current (“TDC”): The maximum current (amps) that can be delivered by a specific motherboard’s voltage regulator configuration in thermally-constrained scenarios.

  1. Default for socket AM4 is at least 95A on motherboards rated for 105W TDP processors.
  2. Default for socket AM4 is at least 60A on motherboards rated for 65W TDP processors.

Electrical Design Current (“EDC”): The maximum current (amps) that can be delivered by a specific motherboard’s voltage regulator configuration in a peak (“spike”) condition for a short period of time.

  1. Default for socket AM4 is 140A on motherboards rated for 105W TDP processors.
  2. Default for socket AM4 is 90A on motherboards rated for 65W TDP processors.

https://www.gamersnexus.net/guides/3491-explaining-precision-boost-overdrive-benchmarks-auto-oc
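The quoted AM4 defaults also follow AMD's usual PPT ≈ 1.35 × TDP relationship (142 W for 105 W parts, 88 W for 65 W parts). A small lookup sketch of the figures in the quote:

```python
# AM4 default socket limits per the GamersNexus quote above.
# Keyed by TDP class; values are (PPT watts, TDC amps, EDC amps).

AM4_DEFAULTS = {
    105: (142, 95, 140),
    65:  (88, 60, 90),
}

def socket_limits(tdp_watts):
    """Default (PPT, TDC, EDC) for an AM4 board rated for this TDP class."""
    return AM4_DEFAULTS[tdp_watts]

ppt, tdc, edc = socket_limits(65)
print(ppt)                  # 88: sustained package power, not the 65 W TDP
print(round(ppt / 65, 2))   # 1.35: the usual PPT-to-TDP ratio
```

So a "65 W" Ryzen is allowed 88 W at the socket before any overclocking enters the picture, which is the point being made below.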

 

AMD Zen CPUs actually have more dynamic clocks and power, much like a GPU, than Intel's do, and are the least likely to have a standardized amount of power draw, as the CPUs constantly look at core and package temperatures, voltages, current, and load, and boost to the highest clocks possible within the AMD algorithm. So where an Intel 65W CPU using Intel power settings will use 65W sustained, an AMD 65W CPU under default power settings will not use 65W sustained and will be much closer to 88W.

 

So if we were to look solely at the default power settings of Intel and AMD, it's actually AMD that has the least representative power usage compared to TDP. PPT is the more correct product specification for AMD if you want to know how much power a CPU will use, but you must also account for TDC and EDC, which permit peaks beyond what PPT alone implies.

 

This is why people are saying TDP has nothing to do with power usage as both Intel and AMD have product specifications for that.

  • AMD: PPT, TDC and EDC
  • Intel: PL1, PL2, Tau (and some more but you get the point)

 

There's no reason to try and extrapolate power usage from TDP for Intel or AMD CPUs as there are direct product specs for that.


TDP with Intel = averages

TDP with AMD = average on load

Lake-V-X6-10600 (Gaming PC)

R23 score MC: 9190pts | R23 score SC: 1302pts

R20 score MC: 3529cb | R20 score SC: 506cb

Spoiler

Case: Cooler Master HAF XB Evo Black / Case Fan(s) Front: Noctua NF-A14 ULN 140mm Premium Fans / Case Fan(s) Rear: Corsair Air Series AF120 Quiet Edition (red) / Case Fan(s) Side: Noctua NF-A6x25 FLX 60mm Premium Fan / Controller: Sony Dualshock 4 Wireless (DS4Windows) / Cooler: Cooler Master Hyper 212 Evo / CPU: Intel Core i5-10600, 6-cores, 12-threads, 4.4/4.8GHz, 13,5MB cache (Intel 14nm++ FinFET) / Display: ASUS 24" LED VN247H (67Hz OC) 1920x1080p / GPU: Gigabyte Radeon RX Vega 56 Gaming OC @1501MHz (Samsung 14nm FinFET) / Keyboard: Logitech Desktop K120 (Nordic) / Motherboard: ASUS PRIME B460 PLUS, Socket-LGA1200 / Mouse: Razer Abyssus 2014 / PCI-E: ASRock USB 3.1/A+C (PCI Express x4) / PSU: EVGA SuperNOVA G2, 850W / RAM A1, A2, B1 & B2: DDR4-2666MHz CL13-15-15-15-35-1T "Samsung 8Gbit C-Die" (4x8GB) / Operating System: Windows 10 Home / Sound: Zombee Z300 / Storage 1 & 2: Samsung 850 EVO 500GB SSD / Storage 3: Seagate® Barracuda 2TB HDD / Storage 4: Seagate® Desktop 2TB SSHD / Storage 5: Crucial P1 1000GB M.2 SSD/ Storage 6: Western Digital WD7500BPKX 2.5" HDD / Wi-fi: TP-Link TL-WN851N 11n Wireless Adapter (Qualcomm Atheros)

Zen-II-X6-3600+ (Gaming PC)

R23 score MC: 9893pts | R23 score SC: 1248pts @4.2GHz

R23 score MC: 10151pts | R23 score SC: 1287pts @4.3GHz

R20 score MC: 3688cb | R20 score SC: 489cb

Spoiler

Case: Medion Micro-ATX Case / Case Fan Front: SUNON MagLev PF70251VX-Q000-S99 70mm / Case Fan Rear: Fanner Tech(Shen Zhen)Co.,LTD. 80mm (Purple) / Controller: Sony Dualshock 4 Wireless (DS4Windows) / Cooler: AMD Near-silent 125w Thermal Solution / CPU: AMD Ryzen 5 3600, 6-cores, 12-threads, 4.2/4.2GHz, 35MB cache (T.S.M.C. 7nm FinFET) / Display: HP 24" L2445w (64Hz OC) 1920x1200 / GPU: MSI GeForce GTX 970 4GD5 OC "Afterburner" @1450MHz (T.S.M.C. 28nm) / GPU: ASUS Radeon RX 6600 XT DUAL OC RDNA2 32CUs @2607MHz (T.S.M.C. 7nm FinFET) / Keyboard: HP KB-0316 PS/2 (Nordic) / Motherboard: ASRock B450M Pro4, Socket-AM4 / Mouse: Razer Abyssus 2014 / PCI-E: ASRock USB 3.1/A+C (PCI Express x4) / PSU: EVGA SuperNOVA G2, 550W / RAM A2 & B2: DDR4-3600MHz CL16-18-8-19-37-1T "SK Hynix 8Gbit CJR" (2x16GB) / Operating System: Windows 10 Home / Sound 1: Zombee Z500 / Sound 2: Logitech Stereo Speakers S-150 / Storage 1 & 2: Samsung 850 EVO 500GB SSD / Storage 3: Western Digital My Passport 2.5" 2TB HDD / Storage 4: Western Digital Elements Desktop 2TB HDD / Storage 5: Kingston A2000 1TB M.2 NVME SSD / Wi-fi & Bluetooth: ASUS PCE-AC55BT Wireless Adapter (Intel)

Vishera-X8-9370 | R20 score MC: 1476cb

Spoiler

Case: Cooler Master HAF XB Evo Black / Case Fan(s) Front: Noctua NF-A14 ULN 140mm Premium Fans / Case Fan(s) Rear: Corsair Air Series AF120 Quiet Edition (red) / Case Fan(s) Side: Noctua NF-A6x25 FLX 60mm Premium Fan / Case Fan VRM: SUNON MagLev KDE1209PTV3 92mm / Controller: Sony Dualshock 4 Wireless (DS4Windows) / Cooler: Cooler Master Hyper 212 Evo / CPU: AMD FX-8370 (Base: @4.4GHz | Turbo: @4.7GHz) Black Edition Eight-Core (Global Foundries 32nm) / Display: ASUS 24" LED VN247H (67Hz OC) 1920x1080p / GPU: MSI GeForce GTX 970 4GD5 OC "Afterburner" @1450MHz (T.S.M.C. 28nm) / GPU: Gigabyte Radeon RX Vega 56 Gaming OC @1501MHz (Samsung 14nm FinFET) / Keyboard: Logitech Desktop K120 (Nordic) / Motherboard: MSI 970 GAMING, Socket-AM3+ / Mouse: Razer Abyssus 2014 / PCI-E: ASRock USB 3.1/A+C (PCI Express x4) / PSU: EVGA SuperNOVA G2, 850W PSU / RAM 1, 2, 3 & 4: Corsair Vengeance DDR3-1866MHz CL8-10-10-28-37-2T (4x4GB) 16.38GB / Operating System 1: Windows 10 Home / Sound: Zombee Z300 / Storage 1: Samsung 850 EVO 500GB SSD (x2) / Storage 2: Seagate® Barracuda 2TB HDD / Storage 3: Seagate® Desktop 2TB SSHD / Wi-fi: TP-Link TL-WN951N 11n Wireless Adapter

Godavari-X4-880K | R20 score MC: 810cb

Spoiler

Case: Medion Micro-ATX Case / Case Fan Front: SUNON MagLev PF70251VX-Q000-S99 70mm / Case Fan Rear: Fanner Tech(Shen Zhen)Co.,LTD. 80mm (Purple) / Controller: Sony Dualshock 4 Wireless (DS4Windows) / Cooler: AMD Near-silent 95w Thermal Solution / Cooler: AMD Near-silent 125w Thermal Solution / CPU: AMD Athlon X4 860K Black Edition Elite Quad-Core (T.S.M.C. 28nm) / CPU: AMD Athlon X4 880K Black Edition Elite Quad-Core (T.S.M.C. 28nm) / Display: HP 19" Flat Panel L1940 (75Hz) 1280x1024 / GPU: EVGA GeForce GTX 960 SuperSC 2GB (T.S.M.C. 28nm) / GPU: MSI GeForce GTX 970 4GD5 OC "Afterburner" @1450MHz (T.S.M.C. 28nm) / Keyboard: HP KB-0316 PS/2 (Nordic) / Motherboard: MSI A78M-E45 V2, Socket-FM2+ / Mouse: Razer Abyssus 2014 / PCI-E: ASRock USB 3.1/A+C (PCI Express x4) / PSU: EVGA SuperNOVA G2, 550W PSU / RAM 1, 2, 3 & 4: SK hynix DDR3-1866MHz CL9-10-11-27-40 (4x4GB) 16.38GB / Operating System 1: Ubuntu Gnome 16.04 LTS (Xenial Xerus) / Operating System 2: Windows 10 Home / Sound 1: Zombee Z500 / Sound 2: Logitech Stereo Speakers S-150 / Storage 1: Samsung 850 EVO 500GB SSD (x2) / Storage 2: Western Digital My Passport 2.5" 2TB HDD / Storage 3: Western Digital Elements Desktop 2TB HDD / Wi-fi: TP-Link TL-WN851N 11n Wireless Adapter

Acer Aspire 7738G custom (changed CPU, GPU & Storage)
Spoiler

CPU: Intel Core 2 Duo P8600, 2-cores, 2-threads, 2.4GHz, 3MB cache (Intel 45nm) / GPU: ATi Radeon HD 4570 515MB DDR2 (T.S.M.C. 55nm) / RAM: DDR2-1066MHz CL7-7-7-20-1T (2x2GB) / Operating System: Windows 10 Home / Storage: Crucial BX500 480GB 3D NAND SATA 2.5" SSD

Complete portable device SoC history:

Spoiler
Apple A4 - Apple iPod touch (4th generation)
Apple A5 - Apple iPod touch (5th generation)
Apple A9 - Apple iPhone 6s Plus
HiSilicon Kirin 810 (T.S.M.C. 7nm) - Huawei P40 Lite / Huawei nova 7i
Mediatek MT2601 (T.S.M.C 28nm) - TicWatch E
Mediatek MT6580 (T.S.M.C 28nm) - TECNO Spark 2 (1GB RAM)
Mediatek MT6592M (T.S.M.C 28nm) - my|phone my32 (orange)
Mediatek MT6592M (T.S.M.C 28nm) - my|phone my32 (yellow)
Mediatek MT6735 (T.S.M.C 28nm) - HMD Nokia 3 Dual SIM
Mediatek MT6737 (T.S.M.C 28nm) - Cherry Mobile Flare S6
Mediatek MT6739 (T.S.M.C 28nm) - my|phone myX8 (blue)
Mediatek MT6739 (T.S.M.C 28nm) - my|phone myX8 (gold)
Mediatek MT6750 (T.S.M.C 28nm) - honor 6C Pro / honor V9 Play
Mediatek MT6765 (T.S.M.C 12nm) - TECNO Pouvoir 3 Plus
Mediatek MT6797D (T.S.M.C 20nm) - my|phone Brown Tab 1
Qualcomm MSM8926 (T.S.M.C. 28nm) - Microsoft Lumia 640 LTE
Qualcomm MSM8974AA (T.S.M.C. 28nm) - Blackberry Passport
Qualcomm SDM710 (Samsung 10nm) - Oppo Realme 3 Pro

 


18 minutes ago, leadeater said:

You can also change it on AMD as well, no motherboards I am aware of by default change it however.

Yeah, that's what I thought, because my Ryzen doesn't go over a certain limit (around 75W), and it doesn't "spike" either, at least not measurably by common software. 

 

Haven't noticed it with my 3070 either; the max I got was 273W (from the target of 270W). It is, however, supposed to draw up to 300W momentarily... And it can't be much more for both CPU and GPU, because I used both with a 500W PSU for a while with no crashes or anything, and I tested it quite a bit. I just didn't OC, because I knew I must be close to the limit of what the PSU could do. 

 

24 minutes ago, leadeater said:

There's no reason to try and extrapolate power usage from TDP for Intel or AMD CPUs as there are direct product specs for that.

Ok I see, it's just I always thought it's 75w because that's the max I've ever seen for my cpu... 

 

[attachment: 20210130_085119.jpg]

 

"idle" 

 

 

[attachment: 20210130_085149.jpg]

 

"after 15 seconds of prime95, because I'm a wuzz, and I already know it's not gonna draw more" 😂

 

PS: of course this is assuming the software shows the actual power draw, which it should, IMO! 



12 minutes ago, Mark Kaine said:

Ok I see, it's just I always thought it's 75w because that's the max I've ever seen for my cpu... 

That's about right. As for the peaks, software like HWMonitor can't see them, as they happen on an extremely small time scale, much faster than HWMonitor can sample.

 

Did you know that for every 5°C you lower your temperature, clocks will increase by about 25MHz? If you can get your temperatures down, your clocks and power will actually go up. AMD's boosting and power is pretty awesome like that 😀
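The rough scaling described here (about 25 MHz of extra boost per 5°C of temperature drop, an approximation from this post rather than an AMD spec) works out like this:

```python
# Rough sketch of the temperature-to-boost scaling quoted above:
# ~25 MHz of extra boost headroom per 5 °C of temperature reduction.
# This ratio comes from the post, not from AMD documentation.

def extra_boost_mhz(temp_drop_c):
    """Approximate extra boost clock gained from cooling the CPU down."""
    return (temp_drop_c / 5) * 25

print(extra_boost_mhz(10))   # a 10 °C improvement buys roughly 50 MHz
```

So going from, say, a 75°C cooler to a 65°C one is worth only a couple of boost bins, which is why the gains flatten out for everyday cooling.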


3 minutes ago, leadeater said:

Did you know that for every 5°C you lower your temperature, clocks will increase by about 25MHz? If you can get your temperatures down, your clocks and power will actually go up. AMD's boosting and power is pretty awesome like that 

That's similar to Nvidia's GPU Boost... but I've never actually seen it go over 4199 (which I suppose is meant to be 4200), also tested with Ryzen Master... seems there's a "hard lock" there? Never tried to OC this CPU though... too much stress, and it does what it says on the box! 😛



3 minutes ago, Mark Kaine said:

That's similar to Nvidia's GPU Boost... but I've never actually seen it go over 4199 (which I suppose is meant to be 4200), also tested with Ryzen Master... seems there's a "hard lock" there? Never tried to OC this CPU though... too much stress, and it does what it says on the box! 😛

I don't expect you to run out and get an LN2 pot but

 

amd-ryzen-3900x_cold-scale_all.png

https://www.gamersnexus.net/news-pc/3492-ryzen-cpu-thermals-matter-coolers-and-cases

 

Was quite interesting to see this, even though once you get below 55°C it's really not applicable to the average user anymore lol.


I have a Core i5-4440 and a Xeon E3-1280 v3.

 

The i5 is 'rated' for 84W, the Xeon for 82W, according to Intel's website.

 

My i5 ran just fine with the stock Intel cooler. I had assumed that, since both CPUs are based on the same architecture, the TDP rating between the two would be an apples-to-apples comparison... but when I put the stock cooler on the Xeon, it thermal throttled.

 

Granted, there may have been a mounting problem; the stock Intel cooler was GARBAGE back in 1853 when it was introduced, but... I'm guessing one of the plastic clips wasn't engaging, and no matter what I did, I couldn't 'fix' the mount. So I threw a Noctua NH-D15S on it, and it runs quite happily with that config.

 

On the flip side, I have a second NH-D15S on my Ryzen 9 3900XT. When I pull that CPU out and toss in a 5900X or a 5950X, I can be pretty sure ahead of time that the same cooler will work just fine on the new CPU, since they both have a 105W TDP.

 

Not sure what I'm going to do with the GPU. It's probably going to require watercooling. >_>

"Don't fall down the hole!" ~James, 2022

 

"If you have a monitor, look at that monitor with your eyeballs." ~ Jake, 2022


6 minutes ago, leadeater said:

I don't expect you to run out and get an LN2 pot but

 

amd-ryzen-3900x_cold-scale_all.png

https://www.gamersnexus.net/news-pc/3492-ryzen-cpu-thermals-matter-coolers-and-cases

 

Was quite interesting to see this, even though once you get below 55°C it's really not applicable to the average user anymore lol.

Unless they're making LN2 AIOs.... not a chance! 


7 hours ago, leadeater said:

Did you know that for every 5°C you lower your temperatures, clocks will increase by about 25MHz?

well, color me surprised... 

 

I recently (yesterday) changed my CPU settings from 85% minimum / 100% max to 5% minimum / 100% max, because I know for certain the "Ryzen 1.5V bug" is fixed in newer versions of Windows. I also got a new PSU (650W instead of 500W), but nothing else was changed (still running the "Windows balanced" power plan too) and... (after playing ROTTR) 

 

20210130_165311.thumb.jpg.3f6f00229fd27af408d38a5eebc5bca0.jpg

 

New record! 😂

 

those temps tho. OOF. 🤔


On 1/26/2021 at 11:11 PM, MageTank said:

This is absolutely correct. We recently got an MSI Z490 board in the lab that had stock power and current limits of 4096W, which blew my mind. Normally you'd see limits imposed to prevent damage from overclockers that didn't know what they were doing, or from an errant spike in power delivery, but to ship boards with no power/current limit in place as the default configuration is absolutely silly. The funny part is, when you install a CPU for the first time, you are greeted with a message that states "CPU changed, Press F1 to load Overclock, F2 to load defaults", or something along those lines, and both options have unlimited power/current limits with infinite boost durations.

 

Board partners are doing anything and everything they can to squeeze out points on benchmarks, and it's starting to make me sick. We've seen it in the past few years with those sketchy "performance modes" in the BIOS that favored specific benchmarks, and now we are seeing them go further down this rabbit hole with tweaks like this that could actually damage a customer's hardware if left unchecked.

Things are very different from the days when motherboard manufacturers had to put some actual engineering effort into their boards to have them perform their best. I've got a few very old ones that exemplify this, the best being an Asus board which makes any OS feel lightning fast despite slower I/O and inferior CPU support compared to any of my other boards 1-2 years newer (one of them makes the CPU perform like one a few price brackets down; it is terrible).
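That 4096W figure isn't arbitrary, by the way: with the common 1/8 W register granularity, 0x7FFF × 0.125 ≈ 4096W is simply the largest power limit the package power-limit register can encode, i.e. "effectively unlimited". Here's a sketch of how those limits are laid out per the publicly documented MSR_PKG_POWER_LIMIT (0x610) format; the raw value and the 1/8 W / 1/1024 s units below are illustrative assumptions (real units come from MSR_RAPL_POWER_UNIT, 0x606), not a dump from any actual board:

```python
# Decode the MSR_PKG_POWER_LIMIT (0x610) layout documented in the Intel SDM.
# Assumes the common RAPL units (1/8 W power, 1/1024 s time); the raw value
# below is a hypothetical i7-10700K-style config (PL1 125 W, PL2 229 W,
# tau 56 s), constructed for illustration.
POWER_UNIT_W = 0.125     # from MSR_RAPL_POWER_UNIT, typically 1/8 W
TIME_UNIT_S = 1 / 1024   # from MSR_RAPL_POWER_UNIT, typically ~1 ms

def decode_pkg_power_limit(raw: int):
    pl1_w = (raw & 0x7FFF) * POWER_UNIT_W          # bits 14:0
    pl2_w = ((raw >> 32) & 0x7FFF) * POWER_UNIT_W  # bits 46:32
    y = (raw >> 17) & 0x1F                         # tau exponent, bits 21:17
    z = (raw >> 22) & 0x3                          # tau mantissa, bits 23:22
    tau_s = (2 ** y) * (1 + z / 4) * TIME_UNIT_S   # PL1 time window
    return pl1_w, pl2_w, tau_s

raw = (0x8728 << 32) | 0x00DE83E8
print(decode_pkg_power_limit(raw))  # (125.0, 229.0, 56.0)
# Max encodable limit: 0x7FFF * 0.125 W ~= 4096 W, exactly the "unlimited"
# default those enthusiast boards program.
```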

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


This is a traditional problem whenever marketers get near science of any kind: medicine, automobiles, anything. It happens everywhere. Most often the government needs to step in and mandate a standard; that had to happen several times just for the auto industry. 

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


TDP: Thermal Design Power (sometimes "Point"), based on an average workload for Intel, and on the max P-state for AMD.

 

It is not meant as a power-consumption figure; however, you can translate it to BTU/h, the rate at which a cooler is expected to dissipate heat.

 

The advertised TDP is not intended to be read as power consumption, because most of the time a processor will not actually consume the 105W it is rated for. 

 

So, for example, an AMD processor producing 105W of energy in the form of heat would be dissipating about 358 BTU/h, and that is the recommended design specification for which cooler you should use in your system.

 

Just because a CPU is USING 105W of power does NOT mean that it is dissipating the equivalent of that power as heat. 

 

Hope that helps some of you understand that the wattage rating is actually energy dissipated over time in the form of heat.
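For anyone wanting to sanity-check the 358 BTU/h figure, the conversion is a single multiplication (1 W ≈ 3.412 BTU/h):

```python
# Convert a heat load in watts to BTU per hour (1 W = 3.412142 BTU/h).
BTU_PER_HOUR_PER_WATT = 3.412142

def watts_to_btu_per_hour(watts: float) -> float:
    return watts * BTU_PER_HOUR_PER_WATT

# The 105 W example from the post above:
print(round(watts_to_btu_per_hour(105)))  # 358
```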

 

 

 

 


On 1/26/2021 at 2:29 AM, leadeater said:

This has to be repeated numerous times for whatever reason, because it just doesn't seem to stick in people's minds: TDP specifications are for cooling designs, and coolers are designed for the target device and configuration.

Oh here it is. Best answer.^^

Watts convert to BTU/h. 

 

Super simple. 

 

https://www.rapidtables.com/convert/power/Watt_to_BTU.html


On 1/26/2021 at 11:39 AM, CarlBar said:

OP:

 

07B89120-B48D-45FB-AF1D-49AF6CD16790.jpg

 

This has been a thing with Intel for ages. It's like saying the sky is blue at this point.

Came in here looking for the "It always was" meme but this is close enough

MOAR COARS: 5GHz "Confirmed" Black Edition™ The Build
AMD 5950X 4.7/4.6GHz All Core Dynamic OC + 1900MHz FCLK | 5GHz+ PBO | ASUS X570 Dark Hero | 32 GB 3800MHz 14-15-15-30-48-1T GDM 8GBx4 |  PowerColor AMD Radeon 6900 XT Liquid Devil @ 2700MHz Core + 2130MHz Mem | 2x 480mm Rad | 8x Blacknoise Noiseblocker NB-eLoop B12-PS Black Edition 120mm PWM | Thermaltake Core P5 TG Ti + Additional 3D Printed Rad Mount

 

