NVIDIA Cheating with 3090 and 40 Series. Also Intel with the 12900KS

TECHNOMANCER303

Has anyone noticed the power draw of recent chips? And lack of efficiency? Seems backwards and I'm rooting for AMD. 

 

AMD consistently uses less wattage while performing at or only slightly below both Nvidia and Intel.

 

 

 

PS: I've been using team Blue and Green for years now.


Performance comes at a cost, and it's not linear. I don't see how that would be cheating. A new generation doesn't automatically imply lower power use, and since the 4000 series doesn't exist yet, we can only speculate about whether the increase is reasonable.

 

If AMD can deliver the same performance at less power consumption, then more power to AMD. That's the great thing about competition: if one does well, it'll incentivise the other side to do better.

Crystal: CPU: i7 7700K | Motherboard: Asus ROG Strix Z270F | RAM: GSkill 16 GB@3200MHz | GPU: Nvidia GTX 1080 Ti FE | Case: Corsair Crystal 570X (black) | PSU: EVGA Supernova G2 1000W | Monitor: Asus VG248QE 24"

Laptop: Dell XPS 13 9370 | CPU: i5 10510U | RAM: 16 GB

Server: CPU: i5 4690k | RAM: 16 GB | Case: Corsair Graphite 760T White | Storage: 19 TB


It's not "cheating," it's just a design choice. 

 

I do find it sort of amusing that power efficiency was something people always brought up as a reason to buy an Nvidia card when they had the edge there, but nobody brings it up now in discussions of which card to get.

 

Nvidia made a (probably correct) calculation that nobody really cares about power draw if throwing more juice at the problem results in suitably eye-popping performance numbers to be touted. 

 

The truth is many people secretly or not-so-secretly like the idea of having a power-hungry card that needs a gigantor PSU, as long as they think it's getting them the "best" performance by whatever tiny margin. I've talked to people on subreddits and forums who would insist on buying overkill PSUs purely for e-peen and become actively hostile if you tried to show them power consumption calculators proving it was a waste.

Corps aren't your friends. "Bottleneck calculators" are BS. Only suckers buy based on brand. It's your PC, do what makes you happy.  If your build meets your needs, you don't need anyone else to "rate" it for you. And talking about being part of a "master race" is cringe. Watch this space for further truths people need to hear.

 

Ryzen 7 5800X3D | ASRock X570 PG Velocita | PowerColor Red Devil RX 6900 XT | 4x8GB Crucial Ballistix 3600mt/s CL16


They are efficient; it's just that in order to get the most performance out of them, they need to push the power limits well past the peak of the efficiency curve. There's a reason why undervolting Nvidia cards is so popular: you can decrease the power draw by 20%+ while only dropping performance 2-3%.
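
If you'd rather script that than drag sliders in Afterburner, the closest knob you can automate is the driver's power limit through NVML. A rough sketch assuming the nvidia-ml-py package (this is power limiting, a cousin of a true undervolt, and setting the limit needs admin/root):

```python
# Sketch: read a GPU's power limit and cut it ~20% via NVML.
# Assumes the nvidia-ml-py package (pip install nvidia-ml-py);
# the 0.8 factor is just the 20% figure mentioned above.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

stock_mw = pynvml.nvmlDeviceGetPowerManagementLimit(gpu)  # milliwatts
draw_mw = pynvml.nvmlDeviceGetPowerUsage(gpu)             # current draw, mW
print(f"limit {stock_mw / 1000:.0f} W, drawing {draw_mw / 1000:.0f} W")

target_mw = int(stock_mw * 0.8)                           # ~20% power cut
lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
target_mw = max(lo, min(hi, target_mw))                   # clamp to allowed range
pynvml.nvmlDeviceSetPowerManagementLimit(gpu, target_mw)  # needs elevation

pynvml.nvmlShutdown()
```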

 

Intel is the same way; their efficiency is actually basically identical to AMD's right now. Look at the 12400: it performs very similarly to the similarly positioned 5600X, load power consumption is basically identical, and idle consumption is slightly better than the 5600X's. Intel's top-end SKUs just scale with more power consumption, so in order to get the performance crown, they need to raise the power limits.

 

It would be nice to see stuff a little more efficient, especially in some areas, but with power being cheap where I live and me preferring large cases with better cooling, it's not that big a deal for me.


This has gone back and forth for multiple decades. 
How do you increase performance on the same technology when your new technology is taking a while? Add power.

 

The Pentium 4 was technically slower than the Pentium III clock for clock, and then the Athlon XP showed up and shit on everything Intel had done.

And when the Athlon 64 dropped, Intel's Pentium D was just two Pentium 4s stuck together and a comparative housefire at the time, though that wattage became fairly normal later.

Then came the Core 2 Duo and Core 2 Quad, which owned the entire market while AMD tried to keep up by adding more cores and higher frequencies to the Phenom line.
The Intel i-series came out, and here comes the AMD FX line: absolute disaster. More cores, more power, and suddenly you've got the FX 9590, which was known to immediately destroy the VRMs on low-end boards and caused multiple actual house fires.

AMD can't keep up, so Intel sits on effectively the same tech for 5-ish years until Ryzen comes around. Suddenly there's competition, and the solution from Intel? 8th gen, more cores.

Things stagnate a bit with Ryzen 2, Intel goes into 9th and 10th gen with more cores and more power, Ryzen 3 shows its face, and the 11900K is born, drawing up to 400 watts or more when overclocked just to compete with the 5800X and up.


You're about to see the same thing happen again from one side or the other. Alder Lake is really impressive, and another generation of this P+E core concept might cement Intel this next generation, meaning Ryzen 6000/7000 might rely on cores + power again.

 

Nvidia and AMD have been doing the same thing for ages. And with Intel showing off a GPU that seems to be on par with an RTX 3060, and that being just the first retail product disclosed while they have products for the entire comparative lineup ready to go, there will be rushed launches from both AMD and Nvidia to compensate for that potential competition.

Intel has the capability to upset that duopoly with massive-scale domestic development, plus the same things all 3 companies can do and have done in the past:

-more cores

-more power

 

The 4090 is basically that: more cores and more power, just as a GPU instead of a CPU. It's not like AMD didn't just do the same thing with the 6950 XT; Intel can do it just as well and pull a 2nd generation out of thin air drawing 500 watts from the wall and stomping on everyone.


29 minutes ago, Middcore said:

The truth is many people secretly or not-so-secretly like the idea of having a power-hungry card that needs a gigantor PSU, as long as they think it's getting them the "best" performance by whatever tiny margin. I've talked to people on subreddits and forums who would insist on buying overkill PSUs purely for e-peen and become actively hostile if you tried to show them power consumption calculators proving it was a waste.

Why you gotta go write my biography like that?!

 

But seriously in a desktop I couldn't give a shit about power consumption.  The more the better.

Workstation:  14700nonk || Asus Z790 ProArt Creator || MSI Gaming Trio 4090 Shunt || Crucial Pro Overclocking 32GB @ 5600 || Corsair AX1600i@240V || whole-house loop.

LANRig/GuestGamingBox: 9900nonK || Gigabyte Z390 Master || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3080Ti Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


2 minutes ago, AnonymousGuy said:

But seriously in a desktop I couldn't give a shit about power consumption.  The more the better.

 

Well, this is a seriously silly way of thinking. There is no benefit to greater power consumption in and of itself; in fact, it's a harm (albeit a very, very marginal one) to you and everyone else on the planet.

 

In their own way some PC guys are no different from rednecks in big trucks "rolling coal" to "pwn the libs" when they go by a Prius. 

Corps aren't your friends. "Bottleneck calculators" are BS. Only suckers buy based on brand. It's your PC, do what makes you happy.  If your build meets your needs, you don't need anyone else to "rate" it for you. And talking about being part of a "master race" is cringe. Watch this space for further truths people need to hear.

 

Ryzen 7 5800X3D | ASRock X570 PG Velocita | PowerColor Red Devil RX 6900 XT | 4x8GB Crucial Ballistix 3600mt/s CL16


1 hour ago, TECHNOMANCER303 said:

Has anyone noticed the power draw of recent chips? And lack of efficiency? Seems backwards and I'm rooting for AMD. 

AMD isn't far off from NVIDIA and in some load scenarios even surpasses NVIDIA in power consumption (talking about top of the line): https://www.techpowerup.com/review/amd-radeon-rx-6950-xt-reference-design/35.html . So I wouldn't really call either of them efficient and depending on the use case one is more efficient than the other.

 

I don't know anything about GPU architecture, but I know architectures don't let you break the rules of physics, and I know from high school physics that no system on earth is 100% efficient. It also seems like the heat output is getting higher with each generation, meaning more of the power that goes in is "wasted". At some point that makes it impossible to get higher performance without increasing the power to get there. It's a rather crude explanation with basic physics, but my point is that eventually there is nothing left to do other than increase power to get more performance.
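
To put toy numbers on that: dynamic power in a chip scales roughly with capacitance x voltage² x frequency, and near the top of the curve voltage has to rise along with the clocks, so power climbs much faster than performance. A crude sketch (the assumption that voltage tracks frequency is illustrative, not measured):

```python
# Toy model: dynamic power ~ C * V^2 * f. Near the limit, assume voltage
# has to scale roughly with frequency, so power grows ~f^3 there.
# Purely illustrative numbers, not measurements of any real chip.
def rel_power(rel_freq: float) -> float:
    rel_voltage = rel_freq          # crude assumption: V tracks f
    return rel_voltage**2 * rel_freq

for f in (1.05, 1.10, 1.20):
    print(f"+{(f - 1) * 100:.0f}% clocks -> ~{(rel_power(f) - 1) * 100:.0f}% more power")
# +5% clocks -> ~16% more power, +20% clocks -> ~73% more power
```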

Desktop: i9-10850K [Noctua NH-D15 Chromax.Black] | Asus ROG Strix Z490-E | G.Skill Trident Z 2x16GB 3600Mhz 16-16-16-36 | Asus ROG Strix RTX 3080Ti OC | SeaSonic PRIME Ultra Gold 1000W | Samsung 970 Evo Plus 1TB | Samsung 860 Evo 2TB | CoolerMaster MasterCase H500 ARGB | Win 10

Display: Samsung Odyssey G7A (28" 4K 144Hz)

 

Laptop: Lenovo ThinkBook 16p Gen 4 | i7-13700H | 2x8GB 5200Mhz | RTX 4060 | Linux Mint 21.2 Cinnamon


2 hours ago, TECHNOMANCER303 said:

AMD consistently uses less wattage while performing at or only slightly below both Nvidia and Intel.

Have.... Have you seen their latest high-end cards? Y'know Vega and Vega II were the GPUs that started the trend of high transient spikes that tripped a shitload of PSUs, right? I used my Vega FE to heat my room in the winter; it was actually better than my 1080s at shoving heat into the room (and a bit slower to boot). Had the RVII for a bit till AMD fucked the drivers; it wasn't a light-sipping card either. From what I've seen the latest cards suck a lot of wattage as well. For even older stuff, a friend has a rig with an R9 290X; it's a bit faster than my 780s but actually even more power hungry, so AMD cards being power monsters goes back a while.

 

AMD's Ryzen CPUs have been way more efficient than most Intel chips (almost all the past ones and all the current high-tier ones, though the new i3 12100 can give 6-core chips a run for their money), that's for sure.

1 hour ago, Middcore said:

Well, this is a seriously silly way of thinking. There is no benefit to greater power consumption in and of itself; in fact, it's a harm (albeit a very, very marginal one) to you and everyone else on the planet.

More power usually = more faster if you have the cooling to handle it. Most PC builders want a faster PC, so they don't mind the power draw. I've never shot for more power draw in general use (especially as it's summer rn, my 250-300W of PC - under gaming load - heats my lounge too much already), but I did shove massive power targets through my RVII to try and get the highest clocks possible for benchmarks.

43 minutes ago, Montana One-Six said:

I don't know anything about GPU architecture, but I know architectures don't let you break the rules of physics, and I know from high school physics that no system on earth is 100% efficient. It also seems like the heat output is getting higher with each generation, meaning more of the power that goes in is "wasted". At some point that makes it impossible to get higher performance without increasing the power to get there. It's a rather crude explanation with basic physics, but my point is that eventually there is nothing left to do other than increase power to get more performance.

I think you mean diminishing returns? You have to pump exponentially more power for each step in performance.

Intel HEDT and Server platform enthusiasts: Intel HEDT Xeon/i7 Megathread 

 

Main PC 

CPU: i9 7980XE @4.5GHz/1.22v/-2 AVX offset 

Cooler: EKWB Supremacy Block - custom loop w/360mm +280mm rads 

Motherboard: EVGA X299 Dark 

RAM:4x8GB HyperX Predator DDR4 @3200Mhz CL16 

GPU: Nvidia FE 2060 Super/Corsair HydroX 2070 FE block 

Storage:  1TB MP34 + 1TB 970 Evo + 500GB Atom30 + 250GB 960 Evo 

Optical Drives: LG WH14NS40 

PSU: EVGA 1600W T2 

Case & Fans: Corsair 750D Airflow - 3x Noctua iPPC NF-F12 + 4x Noctua iPPC NF-A14 PWM 

OS: Windows 11

 

Display: LG 27UK650-W (4K 60Hz IPS panel)

Mouse: EVGA X17

Keyboard: Corsair K55 RGB

 

Mobile/Work Devices: 2020 M1 MacBook Air (work computer) - iPhone 13 Pro Max - Apple Watch S3

 

Other Misc Devices: iPod Video (Gen 5.5E, 128GB SD card swap, running Rockbox), Nintendo Switch


45 minutes ago, Montana One-Six said:

AMD isn't far off from NVIDIA and in some load scenarios even surpasses NVIDIA in power consumption (talking about top of the line): https://www.techpowerup.com/review/amd-radeon-rx-6950-xt-reference-design/35.html . So I wouldn't really call either of them efficient and depending on the use case one is more efficient than the other.

That page actually brings up a very valid point.

2 hours ago, TECHNOMANCER303 said:

And lack of efficiency?

Seems like they are more efficient when locked to 60 fps at the same settings as older cards.

[Chart: GPU power consumption with V-Sync locked to 60 FPS]

The 3090 uses less power than a 1650 and 75% of the power of a 1660S. The extra power they can pull is to make more and more frames, because no one is happy with enough. They always need more.
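
Quick arithmetic on that chart, taking the 75% figure at face value (frames per watt, nothing measured beyond what's above):

```python
# Both cards render the same 60 fps at the cap, with the 3090 drawing
# ~75% of the 1660 Super's power (reading the chart above).
power_ratio = 0.75                  # P_3090 / P_1660S at the 60 fps lock
print(f"3090 fps-per-watt advantage: {1 / power_ratio:.2f}x")  # ~1.33x
```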

I'm not actually trying to be as grumpy as it seems.

I will find your mentions of Ikea or Gnome and I will /s post. 

Project Hot Box

CPU 13900k, Motherboard Gigabyte Aorus Elite AX, RAM CORSAIR Vengeance 4x16gb 5200 MHZ, GPU Zotac RTX 4090 Trinity OC, Case Fractal Pop Air XL, Storage Sabrent Rocket Q4 2tbCORSAIR Force Series MP510 1920GB NVMe, CORSAIR FORCE Series MP510 960GB NVMe, PSU CORSAIR HX1000i, Cooling Corsair XC8 CPU block, Bykski GPU block, 360mm and 280mm radiator, Displays Odyssey G9, LG 34UC98-W 34-Inch,Keyboard Mountain Everest Max, Mouse Mountain Makalu 67, Sound AT2035, Massdrop 6xx headphones, Go XLR 

Oppbevaring

CPU i9-9900k, Motherboard, ASUS Rog Maximus Code XI, RAM, 48GB Corsair Vengeance LPX 32GB 3200 mhz (2x16)+(2x8) GPUs Asus ROG Strix 2070 8gb, PNY 1080, Nvidia 1080, Case Mining Frame, 2x Storage Samsung 860 Evo 500 GB, PSU Corsair RM1000x and RM850x, Cooling Asus Rog Ryuo 240 with Noctua NF-12 fans

 

Why is the 5800x so hot?

 

 


My RTX 3070 goes from a 220W GPU to a 260W GPU when I allow it to clock itself to like 2055MHz instead of 1965MHz, gaining like 9 FPS or less, which for the most part is never noticed in game.

 

If I undervolt it and cap it to 1940MHz I lose around 0.5 FPS from stock and power drops to like 175W.

 

 

Something like this...

 

 

 

 

[Screenshot: undervolt curve/results, 167W vs 220W stock]

 

167W vs 220W stock.
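
Putting rough perf-per-watt numbers on that (the wattages are from above; the 100 FPS stock baseline is a stand-in, not a measurement):

```python
# 220W stock vs 167W undervolted, losing ~0.5 FPS per the post above.
stock_fps, stock_w = 100.0, 220.0   # stock FPS is hypothetical
uv_fps, uv_w = stock_fps - 0.5, 167.0

gain = (uv_fps / uv_w) / (stock_fps / stock_w)
print(f"undervolted perf/W: {gain:.2f}x stock")  # ~1.31x for ~0.5% fewer frames
```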

Cash Dump: 5800X. 32GB Ram. x570 motherboard. RTX 3070. 1440P screen.

Parts list: https://uk.pcpartpicker.com/list/RJhfPX


On 6/6/2022 at 3:26 PM, tikker said:

Performance comes at a cost, and it's not linear. I don't see how that would be cheating. A new generation doesn't automatically imply lower power use, and since the 4000 series doesn't exist yet, we can only speculate about whether the increase is reasonable.

 

If AMD can deliver the same performance at less power consumption, then more power to AMD. That's the great thing about competition: if one does well, it'll incentivise the other side to do better.

I guess it goes back to the marketing and the way it's perceived by the end user. I was at MC for the 12th gen launch and people were rushing to get the new chips because of the claimed performance increases. While those increases were real, they conveniently ignored the fact that the chips drew significantly more power than 11th or 10th gen. You can see the LMG or GN videos about that.

On 6/6/2022 at 3:27 PM, Middcore said:

It's not "cheating," it's just a design choice. 

 

I do find it sort of amusing that power efficiency was something people always brought up as a reason to buy an Nvidia card when they had the edge there, but nobody brings it up now in discussions of which card to get.

 

Nvidia made a (probably correct) calculation that nobody really cares about power draw if throwing more juice at the problem results in suitably eye-popping performance numbers to be touted. 

 

The truth is many people secretly or not-so-secretly like the idea of having a power-hungry card that needs a gigantor PSU, as long as they think it's getting them the "best" performance by whatever tiny margin. I've talked to people on subreddits and forums who would insist on buying overkill PSUs purely for e-peen and become actively hostile if you tried to show them power consumption calculators proving it was a waste.

While I agree about performance, I care about efficiency more. The US grid was not designed for all the new tech. We still use archaic power generation and 120V wiring.

Also "Giganator" was hilarious. 

On 6/6/2022 at 3:30 PM, RONOTHAN## said:

They are efficient; it's just that in order to get the most performance out of them, they need to push the power limits well past the peak of the efficiency curve. There's a reason why undervolting Nvidia cards is so popular: you can decrease the power draw by 20%+ while only dropping performance 2-3%.

 

Intel is the same way; their efficiency is actually basically identical to AMD's right now. Look at the 12400: it performs very similarly to the similarly positioned 5600X, load power consumption is basically identical, and idle consumption is slightly better than the 5600X's. Intel's top-end SKUs just scale with more power consumption, so in order to get the performance crown, they need to raise the power limits.

 

It would be nice to see stuff a little more efficient, especially in some areas, but with power being cheap where I live and me preferring large cases with better cooling, it's not that big a deal for me.

The best analogy I can think of is a 2004 Terminator Cobra Mustang V8: 410HP, 14MPG. Or a 2022 EcoBoost inline-4: 330HP, 32MPG.

I think of it like a car's ICE: the industry is going towards 2-4 cylinder engines with turbos and hybrid designs, when there was a time we would want as many cylinders as possible and slap a supercharger on it.

 

It's the same for GPUs/CPUs: right now we're slapping a supercharger on it and calling it good.

On 6/6/2022 at 3:31 PM, 8tg said:

This has gone back and forth for multiple decades. 
How do you increase performance on the same technology when your new technology is taking a while? Add power.

 

The Pentium 4 was technically slower than the Pentium III clock for clock, and then the Athlon XP showed up and shit on everything Intel had done.

And when the Athlon 64 dropped, Intel's Pentium D was just two Pentium 4s stuck together and a comparative housefire at the time, though that wattage became fairly normal later.

Then came the Core 2 Duo and Core 2 Quad, which owned the entire market while AMD tried to keep up by adding more cores and higher frequencies to the Phenom line.
The Intel i-series came out, and here comes the AMD FX line: absolute disaster. More cores, more power, and suddenly you've got the FX 9590, which was known to immediately destroy the VRMs on low-end boards and caused multiple actual house fires.

AMD can't keep up, so Intel sits on effectively the same tech for 5-ish years until Ryzen comes around. Suddenly there's competition, and the solution from Intel? 8th gen, more cores.

Things stagnate a bit with Ryzen 2, Intel goes into 9th and 10th gen with more cores and more power, Ryzen 3 shows its face, and the 11900K is born, drawing up to 400 watts or more when overclocked just to compete with the 5800X and up.


You're about to see the same thing happen again from one side or the other. Alder Lake is really impressive, and another generation of this P+E core concept might cement Intel this next generation, meaning Ryzen 6000/7000 might rely on cores + power again.

 

Nvidia and AMD have been doing the same thing for ages. And with Intel showing off a GPU that seems to be on par with an RTX 3060, and that being just the first retail product disclosed while they have products for the entire comparative lineup ready to go, there will be rushed launches from both AMD and Nvidia to compensate for that potential competition.

Intel has the capability to upset that duopoly with massive-scale domestic development, plus the same things all 3 companies can do and have done in the past:

-more cores

-more power

 

The 4090 is basically that: more cores and more power, just as a GPU instead of a CPU. It's not like AMD didn't just do the same thing with the 6950 XT; Intel can do it just as well and pull a 2nd generation out of thin air drawing 500 watts from the wall and stomping on everyone.

I agree.

On 6/6/2022 at 3:58 PM, AnonymousGuy said:

Why you gotta go write my biography like that?!

 

But seriously in a desktop I couldn't give a shit about power consumption.  The more the better.

I disagree.

On 6/6/2022 at 4:05 PM, Middcore said:

 

Well, this is a seriously silly way of thinking. There is no benefit to greater power consumption in and of itself; in fact, it's a harm (albeit a very, very marginal one) to you and everyone else on the planet.

 

In their own way some PC guys are no different from rednecks in big trucks "rolling coal" to "pwn the libs" when they go by a Prius. 

I daily drive a gaming laptop for this reason. I only turn on the PC beast for renders. 

On 6/6/2022 at 4:40 PM, Montana One-Six said:

AMD isn't far off from NVIDIA and in some load scenarios even surpasses NVIDIA in power consumption (talking about top of the line): https://www.techpowerup.com/review/amd-radeon-rx-6950-xt-reference-design/35.html . So I wouldn't really call either of them efficient and depending on the use case one is more efficient than the other.

 

I don't know anything about GPU architecture, but I know architectures don't let you break the rules of physics, and I know from high school physics that no system on earth is 100% efficient. It also seems like the heat output is getting higher with each generation, meaning more of the power that goes in is "wasted". At some point that makes it impossible to get higher performance without increasing the power to get there. It's a rather crude explanation with basic physics, but my point is that eventually there is nothing left to do other than increase power to get more performance.

For AMD, it's the CPUs that are more efficient, not the GPUs.

On 6/6/2022 at 5:24 PM, Zando_ said:

Have.... Have you seen their latest high-end cards? Y'know Vega and Vega II were the GPUs that started the trend of high transient spikes that tripped a shitload of PSUs, right? I used my Vega FE to heat my room in the winter; it was actually better than my 1080s at shoving heat into the room (and a bit slower to boot). Had the RVII for a bit till AMD fucked the drivers; it wasn't a light-sipping card either. From what I've seen the latest cards suck a lot of wattage as well. For even older stuff, a friend has a rig with an R9 290X; it's a bit faster than my 780s but actually even more power hungry, so AMD cards being power monsters goes back a while.

 

AMD's Ryzen CPUs have been way more efficient than most Intel chips (almost all the past ones and all the current high-tier ones, though the new i3 12100 can give 6-core chips a run for their money), that's for sure.

More power usually = more faster if you have the cooling to handle it. Most PC builders want a faster PC, so they don't mind the power draw. I've never shot for more power draw in general use (especially as it's summer rn, my 250-300W of PC - under gaming load - heats my lounge too much already), but I did shove massive power targets through my RVII to try and get the highest clocks possible for benchmarks.

I think you mean diminishing returns? You have to pump exponentially more power for each step in performance.

I have to move to Antarctica just to make use of the heat coming from my PC.

On 6/6/2022 at 5:25 PM, IkeaGnome said:

That page actually brings up a very valid point.

Seems like they are more efficient when locked to 60 fps at the same settings as older cards.

[Chart: GPU power consumption with V-Sync locked to 60 FPS]

The 3090 uses less power than a 1650 and 75% of the power of a 1660S. The extra power they can pull is to make more and more frames, because no one is happy with enough. They always need more.

Well, I do wish for peak performance once gaming reaches reality-level fidelity: true specs, no latency, high FPS, high resolution, and real-time rendering.

On 6/6/2022 at 7:59 PM, Unedited_Mind said:

My RTX 3070 goes from a 220W GPU to a 260W GPU when I allow it to clock itself to like 2055MHz instead of 1965MHz, gaining like 9 FPS or less, which for the most part is never noticed in game.

 

If I undervolt it and cap it to 1940MHz I lose around 0.5 FPS from stock and power drops to like 175W.

 

 

Something like this...

 

 

 

 

[Screenshot: undervolt curve/results, 167W vs 220W stock]

 

167W vs 220W stock.

I have heard about undervolting but I have never tried it. I do know if my RAM drops below 1.35V everything stutters.


Someone on YouTube did a test: the 3090's power draw and performance scale linearly with the 1080 Ti's, so it's actually faster than the previous gens on all metrics.

 

It's more efficient undervolted since stock sits in the exponential part of its own curve, though. 350W is fine for me on a GPU, and I wouldn't consider it cheating.

 

For CPU I'm leaning AMD + DDR5 + 16 cores.

5950x 1.33v 5.05 4.5 88C 195w ll R20 12k ll drp4 ll x570 dark hero ll gskill 4x8gb 3666 14-14-14-32-320-24-2T (zen trfc)  1.45v 45C 1.15v soc ll 6950xt gaming x trio 325w 60C ll samsung 970 500gb nvme os ll sandisk 4tb ssd ll 6x nf12/14 ippc fans ll tt gt10 case ll evga g2 1300w ll w10 pro ll 34GN850B ll AW3423DW

 

9900k 1.36v 5.1avx 4.9ring 85C 195w (daily) 1.02v 4.3ghz 80w 50C R20 temps score=5500 ll D15 ll Z390 taichi ult 1.60 bios ll gskill 4x8gb 14-14-14-30-280-20 ddr3666bdie 1.45v 45C 1.22sa/1.18 io  ll EVGA 30 non90 tie ftw3 1920//10000 0.85v 300w 71C ll  6x nf14 ippc 2000rpm ll 500gb nvme 970 evo ll l sandisk 4tb sata ssd +4tb exssd backup ll 2x 500gb samsung 970 evo raid 0 llCorsair graphite 780T ll EVGA P2 1200w ll w10p ll NEC PA241w ll pa32ucg-k

 

prebuilt 5800 stock ll 2x8gb ddr4 cl17 3466 ll oem 3080 0.85v 1890//10000 290w 74C ll 27gl850b ll pa272w ll w11

 


When you have to undervolt... that's an issue in itself.

The most powerful supercomputer, and many others coming online, are switching to AMD this time...

Generally that's Nvidia's bread and butter (that's where consumer GPUs come from).

Nvidia knew they had heat/power issues right around the 1080 era / server GPUs.

Native res for games hit a wall with the tech around the 360 era... hell, current-gen games are still not using HD as an industry standard due to VRAM cost, bandwidth, and the needed size of VRAM: 24GB at the bottom end, really 40GB.

Hell, the new Unreal Engine is doing an old trick with a bit of AI to render games now.

 

MSI x399 sli plus  | AMD theardripper 2990wx all core 3ghz lock |Thermaltake flo ring 360 | EVGA 2080, Zotac 2080 |Gskill Ripjaws 128GB 3000 MHz | Corsair RM1200i |150tb | Asus tuff gaming mid tower| 10gb NIC


9 hours ago, TECHNOMANCER303 said:

The best analogy I can think of is a 2004 Terminator Cobra Mustang V8: 410HP, 14MPG. Or a 2022 EcoBoost inline-4: 330HP, 32MPG.

I think of it like a car's ICE: the industry is going towards 2-4 cylinder engines with turbos and hybrid designs, when there was a time we would want as many cylinders as possible and slap a supercharger on it.

 

It's the same for GPUs/CPUs: right now we're slapping a supercharger on it and calling it good.

Not exactly for that analogy. Yes, the ol' school V8s with superchargers on them were hilariously inefficient, but that doesn't mean the turbo 4-bangers are actually more efficient.

 

If you drive an EcoBoost Mustang like you're supposed to drive a Mustang (i.e. somewhat aggressively, not babying the throttle), you will actually get something more along the lines of 15-20MPG, very similar to the Mustang GT of the same year driven the same way. Turbo 4-bangers have a very weird efficiency curve: they are very good between 1.5k-3k RPM, but above that, when the turbo starts to spool, fuel efficiency drops off a cliff (for a variety of reasons, one being that in order to keep the engine from knocking itself to death they need to run very rich) and the engine becomes about as efficient as a V8 or worse, but without the gearing advantage those have, thus getting worse gas mileage. Turbos mean you're getting either power or fuel efficiency; you can't get both. If you're gonna drive a car relatively hard, you'd actually be better off getting a big NA motor if you care about MPG.

 

Plus, thanks to the aforementioned gearing advantages of the V8s, when naturally aspirated they can still get very good gas mileage. The old C5 Corvette, for example, will get 40MPG highway stock if you aren't dogging on it, since in 6th gear at highway speed the engine is only turning ~1k RPM thanks to all the torque it's got down low.

 

If you want a more accurate analogy, it's more like turbocharging a car: idle efficiency stays roughly the same, though when you hammer it, it's 10-20% faster for ~50% more fuel usage (numbers aren't 100% accurate, but still). That is still a viable strategy, since a 12900K, for example, will still be relatively tame power-wise in games (roughly in line with something like a 5900X); it's just that at 100% load it loses any semblance of efficiency and runs balls to the wall.


21 hours ago, xg32 said:

Someone on YouTube did a test: the 3090's power draw is linear with the 1080 Ti's, so it's actually faster than the previous gens on all metrics.

 

It's more efficient undervolted. 350W is fine for me on a GPU, and I wouldn't consider it cheating.

 

For CPU I'm leaning AMD + DDR5 + 16 cores.

Link it plz

8 hours ago, RONOTHAN## said:

Not exactly for that analogy. Yes, the ol' school V8s with superchargers on them were hilariously inefficient, but that doesn't mean the turbo 4-bangers are actually more efficient.

 

If you drive an EcoBoost Mustang like you're supposed to drive a Mustang (i.e. somewhat aggressively, not babying the throttle), you will actually get something more along the lines of 15-20MPG, very similar to the Mustang GT of the same year driven the same way. Turbo 4-bangers have a very weird efficiency curve: they are very good between 1.5k-3k RPM, but above that, when the turbo starts to spool, fuel efficiency drops off a cliff (for a variety of reasons, one being that in order to keep the engine from knocking itself to death they need to run very rich) and the engine becomes about as efficient as a V8 or worse, but without the gearing advantage those have, thus getting worse gas mileage. Turbos mean you're getting either power or fuel efficiency; you can't get both. If you're gonna drive a car relatively hard, you'd actually be better off getting a big NA motor if you care about MPG.

 

Plus, thanks to the aforementioned gearing advantages of the V8s, when naturally aspirated they can still get very good gas mileage. The old C5 Corvette, for example, will get 40MPG highway stock if you aren't dogging on it, since in 6th gear at highway speed the engine is only turning ~1k RPM thanks to all the torque it's got down low.

 

If you want a more accurate analogy, it's more like turbocharging a car: idle efficiency stays roughly the same, though when you hammer it, it's 10-20% faster for ~50% more fuel usage (numbers aren't 100% accurate, but still). That is still a viable strategy, since a 12900K, for example, will still be relatively tame power-wise in games (roughly in line with something like a 5900X); it's just that at 100% load it loses any semblance of efficiency and runs balls to the wall.

This is hilarious, fellow car guy. Nice.

 

I see your points. 

 

Idle should always be the same, if not better YoY.

 

 

I'd be interested to hook up 2-4 systems, gen over gen and Nvidia vs AMD vs Intel, hook a power meter up to the plug, and run the same benchmark test.

 

Maybe an idea for the Lab @LinusTech

 

IDK how tags work, and I couldn't find Luke (who works in the Labs building) or Anthony.


[Chart: all multi-threaded benchmarks at various power limits]

https://www.forum-3dcenter.org/vbulletin/showpost.php?p=12852240&postcount=530

https://www.techpowerup.com/review/intel-core-i9-12900k-alder-lake-tested-at-various-power-limits/

You can get some pretty huge power savings with both the 3090 and 12900K by just undervolting or power limiting them, and often the performance penalty is pretty small as long as you don't limit it too much.

 

Nvidia, Intel and AMD all consider that power consumption doesn't really matter when it comes to their flagships; they will gladly increase power consumption by 30%+ to achieve 5% higher performance in those products. The 5950X doesn't use more power at stock probably because many motherboards wouldn't be able to run it and it would be way too hot for normal air cooling, and they already announced that AM5 will have CPUs with a 230W PPT, so AMD is 100% going in the same direction. The 6950 XT is basically the same as the 3090: similar performance and similar power consumption. The 3090 Ti is still the worst offender though.

Nvidia's Ampere is a bit weird, as even lower-end parts are way out of their sweet spot; some GPUs like the 3070/3060 Ti are able to achieve similar performance to stock (<5% difference) while using 20% less power. AMD's RDNA2, on the other hand, seems to start losing performance earlier in my experience.
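
As a quick sanity check on that flagship tradeoff, using the +30% power for +5% performance figures from above (nothing measured here):

```python
# Flagship behaviour described above: +5% performance for +30% power.
base_perf, base_power = 1.00, 1.00
oc_perf, oc_power = 1.05, 1.30

efficiency = (oc_perf / oc_power) / (base_perf / base_power)
print(f"perf per watt vs. stock: {efficiency:.2f}x")  # ~0.81x, ~19% worse
```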


1 hour ago, TECHNOMANCER303 said:

This is hilarious, fellow car guy. Nice.

This is a computer forum; most people who are serious enough to be on here regularly are also pretty into cars, since the hobbies share lots of the same draws (customization, crazy performance numbers, etc.). It's rarer to find someone who's not. Computer overclocking IMO is very similar to doing dyno pulls and drag race runs, except a lot more affordable.

 

1 hour ago, TECHNOMANCER303 said:

I'd be interested to hook up 2-4 systems, gen over gen and Nvidia vs AMD vs Intel, hook a power meter up to the plug, and run the same benchmark test.

I do have both a 6900XT and a 3080 (long story) that I was planning on drag racing over the weekend to see which would stay in my main rig. I can (if you want me to) undervolt them to the same general performance (I'll target the same score in something like Time Spy, give or take 100 points) and see which card ends up being more efficient. The AMD card will likely win thanks to its stupidly aggressive sleep algorithm (the core will actually turn off in between frame renders then turn back on, affectionately referred to by some as "power naps"), but it still might be somewhat interesting to see.

 

I could theoretically do the same thing with AMD vs. Intel; I do know a guy with a 12700K that I could drag race against the similarly performing 5900X, undervolt both, and see which draws less power under load, but I doubt he'd let me borrow his computer over the weekend.
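
If whoever ends up running this logs wall power during the benchmark, the comparison boils down to score per joule. A rough sketch assuming a CSV of "seconds,watts" samples from the meter (the file name, format, and score are placeholders):

```python
# Sketch: efficiency from a wall-meter log. Assumes "seconds,watts"
# rows in a CSV; file name and the Time Spy score are hypothetical.
import csv

def joules(path: str) -> float:
    """Integrate power samples into energy (trapezoid rule)."""
    with open(path) as f:
        rows = [(float(t), float(w)) for t, w in csv.reader(f)]
    return sum((t2 - t1) * (w1 + w2) / 2
               for (t1, w1), (t2, w2) in zip(rows, rows[1:]))

score = 18_000                       # hypothetical Time Spy score
energy = joules("wall_meter_log.csv")
print(f"{score / energy:.3f} points per joule")
```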


On 6/8/2022 at 11:34 PM, RONOTHAN## said:

This is a computer forum; most people who are serious enough to be on here regularly are also pretty into cars, since the hobbies share lots of the same draws (customization, crazy performance numbers, etc.). It's rarer to find someone who's not. Computer overclocking IMO is very similar to doing dyno pulls and drag race runs, except a lot more affordable.

 

I do have both a 6900XT and a 3080 (long story) that I was planning on drag racing over the weekend to see which would stay in my main rig. I can (if you want me to) undervolt them to the same general performance (I'll target the same score in something like Time Spy, give or take 100 points) and see which card ends up being more efficient. The AMD card will likely win thanks to its stupidly aggressive sleep algorithm (the core will actually turn off in between frame renders then turn back on, affectionately referred to by some as "power naps"), but it still might be somewhat interesting to see.

 

I could theoretically do the same thing with AMD vs. Intel; I do know a guy with a 12700K that I could drag race against the similarly performing 5900X, undervolt both, and see which draws less power under load, but I doubt he'd let me borrow his computer over the weekend.

That would be a fun experiment. However, based on this thread, it would be good to have either a detailed, scientific writeup of your testing or a lab to test them out.

 

For education on the topic.

 

 


On 6/6/2022 at 5:58 PM, AnonymousGuy said:

Why you gotta go write my biography like that?!

 

But seriously in a desktop I couldn't give a shit about power consumption.  The more the better.

The problem is that even the 3000 series is difficult to cool. I normally pick cards based on cooling performance because I hate fan noise, and even then I had to undervolt to get to acceptable noise levels. Granted, I didn't get what I usually go for (Strix, etc.) due to lack of card availability, but it's not a cheap crappy cooler either. Funny, I didn't think $900 (for the Strix) was acceptable for a 3080; that went right out the window, and I'm glad I got it early! I'm a bit worried about 4000 series noise...

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


On 6/7/2022 at 6:18 AM, TECHNOMANCER303 said:

Has anyone noticed the power draw of recent chips? And lack of efficiency? Seems backwards and I'm rooting for AMD. 

Current gen is more efficient than last gen (the majority of them, anyway). TechPowerUp tested this in their review. 🤔

 

[Chart: TechPowerUp energy efficiency comparison across GPUs]

 

Just because they're chugging 300-400 watts doesn't mean they're not efficient.

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |


On 6/8/2022 at 2:14 AM, TECHNOMANCER303 said:

The best analogy I can think of is a 2004 terminator cobra mustang V8, 410HP, 14MPG. Or 2022 Eco boost V4, 330HP, 32MPG.

On 6/8/2022 at 4:05 PM, RONOTHAN## said:

Not exactly for that analogy. Yes, the ol' school V8s with super chargers on them were hilariously inefficient, but that doesn't actually mean that the turbo 4 bangers are actually more efficient.

I'd refer you both to a fairly old Top Gear episode that's relevant. I managed to find my TV recording of the episode (series 11, episode 1 from June 2008), so I can quote the stats. Clarkson took a BMW M3 4 litre V8 around the track for several laps and followed a 1.5L 4 cylinder Prius being driven by the Stig who was driving it as fast as possible (so that Clarkson was effectively driving it at the same speed as the Prius). The BMW did 19.4mpg vs 17.2 for the Prius. So at an equivalent speed, the more powerful car was actually more efficient.

 

That's the point, @TECHNOMANCER303, that you've missed by creating this thread. You're looking at headline/max power draw and interpreting that as poor efficiency, while ignoring that at a reduced load the parts in question will use a fraction of the maximum power they can draw at 100% load. You seem to have forgotten (or intentionally ignored to make a specific point) that components don't use their full power draw at less than 100% utilisation, so they may be just as efficient, or more efficient, than their predecessors. Just because AMD parts don't go much beyond 170W vs a 241W max for the 12900K at high loads doesn't necessarily mean that they are more power efficient. As Clarkson put it, it's not what you use but how you use it that matters.

US Gaming Rig (April 2021): Win 11Pro/10 Pro, Thermaltake Core V21, Intel Core i7 10700K with XMP2/MCE enabled, 4x8GB G.Skill Trident Z RGB DDR4 @3,600MHz, Asus Z490-G (Wi-Fi), SK Hynix nvme SSDs (1x 2TB P41, 1x 500GB P31) SSDs, 1x WD 4TB SATA SSD, 1x16TB Seagate HDD, Asus Dual RTX 3060 V2 OC, Seasonic Focus PX-750, LG 27GN800-B monitor. Logitech Z533 speakers, Xbox Stereo & Wireless headsets, Logitech G213 keyboard, G703 mouse with Powerplay

 

UK HTPC #2 (April 2022) Win 11 Pro, Silverstone ML08, (with SST-FPS01 front panel adapter), Intel Core i5 10400, 2x8GB Corsair Vengeance LPX DDR4 @3,600MHz, Asus B560-I, SK Hynix P31 (500GB) nvme boot SSD, 1x 5TB Seagate 2.5" HDD, Drobo S with 5x4TB HDDs, Hauppauge WinTV-quadHD TV Tuner, Silverstone SST-SX500-LG v2.1 SFX PSU, LG 42LW550T TV. Philips HTL5120 soundbar, Logitech K400.

 

US HTPC (planning 2024): Win 11 Pro, Streacom DB4, Intel Core i5 13600T, RAM TBC (32GB), AsRock Z690-itx/ax, SK Hynix P41 Platinum 1TB, Streacom ZF240 PSU, LG TV, Logitech K400.

 

US NAS (planning): tbc

 

UK Gaming Rig #2 (May 2013, offline 2020): Win 10 Pro/Win 8.1 Pro with MCE, Antec 1200 v3, Intel Core i5 4670K @4.2GHz, 4x4GB Corsair DDR3 @1,600MHz, Asus Z87-DELUXE/Dual, Samsung 840 Evo 1TB boot SSD, 1TB & 500GB sata m.2 SSDs (and 6 HDDs for 28TB total in a Storage Space), no dGPU, Seasonic SS-660XP2, Dell U2410 monitor. Dell AY511 soundbar, Sennheiser HD205, Saitek Eclipse II keyboard, Roccat Kone XTD mouse.

 

UK Gaming Rig #1 (Feb 2008, last rebuilt 2013, offline 2020): Win 7 Ultimate (64bit)/Win Vista Ultimate (32bit)/Win XP Pro (32bit), Coolermaster Elite 335U, Intel Core 2 Quad Q9650 @3.6GHz, 4x2GB Corsair DDR3 @1,600MHz, Asus P5E3 Deluxe/WiFi-Ap@n, 2x 1TB & 2x 500GB 2.5" HDDs (1 for each OS & 1 for Win7 data), NVidia GTX 750, CoolerMaster Real Power M620 PSU, shared I/O with gaming rig #2 via KVM.

 

UK HTPC #1 (June 2010, rebuilt 2012/13, offline 2022) Win 7 Home Premium, Antec Fusion Black, Intel Core i3 3220T, 4x2GB OCZ DDR3 @1,600MHz, Gigabyte H77M-D3H, OCZ Agility3 120GB boot SSD, 1x1TB 2.5" HDD, Blackgold 3620 TV Tuner, Seasonic SS-400FL2 Fanless PSU, Logitech MX Air, Origen RC197.

 

Laptop: 2015 HP Spectre x360, i7 6500U, 8GB Ram, 512GB m.2 Sata SSD.

Tablet: Surface Go 128GB/8GB.

Mini PC: Intel Compute Stick (m3)


On 6/12/2022 at 6:10 AM, xAcid9 said:

Current gen is more efficient than last gen (the majority of them, anyway). TechPowerUp tested this in their review. 🤔

 

[Chart: TechPowerUp energy efficiency comparison across GPUs]

 

Just because they're chugging 300-400 watts doesn't mean they're not efficient.

I'll have to check the review.


On 6/12/2022 at 11:52 PM, thewelshbrummie said:

I'd refer you both to a fairly old Top Gear episode that's relevant. I managed to find my TV recording of the episode (series 11, episode 1 from June 2008), so I can quote the stats. Clarkson took a BMW M3 4 litre V8 around the track for several laps and followed a 1.5L 4 cylinder Prius being driven by the Stig who was driving it as fast as possible (so that Clarkson was effectively driving it at the same speed as the Prius). The BMW did 19.4mpg vs 17.2 for the Prius. So at an equivalent speed, the more powerful car was actually more efficient.

 

That's the point, @TECHNOMANCER303, that you've missed by creating this thread. You're looking at headline/max power draw and interpreting that as poor efficiency, while ignoring that at a reduced load the parts in question will use a fraction of the maximum power they can draw at 100% load. You seem to have forgotten (or intentionally ignored to make a specific point) that components don't use their full power draw at less than 100% utilisation, so they may be just as efficient, or more efficient, than their predecessors. Just because AMD parts don't go much beyond 170W vs a 241W max for the 12900K at high loads doesn't necessarily mean that they are more power efficient. As Clarkson put it, it's not what you use but how you use it that matters.

I'll go back and watch it. I could also argue that an EV doesn't gain efficiency at high speed vs low speed (city/highway) the way an ICE does. There are a lot of variables.

 

And I guess IDC, because it's expected to draw little at low load. I only care about when I'm using it or pushing it to its limits.

