
AMD once again violating power specifications? (AMD RX-480)


Posted · Original Poster (OP)
29 minutes ago, DXMember said:

the VRMs on the board are just fine, those are practically the same VRMs that Fury cards have, the problem is probably the controller that doesn't balance the load according to spec

Well, the alternative is to send 100W over the 6-pin PEG connector. Even though the PSU can likely handle that, it's still not exactly admirable.

They have to neuter the card to get it back down to 120-150W max, not 170.

26 minutes ago, Majestic said:

Well, the alternative is to send 100W over the 6-pin PEG connector. Even though the PSU can likely handle that, it's still not exactly admirable.

They have to neuter the card to get it back down to 120-150W max, not 170.

Almost every PSU, even non-80+ units, will handle it, as most manufacturers just dump everything on one large rail to cut costs rather than set up multiple 20A rails.

 

Either way, even if PSU makers all locked their PCIe connectors to the pre-March-2007 20A spec: 12V × 20A = 240W.

 

So it is still fully possible to melt a 6-pin if you REALLY want to violate something.

 

That being said, a 6-pin can handle about 150W on its own (it's going to be VERY hot), so they have plenty of options. The only real limit is in fact the theoretical maximum of the shoddiest PSU cables on the market, which is generally 20AWG wire at the lowest end. Even those are more than capable of handling 150W.

Higher-end Corsair, Cooler Master, Seasonic, EVGA and Super Flower PSUs use 18AWG wire, which in theory should hold 180-200W on a 6-pin. However, the 6-pin PEG connector itself will not handle much more than 150W before the plastic starts to melt.
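As a rough sanity check of those wire limits, here's a back-of-the-envelope sketch in Python. The per-conductor ampacities and the three-12V-wire count are illustrative assumptions, not figures from any spec:

```python
# Back-of-the-envelope check of how much power the wires feeding a 6-pin
# PEG connector could carry, ignoring the 75W spec limit entirely.
# Ampacity values below are rough, conservative assumptions, not spec values.

AMPACITY_A = {"20AWG": 5.0, "18AWG": 7.0}  # assumed amps per conductor in a bundle

def wire_limited_power(gauge: str, n_12v_wires: int = 3, volts: float = 12.0) -> float:
    """Power the 12V conductors alone could carry at the assumed ampacity."""
    return volts * AMPACITY_A[gauge] * n_12v_wires

print(wire_limited_power("20AWG"))  # 180.0 W: even cheap 20AWG cabling clears 150W
print(wire_limited_power("18AWG"))  # 252.0 W: 18AWG copper has headroom beyond the connector
```

Which matches the point above: the connector's plastic housing, not the copper, is the practical ceiling around 150W.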

Posted · Original Poster (OP)
25 minutes ago, Prysin said:

However, the 6-pin PEG connector itself will not handle much more than 150W before the plastic starts to melt.

Which is still a possibility if they decide not to neuter the card but to dump everything on the PEG instead. It could exceed 150W during OC.

 

Hence...standards. 


PCIe slot power = 75W

6-pin = 75W

 

The RX 480 draws power from both the PCIe slot and the 6-pin connector. Hence it should be 75W + 75W = 150W.

 

However, it draws around 90W from the PCIe slot when stressed, even reaching 100W when overclocked. That is more than the maximum threshold.

That leads to 90W + 75W = 165W.

 

Drawing 90W through something designed to supply only 75W is concerning.

The motherboard can get severely damaged, and not only the motherboard: components like the CPU and RAM may also be damaged.

 

If that's not bad, I don't know what is.
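The arithmetic above can be restated in a few lines (Python; the wattages are the figures quoted in this thread, not independent measurements):

```python
# Spec budget vs. reported draw for the reference RX 480, per the figures above.
SLOT_SPEC_W = 75     # PCIe slot budget
SIX_PIN_SPEC_W = 75  # 6-pin PEG budget

spec_total = SLOT_SPEC_W + SIX_PIN_SPEC_W        # 150W total board budget
measured_slot = 90                               # reported slot draw under stress
measured_total = measured_slot + SIX_PIN_SPEC_W  # 165W total
slot_overdraw = measured_slot - SLOT_SPEC_W      # 15W over the slot limit

print(spec_total, measured_total, slot_overdraw)  # 150 165 15
```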

6 hours ago, AlwaysFSX said:

Z97 iirc.

I wonder if that affects/affected my Z97S SLI Plus, since it's basically the same board. Do you have a link to the problem?


PSU Tier List | CoC

Gaming Build | FreeNAS Server

Spoiler

i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400mhz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core

Spoiler

FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.

35 minutes ago, RakibHasan2806 said:

PCIe slot = 75W, 6-pin = 75W, so the RX 480 should draw 75W + 75W = 150W. However, it draws around 90W from the slot when stressed (up to 100W overclocked), which makes 90W + 75W = 165W. [...] If that's not bad, I don't know what is.

Most mid-range and high-end motherboards will be fine; some 960s have spiking issues.

Low-end motherboards are the issue.


Silverstone FT-05: 8 Broadwell Xeon (6900k soon), Asus X99 A, Asus GTX 1070, 1tb Samsung 850 pro, NH-D15

 

Resist!

1 hour ago, Majestic said:

Which is still a possibility if they decide to not neuter the card but dump everything on the PEG. It could exceed 150W during OC.

 

Hence...standards. 

No, it couldn't. The temps in the copper will rise so high that the voltage loss, due to the rapid increase in the copper's resistance, will trip the "Power OK" signal.

 

Amps go up, volts go down; any mobo made after 2008 or 2010 will cut the PSU if the mobo detects bad voltage values going to any of the components. This is a standard safety feature in ANY motherboard.

 

You could, however, sit at 66W on the PCIe slot and 135W on the PEG without triggering anything, unless case airflow is so bad that the case ambient isn't allowing the PEG connectors to vent.
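That 66W + 135W redistribution scenario checks out arithmetically; a tiny sketch (Python; the 150W "6-pin practical ceiling" is the poster's claim above, not a spec number):

```python
# Redistributing the load: keep the slot under its spec budget and lean on the 6-pin.
SLOT_LIMIT_W = 75      # PCIe slot spec budget
PEG_PRACTICAL_W = 150  # claimed practical thermal ceiling for a 6-pin (not a spec value)

slot_w, peg_w = 66, 135
total_w = slot_w + peg_w  # 201W total board power

print(total_w, slot_w <= SLOT_LIMIT_W, peg_w <= PEG_PRACTICAL_W)  # 201 True True
```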

Posted · Original Poster (OP)
21 minutes ago, Prysin said:

No, it couldn't. The temps in the copper will rise so high that the voltage loss, due to the rapid increase in the copper's resistance, will trip the "Power OK" signal.

Resulting in a system failure (even if nothing is damaged)... So that would still be an issue?

 

66 + 135 = 201W. Didn't the 480 exceed this wattage during OC?


@Majestic

A few more tests have come out and they all show the same result. It seems to be an issue on all RX 480 cards, not just the one Tom's Hardware ended up getting.

So basically, if you don't have a high-quality motherboard and a high-quality PSU, the RX 480 is not recommended... so much for being the "value king".

 

 

The German website golem.de has tested it and their results show the same thing as Tom's Hardware.

Spoiler

[golem.de power measurement charts (4 images)]

 

The French site hardware.fr also went out, bought a card in a store, and tested it. Both their review sample and the store-bought card exceed the PCIe limit.

 

 

A user on Reddit measured it as well and got the same results.

Quote

Purchased a Sapphire 8GB RX 480 today. After reading up about this issue, I decided to test for myself. I rigged up a riser to be able to measure 12V current with an AMP clamp from both the PCI-e slot, and 6 pin connector.

This isn't anywhere near being scientific, but I think it's accurate enough to confirm the problem. Running stock clocks with stock voltage while running ethereum mining = 83w from the 6 pin connector, and 88w from the PCI-e slot. That's a violation of both ATX and PCI specs. I don't particularly mind it violating the ATX spec as a quality 6 pin connector can provide 200w without issue. The PCI-e slot, on the other hand, is an issue. I bought 4 of these cards today, and intend (intended?) on setting them up on a Rampage 5 motherboard. I don't think even a top end motherboard like that will be able to supply 352w to the PCI-e slots, even using the 4 pin Molex. Wish Asus had used a 6 pin instead..

If AMD can provide a BIOS update for the cards that forces 75% of the current through the 6 pin, problem solved. If that's not possible through software, then these cards should be recalled or they should have a warning label on them about possible motherboard damage when using crossfire.

If anyone is interested, I can test other GPUs as well with my setup. Either Hawaii or Tahiti.

 

Here is another German site, Heise, which also measured the PCIe power draw and got an average of 88 watts.

 

 

It is pretty safe to say this is not an isolated issue caused by a faulty card. All cards most likely behave this way, and it does not seem like it will be easily fixed.

 

 

These sources were gathered from this Reddit thread.

Posted · Original Poster (OP)

@LAwLz Thanks, added it to the OP. And yeah, that kinda defeats the purpose of the value king, which was my main concern to begin with: it not being used in high-end systems per se, given the price tag.


Wow, I really don't like what AMD is doing here. Specifications are meant to be followed, and if AMD cards are not doing that, then I will not buy an AMD graphics card.


Someone on Reddit posted a GPU-Z screenshot of their 480 using less than 150 watts on average (presumably with the power target untouched), and they got this response:

 

Quote

 

WizzardTPU 177 points 5 hours ago

I'm the author of GPU-Z. That's GPU only, not full board

 

 

This pretty much confirms that the overclocked-480 GPU-Z screenshots showing 175 watts were just the GPU die itself, and that the cards are hitting 200+ watts overclocked, like Tom's Hardware said.


Hello? Oh, hi world!

5 hours ago, Majestic said:

Now that can damage mainboards, possibly. Nor can you compensate with a proper PSU.

I doubt many mainboards run super-beefy PCI-E slot power phases.

 

 

Uhh, what "power phases" are you referring to? The 12V-to-12V phase :P? Does the motherboard even do anything to the 12V that goes to the PCIe slot?

 

AFAIK the 12V traces that go to the PCIe slots run DIRECT from the 12V leads on the 24-pin plug. As long as the PCB can dissipate the watt or two of heat resulting from the voltage drop over those traces from pin to slot, there is no issue as far as I can tell. Is this not the case? Is the 12V power "conditioned" in some way?

 

If there are direct traces from the 24-pin to the PCIe slot, I really see this as much less of a problem. I'm not defending this 50/50 PEG/PCIe power draw as a good thing; I just have major doubts that motherboards are going to start bursting into flames because of the RX 480.

1 hour ago, Majestic said:

Resulting in a system failure (even if nothing is damaged)... So that would still be an issue?

 

66+ 135 = 201W. Didn't the 480 exceed this wattage during OC?

During OC, yes.

Then again, no manufacturer, be it Intel, AMD, Nvidia, Qualcomm, Samsung, etc., actually covers overclocking. Read any ToS, EULA or warranty letter for their products: they all state, one way or another, that overclocking voids your warranty with the manufacturer.

 

In AMD's case, if you even TRY to open the overclocking functions in their driver, you get a prompt saying "by overclocking the product you agree that you hereby void your warranty. Advanced Micro Devices is not liable for damages caused by overclocking this product".

 

So saying "during OC" is a shitty argument. It's like taking your car, boring out the cylinders and slapping on a huge turbo, then blaming the manufacturer when the transmission blows up.

 

Overclocking is, and always has been, done at the user's own risk. Sure, manufacturers sometimes market parts as "great overclockers", but they still do not condone or cover the act of overclocking under their warranty.

 

And you can call it anti-consumer all you want, but if you made a product and I decided to use it in a way it was NOT meant to be used, you would be fucking livid when I came knocking on your door demanding a new product or my money back.

2 minutes ago, Prysin said:

During OC, yes. Then again, no manufacturer, be it Intel, AMD, Nvidia, Qualcomm, Samsung, etc., actually covers overclocking. [...] Overclocking is, and always has been, done at the user's own risk. [...] If you made a product and I decided to use it in a way it was NOT meant to be used, you would be fucking livid when I came knocking on your door demanding a new product or my money back.

AMD probably would have been wise to make WattMan a separate download on their website, with a disclaimer there. Having overclocking tools built into Radeon Settings is asking for trouble. A 10-year-old using their parents' computer isn't going to care about disclaimers. A motherboard manufacturer could easily argue that AMD is encouraging overclocking with the 480 if damaged motherboards start popping up. It's amazing how many legal safeguards are nothing more than styrofoam walls that can be smashed through with precedent and negligence of due diligence.



Posted · Original Poster (OP)
9 minutes ago, Prysin said:

During OC, yes. Then again, no manufacturer, be it Intel, AMD, Nvidia, Qualcomm, Samsung, etc., actually covers overclocking. [...] Overclocking is, and always has been, done at the user's own risk. [...] If you made a product and I decided to use it in a way it was NOT meant to be used, you would be fucking livid when I came knocking on your door demanding a new product or my money back.

I understand your argument, but with the competition, overclocking the shit out of it doesn't create the same issues. Plus, their competition colors inside the lines with all their products at factory clocks as well. You're right that using it outside of the design specs shouldn't be supported, but when your competition is doing it... you kinda have to do it as well.

 


 

It's pretty evident that AMD tried to squeeze more out of the envelope than was possible, to get it to rival the 970, and someone noticed. It will be interesting to see which option they choose: neutering the card, changing the distribution...

 

Also, I've RMA'd plenty of cards that were overclocked and never got any comments.


I know well in advance the war I risk bringing to this thread, so I will keep this brief and address it specifically to a certain few people:

 

AMD is not such an innocent company as you once believed it to be, is it? Well, here is a cold wake-up call: NOBODY makes this kind of oversight by accident.


Read the community standards; it's like a guide on how to not be a moron.

Gerdauf's Law: Each and every human being, without exception, is the direct carbon copy of the types of people that he/she bitterly opposes.


If I understand this correctly, and you can absolutely correct me if I'm wrong.

But according to Tom's Hardware, the entire card is pulling some 300W of power at peak? And at that peak, some 155W to 200W is coming from the PCIe slot on the motherboard? If that is true, then holy cow, AMD, how on earth did that pass validation?

But here's the thing: I call bullshit on Tom's numbers, and here's why. The motherboard would have fried with that much power being pulled through something only meant for 75W MAX. There is no motherboard on earth, without additional power inputs, that could cope with that kind of power draw. None.

Why do you think some mobos have an extra Molex connector next to the PCIe slots? Because PCIe by itself cannot cope with more than a peak power draw of 100W per slot, and definitely no more than 75W long-term.

 

I'm looking at the numbers being thrown around here, and the more I look, the more I think Tom's either has faulty measuring equipment or the GPU itself is faulty. 155W to 200W across a single PCIe slot should easily kill the traces on the mobo. 75W at 12V across PCIe equals 6.25A; 200W at 12V, however, equals 16.7A. There's no chance a mobo can cope with that.
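Those current figures fall straight out of I = P / V; as a quick sketch:

```python
# Current through the PCIe slot's 12V supply at a given power draw (I = P / V).
def slot_amps(power_w: float, volts: float = 12.0) -> float:
    return power_w / volts

print(round(slot_amps(75), 2))   # 6.25 (A) at the slot's rated 75W
print(round(slot_amps(200), 2))  # 16.67 (A) at the worst-case figure quoted above
```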
 

Having said all that, though, if the GPU really is pulling that much power through the slot, then fuck AMD for either incompetence or malicious intent, whichever it is. If anyone can give me a reason why I'm wrong, I'm all ears.

And by malicious intent, I mean the possibility that they deliberately lied about the RX 480's power consumption and board design.


CPU: Core i5 2500K @ 4.5GHz | MB: Gigabyte Z68XP-UD3P | RAM: 16GB Kingston HyperX @ 1866MHz | GPU: XFX DD R9 390 | Case: Fractal Design Define S | Storage: 500GB Samsung 850 EVO + WD Caviar Blue 500GB | PSU: Corsair RM650x | Soundcard: Creative Soundblaster X-Fi Titanium
Click here to help feed our lasses Pokemon


They haven't really defined what counts as a high-quality board and PSU. And even people who do have high-quality boards and PSUs don't want a card that's going to damage them.


Intel Xeon E5 1650 v3 @ 3.5GHz 6C:12T / CM212 Evo / Asus X99 Deluxe / 16GB (4x4GB) DDR4 3000 Trident-Z / Samsung 850 Pro 256GB / Intel 335 240GB / WD Red 2 & 3TB / Antec 850w / RTX 2070 / Win10 Pro x64

HP Envy X360 15: Intel Core i5 8250U @ 1.6GHz 4C:8T / 8GB DDR4 / Intel UHD620 + Nvidia GeForce MX150 4GB / Intel 120GB SSD / Win10 Pro x64

 

HP Envy x360 BP series Intel 8th gen

AMD ThreadRipper 2!

5820K & 6800K 3-way SLI mobo support list

 

Posted · Original Poster (OP)
1 minute ago, NumLock21 said:

They haven't really define what is a high quality board and psu. Even when some have high quality boards and psu, they don't want a card that's going damage it. 

It's probably very technical, hence why you have standards, so non-electrical engineers can also use PCPartPicker...

1 minute ago, Majestic said:

It's probably very technical, hence why you have standards. So non-eletrical engineers can also use pcpartpicker...

What?



@Robin88, those peaks last for VERY SHORT periods of time, something like 1 ms or maybe a few ms. The average over the course of, say, 200 ms is about 80 watts.

 

See for example the GTX 1080 review, where the card pulls over 250 watts from the PCIe 8-pin connector alone for short periods of time (it averages less than 150 watts): http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1080-pascal,4572-10.html

 

[Tom's Hardware GTX 1080 power draw chart]

 

PS: To add, these watts are pulled from the 12V rail. The PCIe standard says the maximum that should be drawn from the motherboard slot is 60 watts on 12V and 15 watts on 3.3V; the RX 480 draws on average up to 80 watts instead of 60W. Technically it's out of spec, but in practice you're looking at 60W / 12V = 5A versus 80W / 12V = 6.7A.

It's just a ~1.7A difference, not a huge deal.

Each 18AWG wire in the ATX 24-pin connector that carries 12V can safely provide about 6-8A, and there are two of them in the connector, so 12-16A of 12V is available for fans and PCI Express slots. The cables themselves can do more, but the connectors could overheat and/or the plastic shroud could melt or burn at higher current. There's plenty of room to give a PCI Express slot more than the standard 60W.
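The per-rail arithmetic above, sketched out (Python; the 60W-on-12V slot allowance is as quoted in this post):

```python
# PCIe slot draw on the 12V rail: spec allowance vs. the average draw reported.
SLOT_12V_BUDGET_W = 60  # 12V portion of the slot budget (plus 15W on 3.3V)

def current_12v(power_w: float) -> float:
    return power_w / 12.0

in_spec_a = current_12v(SLOT_12V_BUDGET_W)  # 5.0 A allowed
reported_a = current_12v(80)                # ~6.67 A average reported for the RX 480
print(round(in_spec_a, 2), round(reported_a, 2), round(reported_a - in_spec_a, 2))
```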

 

 

 

 

1 hour ago, LAwLz said:

@Majestic

A few more tests have come out and they all show the same result. It seems to be an issue on all RX 480 cards, not just the one Tom's Hardware ended up getting. [...] It is pretty safe to say this is not an isolated issue caused by a faulty card. All cards most likely behave this way, and it does not seem like it will be easily fixed.

I am glad someone caught it, but at the same time it clearly shows why both AMD and Nvidia favor YouTubers today over actual publications: nobody, including LTT, has the technical background, knowledge and equipment to even detect something like this. They all just talk through their PR briefing points and try to find a few rotten apples within benchmarks, but that's clearly not sufficient if we had the 3.5+0.5 GB cards first and now this.

 

Edit: @Majestic, you should mark LAwLz's post as solved and/or update the OP.


-------

Current Rig

-------

Posted · Original Poster (OP)
3 minutes ago, Misanthrope said:

I am glad someone caught it, but at the same time it clearly shows why both AMD and Nvidia favor YouTubers today over actual publications: nobody, including LTT, has the technical background, knowledge and equipment to even detect something like this. They all just talk through their PR briefing points and try to find a few rotten apples within benchmarks, but that's clearly not sufficient if we had the 3.5+0.5 GB cards first and now this.

130 PEOPLE ARE NVIDIOTS

3,5/4 STARS

 

YouTube comment section memes </3

