
PCI Special Interest Group blames Nvidia for RTX 40 series 12VHPWR melting cables

AlTech
50 minutes ago, Holmes108 said:

 

But as I said, at the same time, maybe this is blown out of proportion? I know Steve and Nvidia were able to put a % on the failure, but the number is kind of meaningless to me. It sounds small, but is it? Is it more dangerous than a random extension cord or a Christmas tree? I just don't know. I guess that's for the experts/authorities to figure out.

To put things in perspective, that was a 0.5% or 0.05% (I forget which) failure rate among 4090 FE's only. We do not have reports of a higher failure rate for any other card, be it NVIDIA or AMD, or AIB brands like ASUS and ZOTAC, despite AIBs tending to cheapen their builds or "factory OC" the cards, which makes them pull even more power.

 

As more cards from more tiers end up out there, we will likely get reports of 4080 and 4070 cards "melting" the connector over time, but the "quick melt" might only be isolated to the 4090, since it can pull 600W and thus leaves no margin for "user" error.

 

It'll be interesting to see what Dell and HP do for their OEM cards, since they clearly don't want to be put in a situation where their customers have to warranty the system over and over. My guess is that, seeing how much Dell loves their 30-year-old computer tower designs, they may just stick with cards built to use 4x PCIe 8-pin connectors and avoid the problem entirely. It'll be the system builders like OriginPC that will have to make a hard decision on how to prevent this. It would not surprise me if some system builders glue the connectors, or use a pigtail extension to move the weak point in the cable away from the GPU itself.


24 minutes ago, Kisai said:

To put things in perspective, that was a 0.5% or 0.05% (I forget which) failure rate among 4090 FE's only. We do not have reports of a higher failure rate for any other card, be it NVIDIA or AMD, or AIB brands like ASUS and ZOTAC, despite AIBs tending to cheapen their builds or "factory OC" the cards, which makes them pull even more power.

Correct me if I am wrong, but I am fairly sure the number was 0.05% to 0.1% and that was based on GN's findings when they contacted several manufacturers and distributors.

 

Quote

According to GN, the failure rate of the 12VHPWR power connectors is quite small as it only lies between 0.05-0.1%. The figure is based on user data, the information provided by AIBs, and data from cable suppliers. GN claims that all 12VHPWR adapters regardless of the supplier and even the native PSU cables can fail depending upon several factors.

Source

 

 

So the 0.05-0.1% number should already be factoring in any potential "cheaper build" from AIBs.

 

 

For comparison, a large French retailer once reported that slightly over 2% of all the graphics cards they sold were RMA'd within 1 year. So going by those numbers as they stand right now, you are roughly 2400% more likely to get a dead graphics card out of the box than to have your power connector melt. At least if we go on the numbers we have access to today.
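As a quick sanity check on that comparison, here is a rough sketch using only the figures quoted in this thread (the ~2% RMA rate and GN's 0.05-0.1% range); the exact ratio depends on which end of the range you take:

```python
# Rough odds comparison using the figures quoted above (illustrative only):
# ~2% of all cards RMA'd within a year vs. GN's 0.05-0.1% melt estimate.
rma_rate = 0.02
melt_range = (0.0005, 0.001)

for melt in melt_range:
    ratio = rma_rate / melt
    print(f"melt rate {melt:.2%}: a dead-on-arrival card is ~{ratio:.0f}x more likely")
```

So the "roughly 2400%" figure sits inside the 2000-4000% band that GN's range allows.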

 

I think the entire issue is very overblown.


1 hour ago, LAwLz said:

Correct me if I am wrong, but I am fairly sure the number was 0.05% to 0.1% and that was based on GN's findings when they contacted several manufacturers and distributors.

 

Source

 

 

So the 0.05-0.1% number should already be factoring in any potential "cheaper build" from AIBs.

 

 

For comparison, a large French retailer once reported that slightly over 2% of all the graphics cards they sold were RMA'd within 1 year. So going by those numbers as they stand right now, you are roughly 2400% more likely to get a dead graphics card out of the box than to have your power connector melt. At least if we go on the numbers we have access to today.

 

I think the entire issue is very overblown.

 

Interesting. Then the other main factor, besides the raw %, is the actual potential outcome. The worse the outcome, the less tolerance you can have in the %. So a melted cable, maybe a ruined card: not great. But is there much of a risk of, say, starting a house fire? It feels to me like it should stay relatively contained to your computer case. But again, I know I'm not an expert.

 

I also feel like it's likely overblown. If I had the means, I don't think it would stop me from buying one; I think I can say that anyways.


40 minutes ago, Holmes108 said:

 

Interesting. Then the other main factor, besides the raw %, is the actual potential outcome. The worse the outcome, the less tolerance you can have in the %. So a melted cable, maybe a ruined card: not great. But is there much of a risk of, say, starting a house fire? It feels to me like it should stay relatively contained to your computer case. But again, I know I'm not an expert.

 

I also feel like it's likely overblown. If I had the means, I don't think it would stop me from buying one; I think I can say that anyways.

There is no real risk of house fire. Just a destroyed connector. 

 

PCI-SIG's response makes zero sense to me. Every single case has been user error. Every single one. The statement that should be made is "how do we prevent users from making this error?", not passing the blame around, especially when Nvidia is USING the PCI-SIG's specified connector. What in God's name are they referring to with

Quote

When implementing a PCI-SIG specification, Members are responsible for the design, manufacturing, and testing, including safety testing, of their products.

Fixing the issue REQUIRES Nvidia to deviate from the specification, so why should deviating from spec be on Nvidia's plate? Be that making shorter sense pins or whatever. When you have PCI-SIG spec connectors on the market from PSU makers to card OEMs to 3rd party cable modders, are you expecting the next generation to carry both PCI-SIG specced connectors and Nvidia proprietary connectors in tandem, and expecting users who have already been shown to fuck it up to get it straight? No, that's absurd.

 

I think the clip is a nothingburger myself. These users would never have gotten feedback to begin with because it wasn't pushed in to begin with. It does not matter if there was more feedback or not; they never even got to the point of that being a consideration.


3 hours ago, Stahlmann said:

Could be blown out of proportion, but even one failure that results in a house fire could be enough to kill people. Remember what a huge deal GN made about the NZXT case that caught fire? That was also a very small percentage of known failures, afaik.

True

PC Setup: 

HYTE Y60 White/Black + Custom ColdZero ventilation sidepanel

Intel Core i7-10700K + Corsair Hydro Series H100x

G.SKILL TridentZ RGB 32GB (F4-3600C16Q-32GTZR)

ASUS ROG STRIX RTX 3080Ti OC LC

ASUS ROG STRIX Z490-G GAMING (Wi-Fi)

Samsung EVO Plus 1TB

Samsung EVO Plus 1TB

Crucial MX500 2TB

Crucial MX300 1TB

Corsair HX1200i

 

Peripherals: 

Samsung Odyssey Neo G9 G95NC 57"

Samsung Odyssey Neo G7 32"

ASUS ROG Harpe Ace Aim Lab Edition Wireless

ASUS ROG Claymore II Wireless

ASUS ROG Sheath BLK LTD

Corsair SP2500

Beyerdynamic TYGR 300R + FiiO K7 DAC/AMP

RØDE VideoMic II + Elgato WAVE Mic Arm

 

Racing SIM Setup: 

Sim-Lab GT1 EVO Sim Racing Cockpit + Sim-Lab GT1 EVO Single Screen holder

Svive Racing D1 Seat

Samsung Odyssey G9 49"

Simagic Alpha Mini

Simagic GT4 (Dual Clutch)

CSL Elite Pedals V2

Logitech K400 Plus


2 hours ago, Kisai said:

To put things in perspective, that was a 0.5% or 0.05% (I forget which) failure rate among 4090 FE's only. We do not have reports of a higher failure rate for any other card, be it NVIDIA or AMD, or AIB brands like ASUS and ZOTAC, despite AIBs tending to cheapen their builds or "factory OC" the cards, which makes them pull even more power.

It's also because other brands don't use the 12-pin power connector and are going with double or triple 8-pin connectors just to be safe.

2 hours ago, Kisai said:

As more cards from more tiers end up out there, we will likely get reports of 4080 and 4070 cards "melting" the connector over time

Only if Nvidia continue to use 12VHPWR unmodified without additional safety measures.

2 hours ago, Kisai said:

 

It'll be interesting to see what Dell and HP do for their OEM cards, since they clearly don't want to be put in a situation where their customers have to warranty the system over and over. My guess is that, seeing how much Dell loves their 30-year-old computer tower designs, they may just stick with cards built to use 4x PCIe 8-pin connectors and avoid the problem entirely. It'll be the system builders like OriginPC that will have to make a hard decision on how to prevent this. It would not surprise me if some system builders glue the connectors, or use a pigtail extension to move the weak point in the cable away from the GPU itself.

 

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022)

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


47 minutes ago, AluminiumTech said:

It's also because other brands don't use the 12-pin power connector and are going with double or triple 8-pin connectors just to be safe.

 

Not to be safe; they signed off on the same connector standards, and they had no idea about this happening when they finalized their design. They just didn't end up using over 350W on their reference 7900 XTX design, so there was nothing to justify its use when PSUs on the market do not have the connector.

 

47 minutes ago, AluminiumTech said:

Only if Nvidia continue to use 12VHPWR unmodified without additional safety measures.

As they SHOULD. They should NOT go out of standard spec and make a proprietary connector.
The spec needs a revision, not Nvidia going off on their own. That they did so with the 30 series should be condemned.


10 hours ago, Somerandomtechyboi said:

[attached screenshot: overclocked Pentium running at 1.7V vcore, reading 85°C]

You sure about that?

Ok I'll bite, looks like you overclocked the CPU and now it's running hot. What about it?


2 minutes ago, Bitter said:

Ok I'll bite, looks like you overclocked the CPU and now it's running hot. What about it?

1.7V vcore, so when it pops it's 100% user error.

The 12V rail might also be fucky, but that isn't user error.


This is on whoever decided "let's miniaturize a connector and place more power demand on it at the same time".

Workstation:  13700k @ 5.5Ghz || Gigabyte Z790 Ultra || MSI Gaming Trio 4090 Shunt || TeamGroup DDR5-7800 @ 7000 || Corsair AX1500i@240V || whole-house loop.

LANRig/GuestGamingBox: 9900nonK || Gigabyte Z390 Master || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3080Ti Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


2 hours ago, starsmine said:

Not to be safe; they signed off on the same connector standards, and they had no idea about this happening when they finalized their design. They just didn't end up using over 350W on their reference 7900 XTX design, so there was nothing to justify its use when PSUs on the market do not have the connector.

AMD just happened not to design their cards to use the 12VHPWR connector; AMD went for efficiency instead, not a 4-slot power-hungry beast of a card. I personally don't care what the reasoning is, though; the 8-pin and 6+2-pin connectors are tried and true and won't melt and ruin a graphics card.

2 hours ago, starsmine said:

As they SHOULD. They should NOT go out of standard spec and make a proprietary connector.
The spec needs a revision, not Nvidia going off on their own. That they did so with the 30 series should be condemned.

Well, if Nvidia doesn't add any extra safety measures, like using some of the pins as a voltage sensor, then I doubt PCI-SIG is going to revise their connector, because it seems like they don't want to accept any of the blame and want to shift it onto Nvidia. Some of the blame does go to Nvidia for cheaping out on the connector (most cards affected were FE cards), and Nvidia chose to use the connector instead of making the card more power efficient at stock and letting the AIBs raise power limits.
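For reference, the 12VHPWR connector already carries sideband pins, two of which (SENSE0/SENSE1) advertise the PSU's power capability to the card; ATX 3.0 cards are supposed to read them before raising power draw. A minimal sketch of that lookup, with the grounded/open encoding as commonly reported for the spec (both grounded = 600W, both open = 150W; treat the two intermediate pairings as an assumption):

```python
# Sketch of 12VHPWR sideband sensing (ATX 3.0). SENSE0/SENSE1 are each
# either tied to ground ("GND") or left open ("OPEN") by the PSU/cable.
# The intermediate pairings below are as commonly reported -- an assumption.
SENSE_TO_WATTS = {
    ("GND", "GND"): 600,
    ("GND", "OPEN"): 450,
    ("OPEN", "GND"): 300,
    ("OPEN", "OPEN"): 150,
}

def advertised_limit(sense0: str, sense1: str) -> int:
    """Power limit (W) the card should assume from the sideband pins."""
    return SENSE_TO_WATTS[(sense0, sense1)]

print(advertised_limit("GND", "GND"))    # -> 600
print(advertised_limit("OPEN", "OPEN"))  # -> 150
```

The sensing already exists in the spec; the debate here is whether Nvidia should add measures beyond it.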


41 minutes ago, AnonymousGuy said:

This is on whoever decided "let's miniaturize a connector and place more power demand on it at the same time".

That has literally zero to do with anything.


4 hours ago, Bitter said:

Ok I'll bite, looks like you overclocked the CPU and now it's running hot. What about it?

Actually, when I was running that OC it went all the way to 100°C, but that photo only captured 85°C. I was just fooling around with voltage and burning some garbage sample Pentiums.


44 minutes ago, Somerandomtechyboi said:

Actually, when I was running that OC it went all the way to 100°C, but that photo only captured 85°C. I was just fooling around with voltage and burning some garbage sample Pentiums.

So how is that the manufacturers fault?


2 minutes ago, Bitter said:

So how is that the manufacturers fault?

It's not?

Quote

User error should never result in products destroying themselves


They were responding to this, proving it false by giving an explicit example of user error (1.7V vcore) causing the product to destroy itself.


1 hour ago, Bitter said:

So how is that the manufacturers fault?

It was just some sarcasm about the line

17 hours ago, AluminiumTech said:

User error should never result in cables melting or products destroying themselves

 

I mean, some idiot's gonna break even the most idiot-proof things.


I do not see why Nvidia themselves can't deviate from the design, since the 12-pin on the 3000 series cards was their own. But of course, 12VHPWR being the standard means compatibility with future products.

 

14 hours ago, starsmine said:

I think the clip is a nothingburger myself. These users would never have gotten feedback to begin with because it wasn't pushed in to begin with. It does not matter if there was more feedback or not; they never even got to the point of that being a consideration.

 

I think the clip is the weakest link in the connector. It is not as secure as the old 8-pins'. Furthermore, the 12VHPWR connector dislodges slightly from the socket when mounted horizontally, under the weight of the cable itself.

 

If you look closely at both connectors, you'll find that the actual ends of the 12VHPWR are much shorter than the older 8-pin's.

 

[image: side-by-side comparison of an 8-pin and a 12VHPWR connector]

 

 

The 12VHPWR connector itself also gets worn in after the first or second insertion. It is very difficult to get an audible "click" out of these connectors. The most you can do to check is to pull on the connector and verify it won't come out when the clips are secured, as GamersNexus suggests. However, you'll find that the connectors can still dislodge an uncomfortable amount when pulled, even when secured, as shown in GN's video.

 

All of this is true of and repeatable on my own 4090s. There are almost zero visual indicators to show whether the connectors are secured. I would not deem this connector a hazard, but it is certainly a problematic one.

 

Looking closely at the nubs of the socket, the ones on the 12VHPWR look more like slits because they're so small compared to the ones on the 8-pin.

 

i5 2400 | ASUS RTX 4090 TUF OC | Seasonic 1200W Prime Gold | WD Green 120gb | WD Blue 1tb | some ram | a random case

 


On 12/7/2022 at 7:33 PM, AluminiumTech said:

User error should never result in cables melting or products destroying themselves

That is a hell of a claim that immediately falls apart when you think about it in any capacity for even 10 seconds.

🌲🌲🌲

 

 

 

◒ ◒ 


14 hours ago, starsmine said:

Not to be safe, they signed off on the same connector standards, they had no idea about this happening when they finalized their design. they just didnt end up using over 350W on their reference 7900xtx design to justify its use when PSUs in the market do not have the connector. 

 

Except they use full-size PCBs, which means they most likely would have just slapped on another 8-pin rather than change power delivery solutions from the lower-end cards. Note that the 4080s didn't use more than 350 watts, yet they still use that stupid connector.


13 hours ago, AnonymousGuy said:

This is on whoever decided "let's miniaturize a connector and place more power demand on it at the same time".

So I just did a quick calculation and googled the specs. It seems most manufacturers are using 16AWG cable (17AWG is the minimum for 9.2A) with these connectors, which have 0.7mm pins. Assuming the connectors are good and the cable is not recycled plastic from China, they should indeed be able to handle 9.2A per pin, which equates to about 660W across the 6 supply pins plus returns. This accounts for a 2% voltage drop over half a metre of bunched cable.

 

Is this pushing the limit of the connectors? Maybe; to me it runs close to theoretical maximums (~10% overhead). But having said that, they have been designed and tested by people with way more experience in this than you or I.
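The arithmetic above can be reproduced in a few lines. This is a sketch using the post's own assumed figures (the 9.2A terminal rating and the 2% drop are taken from the post, not measured):

```python
# Back-of-envelope 12VHPWR power budget from the figures in the post above.
pins = 6              # six +12 V supply pins (with six return pins)
amps_per_pin = 9.2    # terminal rating assumed in the post
volts = 12.0

total_watts = pins * amps_per_pin * volts          # theoretical ceiling
drop = 0.02                                        # ~2% drop over 0.5 m bunched cable
delivered = total_watts * (1 - drop)               # what actually reaches the card

print(f"theoretical: {total_watts:.0f} W, after drop: {delivered:.0f} W")
```

That reproduces the ~660W figure, with the 600W spec limit sitting roughly 10% below it.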

 

 

 

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


38 minutes ago, mr moose said:

So I just did a quick calculation and googled the specs. It seems most manufacturers are using 16AWG cable (17AWG is the minimum for 9.2A) with these connectors, which have 0.7mm pins. Assuming the connectors are good and the cable is not recycled plastic from China, they should indeed be able to handle 9.2A per pin, which equates to about 660W across the 6 supply pins plus returns. This accounts for a 2% voltage drop over half a metre of bunched cable.

 

Is this pushing the limit of the connectors? Maybe; to me it runs close to theoretical maximums (~10% overhead). But having said that, they have been designed and tested by people with way more experience in this than you or I.

While the specs say the Molex Mini-Fit Jr. connector is still good for 600W GPUs, I don't think it was ever designed for 600W; it's more of a fluke that it can withstand that. After all, the connector was designed long before today's power levels in home computers and mainly remained in use because in the 90's there really wasn't anything as cheap and fitting for the purpose. Kind of like resurfacing old lead pipes because "they are still good, just poisonous, but nothing a tiny layer of plastic couldn't fix".

 

I think it's rather disappointing that now that PCI-SIG is taking actual steps to overhaul the ATX standard, or at least making bigger changes than it has in decades, they aren't upgrading the connectors at the same time. Either way we are changing the connectors; the manufacturers need to get new crimps and consumers need to get new cables, or at least adapters, so moving to better-locking, more secure and, just in case, higher-rated connectors wouldn't hurt anyone.


9 hours ago, mr moose said:

So I just did a quick calculation and googled the specs. It seems most manufacturers are using 16AWG cable (17AWG is the minimum for 9.2A) with these connectors, which have 0.7mm pins. Assuming the connectors are good and the cable is not recycled plastic from China, they should indeed be able to handle 9.2A per pin, which equates to about 660W across the 6 supply pins plus returns. This accounts for a 2% voltage drop over half a metre of bunched cable.

 

Is this pushing the limit of the connectors? Maybe; to me it runs close to theoretical maximums (~10% overhead). But having said that, they have been designed and tested by people with way more experience in this than you or I.

If I'm on this team, I'm asking "why are we doing this?" What is being gained by saving a marginal amount of PCB space on the connectors?

 

They already had 2x 8-pin, which could have been "recalculated" to support 600W with ease, instead of whatever stupid 150W on-paper limit per connector. Bigger connectors with a much more positive mechanical lock and a higher cycle life.



14 hours ago, AnonymousGuy said:

If I'm on this team, I'm asking "why are we doing this?" What is being gained by saving a marginal amount of PCB space on the connectors?

 

They already had 2x 8-pin, which could have been "recalculated" to support 600W with ease, instead of whatever stupid 150W on-paper limit per connector. Bigger connectors with a much more positive mechanical lock and a higher cycle life.

I don't understand; what do you mean by "recalculated"? Current limits are a physics limitation. The older 8-pin connectors only had 3 +ve pins and 3 commons + 2 sense pins, so the limit is theoretically 3x whatever the pin dimensions allow, taking into account voltage drop and cable length. It is quite possible that the 8-pin Molex is good for 600W, but given you'd be pushing that 600W over half the conductors (which is why card manufacturers are putting in two and three sets, to bring the current back to what the connector can handle), you run the very real risk of overloading your PSU or overloading the cables used. Remember that even the 30 series' maximum power draw was something like 360W; it's only been with the latest rounds of GPUs that we are seeing the need for more than two 8-pins or one 12VHPWR connector.
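To put rough numbers on that headroom comparison, here is a sketch; the 7A per-pin figure for 8-pin Mini-Fit Jr HCS terminals is an assumption (actual ratings vary by terminal and wire gauge), and the 9.2A figure comes from the earlier calculation in this thread:

```python
# Compare the paper rating vs. a rough physical ceiling for each connector.
VOLTS = 12.0

# 8-pin PCIe: 3 hot pins, 150 W paper rating; ~7 A/pin assumed (HCS terminal).
pcie8_paper, pcie8_physical = 150, 3 * 7.0 * VOLTS
# 12VHPWR: 6 hot pins, 600 W paper rating; 9.2 A/pin per the earlier post.
hpwr_paper, hpwr_physical = 600, 6 * 9.2 * VOLTS

for name, paper, phys in [("8-pin", pcie8_paper, pcie8_physical),
                          ("12VHPWR", hpwr_paper, hpwr_physical)]:
    print(f"{name}: {paper} W rated, ~{phys:.0f} W ceiling "
          f"({paper / phys:.0%} of ceiling)")
```

Under these assumptions the old 8-pin ran at roughly 60% of its physical ceiling, while 12VHPWR runs at roughly 90%, which is the margin difference being argued about here.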

 

 



  • 2 weeks later...
On 12/7/2022 at 5:17 PM, starsmine said:

Every single case has been user error.

That's really convenient though... and who's to judge this anyway? 

 

Let's just say it was "user error"; doesn't that imply that the design Nvidia decided on is really not suitable for average users and should therefore have been designed differently?

 

I can think of several ways to design such a connector in an unsafe way that leads to a lot of "user errors"... wouldn't that still be a failed design?

 

TBH, yes, I think so... safety-critical features should be designed with ease of use and clear feedback to the user in mind.

Which one could definitely argue Nvidia failed to do.

 

 

TL;DR: as you can read in this very thread, Nvidia made an unsafe design with a too-short pin that doesn't give clear feedback that it's actually attached properly, and even then it might come off all by itself. This is exactly the opposite of "user error" and 100% on whoever designed this failed plug...

 

 

The direction tells you... the direction

-Scott Manley, 2021

 

Software used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 

