ATX 3.0, Power Cables 12VHPWR, (3.0 GN report)

Quackers101
4 hours ago, jagdtigger said:

 

I had seen it, and I did mention AMD could change their mind depending on lead times. NVIDIA doesn't have that option; the 4090 and 4080 are already manufactured, so to fix it they'd have to completely remanufacture the cards on a new design. Economically that's not viable, and time-wise they may not be able to do it either; it takes time to design and validate something, and the new 5000 series might be here by the time they could do that.


They just need to design in some kind of pin bonding or a cable brace so you can't physically bend it close to the connector.

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


1 hour ago, CarlBar said:

fix it they'd have to completely remanufacture the cards on a new design

Because they can't just directly attach cables and put the normal PCI power plugs on them.... :old-eyeroll: 


22 minutes ago, jagdtigger said:

Because they can't just directly attach cables and put the normal PCI power plugs on them.... :old-eyeroll: 

 

In theory, maybe; in practice I doubt it. Remember, if it doesn't meet a set industry spec, anything that goes wrong with it is NVIDIA's liability. Legally it will never fly, because NVIDIA at the minimum would be taking a huge risk. And that's assuming it wouldn't fall afoul of anyone's national laws (can't see that it would, but never discount it without knowing for sure; laws are weird sometimes with the things they allow and disallow).

 

1 hour ago, ewitte said:

They just need to design in some kind of pin bonding or a cable brace so you can't physically bend it close to the connector.

 

Again, the videos went over that; it's not a foolproof solution. It prevents the easiest ways of putting strain on the connections, but the wiring can still overstrain it depending on various variables. And that's before we get to the clearance issues it creates in horizontal mounts, which is damn near a slam-dunk not-fit-for-purpose lawsuit. Of course, so is the melting, but I expect everyone to roll the dice and hope it doesn't affect too many people, which would limit the liability. Of course, if it causes a fire that results in serious injury or death they're really deep in the shit. But I expect them to gamble that it won't.


This is what happens when you upgrade just some parts of the whole and refuse to invent or include anything actually new. Same old cheap-ass single-clip-locking, pin-and-socket connectors used since the '50s (now just smaller, oh the revolution!), just a new arrangement and "sense pins". The good old Mini-Fit Jr. keeps on going.

 

What so many seem to forget (apparently PCI-SIG too) is that the current rating of a connector isn't determined just by the AWG of the wire and the needed power, but also by the connector itself, including the terminals within.

 

The ATX standard is basically built around Molex standards, i.e. the pin-and-socket terminal design and the connector housing around them. Just to make sure no one gets confused: the "Molex connector" is fully compatible with Molex standards but was actually designed by AMP as the "AMP Mate-n-Lok" 4-pin connector (there is a whole family of Mate-n-Lok connectors), while for example the 24-pin ATX12V 2.x power supply connector is a direct Molex standard (named "Mini-Fit Jr."), and Molex actually made a connector that was nearly identical to the 4-pin Mate-n-Lok.

But either way the mind-blowing part comes from some simple math. The new 12VHPWR connector is rated for 600W (and should probably handle a bit over that), and it pretty much uses the standard Molex contacts, so the math is simple: 600W at 12V makes 50A, divided across 6 power pins (in reality the current is divided between 12 pins, but for safety you consider all the power of a pair (power + ground) as going through only one of them), and you get around 8.33A per pin. Here is the meat of it: the Molex standards also state the size of the pins and their rated amperage, as in "do not go over these" numbers: 5A for the 1.57 mm pin and 8.5A for the 2.36 mm pin (the 2.13 mm pin sits somewhere in the middle). So in the best case PCI-SIG still has 0.17A to spare per pin, assuming they use the same 2.36 mm pins as before, which is a pretty tight margin of error. For reference, Molex Mini-Fit High-Power connectors are rated for 13A, so the whole Mini-Fit family is starting to become outdated for modern computing.
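To make that margin concrete, here is a minimal back-of-the-envelope sketch in Python using only the figures quoted above (600W, 12V, 6 power pins, 8.5A per 2.36 mm terminal); treat the constants as the post's numbers, not verified datasheet values:

```python
# Rough per-pin current check for the 12VHPWR connector, using the
# figures quoted in the post above (not taken from a datasheet).

RATED_POWER_W = 600.0    # 12VHPWR rated power
RAIL_VOLTAGE_V = 12.0    # +12V rail
POWER_PINS = 6           # six +12V pins (the six grounds carry the return)
PIN_RATING_A = 8.5       # quoted rating for the 2.36 mm Mini-Fit terminal

total_current = RATED_POWER_W / RAIL_VOLTAGE_V       # 50.0 A
current_per_pin = total_current / POWER_PINS          # ~8.33 A
margin_per_pin = PIN_RATING_A - current_per_pin       # ~0.17 A

print(f"Total current:   {total_current:.2f} A")
print(f"Per-pin current: {current_per_pin:.2f} A")
print(f"Headroom/pin:    {margin_per_pin:.2f} A "
      f"({margin_per_pin / PIN_RATING_A:.1%} of the terminal rating)")
```

Which works out to roughly 2% headroom per pin at the full 600W rating, i.e. the "tight margin" described above.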

 

Things get even better when you know that the Mini-Fit family has NEWER AND IMPROVED locking mechanisms made for the Sigma lineup, which PCI-SIG graciously omitted, preferring to stay with the '50s locking mechanism because "who the fuck knows". Not to mention they could have gone with the TPA2 variants, which have improved support for the exiting wires, or even moved to the high-current variants; Molex offers quite a few designs in the Mini-Fit family. But PCI-SIG kindly decided "fuck all of that, it's too complex, we'll take the cheapest and oldest design, slap 4 sense wires on it and off we go", and boy did they do an oopsie into their morning porridge. And the biggest-brain moment is that all those improvements to the Mini-Fit connectors were made decades ago; they aren't anything new. If you still aren't scared about the new ATX standard, remember that they are thinking of moving the power conversion to the motherboard with the aim of reducing the number of wires: not making it safer or modernizing the ATX connectors to fit the new power needs, just cutting down the wire count.


6 hours ago, CarlBar said:

In theory, maybe; in practice I doubt it. Remember, if it doesn't meet a set industry spec, anything that goes wrong with it is NVIDIA's liability. Legally it will never fly, because NVIDIA at the minimum would be taking a huge risk. And that's assuming it wouldn't fall afoul of anyone's national laws (can't see that it would, but never discount it without knowing for sure; laws are weird sometimes with the things they allow and disallow).

Nvidia probably won't do it, as there is no room on their FE cards, but I'm willing to bet an AIB will come out with a card that has 4x PCIe 8-pin power plugs on it with an internally run 4x 8-pin to 12-pin. The solution really is not hard so long as you're willing to design/re-design the cooler around it, which Nvidia is not willing to do because there is zero room and tolerance for it and it would require an entirely new design.

 

A simple cable passthrough is neither hard nor provides any risk or liability at all. Whether it's a cable or part of the card itself acting as a passthrough, it's the exact same thing.


I don't understand WHY they had to design/use a new more compact connector that by default would be more fragile? Seems backwards.

-

I wonder if we will start seeing aftermarket "adapters" for these cables that are hard plugs with different angles pre-bent and you pick the one you need for your application.


2 hours ago, Bryan-10EC said:

I don't understand WHY they had to design/use a new more compact connector that by default would be more fragile? Seems backwards.

It would be nice and could make builds cleaner and things maybe a bit easier.

But yes, the fragility is really ruining it all. Why couldn't it be just a little beefier?

 

Now Hardware Busters and JayzTwoCents have sort of tested the cables, with 55-60°C temps, nothing wrong, and "simple" bending.

I would think bending the cables themselves shouldn't be much of an issue; it's more about bending that strains the connector itself.

But what Jayz did see was loose wires among the unused wires of the Nvidia adapter: 2 of the sense wires were just hanging in the sleeving, connected to nothing.

 

Now also from Guru3D, which says some of the same things. Bending works, but sometimes not.

It may be that if the adapter is bent in a way that puts pressure on the connector, the connector can come loose and put too much load on certain wires. Still more information to come from others.

https://www.guru3d.com/news-story/our-findings-12vhpwr-power-connectors-issues-likely-die-to-bending.html

Edited by Quackers101
Guru3D

17 hours ago, CarlBar said:

In theory, maybe; in practice I doubt it.

In practice it is way safer than using a connector that is proven to be quite dangerous.....


On 10/27/2022 at 12:06 AM, leadeater said:

Nvidia probably won't do it, as there is no room on their FE cards, but I'm willing to bet an AIB will come out with a card that has 4x PCIe 8-pin power plugs on it with an internally run 4x 8-pin to 12-pin. The solution really is not hard so long as you're willing to design/re-design the cooler around it, which Nvidia is not willing to do because there is zero room and tolerance for it and it would require an entirely new design.

 

A simple cable passthrough is neither hard nor provides any risk or liability at all. Whether it's a cable or part of the card itself acting as a passthrough, it's the exact same thing.

 

Oh, I agree AIBs will go with 4x 8-pins if they can (that's assuming NVIDIA hasn't mandated the 12-pin in contracts, which I doubt, but still). And yeah, NVIDIA doesn't have a choice on the reference design; the PCB is too small to fit that many 8-pins regardless of the cooler aspect.

 

On 10/27/2022 at 12:23 AM, Bryan-10EC said:

I don't understand WHY they had to design/use a new more compact connector that by default would be more fragile? Seems backwards.

-

I wonder if we will start seeing aftermarket "adapters" for these cables that are hard plugs with different angles pre-bent and you pick the one you need for your application.

 

Space. NVIDIA wanted a smaller PCB.

 

17 hours ago, jagdtigger said:

In practice it is way safer than using a connector that is proven to be quite dangerous.....

 

Safer in terms of damage and loss of life, sure. Safer for NVIDIA's bottom line? Not really. Again, it comes back to liability. Legally speaking, the 12-pin could turn out to be such a turd that it spawns a class-action lawsuit over property damage and deaths from it going up in smoke, and also results in government-mandated mass recalls of all hardware using it. But NVIDIA is completely safe: they didn't write the spec or validate it, so they have zero legal liability there.

 

Any fix they can come up with on their own quickly won't go through extensive validation (there simply isn't time), which, combined with the fact it's NVIDIA's own thing, puts all liability on them if anything goes wrong. Remember, NVIDIA is a business; protecting themselves and their income is priority number one.


8 hours ago, CarlBar said:

Safer for NVIDIA's bottom line? Not really. Again, it comes back to liability.

There are a ton of ways to safely execute this, but you seem to repeat the same things like a broken record.....


12 hours ago, jagdtigger said:

There are a ton of ways to safely execute this, but you seem to repeat the same things like a broken record.....

 

Because it's the truth. Companies are in business to make money; they don't, as a rule, risk any more litigation than they have to. Whether there is a safe way to do it is entirely separate from whether it opens NVIDIA up to litigation.

 

And as I pointed out, nothing NVIDIA can come up with on their own quickly will be properly validated. In some legal codes that means if anything whatsoever goes wrong with the cable, NVIDIA is liable because they didn't do due diligence.

 

Based on the revelations about the construction, and the fact PCI-SIG saw issues in their lab, I'd guess NVIDIA is using a PCI-SIG spec cable, whilst the tougher ones are in-house designs validated by the respective companies. Those companies specialise in cables (amongst other things), and the validation and liabilities associated with that are their business, so they accept the vulnerability to litigation as part of their business model.

 

Now that some alternatives that have been validated, and apparently do solve the issue, have been shown to exist, we'll probably just see NVIDIA switch to one of those cables. They're safe from any legal blowback that might occur (not that I'd expect any if they completely fix it).


47 minutes ago, CarlBar said:

Because it's the truth. Companies are in business to make money; they don't, as a rule, risk any more litigation than they have to. Whether there is a safe way to do it is entirely separate from whether it opens NVIDIA up to litigation.

You might want to explain a couple of things then. First, the proprietary Nvidia 12-pin connector, and secondly, how a spec-compliant cable or cable converter is such a high liability risk.

 

Manufacturing defects happen, design defects happen, and neither is more or less of a problem or liability risk whether it's a proprietary design or an open industry specification. Any and all manufacturing and design defects carry the same liability risk based on the impact, damage, risks etc.

 

Nvidia making converter cables under the PCI-SIG spec doesn't make them in any way less of a liability risk, not unless the source of the defect is the specification itself which it is not.

 

So Nvidia could simply design a replacement and recall-swap every converter cable and be at no extra risk at all; in fact less, because it would likely not have the issue. Nvidia could also update the AIB requirements on card design and force them to have two variants of everything, a 12-pin model and a 4x 8-pin model with an internal passthrough converter, and not allow them to ship 12-pin converters with the 12-pin model variant, which again wouldn't put Nvidia at any extra liability risk.

 

But either way, none of this addresses the issue of the past Nvidia proprietary 12-pin connector they designed themselves. If such a thing were such a HUGE risk then they would never have done it. Since they did it, it's really not as big of a deal as you're making out; nor is making a change to the current 12-pin PCI-SIG cables.

 

At no point would the PCI-SIG spec have been undercooked on the cable diameter or the pin contact size. Both of these would always have been able to handle the required current loads etc. However, issues like these are more complicated than looking solely at those, and wouldn't you know it, the issue had nothing to do with those two factors at all.

 

I still personally maintain the connector size itself is too small and the socketing depth is too shallow. Had these been larger, manufacturing defects would have been less likely, as there would have been more room and tolerance when constructing cables and connectors, and the extra socketing depth would make cable strain less of a problem.


1 hour ago, leadeater said:

I still personally maintain the connector size itself is too small and the socketing depth is too shallow. Had these been larger, manufacturing defects would have been less likely, as there would have been more room and tolerance when constructing cables and connectors, and the extra socketing depth would make cable strain less of a problem.

 

Absolutely, 110% agree there.

 

 

As far as liability goes, from having taken an amateur interest in various safety screwups, here's a rough overview of the causality chain as I understand it, bearing in mind that different legal codes in different countries will mess with this.

 

Also note: given we now know alternatives already exist that others have validated, my original chain of thought that spawned the opinion I'm explaining is invalidated; NVIDIA can simply go to someone else for replacements knowing said entity has a better product that's been properly checked. I'm simply typing up the below to hopefully help people understand the thought chain I was having.

 

 

If something goes wrong and it gets to court, the first question that's going to be asked is who's at fault. That largely comes down to one of three problems (if more than one entity made mistakes, they'll share the blame).

 

The first is a bad design. This tends to break down into either a poor specification, or a poor implementation of that specification (this can be failing to follow things explicitly laid out, or doing something that, whilst not prohibited by the spec, is a known problem-causer when used a specific way). Sometimes both those steps are done by the same entity, sometimes they're not. Both the spec writer and the design group are expected to validate that what they've laid out is adequate for the intended use case and also safe to use in said use case, including adequate safety margins.

 

The second is bad manufacturing. Just because you pass a good design to the manufacturer doesn't mean the final product will perform the way it did in design; manufacturers make tiny detail changes to ease production, and they're supposed to check along the way that these don't screw anything up, but sometimes they mess this up (see the issues the British Army had with, I think it was the L96, due to outsourcing of manufacture). Again, sometimes the designer and the manufacturer can be the same entity.

 

The third is the retailer. They're expected to check that whatever they're selling, if subject to safety regulations, is in line with those, but they usually don't have to go beyond contractual guarantees and a basic common-sense look.

 

In general, as you move down from the designer, the degree of checking needed to meet due diligence goes down. Thus a retailer looking to stock a new product can do their checks quickly, but a new design or a new manufacturer has a much greater lead time induced by all the checks they need to do. The other catch is that in some locales failing to do due diligence can make you liable for almost anything that goes wrong with a product, even if said thing had nothing to do with the skipped steps in the due diligence process.

 

Also, if a company that contracts another pushes the contracted company to skip steps, it can also be liable even if it didn't do anything wrong in its own part of the due diligence.

 

 

 

Now, I don't know for sure where the NVIDIA-supplied adapters came from, but I doubt NVIDIA designed or manufactured them themselves. Most likely they contracted a third party to supply NVIDIA-branded cables of said third party's design, much as they do for everything else on their GPUs except the GPU die itself and their Founders Edition coolers (and they still don't manufacture those, just design them). Given PCI-SIG saw similar issues in the lab, I'd have to assume they either share a supplier, or NVIDIA's supplier used the same design as a starting point as whoever made the cables PCI-SIG was using internally when running tests.

 

My argument (bearing in mind I hadn't read the bits about the cable construction etc. at the start of that) was that fixing it would require a new design, and the lead time to do that properly is simply too long to be practical. Due-diligence testing from the design board to the customer's hands takes a while; NVIDIA would be close to releasing the 5000 series by then. That means either they, or whoever they contract for the replacement, would have to cut corners, and that leaves everyone legally vulnerable. And if NVIDIA did it in house, even if they could somehow get the validation done super fast whilst getting it done right, if they miss anything it all comes back to them; miss anything, even a reasonable something, and they're on the hook. While they're buying from a third party, provided they take some very basic steps, any liability goes back to the manufacturer, the designer, or at worst the specification setter (which I assume is PCI-SIG).


13 hours ago, CarlBar said:

Because it's the truth.

More like a convenient excuse..... :old-eyeroll: If NV had any meat in their heads they wouldn't use a connector that has zero support in current PSUs. Plus they could easily have made a temporary daughter board that screws onto the back of the card and solders directly into the place of this flimsy connector, so it could be converted to the existing connectors without creating a fire hazard.


The rabbit hole goes deeper, plus some slightly older news.

Different cables, how the cable was made, Nvidia going out to vendors etc.

Some adapters; I'm not sure if I understood PCWorld right on whether some could be bought from Wish (or about other items). GN, @aschilling and others are trying to reach out to buyers about their adapters. 150V or 300V wire, from the 450W-rated (via missing sense pins) to 600W cables, and whether some people are getting bad adapters, etc. Note the rating might not matter; the bad batch or whatever it is could be bundled with any of them (until we know more).

 

Podcast from PCWorld:

https://www.youtube.com/watch?v=hRK3r3Tw7VI

 


Jonny Guru (I guess it's his account?) also didn't find anything wrong when testing adapters.

With some older statements around the topic. Not to say that the adapters were ever the core issue, but they add complexity, and adapters of varying quality may be sold.

As for "incorrect usage" seems a bit off, but also could be another case of issues to deal with, again why the connector wouldn't have a "proper fit" and connection from the start? Also that the cable or connector might get an update towards december? maybe.
 
MSI is also adding their "tips and tricks"; if it was meant as a tutorial it would seem a bit, eh. Some of it can help, and some of it seems a bit pointless, but I do wonder if they have experienced something when it comes to poor contact too.
Edited by Quackers101

Update 2.1

 

NTK vs Astron, and some of the differences and annoyances of dealing with this "new standard".

https://www.igorslab.de/en/good-or-bad-adapter-different-12vhpwr-adapter-for-nvidias-geforce-rtx-4090-and-where-you-can-see-backgrounds-investigative/

*deleted*

 

Another side note, ugh; a future update (if it happens) could mean:

Quote

Conversely, this means that the graphics card will no longer start without the first two sense pins being assigned or recognized
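For context on what those sense pins do, here is a hedged sketch of how a card's firmware might map the two 12VHPWR sense pins to a power budget. The wattage table follows commonly cited ATX 3.0 / PCIe CEM summaries (both pins grounded = 600W, both floating = 150W; the two intermediate rows may be swapped relative to the actual spec table), and initial_power_budget is a hypothetical helper, not any real driver API:

```python
# Hedged sketch of reading the two 12VHPWR sense pins to pick a power budget.
# The exact mapping is an assumption based on commonly cited summaries.

from enum import Enum

class Pin(Enum):
    GROUNDED = 0   # pin pulled to ground by the cable
    OPEN = 1       # pin floating / not connected (e.g. a missing sense wire)

POWER_LIMIT_W = {
    (Pin.GROUNDED, Pin.GROUNDED): 600,
    (Pin.GROUNDED, Pin.OPEN):     450,   # intermediate rows may be swapped
    (Pin.OPEN,     Pin.GROUNDED): 300,   # relative to the real spec table
    (Pin.OPEN,     Pin.OPEN):     150,
}

def initial_power_budget(sense0: Pin, sense1: Pin, strict: bool = False) -> int:
    """Return the cable's advertised power budget in watts.

    With strict=True this mimics the behaviour described in the quote above:
    refuse to start if the sense pins are not recognized (modelled here as
    both pins floating).
    """
    if strict and sense0 is Pin.OPEN and sense1 is Pin.OPEN:
        raise RuntimeError("Sense pins not detected - refusing to power up")
    return POWER_LIMIT_W[(sense0, sense1)]

# A cable or adapter with its sense wires left unconnected (as seen in some
# adapters) would only advertise the minimum budget, or fail in strict mode.
print(initial_power_budget(Pin.GROUNDED, Pin.GROUNDED))  # 600
print(initial_power_budget(Pin.OPEN, Pin.OPEN))          # 150
```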


On 10/26/2022 at 11:35 AM, CarlBar said:

Again, the videos went over that; it's not a foolproof solution. It prevents the easiest ways of putting strain on the connections, but the wiring can still overstrain it depending on various variables. And that's before we get to the clearance issues it creates in horizontal mounts, which is damn near a slam-dunk not-fit-for-purpose lawsuit. Of course, so is the melting, but I expect everyone to roll the dice and hope it doesn't affect too many people, which would limit the liability. Of course, if it causes a fire that results in serious injury or death they're really deep in the shit. But I expect them to gamble that it won't.

It is absolutely fixable by making sure the cables are designed better.  This is why I've decided against getting a new PSU.  There will be minor changes to the design to make this less and less of a problem.

 

Also: there has always been a risk of cables melting. The chance of an actual fire is way, way lower. When I had a melted cable on my 2080, all I had to do was replace the cable.

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


3 hours ago, Quackers101 said:

+ Nvidia may be able to try and void the warranty over using the "wrong cable" if you experience problems with anything other than the native cable. Depends on whether that's actually anything, but be aware.

No, they aren't.
I see the verbiage you are talking about, but no, Nvidia is not doing that. That's there for people crosswiring it or shoving a 10-pin in rather than a 12VHPWR, not for using a CableMod, Corsair, or whatever cable. That verbiage has been there for a decade or more.


5 hours ago, starsmine said:

No, they aren't.
I see the verbiage you are talking about, but no, Nvidia is not doing that. That's there for people crosswiring it or shoving a 10-pin in rather than a 12VHPWR, not for using a CableMod, Corsair, or whatever cable. That verbiage has been there for a decade or more.

Well, you never know; it might not happen. If it does, it would be something to push back on. Like with most software, and at times hardware too, it's a bit of wishy-washy or obscure legal language that could be anti-consumer if used a different way. As with most EULAs that use that kind of language, you just have to trust the company to respect it. "That's there for people crosswiring it or shoving a 10-pin in rather than a 12VHPWR": did it specify that specifically, or is that just how you hope the language would be read? Sure, I wouldn't expect it to be taken that way either; just be aware is all I said.


3 hours ago, Quackers101 said:

Well, you never know; it might not happen. If it does, it would be something to push back on. Like with most software, and at times hardware too, it's a bit of wishy-washy or obscure legal language that could be anti-consumer if used a different way. As with most EULAs that use that kind of language, you just have to trust the company to respect it. "That's there for people crosswiring it or shoving a 10-pin in rather than a 12VHPWR": did it specify that specifically, or is that just how you hope the language would be read? Sure, I wouldn't expect it to be taken that way either; just be aware is all I said.

No, that isn't what I "hope" it would mean; that's how it works, and that is also how Magnuson-Moss works if it ever does go that far. But it never has. This isn't new language.

People reacting to this are the same people reacting to PCIe plugs having a rating of 30 insertions: it's OLD news and not relevant news.


Basically user error, and a design oversight that allows that user error to happen. But it still is USER ERROR, not a problem with the connector itself.

