
PCI Special Interest Group blames Nvidia for RTX 40 series 12VHPWR melting cables

AluminiumTech

Summary

The PCI Special Interest Group (hereafter PCI-SIG) has responded to criticism of the new 12VHPWR PCIe power connector, which has been melting in PCs where the cable is poorly seated in the graphics card.

 

 

PCI-SIG has stated that it is up to each implementer to design proper safety measures and to test them in their products, and that implementers, not PCI-SIG, are responsible for this, effectively throwing Nvidia under the bus for not having done sufficient testing to prevent the melting-cable issues experienced by customers of RTX 40 series graphics cards.

 

Quotes

Quote

"Members are reminded that PCI-SIG specifications provide necessary technical information for interoperability and do not attempt to address proper design, manufacturing methods, materials, safety testing, safety tolerances, or workmanship," the statement reads. "When implementing a PCI-SIG specification, Members are responsible for the design, manufacturing, and testing, including safety testing, of their products."

 

My thoughts

Honestly, it makes sense that Nvidia is to blame. User error should never result in cables melting or products destroying themselves; this is product safety 101. Hopefully this pushes Nvidia to implement a software or hardware fix, such as working to have the 12VHPWR cables recalled and replaced with a design where no power is delivered at all unless the connector clicks in all the way.
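That kind of "no click, no power" interlock is easy to express as logic. The sketch below is purely illustrative: the signal names and checks are hypothetical, not the real ATX 3.0 / 12VHPWR sense-pin semantics.

```python
# Hypothetical sketch of a "no click, no power" interlock.
# These signals are made up for illustration; they are not the
# actual 12VHPWR sideband behavior.

def allow_power_draw(sense_pins_present: bool,
                     latch_clicked: bool,
                     all_pins_seated: bool) -> bool:
    """Enable the 12V load only when every seating check passes."""
    return sense_pins_present and latch_clicked and all_pins_seated

# A connector that never clicked in gets no power instead of melting:
print(allow_power_draw(sense_pins_present=True,
                       latch_clicked=False,
                       all_pins_seated=True))  # False
```

The point is simply that a partially seated connector would fail closed (no power) rather than fail hot.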

 

Sources

 https://arstechnica.com/gadgets/2022/12/pci-standards-group-deflects-assigns-blame-for-melting-gpu-power-connectors/

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 7T (Mid 2021 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

Samaritan XTXH (Early 2021 Upgrade - AMD Ryzen 9 3900XT (12C/24T)  (2021) , MSI X370 Gaming Pro Carbon,  32GB DDR4-3200 (Late 2022) ,  AMD Radeon RX 6700XT 12GB (Mid 2021), Corsair RM850i PSU, Noctua NH-D15 CPU Cooler (2021), Samsung 860 EVO 500GB SSD, Corsair MP510 1.92TB NVMe SSD , NZXT S340 Elite, beQuiet Pure Wings 2 120mm, Corsair ML120 x2 (2021)


37 minutes ago, AluminiumTech said:

Members are reminded that PCI-SIG specifications provide necessary technical information for interoperability and do not attempt to address proper design

Hmm, that's a bit of a cheap answer in my opinion, and I have to agree with the article when it says

Quote

This statement appears designed to absolve the PCI-SIG of any blame in the melting-power-connector saga

I agree that members are responsible for testing their products, but (unless they already do this) PCI-SIG could definitely come up with design guidelines to help with that.

Crystal: CPU: i7 7700K | Motherboard: Asus ROG Strix Z270F | RAM: GSkill 16 GB@3200MHz | GPU: Nvidia GTX 1080 Ti FE | Case: Corsair Crystal 570X (black) | PSU: EVGA Supernova G2 1000W | Monitor: Asus VG248QE 24"

Laptop: Dell XPS 13 9370 | CPU: i5 10510U | RAM: 16 GB

Server: CPU: i5 4690k | RAM: 16 GB | Case: Corsair Graphite 760T White | Storage: 19 TB


Wait... So is PCI SIG saying that the design of the connector is not their responsibility? That can't be right. Surely things like the retention mechanism should be part of the standardized design in order to ensure that it actually works as intended even when mixing and matching products from different vendors. 

 

The testing and quality control I can understand as being up to the manufacturer, but part of the issue (the small issue I might add, there are very few reports of cables being burnt) is the design not giving enough feedback that the cable is fully secured. Surely that should be part of the standardized design and not left up to individual manufacturers to figure out. 


The classic "no u". Sure, let's shift the blame instead of trying to figure out how to solve the problem at hand.

About monitor marketing BS

 

CPU: AMD Ryzen 5 5600X - Motherboard: ASUS ROG Strix B550-E - GPU: PNY RTX 3080 XLR8 Epic-X - RAM: 4x8GB (32GB) G.Skill TridentZ RGB 3600MHz CL16 - PSU: Corsair RMx (2018) 850W - Storage: 500 GB Corsair MP600 (Boot) + 2 TB Sabrent Rocket Q (Storage) - Cooling: EK, HW Labs & Alphacool custom loop - Case: Lian-Li PC O11 Dynamic - Fans: 6x Noctua NF-A12x25 chromax - AMP/DAC: FiiO K5 Pro - OS: Windows 10 Pro - Monitor: LG C2 OLED 42" - Mouse: Logitech G Pro - Keyboard: Logitech G915 TKL - Headphones: Beyerdynamic Amiron Home - Microphone: Antlion ModMic


1 minute ago, Stahlmann said:

The classic "no u". Sure, let's shift the blame instead of trying to figure out how to solve the problem at hand.

Wasn't it already confirmed that the issue is because of user error - not inserting the adapter correctly? 

MOBO: ASUS ROG STRIX Z490-G GAMING (Wi-Fi) | CPU: Intel Core i7-10700K @5GHz + Corsair H100x | GPU: ASUS ROG STRIX RTX 3080 Ti OC LC @2.1GHz | RAM: G.SKILL TridentZ RGB @3700MHz, CL16-16-16-36 | SSD: Samsung EVO Plus 1TB, Samsung EVO Plus 1TB, Crucial MX500 2TB, Crucial MX300 1.05TB | PSU: Corsair HX1200i | CASE: HYTE Y60 (White/Black) MONITOR: Samsung Odyssey G9 @5120x1440p, 240Hz, Samsung CRG9 @5120x1440p, 120Hz


2 minutes ago, LAwLz said:

Wait... So is PCI SIG saying that the design of the connector is not their responsibility? That can't be right. Surely things like the retention mechanism should be part of the standardized design in order to ensure that it actually works as intended even when mixing and matching products from different vendors. 

No, they're saying the manufacture of it, and of products using it, isn't their responsibility. To me it seems like they're just deflecting the issue because the connector is lacking a securing mechanism.

 

2 minutes ago, LAwLz said:

The testing and quality control I can understand as being up to the manufacturer, but part of the issue (the small issue I might add, there are very few reports of cables being burnt) is the design not giving enough feedback that the cable is fully secured. Surely that should be part of the standardized design and not left up to individual manufacturers to figure out. 

Welcome to the current state of "too big to fail" corporations. Everyone just goes "above my paygrade" and ignores things that they aren't tasked with.

 

It's pretty clear there is blame to share:

PCI-SIG for not making the connector deeper or giving it a better retention mechanism.

NVIDIA for shipping adapters that are wired in different ways.

NVIDIA for building cards that don't check for continuity/resistance on all pins before pulling power from them

 

There are other things on the PC that are poor designs, like the USB 3.0 header (which also has no retention mechanism), but we don't expect to pull 600 W across them.
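The continuity/resistance idea above can be sketched in a few lines. Only the pin count is real (12VHPWR carries its load over six 12V pins); the millohm threshold and the idea of per-pin readings are assumptions for illustration, not anything shipping cards actually expose.

```python
# Illustrative per-pin contact check before pulling full power.
# MAX_CONTACT_MILLIOHM is a made-up threshold for demonstration.

POWER_PINS = 6               # 12VHPWR delivers 12V over six power pins
MAX_CONTACT_MILLIOHM = 10.0  # hypothetical acceptable contact resistance

def safe_to_pull_full_power(milliohm_readings):
    # A missing reading means a pin isn't making contact at all.
    if len(milliohm_readings) != POWER_PINS:
        return False
    # One high-resistance pin forces the others to carry extra current,
    # which is exactly the localized-overheating scenario at issue.
    return all(r <= MAX_CONTACT_MILLIOHM for r in milliohm_readings)

print(safe_to_pull_full_power([4.1, 3.9, 4.0, 4.2, 3.8, 4.0]))   # True
print(safe_to_pull_full_power([4.1, 3.9, 55.0, 4.2, 3.8, 4.0]))  # False
```

A card running this kind of check could refuse to ramp to full load, or derate, when a crooked or half-seated connector leaves one pin out of spec.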

 


1 minute ago, BetteBalterZen said:

Wasn't it already confirmed that the issue is because of user error - not inserting the adapter correctly? 

Yeah, but instead of trying to overhaul the connector to prevent that, now everyone says it's not their fault.


2 minutes ago, Stahlmann said:

Yeah, but instead of trying to overhaul the connector to prevent that, now everyone says it's not their fault.

People need to insert them cables right 😂


PCI-SIG just became Apple; they basically told NVIDIA they were holding it wrong.

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


21 minutes ago, BetteBalterZen said:

People need to insert them cables right 😂

Thing is, with the Nvidia cable you can insert it correctly and clip it, but it can still be crooked enough to have a bad connection. That can't be done with cables from the likes of Corsair, as their cable demands a closer, snugger fit.

 

So either the clipping mechanism isn't fully standardized, Nvidia cheaped out and accepted higher tolerances than they should have in manufacturing (card and/or cable), or people just got unlucky.


9 minutes ago, jaslion said:

Thing is, with the Nvidia cable you can insert it correctly and clip it, but it can still be crooked enough to have a bad connection. That can't be done with cables from the likes of Corsair, as their cable demands a closer, snugger fit.

 

So either the clipping mechanism isn't fully standardized, Nvidia cheaped out and accepted higher tolerances than they should have in manufacturing (card and/or cable), or people just got unlucky.

Hmm, well, after watching GN's video on this, it still seems like the fault is on the user?


13 minutes ago, BetteBalterZen said:

Hmm, well, after watching GN's video on this, it still seems like the fault is on the user?

I do think some are, but some aren't.

 

There's also not really a guide on what to do and what not to do with the cable, so most people will just assume it's the same as before, or not know better.

 

Which is partly user error, but very much on the manufacturer for not including anything about how to install it correctly.


53 minutes ago, LAwLz said:

Wait... So is PCI SIG saying that the design of the connector is not their responsibility? That can't be right. Surely things like the retention mechanism should be part of the standardized design in order to ensure that it actually works as intended even when mixing and matching products from different vendors. 

 

The testing and quality control I can understand as being up to the manufacturer, but part of the issue (the small issue I might add, there are very few reports of cables being burnt) is the design not giving enough feedback that the cable is fully secured. Surely that should be part of the standardized design and not left up to individual manufacturers to figure out. 

It's more like they're saying that they only give you the bare minimum guidelines for things to be compatible with each other; whether those things are up to snuff in terms of safety and capability is then up to you. They could certainly do things to minimize this problem but, according to them, it's not their responsibility to do so. As long as the connector standard conceivably allows for the required load, they consider their job done.

 

This could well be true or not, depending on what agreements are in place between manufacturers and the SIG. Given that Nvidia spearheaded the push for this, I also think it's reasonable to expect them to test their stuff... and if they had a problem with the connector design, surely they could have said so before the standard was finalized. Nvidia has kind of dug its own grave here, though it's not really a big deal. If it turns out Nvidia complained about the design but wasn't listened to, then yeah, that would be the SIG's fault.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*

What is scaling and how does it work? Asus PB287Q unboxing! Console alternatives :D Watch Netflix with Kodi on Arch Linux Sharing folders over the internet using SSH Beginner's Guide To LTT (by iamdarkyoshi)

Sauron'stm Product Scores:

Spoiler

Just a list of my personal scores for some products, in no particular order, with brief comments. I just got the idea to do them so they aren't many for now :)

Don't take these as complete reviews or final truths - they are just my personal impressions on products I may or may not have used, summed up in a couple of sentences and a rough score. All scores take into account the unit's price and time of release, heavily so, therefore don't expect absolute performance to be reflected here.

 

-Lenovo Thinkpad X220 - [8/10]

Spoiler

A durable and reliable machine that is relatively lightweight, has all the hardware it needs to never feel sluggish and has a great IPS matte screen. Downsides are mostly due to its age, most notably the screen resolution of 1366x768 and usb 2.0 ports.

 

-Apple Macbook (2015) - [Garbage -/10]

Spoiler

From my perspective, this product has no redeeming factors given its price and the competition. It is underpowered, overpriced, impractical due to its single port and is made redundant even by Apple's own iPad pro line.

 

-OnePlus X - [7/10]

Spoiler

A good phone for the price. It does everything I (and most people) need without being sluggish and has no particularly bad flaws. The lack of recent software updates and relatively barebones feature kit (most notably the lack of 5GHz wifi, biometric sensors and backlight for the capacitive buttons) prevent it from being exceptional.

 

-Microsoft Surface Book 2 - [Garbage - -/10]

Spoiler

Overpriced and rushed, offers nothing notable compared to the competition, doesn't come with an adequate charger despite the premium price. Worse than the Macbook for not even offering the small plus sides of having macOS. Buy a Razer Blade if you want high performance in a (relatively) light package.

 

-Intel Core i7 2600/k - [9/10]

Spoiler

Quite possibly Intel's best product launch ever. It had all the bleeding edge features of the time, it came with a very significant performance improvement over its predecessor and it had a soldered heatspreader, allowing for efficient cooling and great overclocking. Even the "locked" version could be overclocked through the multiplier within (quite reasonable) limits.

 

-Apple iPad Pro - [5/10]

Spoiler

A pretty good product, sunk by its price (plus the extra cost of the physical keyboard and the pencil). Buy it if you don't mind the Apple tax and are looking for a very light office machine with an excellent digitizer. Particularly good for rich students. Bad for cheap tinkerers like myself.

 

 


1 hour ago, BetteBalterZen said:

Wasn't it already confirmed that the issue is because of user error - not inserting the adapter correctly? 

The more people use a process, the more robust it needs to be. In this case the connector has almost no auditory feedback when connected properly (and no, Virginia, a gain-boosted microphone 3mm away from the connection doesn't count), and no physical feedback or visual difference between connected and unconnected. Thus, when performing cable management, the connector can start to wiggle loose a few mm. At that point, especially given the design of the cards, it is extremely difficult to notice the loose connector inside your case. However, even when loose, it still provides power right up until it melts the connector. It is a shit connector design for large-scale use. GN's advice is to connect the power cables before you slot the card in, then steadily pull on the power cable while wriggling it back and forth to ensure a proper connection, which I've never seen anyone do when building in an actual case in 20 years. I suppose it might be more common practice when building in extremely small form factor cases.


31 minutes ago, jaslion said:

I do think some are, but some aren't.

 

There's also not really a guide on what to do and what not to do with the cable, so most people will just assume it's the same as before, or not know better.

 

Which is partly user error, but very much on the manufacturer for not including anything about how to install it correctly.

Hmm yeah good points... 


13 minutes ago, ravenshrike said:

The more people use a process, the more robust it needs to be. In this case the connector has almost no auditory feedback when connected properly (and no, Virginia, a gain-boosted microphone 3mm away from the connection doesn't count), and no physical feedback or visual difference between connected and unconnected. Thus, when performing cable management, the connector can start to wiggle loose a few mm. At that point, especially given the design of the cards, it is extremely difficult to notice the loose connector inside your case. However, even when loose, it still provides power right up until it melts the connector. It is a shit connector design for large-scale use. GN's advice is to connect the power cables before you slot the card in, then steadily pull on the power cable while wriggling it back and forth to ensure a proper connection, which I've never seen anyone do when building in an actual case in 20 years. I suppose it might be more common practice when building in extremely small form factor cases.

Also good points


9 minutes ago, ravenshrike said:

GN's advice is to connect the power cables before you slot the card in, then steadily pull on the power cable while wriggling it back and forth to ensure a proper connection, which I've never seen anyone do when building in an actual case in 20 years. I suppose it might be more common practice when building in extremely small form factor cases.

Nah, the problem with trying to deflect to "user error" is that it's usually not the "user", it's the builder.

 

The builder might build it 100%, but then when it's shipped to the user, it's loosened.

 

You don't want builders having to epoxy/superglue the cable to ensure that it stays put.

 


3 minutes ago, Kisai said:

Nah, the problem with trying to deflect to "user error" is that it's usually not the "user", it's the builder.

 

The builder might build it 100%, but then when it's shipped to the user, it's loosened.

 

You don't want builders having to epoxy/superglue the cable to ensure that it stays put.

 

Even then, those still come loose. I've seen it happen again and again. That, or if too much force is used, the connector gets damaged. Very common for USB 3.0 headers and other USB headers on boards.

 

Now with Nvidia, I see it being a big issue for system integrators, as it shows how little is needed for a small number of people to have disastrous issues.

 

What we also have to keep in mind is that the 16-pin cable is THE new cable, so the more cards are released with it, the more reports we'll get, especially when the usual $1000-class prebuilts start receiving them.

 

The fact that a power cable can even have this issue is worrying, especially as it's meant to replace a tried and true connector.


3 hours ago, BetteBalterZen said:

Hmm, well, after watching GN's video on this, it still seems like the fault is on the user?

I disagree. Just because the risk can be greatly reduced by the customer (even possibly eliminated) doesn't automatically make it the customer's 'fault'. A company can make something easier and easier for a customer to use "wrong", and eventually that's gonna fall on the company.

 

It's reasonable to debate where that line is in each situation, and what % falls on the consumer in this particular case. I'm just saying, in a general sense, I can come up with all kinds of scenarios, real and imagined, where a consumer can use something "wrong" and it's still gross negligence by the company for allowing it to happen so easily.

 

So if there are decades of history where everyday Joes can plug in computer components and not start a fire, and suddenly there's a product where plugging it in not quite right can start a fire, I don't see how you point at the consumer. We're not talking about ignorant people trying to wire a fuse box.

 

I don't claim to know if the failure/burn % is out of whack compared to other home electronics... so that's the big question to me. Is this a greater-than-usual hazard, or within reasonable limits and just blown out of proportion? That's what I'm curious about. But assuming this is a defect that's failing to an unacceptable degree, then I don't blame the consumer virtually at all.


Best-Spider-Man-Memes.png

DAC/AMPs:

Klipsch Heritage Headphone Amplifier

Headphones: Klipsch Heritage HP-3 Walnut, Meze 109 Pro, Beyerdynamic Amiron Home, Amiron Wireless Copper, Tygr 300R, DT880 600ohm Manufaktur, Fidelio X2HR

CPU: Intel 4770, GPU: Asus RTX3080 TUF Gaming OC, Mobo: MSI Z87-G45, RAM: DDR3 16GB G.Skill, PC Case: Fractal Design R4 Black non-iglass, Monitor: BenQ GW2280, Asus Zenscreen OLED MQ16AH


1 minute ago, Holmes108 said:

I disagree. Just because the risk can be greatly reduced by the customer (even possibly eliminated) doesn't automatically make it the customer's 'fault'. A company can make something easier and easier for a customer to use "wrong", and eventually that's gonna fall on the company.

 

It's reasonable to debate where that line is in each situation, and what % falls on the consumer in this particular case. I'm just saying, in a general sense, I can come up with all kinds of scenarios, real and imagined, where a consumer can use something "wrong" and it's still gross negligence by the company for allowing it to happen so easily.

 

So if there are decades of history where everyday Joes can plug in computer components and not start a fire, and suddenly there's a product where plugging it in not quite right can start a fire, I don't see how you point at the consumer. We're not talking about ignorant people trying to wire a fuse box.

 

I don't claim to know if the failure/burn % is out of whack compared to other home electronics... so that's the big question to me. Is this a greater-than-usual hazard, or within reasonable limits and just blown out of proportion? That's what I'm curious about. But assuming this is a defect that's failing to an unacceptable degree, then I don't blame the consumer virtually at all.

Good points


4 minutes ago, BetteBalterZen said:

Good points

 

But as I said, at the same time, maybe this is blown out of proportion? I know Steve and Nvidia were able to put a % on the failure, but the number is kind of meaningless to me. It sounds small, but is it? Is it more dangerous than a random extension cord or a Christmas tree? I just don't know. I guess that's for the experts/authorities to figure out.
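For what it's worth, the arithmetic behind a rate like that is easy to sanity-check. The figures below are assumed round numbers for illustration, not confirmed data from Nvidia:

```python
# Hypothetical back-of-envelope failure-rate check.
failures = 50             # assumed number of reported melted connectors
units_in_field = 125_000  # assumed number of cards shipped

rate = failures / units_in_field
print(f"{rate:.2%}")  # 0.04%
```

A bare percentage only becomes meaningful once you compare it against the failure rates of other household electrics, which is exactly the comparison nobody here has numbers for.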


Just now, Holmes108 said:

 

But as I said, at the same time, maybe this is blown out of proportion? I know Steve and Nvidia were able to put a % on the failure, but the number is kind of meaningless to me. It sounds small, but is it? Is it more dangerous than a random extension cord or a Christmas tree? I just don't know.

Yup true true, I do think it is blown a bit out of proportion yeah


54 minutes ago, BetteBalterZen said:

Yup true true, I do think it is blown a bit out of proportion yeah

Could be blown out of proportion, but even one failure that results in a house fire could be enough to kill people. Remember what a huge deal GN made about the NZXT case that caught fire? That was also a very small percentage of known failures, afaik.

