
AMD once again violating power specifications? (AMD RX-480)

Majestic

Oh shit, guys.

Let's go murder Nvidia now:

[attached graph: GTX 960 power draw]

The GTX 960 hits over 225W on the PCIE bus!

The destroyer of all motherboards has lived among us all along. 

 

Oh wait. Let's be real for a moment: this is a problem exclusively with the Asus card. No other GTX 960 does that. But it consistently hovers around 75W (often exceeding it and frequently reaching 150W) and spikes to over 225W. And I haven't heard of fried motherboards from that.


19 minutes ago, mariushm said:

@Sintezza : a very simple method would be to use PCI-E x16 riser cards / extenders; here's an example of such a cable: https://www.amazon.com/Express-Riser-Extender-Flexible-Extension/dp/B008BZBFTG

 

[attached image: PCI-E riser cable]

 

You have the pci express slot pinout here: https://en.wikipedia.org/wiki/PCI_Express#Pinout

Basically, 5 of the first 6 pins in the slot are 12V, so they can simply cut those wires and connect a measurement tool (to log the current and voltage) between those pins of the riser cable and the wires going into the motherboard slot. (They could also bypass the motherboard and connect those pins straight to the power supply, but then the measurements wouldn't be fair, and they wouldn't notice if the current draw affects the motherboard in some way, for example if they wanted to measure the temperature of the motherboard surface for hot spots.)

The equipment they use is quite high end and capable of measuring the instantaneous voltage and current at least a few hundred times a second, which gives them the ability to show when the card pulls much more current than normal for very brief moments.

 

Of course, introducing equipment between the video card and the motherboard/power supply introduces some power losses, but with good-quality equipment these can be ignored (the deviations/errors are usually less than 0.05% of the real values, so maybe a few tenths of a watt of error).
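
To make the logging step concrete, here is a minimal sketch of what you'd do with the samples once you have them; the function name, the 10 kHz rate and the fake data are purely for illustration, not what the reviewers actually run:

# Minimal sketch: turn logged voltage/current samples from the tapped 12V slot
# pins into instantaneous power, the average, and time spent above the limit.
# The function name, 10 kHz rate and example data are invented for illustration.

def summarize_peg_power(volts, amps, sample_rate_hz=10_000):
    """volts and amps are equal-length sample lists taken at sample_rate_hz."""
    power = [v * i for v, i in zip(volts, amps)]          # instantaneous watts
    avg_w = sum(power) / len(power)
    peak_w = max(power)
    # 66W is roughly what the spec allots to the slot's 12V pins
    # (75W total minus the 3.3V budget)
    ms_over_spec = sum(1 for p in power if p > 66.0) / sample_rate_hz * 1000
    return avg_w, peak_w, ms_over_spec

# example with made-up numbers: a steady ~6A draw plus one short spike
volts = [12.05] * 1000
amps  = [6.0] * 990 + [9.5] * 10
print(summarize_peg_power(volts, amps))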

 

Yes, that would be a very good test method. :)

But without going too deeply into this, I cannot imagine that they have not tested any of this before the card leaves the factory.

I do have to say that I'm personally very skeptical about the single 6-pin design as well, knowing that it's basically a 150W TDP card.

Custom designs will probably come with 8-pin connectors.

But still, I cannot imagine that any of this has not been tested whatsoever.

 


41 minutes ago, Prysin said:

it's spikes, not constant load.

Spiking every 40us is nothing to sneeze at.

 

38 minutes ago, Trixanity said:

That remains to be seen. They might have to convince someone to do it pro-bono or put some of that leftover Intel money toward lawyers.

Can't we petition the admins to give him that title? That would be great.

I provide sources more often than Lawlz and Prysin, so it's not really fitting.



4 minutes ago, Trixanity said:

 

That feels dishonest of Tom's Hardware to state it like that, since it has apparently happened before with a few cards.


1 minute ago, patrickjp93 said:

Spiking every 40us is nothing to sneeze at.

 

I provide sources more often than Lawlz and Prysin, so it's not really fitting.

Just banter, man. 

 

No one provides sources the vast majority of the time - myself included. But if a source is needed or requested, it's only right to provide it, or to explicitly state that you cannot provide one or can no longer find it. I try to do that myself when confronted with the lack of a source. I usually don't provide one up front if I don't have it immediately available, because I often don't want to spend time looking for it. Why? Sometimes the sources can be a bit obscure, or a claim is based on experience/knowledge/memory that you then have to find a reliable source to back up.


8 minutes ago, patrickjp93 said:

Spiking every 40us is nothing to sneeze at.

 

I provide sources more often than Lawlz and Prysin, so it's not really fitting.

To qualify: I said it because you get singled out by said requests.



19 minutes ago, Trixanity said:

Oh shit, guys.

Let's go murder Nvidia now:

The GTX 960 hits over 225W on the PCIE bus!

The destroyer of all motherboards has lived among us all along. 

 

Oh wait. Let's be real for a moment: this is a problem exclusively with the Asus card. No other GTX 960 does that. But it consistently hovers around 75W (often exceeding it and frequently reaching 150W) and spikes to over 225W. And I haven't heard of fried motherboards from that.

You are, just like everyone else, misrepresenting the argument. You guys left so much straw all over the place that it's hard to find any arguments you're making.

The 960 does have quite a fair bit of spiking, but it doesn't exceed 75W on average; it sits at 50.


14 hours ago, zMeul said:

"swapping MOSFETs" isn't the solution since the power delivery circuitry needs a complete overhaul to include a PEG power limiter - clearly RX480 lacks it and load balances the power draw between the 6pin PCIe and PEG

 

maybe this should be a wake-up call for mobo manufacturers to include overpower protection for the PEGs

Yes, it's not super simple, but it's a matter of adding an 8-pin and reworking the power delivery, which is normally done on non-reference cards anyway.

Valentyn

Also, for reference, the GTX 750 Ti can draw up to 140W through the PCIe slot at reference clocks. It has no external 6-pin power connector either.

 

Also, PCIe 2.0 and 3.0 slots can allow up to 300W as a theoretical max to be pulled through them. 75W is just the default until set higher.

[attached graph]


 


1 minute ago, Valentyn said:

Also, for reference, the GTX 750 Ti can draw up to 140W through the PCIe slot at reference clocks. It has no external 6-pin power connector either.

[attached graph]

The problem is not the spikes on the 480; it's the fact that it consistently uses more than 75 watts. Even your example puts it at a 64-watt average, which is perfectly fine.


 


1 minute ago, Hunter259 said:

The problem is not the spikes on the 480; it's the fact that it consistently uses more than 75 watts. Even your example puts it at a 64-watt average, which is perfectly fine.

Also 75W is nothing, the PCIE standard allows for a maximum of up to 300W. 75W is the default minimum.


http://composter.com.ua/documents/PCI_Express_Base_Specification_Revision_3.0.pdf

 

[attached image: excerpt from the PCI Express specification]


 


Anyone interested in this topic, or who wants to learn anything about the PCI-E specifications, might want to go to this thread, read the information there, and check out the attached links:

 

https://www.reddit.com/r/Amd/comments/4qmlep/rx_480_powergate_problem_has_a_solution/



2 minutes ago, Valentyn said:

Also 75W is nothing, the PCIE standard allows for a maximum of up to 300W. 75W is the default minimum.


http://composter.com.ua/documents/PCI_Express_Base_Specification_Revision_3.0.pdf

 

[attached image: excerpt from the PCI Express specification]

Just because you have an absolute max doesn't mean you should use more than specified.


 


2 hours ago, mariushm said:

But we're talking stupid cheap or badly designed motherboards, boards that would normally be paired with under $100 video cards.

[Citation Needed] that it only happens on "stupidly cheap" motherboards that would generally be paired with cards under 100 dollars.

 

 

 

2 hours ago, Prysin said:

But the 1080 is Nvidia. So naturally it's not a problem when they break the "standards"

Oh for crying out loud. Can you please stop sucking AMD's penis for one second and look at things objectively?

One card goes over the spec constantly, on something that is generally not built to go far above the spec.

The other goes over the spec very briefly but overall stays well below it, on something that is usually built (at least by trustworthy brands) to support more than it is rated for.

 

See the difference?

 

 

 

2 hours ago, spartaman64 said:

also the RX 480 had to pass PCI-SIG's own testing to be released to market, and apparently it did, so both AMD and PCI-SIG messed up their testing?

*Puts on tin-foil hat*

It might also be that AMD did the dirty trick a lot of cheap PSU manufacturers do: the unit sent in for testing is not the same as the mass-produced product.

The certification process is done on one copy of the product sent in by the manufacturer, not on every mass-produced example.

 

The more likely scenario is that PCI-SIG didn't test it for power consumption. Like @zMeul pointed out in one of his posts, AMD broke the standard back with the 6990 and as it turned out back then, the card hadn't even been tested.

 

 

1 hour ago, Sintezza said:

Sure, but did they test it and have the same issue?

Or did they just copy a story?

They tested it and had the same issue. One of them even went out, bought another card from a store and tested that, and got the same results.

1 hour ago, Sintezza said:

It's definitely not impossible that the card's power overshoot has killed that PCI-E slot.

The card has only been out for a day. It's not like it instantly explodes as soon as you turn your PC on. It can, however, cause problems such as audio issues when using onboard audio, increased wear on other components if they don't get enough power, and stuttering in games and programs if the motherboard fails to deliver more power than it was designed for (which the 480 nevertheless expects it to do).

1 hour ago, Sintezza said:

Or there actually is a serious issue with the power delivery to those cards.

If we do hear more about this, then it will be really bad for AMD.

If we never hear about it again, then it probably is a non-issue.

We are already seeing other publications get the same results. It's just that you are putting your fingers in your ears, going "lalalala I can't hear you", and implying that some sites are lying.

 

 

56 minutes ago, Dackzy said:

How did this go so far? I mean, come on, they can fix this with a BIOS update; it really is that simple.

We don't know if it can be fixed by a BIOS update. It might be possible, and it might not.

 

 

 

40 minutes ago, Sintezza said:

But those cards are fully tested to meet PCI-E and other standards before they leave the factory.

We don't know if PCI-SIG actually tests the power consumption over the PCIe slots. They didn't do it in 2011, despite the specifications being older than that.

 

 

 

20 minutes ago, patrickjp93 said:

I provide sources more often than Lawlz and Prysin, so it's not really fitting.

[Citation Needed] because I find that VERY hard to believe. :P

 

 

 

2 minutes ago, Valentyn said:

Also 75W is nothing, the PCIE standard allows for a maximum of up to 300W. 75W is the default minimum.


http://composter.com.ua/documents/PCI_Express_Base_Specification_Revision_3.0.pdf

 

[attached image: excerpt from the PCI Express specification]

No no no no... That's the total limit, including any PCIe power cables from the PSU. The key phrase in that text is "power supplied by the slot or by other means to the adapter".
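
For what it's worth, the way that 300W ceiling is usually broken down (my reading of the spec's power classes, not a quote from it):

75W (x16 slot) + 75W (one 6-pin) + 150W (one 8-pin) = 300W total board power

The slot's own share stays at 75W in every one of those configurations.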


2 minutes ago, Hunter259 said:

Just because you have an absolute max doesn't mean you should use more than specified.

Yet it's somehow fine for NVIDIA cards to constantly spike over that limit, one of them relying solely on the slot for power and drawing nearly double what it's recommended to.


 


1 minute ago, Majestic said:

You are, just like everyone else, misrepresenting the argument. You guys left so much straw all over the place that it's hard to find any arguments you're making.

The 960 does have quite a fair bit of spiking, but it doesn't exceed 75W; it sits at 50.

Look at the graph. The blue is below 75W, the red is above. It's above 75W all the time. Stop twisting it to suit your agenda. There is no straw there. It's quite clear: it's pretty much at or above 75W consistently, as in on average, and it frequently spikes to 150W-225W. It's the same scenario.

 

I'm actually disappointed. For a brief period of time I actually thought you were somewhat neutral in your representation - and I respected that. That thought has evaporated. You're presented with a graph showing a strikingly similar problem and you dismiss it, even pointing to the graph and claiming the results are different than they actually are.

 

For the first 10 seconds of the graph it pretty much never drops below 75W; it sits at or above 75W for seconds at a time. The graph isn't very granular, but to me it looks like 80-90W for at least 3 seconds at a time. Spikes don't last 3 seconds; if this discussion is to be believed, that's a real danger. It's only after about 40 seconds that it briefly reaches something like 50W, so I guess the first 40 seconds don't matter because they don't suit your needs.


10 minutes ago, Valentyn said:

Also 75W is nothing, the PCIE standard allows for a maximum of up to 300W. 75W is the default minimum.

you are off your rocker if you actually believe motherboard traces that are under a millimeter wide and micrometers thick can carry 300W through them

 

case in point: put a 3A+ load on a 1A FAN header and see what happens

and you want to put 300W through them?!!?!?!? :o

 

and @LAwLz is right about the "by other means" part

my own X38-DS5 mobo has an extra Molex connector to provide the PEG slots with extra power when CrossFireX is used

 

[attached image: X38-DS5 motherboard]


52 minutes ago, mariushm said:

The problem is that the ATX 24-pin connector is kinda old and kept the same for compatibility reasons, and there are only two 12V wires in the connector, so in total the motherboard has only something like 15A of current on 12V, excluding the CPU power connectors.

 

The 2005 NEC shows an 18-amp continuous rating in free air (<86°F) for 18AWG insulated copper; let's say 15 amps to be safe, but that is per wire. So 15 amps * 2 wires * 12V = 360W for PCIe slots/fans/HDDs. Is drawing even double what the RX 480 pulls from those pins actually an issue? I mean, we are talking about just a few amps on each connector here. CPU power delivery draws from the auxiliary 4/8-pin, so this doesn't need to be factored in, right?

 

Also, that picture of the PCIe riser you showed is driving me nuts! If they are drawing ~100W @ 12V through 5 of those little wires, what is the voltage drop?! Those do not look like they can support 1-2 amps without significant voltage loss over their length. You can have all the fancy oscilloscopes in the world, but if their little riser cable is causing the three phases powered from the slot to get only 11V at their input or something (made that up, obviously), then they are allowing the circuitry to draw more current than would be seen if the card were installed straight into the slot for any given power draw. I'd love to see voltages measured at the card between the two phase banks if this is how they tested it.

 

Sure, their testing equipment itself might add an insignificant amount of loss, but if they actually ran 100W continuous through five 1-ft-long wires the diameter of a hair, I don't see how they could avoid a voltage drop, thus creating non-real-world current draw conditions.
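
Here's the kind of back-of-the-envelope estimate I mean, with the wire gauge and length being pure guesses on my part (28AWG, about a foot):

# Rough voltage-drop estimate for the riser's 12V wires. Gauge, length and
# load are assumptions for illustration, not measured values.
RHO_COPPER = 1.68e-8     # ohm * metre, resistivity of copper
AWG28_AREA = 8.10e-8     # m^2 cross-section of 28AWG wire (a guess at "hair thin")
LENGTH_M   = 0.30        # ~1 ft of riser, supply side only
N_WIRES    = 5           # the five 12V pins wired in parallel
LOAD_W     = 100.0       # the ~100W case above

r_per_wire = RHO_COPPER * LENGTH_M / AWG28_AREA      # ohms in one wire
r_bundle   = r_per_wire / N_WIRES                    # five wires in parallel
amps       = LOAD_W / 12.0                           # ~8.3 A total
v_drop     = amps * r_bundle                         # one-way drop
# the ground return path adds roughly the same amount again

print(f"{r_per_wire*1000:.0f} mOhm per wire, one-way drop ~{v_drop*1000:.0f} mV")

Scale the numbers however you like; the point is that the drop grows linearly with wire length and load.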

 

 


4 minutes ago, Valentyn said:

Yet it's somehow fine for NVIDIA cards to constantly spike over that limit, one of them relying solely on the slot for power and drawing nearly double what it's recommended to.

*FacePalm* Spikes are fine if they are short, like those are. Constantly being above it is a problem.


 


3 minutes ago, raphidy said:

Yellow line is average.

I'm referring to the graph for the PCIe bus. I think the one you're looking at is the one for the auxiliary connector. If it's not just the aux connector, their nomenclature is bad and they should feel bad.

 

And besides, these problems are apparently very easily fixed through software, and I do not mean throttling: a simple change to how the card draws power and imposing limits. But of course, that doesn't suit the agenda.

 

Either way:

The power problem can be fixed and will be fixed. It's just a matter of waiting for AMD to issue a fix. And I'm guessing the 960 problem has long since been fixed.


17 minutes ago, KeltonDSMer said:

 

The 2005 NEC shows an 18-amp continuous rating in free air (<86°F) for 18AWG insulated copper; let's say 15 amps to be safe, but that is per wire. So 15 amps * 2 wires * 12V = 360W for PCIe slots/fans/HDDs. Is drawing even double what the RX 480 pulls from those pins actually an issue? I mean, we are talking about just a few amps on each connector here. CPU power delivery draws from the auxiliary 4/8-pin, so this doesn't need to be factored in, right?

 

 

The current value you mention is for ONE cable loose in the air, with air around it to keep the temperature of the insulation below the maximum recommended values. The maximum recommended current is lower when the wire is in a bundle, as often happens with the 24-pin power supply cable, because some wires may be inside the bundle surrounded by other heat-generating wires (no air around the insulation for natural cooling). Ribbon cables are slightly better for current handling because of this; each wire has more area to stay cool.

 

Then you have the limitations of the connectors. PCI-E connectors and the motherboard connector are variations of the Molex Mini-Fit Jr.; you can download the datasheets and other references from here: http://www.molex.com/molex/products/family?key=minifit_jr&channel=products&chanName=family&pageTitle=Introduction

Here's a more direct link: http://www.literature.molex.com/SQLImages/kelmscott/Molex/PDF_Images/987650-0212.PDF

As you can see, the rating is 9A per contact with about 10mOhm of contact resistance, so for a 6-pin PCI-E connector it's not a good idea to go above 3 x 9A x 12V = 324 watts.

 

There's a lot of safety and redundancy built into computers, and some thinking ahead. That's why you have three pairs of wires going into a 6-pin PCI-E connector rated for a maximum of 75W when a single pair of wires could actually carry those 75 watts. Three pairs add redundancy and lower the voltage drop between power supply and card, and less current through each pair means each pair heats up less, so it's safer...
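
To put rough numbers on that margin (a quick sketch using the figures above; the per-contact heating is just I^2 * R with the 10 mOhm contact resistance):

# Quick sanity numbers for a 6-pin PCI-E connector built from Mini-Fit Jr
# contacts: three 12V pairs, ~9A rating per contact, ~10 mOhm per contact.
PAIRS            = 3
AMPS_PER_CONTACT = 9.0
CONTACT_OHM      = 0.010

ceiling_w        = PAIRS * AMPS_PER_CONTACT * 12.0    # ~324 W absolute contact limit
amps_per_pair    = (75.0 / 12.0) / PAIRS              # current per pair at the rated 75 W
heat_per_contact = amps_per_pair ** 2 * CONTACT_OHM   # I^2 * R losses in one contact

print(f"ceiling {ceiling_w:.0f} W, {amps_per_pair:.2f} A per pair, "
      f"{heat_per_contact*1000:.0f} mW dissipated per contact at 75 W")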

 

Quote

 


Also, that picture of the PCIe riser you showed is driving me nuts! If they are drawing ~100W @ 12V through 5 of those little wires, what is the voltage drop?! Those do not look like they can support 1-2 amps without significant voltage loss over their length.
 

 

 

SO TRUE! I showed that picture just as an example of a riser cable/card. There are extensions that don't use wires at all; they're like half-height PCI Express cards with thick traces on the PCB and locations where you can interrupt the voltage traces and solder or connect something in between for measurement.

 

Obviously the voltage drop through those long, thin wires of the riser card would be big, but a home user could still use it simply by replacing those thin wires with thicker ones. The long data wires shouldn't really be much of an issue.

 

--

 

IMHO, everyone should just get on with it already: kill the 24-pin power connector, make a new ATX version, and implement 20V +/- 5%. Make computer power supplies with 20V to power the CPU and new GPU cards, 12V for compatibility with older cards and fans, and 5V for SSDs and USB 2.0, and be done with it. 3.3V can be generated on the motherboard from 20V or 12V, and you don't have to bother with voltage drops, remote sensing and other crap.

USB 3 is specified for 5V, 12V and 20V at up to 5A; the voltage can be chosen by the remote device on request, but all USB hardware these days still works only with 5V.

With 20V +/- 5% in, it would be easy for companies to make ITX boards with a DC-in connector powered from classic 19V laptop adapters, or to power the CPU and everything else with fewer wires. Higher voltage means less current, so thinner or fewer wires are required, connectors are smaller, and everyone's happy.
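
To illustrate the current savings (the wattages here are just arbitrary example loads):

# Same power, different rail voltage: the current (and therefore the wire
# and connector sizing) shrinks proportionally. Example loads are arbitrary.
for load_w in (95, 150, 300):
    print(f"{load_w:>3} W: {load_w / 12.0:5.1f} A at 12V  vs  {load_w / 20.0:5.1f} A at 20V")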


17 minutes ago, Trixanity said:

Snip

I agree that it can be fixed. I don't know if it's going to be easy.

As for your graph of the 960: I found the source, and it's more about the Strix fucking up than Nvidia. There's even an example of the Gainward card below it.

 

http://www.tomshardware.com/reviews/nvidia-geforce-gtx-960,4038-8.html


1 hour ago, patrickjp93 said:

Spiking every 40us is nothing to sneeze at.

 

I provide sources more often than Lawlz and Prysin, so it's not really fitting.

I think you should try reviewing your statistics there. You may find yourself coming up a few hundred links short.

