
Amazon New World killing GPUs yet again

darknessblade
27 minutes ago, ravenshrike said:

Amazon has managed quite the accomplishment: they've created a game that works like a power virus using a modified Crytek engine. The fact of the matter is the 'faulty graphics cards' are only failing due to the complete incompetence of Amazon programmers, creating conditions that no one who wanted good graphics or any other real-world GPU-intensive application would ever create, because they produce no useful output.

Nah.

 

It makes perfect sense.

 

The 3090 and 3080 use the same die, and the VRMs are only capable of putting out X watts.

So somewhere along the line, some of the AIBs, chasing overclockability or selling cards factory-OC'd, didn't QA test the cards "maxed out", because in all likelihood those conditions cannot be replicated with benchmarks.

 

Which is a point that gets repeated frequently: synthetic benchmarks are not representative of real-world usage.

 

So, what is killing the cards? The GUI? Seems sus.

 

Now, if you've done any gamedev at all, you'd know that everyone uses "Dear ImGui" for what is supposed to be mostly debug UI. The "Im" stands for "immediate mode": it rebuilds the GUI widgets every frame. If you've used any program that lets you enable ImGui, you'll have seen this exact effect, where the GUI somehow maxes out the GPU despite not actually doing anything. Without a frame limiter, ImGui rebuilds the UI every frame; it renders asynchronously from the game's 3D rendering and is drawn as a layer on top (the same way RivaTuner's overlay is). Depending on where the developer put ImGui in the rendering pipeline, it can either be zero-latency, running at the maximum framerate the GPU can achieve, or it can lag a few frames behind (as in bgfx) as it goes through the game engine.
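To make the immediate-mode point concrete, here's a minimal sketch of the usual Dear ImGui loop with the GLFW + OpenGL3 backend (my own toy example, not New World's actual code): with vsync off and no frame cap, nothing stops this loop from spinning at whatever framerate the GPU can manage, even though the UI itself is trivial.

// Minimal Dear ImGui loop (GLFW + OpenGL3 backend). Illustrative sketch only --
// this is NOT New World's code, just the usual immediate-mode pattern.
#include <GLFW/glfw3.h>
#include "imgui.h"
#include "imgui_impl_glfw.h"
#include "imgui_impl_opengl3.h"

int main() {
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(1280, 720, "menu", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glfwSwapInterval(0);                       // vsync OFF: nothing limits the frame rate

    ImGui::CreateContext();
    ImGui_ImplGlfw_InitForOpenGL(window, true);
    ImGui_ImplOpenGL3_Init("#version 130");

    while (!glfwWindowShouldClose(window)) {
        glfwPollEvents();
        ImGui_ImplOpenGL3_NewFrame();
        ImGui_ImplGlfw_NewFrame();
        ImGui::NewFrame();                     // the whole UI is rebuilt every single frame

        ImGui::Begin("Main menu");             // a trivial menu...
        ImGui::Text("Play");
        ImGui::End();

        ImGui::Render();
        glClear(GL_COLOR_BUFFER_BIT);
        ImGui_ImplOpenGL3_RenderDrawData(ImGui::GetDrawData());
        glfwSwapBuffers(window);               // ...yet with swap interval 0 this loop runs
    }                                          // as fast as the GPU allows, for no useful output

    ImGui_ImplOpenGL3_Shutdown();
    ImGui_ImplGlfw_Shutdown();
    ImGui::DestroyContext();
    glfwTerminate();
}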

 

So this comes right back to the ability of the software to push the card in a way that maximizes power draw rather than rendering capacity. Benchmarks do the latter. Overclocked cards, especially factory-overclocked ones, will always fail first, and historically it's never been worth paying the extra $100 for the factory-OC model of a card, because inevitably a game comes out a year after the warranty expires that makes the card fail. It happens every time. FFXIV's DX10/11 mode came out? Cards exploded. NieR: Automata came out? Cards exploded. Users go "the game is poorly optimized", when in fact their cards are broken.

 

It should not be possible to kill a GPU with software unless the software is quite literally taking advantage of a lack of limiters, which doesn't seem to be the case. Rather, the blame here lies on Nvidia and/or the AIBs for allowing the cards to pull unlimited power, or more power than the VRMs or the chip itself can support. If software were the cause, every single user with a 3080, a 3090, or even AMD's high-end chips would have their GPU halt and catch fire, because the highest-end cards have the highest power draw. It should also be taking out every factory-OC GPU.

 

But not everyone is running the game with vsync off, and it's likely the only thing saving some GPUs is having global vsync or a global frame limit enabled for other reasons.
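For anyone wondering what a frame limiter actually does, here's a minimal sketch (my own illustration, not any particular engine's or driver's implementation): measure how long the frame took, then sleep away the rest of the frame budget so the GPU idles instead of redlining. Global vsync achieves much the same thing by blocking on the buffer swap.

// Minimal frame-limiter sketch (illustrative only, not any particular engine's code).
#include <chrono>
#include <thread>

void run_frame() { /* stand-in for "render one frame" in your engine of choice */ }

void capped_loop(double target_fps) {
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> frame_budget(1.0 / target_fps);
    for (;;) {
        const auto start = clock::now();
        run_frame();                                  // render as fast as the GPU allows...
        const auto elapsed = clock::now() - start;
        if (elapsed < frame_budget)                   // ...then hand back the leftover time,
            std::this_thread::sleep_for(frame_budget - elapsed);
    }                                                 // so the GPU idles instead of drawing
}                                                     // frames nobody will ever see

int main() {
    capped_loop(60.0);                                // cap at 60 fps
}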

 


On 9/30/2021 at 4:19 AM, Blademaster91 said:

New World pushing cards too much as there isn't a frame limiter.

There is a 30 & 60 fps cap in the in-game menu, though it seems not to be enough, as you can see variations between 60 and 64 fps (in the main menu and in game).

 

I own an EVGA 3090 FTW3, bought in November 2020, with a serial number that seems to be in the faulty solder batch.

 

The first crash was after maybe 9 to 10 hours of playtime. It didn't destroy my GPU but completely broke my driver, locking my fps down to 20-22; I had to DDU & reinstall the drivers to restore the default behaviour of the card.

Also, Steam didn't recognize the path to the game anymore; I had to refresh its memory by clicking install.

 

Anyway, I capped my frames to 60 in game and to 60 via RTSS, set the power target to 80%, no OC, etc.

The game black-screened & the fans went to 100% again after 10-15 min of playtime.

 

Not gonna try a third time, but this behaviour is fairly odd considering I've played other games that demand a LOT of resources and never had an issue.

Hell, even Cyberpunk 2077 at 1440p on full settings was butter smooth.

 

EVGA 3090 FTW3 Stock speed

i7 8700k@4.3GHz

ROG STRIX Z370-H GAMING

W10 19043

Nvidia drivers 472.12

Lian Li O11D XL with 6 intakes, so the airflow is more than adequate

 

I will wait it out & see if it's on the manufacturer or software side.

Would be nice to see the developer or the GPU manufacturers reach out to people and say "Hey, you have a 30-series card, you might want to chill a bit while we figure out what is happening", but hey. It's not like letting people brick their GPUs & send them in for RMA would be a huge, useless pile of waste, right?


2 hours ago, Sworda said:

I will wait it out & see if it's on the manufacturer or software side.

Would be nice to see the developer or the GPU manufacturers reach out to people and say "Hey, you have a 30-series card, you might want to chill a bit while we figure out what is happening", but hey. It's not like letting people brick their GPUs & send them in for RMA would be a huge, useless pile of waste, right?

Is your GPU still working fine? Do you plan to RMA it?


37 minutes ago, AndreiArgeanu said:

Is your GPU still working fine? Do you plan to RMA it?

My GPU seems to be fine, though this whole situation prevents me from playing a game I purchased, which sucks.

 

What I would like to know from EVGA is whether they could reach out to people who have affected cards and swap them for fixed ones.

I'm waiting for an answer on Monday since their support is off for the weekend.

We will surely know more this week as people poke around more & find out new things.
In the meantime, no New World 😄


5 hours ago, Kisai said:

Nah. It makes perfect sense. The 3090 and 3080 use the same die, and the VRMs are only capable of putting out X watts. [...]

So this comes right back to the ability of the software to push the card in a way that maximizes power draw rather than rendering capacity. [...]

 

You can do this in two "games" already (plus one that sits in between):

 

1) Path of Exile with GI/Shadows quality=Ultra (GI Quality=high makes it behave more like a normal game).

2) GPU-Z fullscreen graphics test.

3) The Metro: Exodus main menu is in between these two (it behaves more like a normal game); it hits Normalized limits at about 570W. POE is the worst offender on GI: Ultra.

 

POE will trigger a TDP Normalized power throttle on the NVVDD and MSVDD rails even if you are nowhere near the TDP (power limit), on cards that can run at >500W TDP or have shunt mods. Even at a 400W TDP you can see the clocks drop by ~100 MHz at the same 400W versus GI/Shadows = High at 400W, but then you're hitting the regular TDP limit, not the Normalized limit on the NVVDD power rails.

 

GPU-Z will not do this: it will allow more than 550W and won't throttle at all on TDP Normalized, but it will throttle on TDP% if your card's power limit is set too low, as it puts a very low load on rasterization (assuming your card can exceed 550W), so it's sort of the opposite of POE. GPU-Z does put a higher load on the 8-pin rails, though (they will show worse power balance than if you were testing with POE GI: Ultra).

 

Anyone who can pass both of these at 550W should have no problem with New World.
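If you want to log what the board is actually pulling while running these tests without GUI tools, a rough sketch using Nvidia's NVML library is below. Note that NVML only reports whole-board power, so per-rail NVVDD/MSVDD figures still need the vendor tools mentioned above.

// Rough NVML polling sketch: logs total board power and GPU temperature once a second.
// NVML reports whole-board power only; it cannot show per-rail (NVVDD/MSVDD) readings.
#include <nvml.h>
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) { nvmlShutdown(); return 1; }

    for (int i = 0; i < 60; ++i) {                        // poll for one minute
        unsigned int mw = 0, temp = 0;
        nvmlDeviceGetPowerUsage(dev, &mw);                // milliwatts, whole board
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
        std::printf("power: %.1f W   temp: %u C\n", mw / 1000.0, temp);
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }

    nvmlShutdown();
    return 0;
}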


4 hours ago, ravenshrike said:

Amazon has managed quite the accomplishment: they've created a game that works like a power virus using a modified Crytek engine. The fact of the matter is the 'faulty graphics cards' are only failing due to the complete incompetence of Amazon programmers...

When it comes to the question of whether software can brick GPU hardware, so far everyone is only talking about games and what game code does to the GPU.

And I would like to point out some things here:

 

-First: These cards are designed by Nvidia to handle more than just "game-produced power profiles".

They are also designed and certified to handle GPGPU applications, which can ask the driver to execute arbitrary compute kernels through OpenCL or CUDA - meaning not just any combination of computations possible through DirectX/OpenGL, but much, much more, including decidedly "not game-like" ones.

 

-Second: By the very way GPUs work, all of the actual instructions that reach the GPU are produced (compiled on the fly) by the Nvidia driver at runtime, from the OpenGL/DirectX/OpenCL/CUDA function calls embedded in the software. No user-mode software can give direct - and thus potentially out-of-spec - instructions to a GPU.
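To illustrate that point, here's a bare-bones OpenCL host sketch (my own toy example, nothing to do with New World). The kernel is just a string; the GPU driver compiles it at runtime and decides what actually reaches the silicon, so whatever power profile it produces is, by definition, something the driver and the card are supposed to be able to handle.

// Toy OpenCL example: the kernel source is an arbitrary string that the GPU driver
// compiles at runtime -- user code never hands raw instructions to the GPU itself.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>

// An arbitrary compute kernel no AIB has ever "QA tested" a card against:
static const char* kSrc =
    "__kernel void busy(__global float* a) {\n"
    "    int i = get_global_id(0);\n"
    "    float x = a[i];\n"
    "    for (int k = 0; k < 4096; ++k)\n"
    "        x = x * 1.000001f + 0.5f;\n"
    "    a[i] = x;\n"
    "}\n";

int main() {
    cl_platform_id platform; cl_device_id device; cl_int err;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    // The driver compiles the kernel source here, on the fly:
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kern = clCreateKernel(prog, "busy", &err);

    const size_t n = 1 << 20;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, n * sizeof(float), nullptr, &err);
    clSetKernelArg(kern, 0, sizeof(cl_mem), &buf);
    clEnqueueNDRangeKernel(q, kern, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clFinish(q);

    clReleaseMemObject(buf); clReleaseKernel(kern); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}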

 

5 hours ago, ravenshrike said:

...creating conditions that no one who wanted good graphics or any other real-world GPU-intensive application would ever create, because they produce no useful output

Maybe Amazon was incompetent at creating well-working (power-efficient, less demanding) graphics, and the power profile that New World produces is uniquely bad among games.

But that does not mean it should ever fall outside the specification of what the GPU is designed to handle. Because, again, its design is not limited to "well-programmed-game-produced power profiles".

It should handle any task that the driver can possibly dispatch, even from unique, arbitrary compute kernels. There is plenty of real-world software out there (e.g. OpenCL/CUDA-accelerated image filters) that dispatches wild compute kernels no one has seen before - precisely in order to produce useful, unique output that no one has asked for before. And Nvidia does market its consumer GPUs for running those too.

 

So this should not brick a GPU that is well made - meaning the design, the manufacturing, the vBIOS and the driver.

Nvidia and the GPU manufacturers are the ones who have to fix this problem, because sooner or later (e.g. with the inclusion of GPU-accelerated machine learning in consumer apps) other applications will come out with novel GPU power profiles - and potentially trigger this issue.

 

You can all blame game devs for electricity bills twice as high as necessary because of wasteful GPU computations, but

people, please stop blaming game devs for GPUs actually getting bricked!


If I cap my fps to 60, activate v-sync and drop the power limit %, should I be fine, or would you still not touch the game at all even with that?

(Sorry, I am French.)


6 hours ago, Kisai said:

Nah. It makes perfect sense. The 3090 and 3080 use the same die, and the VRMs are only capable of putting out X watts. [...]

But not everyone is running the game with vsync off, and it's likely the only thing saving some GPUs is having global vsync or a global frame limit enabled for other reasons.

 

If I cap my fps to 60, activate v-sync and drop the power limit %, should I be fine, or would you still not touch the game at all even with that?

(Sorry, I am French.)


Just now, AleckMaster said:

If I cap my fps to 60, activate v-sync and drop the power limit %, should I be fine, or would you still not touch the game at all even with that?

Check your temps and don't use very high/ultra settings if they cause a lot of heat.

The 60 fps cap can maybe be set through the Nvidia control panel too. You can "touch the game" - there are people playing it - just be aware of the heat problem.

It also depends on what system you have.


8 minutes ago, Quackers101 said:

Check your temps and don't use very high/ultra settings if they cause a lot of heat.

The 60 fps cap can maybe be set through the Nvidia control panel too. You can "touch the game" - there are people playing it - just be aware of the heat problem.

It also depends on what system you have.

GPU: EVGA FTW3 RTX 3080

(it was at 71 degrees Celsius with fps uncapped and max settings in the game)

CPU: R9 3900X

RAM: 32 GB 3600 (2x16 GB)

PSU: Seasonic 850W Gold+

I heard that the problem could be that the game draws more power than the limit too often, even if the temps are good and the fps is capped. I am not a pro at this. I don't want to lose my GPU. Sadly, I love the game...


On 9/29/2021 at 6:34 PM, Mel0nMan said:

I'm going to download New World to play it on my testing GPU (FirePro V3800). If there are no flames and sparks I will be very disappointed. 

So it's not a FirePro then... 

CPU: AMD Ryzen 5 5600X | CPU Cooler: Stock AMD Cooler | Motherboard: Asus ROG STRIX B550-F GAMING (WI-FI) | RAM: Corsair Vengeance LPX 16 GB (2 x 8 GB) DDR4-3000 CL16 | GPU: Nvidia GTX 1060 6GB Zotac Mini | Case: K280 Case | PSU: Cooler Master B600 Power supply | SSD: 1TB  | HDDs: 1x 250GB & 1x 1TB WD Blue | Monitors: 24" Acer S240HLBID + 24" Samsung  | OS: Win 10 Pro

 

Audio: Behringer Q802USB Xenyx 8 Input Mixer |  U-PHORIA UMC204HD | Behringer XM8500 Dynamic Cardioid Vocal Microphone | Sound Blaster Audigy Fx PCI-E card.

 

Home Lab:  Lenovo ThinkCenter M82 ESXi 6.7 | Lenovo M93 Tiny Exchange 2019 | TP-LINK TL-SG1024D 24-Port Gigabit | Cisco ASA 5506 firewall  | Cisco Catalyst 3750 Gigabit Switch | Cisco 2960C-LL | HP MicroServer G8 NAS | Custom built SCCM Server.

 

 


I have an undervolted RTX 2070 Super; my temps are in the low 60s all the time while playing with the fps capped to 60 (it's actually 62 in game for some reason). Am I in the clear here?

EDIT: As far as I've read, the problem is the game drawing more power than the card's TDP, which is weird, but undervolting the card should eliminate that, right? I really think it's a valid option for people with 30-series cards who want to play the game but are afraid (someone brave should test it first, haha). I know the probability of the game bricking my GPU is low, since it happens on high-wattage cards and mine only pulls about 215W, so I'm probably on the safe side even without my custom fan profile, undervolt and fps cap.

Intel Core i5-12600K            Gigabyte Z690 Gaming X            XPG Gammix  D10 (4x8GB) DDR4-3200Mhz

ADATA Falcon 512GB            Nvme  XPG Gammix S41 TUF 256GB Nvme            WD Green 480GB Nvme

Zotac Geforce RTX 2070 Super            XPG Core Reactor 850W            Seagate HD 1TB 7200rpm                       

 Noctua NH-D15            AOC 24G2/BK           Corsair 4000D            Corsair K68            Logitech G502 Hero           


Looks at 6800 XT and 3900X...

The 6800 XT never goes over 65°C and the 3900X also stays around 60°C.

To be fair, in New World I can see some spikes to 360W on the GPU.

What people need to do on AMD cards with this game is just turn on Radeon Chill and set it to the range of your monitor; I have mine from 60 to 144 Hz. The game is really well optimized for the amount of people it handles and how it looks. I get 120 fps most of the time, and in really populated cities it's around 80.

In this day and age all cards need to have OCP and countermeasures against a power virus; also, if some areas of the card (VRM, memory, etc.) are getting too hot, it needs to shut down.

If it's burning more 3000-series cards than any other series, it's because there is a flaw somewhere that is creating this issue.

Now, I'm really careful with the case and ventilation to avoid hot spots, and I always keep the GPU fed with fresh air. Most people won't even care, and that can lead to issues.


Why do so many people run games uncapped? In games like New World there is literally no point in running at hundreds of fps anyway.

I just limit my games to my monitor's max refresh rate, which is the right way to do it if you want to use VRR correctly.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


7 minutes ago, Stahlmann said:

Why do so many people run games uncapped? In games like New World there is literally no point in running at hundreds of fps anyway.

because bigger fps = big E-p3n0r

 

vsync is on 100% of the time for me.


1 minute ago, Arika S said:

vsync is on 100% of the time for me.

And as long as you just use "global" v-sync (in the GPU control panel), you can set it once and never worry about it again.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


How I love crap-built GPUs getting burned in the morning.

Now, really, the fault is not the game's. It's a heavy game that is well optimized: you get 120 fps on a 6800 XT maxed out at 1440p, the game looks really good, and the gameplay is good.

For AMD cards I do recommend using Radeon Chill; there is no reason not to use it.

For Nvidia cards, well, just limit your fps if you think the temps are too high and the fans are at 100%.

In my case my 6800 XT normally sits at 60°C in this game at 1440p.

If a card is at 350W+ and going over that (for example a 3090), OCP needs to trigger, or at least the drivers should show a warning: "hey, OCP just triggered", or "your VRM or memory is getting too hot".

Use HWiNFO64; it helps a lot with monitoring temps and setting warnings for memory, VRM and hotspot temperatures.

 

 


On 9/29/2021 at 9:19 PM, Blademaster91 said:

This isn't exclusive to EVGA cards though, according to the reddit thread the GPU was a Gigabyte 3080Ti. I also remember Jayztwocents saying it affects AMD cards too. I think the blame is both on manufacturers possibly using cheaper components, and New World pushing cards too much as there isn't a frame limiter.

IIRC Buildzoid came to the conclusion that it was manufacturers not using the protections on the voltage controllers, thereby allowing the cards to draw basically unlimited power, that killed the GPUs.


Yeah, for some reason New World doesn't quite let you limit your FPS, but you can sort of soft-cap it.

It hovers around 60-70 while trying to be locked at 60. I'm not sure about what JayzTwoCents said regarding the increase in power draw from New World (some cards drawing way more, and that's just in the intro screen/menu), but lowering the power cap to 50% can sometimes make the card sit around 70% in New World instead of 100%+.


 Amazon: Don't Blame New World for GPU Deaths, Blame Card Makers

 

Summary

It's not the game's fault but the result of bad GPU manufacturing, according to Amazon.

 

Quotes


News outlet hardwareluxx has received an official statement from Amazon regarding GPU deaths surrounding its recently released MMO New World. Amazon says they have detected no game-breaking bugs in New World that would cause the game to kill RTX GPUs. Instead, it reiterates and confirms that it's a problem with GPU manufacturers and poor graphics card build quality.

"In the last few days, we have received few reports from players who have had problems with their GeForce RTX cards. After extensive investigation, we were unable to identify any unusual behavior on the part of New World that could be the cause of these problems. EVGA has already confirmed errors in the production of some GeForce RTX cards. New World can be played safely. For players who have encountered a hardware failure, we recommend that you contact the manufacturer." —Amazon

 

My thoughts

Well, we have another controversial statement here. So, according to Amazon, this is the fault of the AIBs. What bugs me, however, is that this is basically the only game that has killed GPUs so far. But IMO this is the result of errors by both parties, not just one, and they should probably stop pointing fingers at each other and start working on a solution on both sides. We'll have to wait and see; for the time being, avoid playing this game, though.

Sources

Tom's Hardware

Hardwareluxx

Edited by LogicalDrm

"A high ideal missed by a little, is far better than low ideal that is achievable, yet far less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; Being wrong helps you learn what's right.


8 hours ago, decolon said:

IIRC Buildzoid came to the conclusion that it was manufacturers not using the protections on the voltage controllers, thereby allowing the cards to draw basically unlimited power, that killed the GPUs.

Interesting, I didn't know that. Did the RTX 2000 series have any protections on the voltage controllers?


It's not even a good game, people

Work Rigs - 2015 15" MBP | 2019 15" MBP | 2021 16" M1 Max MBP | Lenovo ThinkPad T490 |

 

AMD Ryzen 9 5900X  |  MSI B550 Gaming Plus  |  64GB G.SKILL 3200 CL16 4x8GB |  AMD Reference RX 6800  |  WD Black SN750 1TB NVMe  |  Corsair RM750  |  Corsair H115i RGB Pro XT  |  Corsair 4000D  |  Dell S2721DGF  |
 

Fun Rig - AMD Ryzen 5 5600X  |  MSI B550 Tomahawk  |  32GB G.SKILL 3600 CL16 4x8GB |  AMD Reference 6800XT  | Creative Sound Blaster Z  |  WD Black SN850 500GB NVMe  |  WD Black SN750 2TB NVMe  |  WD Blue 1TB SATA SSD  |  Corsair RM850x  |  Corsair H100i RGB Pro XT  |  Corsair 4000D  |  LG 27GP850  |


8 hours ago, decolon said:

IIRC Buildzoid came to the conclusion that it was manufacturers not using the protections on the voltage controllers, thereby allowing the cards to draw basically unlimited power, that killed the GPUs.

And yet, for some reason, only one game is pushing them past these "unlimited" power limits to the point of breaking them. I would say both are at fault, with more of the blame attributed to the game.


EVGA literally admitted it was bad card design. Yet "game kills GPUs again" news keeps spawning all over the place... again... sigh.

