
Has AMD hit rock bottom?!

zMeul

This is where you blind fanboys get confused about what counts as a DX12 feature. https://en.wikipedia.org/wiki/Direct3D#Direct3D_12

 

Asynchronous Compute is not a required DX12 feature. Sure, DX12 helped make it possible, but it is not required in order to call a card DX12 capable, nor does its absence mean a card does not "fully support DX12". What does "fully support" even mean?
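For what it's worth, "support" in D3D12 is not a single yes/no flag: it's a feature level plus a pile of per-feature tiers that an engine queries at runtime, and async compute has no capability bit at all. Here's a minimal sketch of the query (the D3D12 calls are real; the scaffolding around them is mine, with error handling omitted):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Any adapter that gives you a D3D12 device supports at least FL 11_0.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // "DX12 support" part 1: which feature level does the hardware reach?
    D3D_FEATURE_LEVEL wanted[] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
                                   D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1 };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = _countof(wanted);
    levels.pFeatureLevelsRequested = wanted;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                &levels, sizeof(levels));
    // levels.MaxSupportedFeatureLevel now holds the answer.

    // "DX12 support" part 2: per-feature tiers (resource binding, tiled
    // resources, and so on), each queried the same way.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &opts, sizeof(opts));
    // e.g. opts.ResourceBindingTier, opts.TiledResourcesTier.
    // Note there is no cap bit for async compute: a driver may accept a
    // compute queue and still serialize it behind graphics.
    return 0;
}
```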

 

 http://www.extremetech.com/extreme/213519-asynchronous-shading-amd-nvidia-and-dx12-what-we-know-so-far/2

 

Yes, in this one specific game AMD gets an insane performance boost. That's great, glad to see it; people get more performance for their dollar, and that's a good thing for them. Does it mean it will be this way in every single DX12 title, even with Async Compute? No, not even close. As I stated before, this specific game is an RTS that has forgone deterministic simulation, a staple of every single RTS. It is literally the best-case scenario for Async Compute on AMD. I doubt there will ever be another game using Async Compute that shows this level of performance boost.

 

Now, before you get all high and mighty and hit me with another half-cocked cryptic response, let me remind you: your idol company might be doing great at DX12, but take a look at the DX12 games currently available versus the DX11 games currently available. I'd rather have a GPU that performs better in 90% of my total game library than a GPU that performs better in 10% of my games. That being said, by the time DX12 becomes mainstream enough in the AAA world, we should see improvements from both AMD and NVidia regarding DX12. More feature-level support is bound to come from both sides in the future, on a broader range of cards. Remove those AMD blinders from your eyes and become an intelligent consumer.

 

It's not really though. Most of the async work is graphics rendering, and as it stands the engine uses less compute than they plan to utilize in the future, and MANY games will be able to make use of it as developers get more comfortable programming for it. Which they undoubtedly will, as both consoles use GCN, and such features allow far more power and graphical fidelity than going without: lighting, particles, non-triangle graphical building blocks, etc.
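To make the mechanism concrete: in D3D12, "async compute" is nothing more exotic than the application creating a compute-only queue alongside its graphics queue; whether the two actually overlap is up to the hardware and driver. A minimal sketch, assuming `device` is an ID3D12Device created as in the earlier snippet:

```cpp
// Sketch: the D3D12 expression of "async compute" is literally a second
// queue. `device` is assumed to exist already.
D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy

D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute + copy only

ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;
device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));

// Lighting, particle, or post-processing dispatches submitted on
// computeQueue MAY overlap the frame's draws on gfxQueue. On GCN the ACEs
// feed that work to idle CUs; other hardware is free to serialize it.
```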

Mahigan has continued to cover these developments on OC.net; the biggest future improvements will be in async and parallelism on the GPU, to the point that some are talking about entire graphics engines that are compute-only, apparently something that was touted for Larrabee back in its day.

 

I'm not sure what he was talking about when he brought up DX12, but in the end the future of the gaming industry is going to incorporate DX12 and more Async Compute, not less. It gives the devs too many options and too much power in console titles not to make use of.


Even by your own admission, Nvidia won't buy a company that's underwater, so why would they buy the worse half of AMD? The logic you are trying to espouse is beyond me. If AMD got into a position where it was seriously trying to sell off its divisions, Nvidia and Intel would be the furthest from buying due to monopoly concerns, EVEN if they were to buy the opposing segments (Nvidia takes the CPU side and Intel the GPU side). Not only that, Intel is breaking into the GPU market pretty well, if the numbers for the Skylake CPUs with eDRAM are to be believed. And just because Nvidia bought the CPU division doesn't mean Intel would honor the x86 license.

 

Also, I would put more money on MS and/or IBM looking at AMD than on Intel or Nvidia.

*facepalm*

 

Nvidia would be buying the assets of a dissolved AMD. In other words, no baggage would follow, like the managerial structure or the current engineers, beyond the hand-picked few Nvidia would want, like Papermaster.

 

Intel isn't in the dGPU market, which buying Radeon would fix, and Nvidia buying the CPU portion would let it get into the iGPU market in ways its ARM-based Tegra never could.

 

Intel would have no choice: without the x86_64 license, which AMD owns, it can't sell its CPUs. Do you people honestly think these posts through first? I'm ripping these to shreds on the first read-through. LTT used to be better than this.

 

I'd put zero money on IBM. It's not a consumer-facing company, and in any case it has a true monopoly in mainframes.

 

Microsoft would be out-bid by Intel.


It's not really though. Most of the async work is graphics rendering, and as it stands the engine uses less compute than they plan to utilize in the future, and MANY games will be able to make use of it as developers get more comfortable programming for it. Which they undoubtedly will, as both consoles use GCN, and such features allow far more power and graphical fidelity than going without: lighting, particles, non-triangle graphical building blocks, etc.

Mahigan has continued to cover these developments on OC.net; the biggest future improvements will be in async and parallelism on the GPU, to the point that some are talking about entire graphics engines that are compute-only, apparently something that was touted for Larrabee back in its day.

 

I'm not sure what he was talking about when he brought up DX12, but in the end the future of the gaming industry is going to incorporate DX12 and more Async Compute, not less. It gives the devs too many options and too much power in console titles not to make use of.

You are correct for the most part, but I still remain skeptical that we will see a performance gap this large in another async compute game. Now, I have not played AoTS, but from what I have seen, the physics in that game are taxing. Considering async compute can handle physics, lighting and even post-processing, I have to believe that it does all of the heavy lifting in that specific game.

 

That, and I just know how game devs work. One half of them are lazy indie devs that call their rubbish "art"; the other half are studios so rushed by their publishers that the products they release are only finished through DLCs. But hey, I could be wrong, and I hope I am. Anything that brings more performance out of hardware is always welcome from me.


Two massive issues with your rose-tinted view. The first is that consoles count for shit all; so little that Nvidia didn't even bid. There's no money in it. The second is that the 200 series was actually really good and competitive with the 700 series, and it didn't matter. Even with the coin-mining boom, it was the generation that saw AMD's market share fall below a quarter. AMD could absolutely dominate Nvidia in terms of product, but if no one buys their cards it won't mean a thing.

 

Personally, I'm highly sceptical of the whole async compute thing. Firstly, because there have been two benchmarks, and two benchmarks only, published comparing DX12 performance, and they have utterly contradicted each other. Secondly, Nvidia's claim that the game does not properly utilise DX12 is somewhat corroborated by the absence of literally every feature touted by Microsoft.

 

We'll know the truth of it in time, but until then all there is is speculation.

 

Even if the whole async compute fiasco is legit, games will just not use that feature. Why would any dev in their right mind lock out 76% of the market in favour of the 24%? Mind you, you're the same person who was arguing that Windows 10 exclusivity made sense, so I'm not holding out hope for your business sense here.

 

Actually, consoles account for the majority of sales for most big game devs (see the image below with Ubisoft's sales share by platform). NVidia did not bid because they had nothing to offer: AMD could deliver an entire package (an APU), something NVidia is incapable of.

 

 

[Image: Ubisoft sales share by platform]

 

Actually, I believe the cryptocurrency mining frenzy was a huge problem for AMD. Non-gamers bought up all the cards, resulting in price gouging and a smaller share of the gaming market. Then, when mining moved to dedicated ASICs, the market was flooded with second-hand cards. I doubt AMD benefited from any of it.

 

Async compute is already being used heavily in console games under development, so it's already a big thing. Not sure which benchmark you are referring to, but Ashes of the Singularity and the PS4 exclusive Tomorrow Children are showing impressive performance increases from async compute on GCN. Here's the most demanding scene in Tomorrow Children:

 

 

[Image: the most demanding scene in Tomorrow Children]

 

As I've already said, async compute is already being used, especially on the consoles. And because these games are now designed and developed for the GCN architecture, these optimizations will of course be used on PC too. As Oxide Games did with Ashes, they will probably have an NVidia-specific code path that does not use async compute.
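If the fallback works that way (a hypothetical sketch on my part, not Oxide's actual code), a vendor-specific path is cheap to express: the same compute command list simply goes to the direct queue instead of the dedicated compute queue, with a fence preserving ordering. The queues come from the earlier sketch; the flag, command list, and fence are assumed:

```cpp
// Hypothetical vendor fallback, NOT Oxide's code. `useAsyncCompute` would
// be set per vendor/driver; the list and fence are created elsewhere.
ID3D12CommandQueue* target = useAsyncCompute ? computeQueue.Get()
                                             : gfxQueue.Get();

ID3D12CommandList* lists[] = { computeCmdList.Get() };
target->ExecuteCommandLists(_countof(lists), lists);

// Fence so the graphics queue never consumes results before they exist.
++computeFenceValue;
target->Signal(fence.Get(), computeFenceValue);
gfxQueue->Wait(fence.Get(), computeFenceValue);  // GPU-side wait, CPU not blocked
```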

 

Rise of the Tomb Raider will use async compute, and I'm sure Deus Ex: Mankind Divided will too.

 

Consider that over 63% of PC users on Steam right now have DX12-supporting hardware, and over 16% of all users have adopted Windows 10 in just 5 weeks. Remember that PC gaming is usually less than 30% of the entire gaming market for some of the big companies like Ubisoft and EA, so excluding part of that smaller market is not a problem if it cuts costs and improves performance and stability. That is probably also why EA could make Win10/DX12 a minimum spec for holiday 2016 titles like Mass Effect Andromeda.

My point was that a homogeneous PC market is a massive benefit for the developers and, by extension, for us the gamers.

 

This is where you blind fanboys get confused about what counts as a DX12 feature. https://en.wikipedia.org/wiki/Direct3D#Direct3D_12

 

Asynchronous Compute is not a required DX12 feature. Sure, DX12 helped make it possible, but it is not required in order to call a card DX12 capable, nor does its absence mean a card does not "fully support DX12". What does "fully support" even mean?

 

 http://www.extremetech.com/extreme/213519-asynchronous-shading-amd-nvidia-and-dx12-what-we-know-so-far/2

 

Yes, in this one specific game AMD gets an insane performance boost. That's great, glad to see it; people get more performance for their dollar, and that's a good thing for them. Does it mean it will be this way in every single DX12 title, even with Async Compute? No, not even close. As I stated before, this specific game is an RTS that has forgone deterministic simulation, a staple of every single RTS. It is literally the best-case scenario for Async Compute on AMD. I doubt there will ever be another game using Async Compute that shows this level of performance boost.

 

Indeed, no card ever fully supports DX-anything, as certain things will be emulated or executed differently via hardware or drivers. What matters in the end is performance. DX12 is not so much a new effects API as a new low-level API that gives better access to the hardware.
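"Low level" is concrete here: where DX11 hid submission behind the driver's immediate context, a D3D12 application allocates, records and submits its own command lists. A minimal sketch, again assuming the `device` and `gfxQueue` from the snippets above:

```cpp
// Sketch of DX12's explicit submission model: the app, not the driver,
// owns allocation, recording, and submission. `device`/`gfxQueue` assumed.
ComPtr<ID3D12CommandAllocator> alloc;
device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                               IID_PPV_ARGS(&alloc));

ComPtr<ID3D12GraphicsCommandList> cmdList;
device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                          alloc.Get(), nullptr, IID_PPV_ARGS(&cmdList));

// ... record draws, dispatches, and resource barriers here ...
cmdList->Close();

ID3D12CommandList* lists[] = { cmdList.Get() };
gfxQueue->ExecuteCommandLists(_countof(lists), lists);
// Reusing `alloc` next frame is the app's job (wait on a fence first);
// in DX11 the driver did all of this bookkeeping for you.
```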

 

Actually, that's exactly what it means. Async compute does not have to involve massive draw calls, physics or anything else; it can just be standard DX11-style compute shaders, like most post-processing, as well as lighting, texture mapping and all sorts of graphics work.

Ashes only gets a small performance boost. Async compute has a theoretical max performance increase of about 46% on GCN.
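For intuition on where a ceiling like that could come from (this is illustrative arithmetic, not the derivation behind the 46% figure): if $C$ ms of compute that used to run after $G$ ms of graphics can be fully hidden behind it, frame time drops from $G + C$ to $\max(G, C)$, so for $C \le G$ the gain is

$$\frac{G + C}{\max(G, C)} - 1 = \frac{C}{G}.$$

For example, 10 ms of graphics with 4.6 ms of perfectly overlapped compute gives a 46% throughput increase; imperfect overlap, or compute contending for the same execution units, gives less.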

Async compute is becoming the de facto standard for console game devs, and that will certainly rub off on the PC versions of the same games.


The CPU-side APU line has no competition.

No competition? AMD's getting torn to pieces by the Broadwell offerings, and Cannonlake graphics is coming to the Skylake E7 Xeons. The competition is quite strong. Where have you been?


 

Indeed, no card ever fully supports DX-anything, as certain things will be emulated or executed differently via hardware or drivers. What matters in the end is performance. DX12 is not so much a new effects API as a new low-level API that gives better access to the hardware.

 

Actually, that's exactly what it means. Async compute does not have to involve massive draw calls, physics or anything else; it can just be standard DX11-style compute shaders, like most post-processing, as well as lighting, texture mapping and all sorts of graphics work.

Ashes only gets a small performance boost. Async compute has a theoretical max performance increase of about 46% on GCN.

Async compute is becoming the de facto standard for console game devs, and that will certainly rub off on the PC versions of the same games.

I really do hope you are right. I'll take bonus performance from wherever I can get it. Even if it puts Nvidia in a bad spot, they'll have to improve or move out of the way. Seeing as they are probably already feeling some heat over it, my guess is they are already working on improvements for the next generation. Either way, consumers will win.


You are correct for the most part, but I still remain skeptical that we will see a performance gap this large in another async compute game. Now, I have not played AoTS, but from what I have seen, the physics in that game are taxing. Considering async compute can handle physics, lighting and even post-processing, I have to believe that it does all of the heavy lifting in that specific game.

 

That, and I just know how game devs work. One half of them are lazy indie devs that call their rubbish "art"; the other half are studios so rushed by their publishers that the products they release are only finished through DLCs. But hey, I could be wrong, and I hope I am. Anything that brings more performance out of hardware is always welcome from me.

According to the dev, the engine only consists of about 20% compute, not all of which is async, and they plan on the engine eventually being up to 50% compute-derived. I too believe the gap will not always be this large, but the shortcomings of the current nVidia cards in async compute cannot be made to disappear through drivers; there are known shortcomings of their scheduler when it comes to implementing these things.

 

Pascal is hoped to return to parity, but some fear it may be Volta before async is properly supported in graphics workloads. As it is, nVidia themselves have said that more granular context switching is coming in the future, and the context switching they have now just does not provide any benefit: the way theirs works, needing to happen outside the bounds of the graphics calls, makes it hard to do the work at all, let alone asynchronously and in parallel with graphics.

 

I think nVidia will get things sorted out as fast as possible, but with the consoles being GCN and async-capable, they are definitely starting this race on the wrong foot. These early games are going to show the least sophisticated and least pervasive use of the feature, and most games, especially those with concurrent console development, will continue to make more and better use of it.

 

I think nVidia will be able to maintain stability, but I doubt they will close any of the overall disparity until they put out an arch that can properly implement async. I think DX12 will eventually return to a neck-and-neck situation: AMD jettisons their horrific DX11 driver, nVidia loses their cheatsy DX11 optimizations, and they both wind up about even until the next archs drop.

 

I'm still putting my money on AMD's near-future focus being VR, and nVidia's being either VR as well or repairing their DX12 performance.


According to the dev, the engine only consists of about 20% compute, not all of which is async, and they plan on the engine eventually being up to 50% compute-derived. I too believe the gap will not always be this large, but the shortcomings of the current nVidia cards in async compute cannot be made to disappear through drivers; there are known shortcomings of their scheduler when it comes to implementing these things.

 

Pascal is hoped to return to parity, but some fear it may be Volta before async is properly supported in graphics workloads. As it is, nVidia themselves have said that more granular context switching is coming in the future, and the context switching they have now just does not provide any benefit: the way theirs works, needing to happen outside the bounds of the graphics calls, makes it hard to do the work at all, let alone asynchronously and in parallel with graphics.

 

I think nVidia will get things sorted out as fast as possible, but with the consoles being GCN and async-capable, they are definitely starting this race on the wrong foot. These early games are going to show the least sophisticated and least pervasive use of the feature, and most games, especially those with concurrent console development, will continue to make more and better use of it.

 

I think nVidia will be able to maintain stability, but I doubt they will close any of the overall disparity until they put out an arch that can properly implement async. I think DX12 will eventually return to a neck-and-neck situation: AMD jettisons their horrific DX11 driver, nVidia loses their cheatsy DX11 optimizations, and they both wind up about even until the next archs drop.

 

I'm still putting my money on AMD's near-future focus being VR, and nVidia's being either VR as well or repairing their DX12 performance.

Is Pitcairn (the console GPU) even GCN 1.0? How similar is it to, say, Tonga/Fiji? I doubt Nvidia will ship Pascal without async compute at the hardware level. Nvidia always designs for the here and now with no focus on future-proofing, so if AC becomes important enough, Nvidia will include it; it's not like its market analysts and engineers are stupid.


No competition? AMD's getting torn to pieces by the Broadwell offerings, and Cannonlake graphics is coming to the Skylake E7 Xeons. The competition is quite strong. Where have you been?

Really?

 

So... Intel has something that beats the A8 7600 iGPU "at the same price"?

 

Link me that product, please ^_^


I really do hope you are right. I'll take bonus performance from wherever I can get it. Even if it puts Nvidia in a bad spot, they'll have to improve or move out of the way. Seeing as they are probably already feeling some heat over it, my guess is they are already working on improvements for the next generation. Either way, consumers will win.

nVidia's latest drivers show massive improvements in AotS benchmarks; in fact, they beat out AMD.

 

http://www.guru3d.com/articles_pages/amd_radeon_r9_nano_review,26.html

 

[Image: Guru3D AotS benchmark chart]

 

 

http://www.hardwareluxx.com/index.php/reviews/hardware/vgacards/36513-reviewed-amd-radeon-r9-nano-4gb.html?start=20

 

[Image: Hardwareluxx AotS benchmark chart]

 

Also Tom's linky here


Is Pitcairn (the console GPU) even GCN 1.0? How similar is it to, say, Tonga/Fiji? I doubt Nvidia will ship Pascal without async compute at the hardware level. Nvidia always designs for the here and now with no focus on future-proofing, so if AC becomes important enough, Nvidia will include it; it's not like its market analysts and engineers are stupid.

 

Pitcairn is GCN 1.0.

 

And IIRC, the main difference between GCN 1.0 and 1.1 is the XDMA engine enabling bridgeless CrossFire; between GCN 1.1 and 1.2 it's mostly power management and tessellation performance, I think. I know that GCN 1.1 and GCN 1.2 based chips feature more ACEs than GCN 1.0 based chips, but I'm not sure if that's a function of the architecture.


Really?

 

So... Intel has something that beats the A8 7600 iGPU "at the same price"?

 

Link me that product, please :)

At the same price, no, but we all know Intel is asking high prices to give AMD wiggle room. Regardless, the strongest iGPU now belongs to Intel.


Then why don't you point out every flaw and wrong move Intel and Nvidia made too?

I didn't!?!? You should check my posts instead of assuming things.

I didn't!?!? You should check my posts instead of assuming things.

Well, from what I hear about you, you point out every flaw and wrong move AMD has made, so it would only be fair if you did so with Nvidia and Intel too.


Well, from what I hear about you, you point out every flaw and wrong move AMD has made, so it would only be fair if you did so with Nvidia and Intel too.

Again, stop attacking the messenger; have an opinion on the topic at hand, not on the user who posted it.

If this had been posted by someone else, how exactly would that have mattered?!?!


At the same price, no, but we all know Intel is asking high prices to give AMD wiggle room. Regardless, the strongest iGPU now belongs to Intel.

Plus the price/performance ratio.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

Plus the price/performance ratio.

When you include the CPU performance in the totals of the SoCs, Intel still wins with the 5675C.


When you include the CPU performance in the totals of the SoCs, Intel still wins with the 5675C.

With Intel you're pretty much paying for what you get, which is also true for AMD, and that's why their APUs are priced so cheaply.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

Snip

Thank you for the tangible evidence with included source, fellow Ohio resident. Much appreciated.


Thank you for the tangible evidence with included source, fellow Ohio resident. Much appreciated.

Your tongue-in-cheek sycophancy is truly shameless.


Seriously, you should stop reading my posts.

Hey, maybe you should stop creating biased news posts with cherry-picked quotes in them? I'd love for you to come up with an excuse for why you left out the part of the article that shows AMD is gaining. Oh wait, that would basically render your misleading title useless.

Once this bias stops, that's when I'll stop caring about your posts. In the meantime, please create news posts without cherry-picking them to suit your anti-AMD agenda.


Hey, maybe you should stop creating biased news posts with cherry-picked quotes in them? I'd love for you to come up with an excuse for why you left out the part of the article that shows AMD is gaining. Oh wait, that would basically render your misleading title useless.

Once this bias stops, that's when I'll stop caring about your posts. In the meantime, please create news posts without cherry-picking them to suit your anti-AMD agenda.

Maybe you should stop making yourself look like an idiot.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

Hey, maybe you should stop creating biased news posts with cherry-picked quotes in them? I'd love for you to come up with an excuse on why you left out the part of the article that shows AMD is gaining. Oh wait, it'll basically render your misleading title as useless.

Once this bias stops, then that's when I'd stop caring about your posts. In the meantime, please create news posts without cherry picking them to suit your anti-AMD agenda.

AMD isn't gaining. Its dGPU market share is down to 18%. In terms of its stock, it hasn't even fully recovered to the $2 it was at when I shorted them 6 months ago.


you left out the part of the article that shows AMD is gaining

Gaining!? What!? Have you actually looked at the pictures I posted in the OP?!

The EPS is under 0... hello!

[Image: Estimize chart of AMD FQ3 2015 EPS estimates]

Their reported EPS for Q2 2015 was -0.17... hello!! They are bleeding money!!

