
[Rumor] Nvidia planning a Pascal refresh?

Zeeee

So apparently Nvidia is planning a Pascal refresh for 2017.

 

Quote

Micron’s GDDR5X would be featured across the new Pascal lineup, except the entry level, GP107 core. GDDR5X is currently being used on the GeForce GTX 1080, GeForce GTX Titan X (Pascal) and Tesla P40 (ECC). With the coming refresh, Pascal based cards would have GDDR5X on board. Most of the high-end products launching under the refresh would be based on GP102 GPU. With GK110, NVIDIA offered several variants that included the GTX 780, GTX Titan, GTX 780 Ti and GTX Titan Black. We could see a similar trend with Pascal refresh.

 

So this rumor makes a lot of sense: given the scarcity of GDDR5X on the 1000-series cards, it would make sense for it to ship in greater numbers next year, and apparently Samsung will be providing the 14nm FinFET process.

 

My opinion: I don't like it at all. I don't seem to understand the GPU market at all: the GTX 970 released in 2014 and the 1070 in 2016, so why would we suddenly get an 1170 only a year later? That really makes me nervous, as I just bought a GTX 1070 and would honestly feel jipped; it's a long-term investment, not one that should be outdated by a newer product class in one year.

 

http://www.game-debate.com/news/21446/nvidia-planning-14nm-pascal-refresh-with-higher-clock-speeds-in-2017

 

http://wccftech.com/nvidia-pascal-volta-gpu-leaked-2017-2018/


Eh, that's how things used to be, before the great GPU-progression slowdown caused by the 360/PS3 consoles, when developers targeted all their games at the same hardware for 6+ years. I'd like it to be that way again, too, because it means graphics in games will improve at a faster rate. I don't want to be playing at the same level of graphics for years in a row again.

 

Another way of looking at it is this: In 5, or 10 years' time, you can have the games you play be of a certain graphics quality, or, you can have the games you play be of double that graphics quality: Which do you prefer? I'll take the double-quality option. IMO, the goal with tech should be to get as much improvement accomplished as fast as possible, so that things are even better later on.

 

But, Pascal was a refresh of Maxwell. So, a Pascal refresh would be a Maxwell re-refresh. Either way, I'll probably aim to sell my 1070 when the new cards release, and get one of them.

You own the software that you purchase - Understanding software licenses and EULAs

 

"We’ll know our disinformation program is complete when everything the american public believes is false" - William Casey, CIA Director 1981-1987


1 minute ago, Delicieuxz said:

Eh, that's how things used to be, before the great GPU-progression slow-down caused by the 360 / PS3 consoles, due to developers targeting all their games for the same hardware for 6+ years. I like it to be that way, too, because it means that graphics in games will get better at a faster rate. I don't want to be playing the same level of graphics for years in a row, again. Another way of looking at it is this: In 5, or 10 years' time, you can have the games you play be of a certain graphics quality, or, you can have the games you play be of double that graphics quality: Which do you prefer? I'll take the double-quality option.

 

But, Pascal was a refresh of Maxwell. So, a Pascal refresh would be a Maxwell re-refresh. Either way, I'll probably aim to sell my 1070 when the new cards release, and get one of them.

But Pascal isn't a refresh of Maxwell though. Its core-for-core IPC may be the same, but it is NOT the same architecture and definitely NOT a refresh; it adds a lot of things, like hardware async compute.

 

Quote

Nope. Pascal adds a dynamic load-balancing scheduler that safely enables asynchronous compute, and also adds simultaneous multi-projection, which can reduce the amount of vertex-shading work needed to do multi-monitor correctly, or VR, if the application targets it.

The dynamic load-balancing scheduler allows multiple jobs to safely run on the shaders in parallel and can resize tasks on the fly to maximize utilization, especially when a job finishes and no other jobs can take its place. This scheduler is better than Maxwell's, which cannot do any dynamic resizing at all, so it leaves resources idle if it attempts asynchronous compute, slowing that GPU down rather than speeding it up. Therefore, the driver disables that ability on Maxwell and emulates asynchronous compute.

Pascal runs asynchronous compute in hardware, but runs all tasks at equal priority like first- and second-generation GCN does. If a high-priority task rolls in, the drivers for Pascal and first/second-generation GCN will tell the GPU to preempt all tasks to make way for it. Pascal's preemption hardware saves state to video memory, so it is slower than GCN, which has state caches to save to when preemption must take place. Third-generation GCN with a recent driver, and fourth-generation GCN, can run asynchronous compute with priority in hardware, so those architectures can let high-priority tasks hog the GPU as much as possible and give lower-priority tasks whatever is left over. Pascal and first/second-generation GCN must wait until the high-priority tasks are complete before restoring the old tasks to the GPU.

Another thing showing that Pascal is not just a refresh and node shrink is that Pascal requires that CUDA code be compiled with CUDA Toolkit 8.0, not earlier, to run. Code compiled with previous versions of the CUDA Toolkit will not run on Pascal. (OpenCL had an advantage here, in that OpenCL programs compiled against Nvidia's OpenCL library ran on Pascal from day one, while Nvidia waited until September 28, today in my time zone as of this writing, to release CUDA Toolkit 8.0.)
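The static-vs-dynamic scheduling point in the quote can be illustrated with a toy model. Everything here is invented for illustration (8 units, 10+2 work items, one item per unit per step); it is not real GPU behavior, just the arithmetic of why a fixed partition leaves resources idle while a shared pool does not.

```python
import math

# Toy model (illustrative only): 8 execution units, a graphics job of
# 10 work items and a compute job of 2, each item taking one time step.

def static_time(units_a, work_a, units_b, work_b):
    """Maxwell-like static partition: each half finishes independently,
    and units in the finished half sit idle until the frame ends."""
    return max(math.ceil(work_a / units_a), math.ceil(work_b / units_b))

def dynamic_time(units, work_a, work_b):
    """Pascal-like dynamic balancing: idle units are reassigned,
    so the total work is spread across the whole pool."""
    return math.ceil((work_a + work_b) / units)

print(static_time(4, 10, 4, 2))  # 3 steps: the compute half idles after step 1
print(dynamic_time(8, 10, 2))    # 2 steps: all 12 items share 8 units
```

Same total work, same hardware; the only difference is whether finished units can pick up leftover items.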

 


2 minutes ago, Zeeee said:

But Pascal isn't a refresh of Maxwell though. Its core-for-core IPC may be the same, but it is NOT the same architecture and definitely NOT a refresh; it adds a lot of things, like hardware async compute.

I thought the async compute in the 10XX lineup is software-based. Also, I haven't seen GTX 10XX benefit in DX12 games... they seem to lose performance over DX11, which makes their async implementation moot.



5 minutes ago, Delicieuxz said:

I thought the async compute in the 10XX lineup is software-based. Also, I haven't seen GTX 10XX benefit in DX12 games... they seem to lose performance over DX11, which makes their async implementation moot.

Nope, unfortunately that's a massive propaganda scheme by AMD-loving fanboys. It's 100% async-compute-capable hardware; here, I'll link you to the best explanation.

 

https://www.reddit.com/r/nvidia/comments/50dqd5/demystifying_asynchronous_compute/

 

Whether or not you want to read all that, the TL;DR is: Pascal has full async compute hardware, it just DOES it differently than AMD. That doesn't mean it doesn't work; also, if you check out Pascal's Time Spy results, you'll see that performance increases with async on versus off.


14 minutes ago, Zeeee said:

But Pascal isn't a refresh of Maxwell though. Its core-for-core IPC may be the same, but it is NOT the same architecture and definitely NOT a refresh; it adds a lot of things, like hardware async compute.

 

 

The IPC isn't even the same, is it? Maxwell scales better clock-for-clock.

 

Personally I find Pascal extremely underwhelming when it comes to DX12/Vulkan.  Performance is pretty good but I kind of expected more from 14nm.

4K // R5 3600 // RTX2080Ti


1 minute ago, sgloux3470 said:

The IPC isn't even the same, is it? Maxwell scales better clock-for-clock.

 

Personally I find Pascal extremely underwhelming when it comes to DX12/Vulkan.  Performance is pretty good but I kind of expected more from 14nm.

What do you mean? To understand the underwhelming DX12 performance, you need the background first. In simple terms, here's how it went down: you know how there's CPU overhead in DX11, and that's why we needed DX12 to alleviate CPU-bound processing and offload it to the GPU? Well, the reason Nvidia doesn't benefit much from DX12 versus DX11 is that their DX11 overhead was already fantastic. AMD, on the other hand, had terrible DX11 drivers, and that's why their relative jump in DX12 is so big. Basically, it's like pairing a Titan X Pascal with an old FX processor and then upgrading to Skylake: you see a huge bottleneck alleviated, whereas Nvidia, for ease of explanation, is as if they always had a good CPU.
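That analogy reduces to simple arithmetic: if frame time is roughly serialized driver overhead plus GPU work, a driver with heavy DX11 overhead sees a much larger relative jump when a thin API removes it. All numbers below are made up purely for illustration; this is a toy model, not measured driver behavior.

```python
def fps(driver_overhead_ms, gpu_ms):
    """Toy model: frame time = serialized driver overhead + GPU work."""
    return 1000.0 / (driver_overhead_ms + gpu_ms)

gpu_ms = 12.0                    # identical GPU workload in every case
lean_dx11 = fps(2.0, gpu_ms)     # low-overhead DX11 driver (the Nvidia-like case)
heavy_dx11 = fps(8.0, gpu_ms)    # high-overhead DX11 driver (the AMD-like case)
dx12 = fps(1.0, gpu_ms)          # both drop to minimal overhead under DX12

print(f"lean driver's DX12 gain:  {dx12 / lean_dx11:.2f}x")   # modest uplift
print(f"heavy driver's DX12 gain: {dx12 / heavy_dx11:.2f}x")  # much bigger relative jump
```

The GPU work never changed; only the overhead term did, which is why the relative gains look so different.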


11 minutes ago, Delicieuxz said:

I thought the async compute in the 10XX lineup is software-based. Also, I haven't seen GTX 10XX benefit in DX12 games... they seem to lose performance over DX11, which makes their async implementation moot.

No, it's just not at the per-thread level like AMD's is. It works at the SM level, where SMs can be reassigned and tasks pre-empted. Though to be fair, Nvidia's pre-emption is better than AMD's where VR is concerned, as time-warp benchmarks and VR game performance have shown.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


There are several methods of doing async compute. The thing is, Nvidia is able to do one of them, AMD all of them. Each has advantages and disadvantages.


18 minutes ago, Zeeee said:

-snip-

Hardware async compute is handled by dedicated compute engines called ACEs, which form the basis of every AMD GCN card. Nvidia does not have them; on Nvidia, async compute simply isn't hardware-based. They instead have advanced context switching, which is very hard to program for and only increases FPS, while proper async improves FPS and reduces latency.

Pixelbook Go i5 Pixel 4 XL

2 minutes ago, Citadelen said:

Hardware async compute is handled by dedicated compute engines called ACEs, which form the basis of every AMD GCN card. Nvidia does not have them; on Nvidia, async compute simply isn't hardware-based. They instead have advanced context switching, which is very hard to program for and only increases FPS, while proper async improves FPS and reduces latency.

Why don't you read the link I posted first? It may not have ACE engines, but that doesn't matter; there's more than one way to get the desired effect. Read https://www.reddit.com/r/nvidia/comments/50dqd5/demystifying_asynchronous_compute/


51 minutes ago, Zeeee said:

That really makes me nervous, as I just bought a GTX 1070 and would honestly feel jipped; it's a long-term investment, not one that should be outdated by a newer product class in one year.

For me (and for literally all companies using computers for CAD/DCC usage models and supercomputing), I often don't look for the next big thing until that time comes, because I don't often buy the latest game with state-of-the-art graphics (I would love to though).

 

Companies that use computers for what I've already listed don't look for the next big thing; they're looking for long-term service life and maximum return on investment, even if the hardware ends up a few years behind. Downtime is too costly: upgrading to the latest hardware would mean temporarily shutting their doors, and in terms of service life they simply cannot afford even an hour of downtime, as it can and will result in millions of dollars lost.

RIGZ

Spoiler

Starlight (Current): AMD Ryzen 9 3900X 12-core CPU | EVGA GeForce RTX 2080 Ti Black Edition | Gigabyte X570 Aorus Ultra | Full Custom Loop | 32GB (4x8GB) Dominator Platinum SE Blackout #338/500 | 1TB + 2TB M.2 NVMe PCIe 4.0 SSDs, 480GB SATA 2.5" SSD, 8TB 7200 RPM NAS HDD | EVGA NU Audio | Corsair 900D | Corsair AX1200i | Corsair ML120 2-pack 5x + ML140 2-pack

 

The Storm (Retired): Intel Core i7-5930K | Asus ROG STRIX GeForce GTX 1080 Ti | Asus ROG RAMPAGE V EDITION 10 | EKWB EK-KIT P360 with Hardware Labs Black Ice SR2 Multiport 480 | 32GB (4x8GB) Dominator Platinum SE Blackout #338/500 | 480GB SATA 2.5" SSD + 3TB 5400 RPM NAS HDD + 8TB 7200 RPM NAS HDD | Corsair 900D | Corsair AX1200i + Black/Blue CableMod cables | Corsair ML120 2-pack 2x + NB-BlackSilentPro PL-2 x3

STRONK COOLZ 9000

Spoiler

EK-Quantum Momentum X570 Aorus Master monoblock | EK-FC RTX 2080 + Ti Classic RGB Waterblock and Backplate | EK-XRES 140 D5 PWM Pump/Res Combo | 2x Hardware Labs Black Ice SR2 480 MP and 1x SR2 240 MP | 10X Corsair ML120 PWM fans | A mixture of EK-KIT fittings and EK-Torque STC fittings and adapters | Mayhems 10/13mm clear tubing | Mayhems X1 Eco UV Blue coolant | Bitspower G1/4 Temperature Probe Fitting

DESK TOIS

Spoiler

Glorious Modular Mechanical Keyboard | Glorious Model D Featherweight Mouse | 2x BenQ PD3200Q 32" 1440p IPS displays + BenQ BL3200PT 32" 1440p VA display | Mackie ProFX10v3 USB Mixer + Marantz MPM-1000 Mic | Sennheiser HD 598 SE Headphones | 2x ADAM Audio T5V 5" Powered Studio Monitors + ADAM Audio T10S Powered Studio Subwoofer | Logitech G920 Driving Force Steering Wheel and Pedal Kit + Driving Force Shifter | Logitech C922x 720p 60FPS Webcam | Xbox One Wireless Controller

QUOTES

Spoiler

"So because they didn't give you the results you want, they're biased? You realize that makes you biased, right?" - @App4that

"Brand loyalty/fanboyism is stupid." - Unknown person on these forums

"Assuming kills" - @Moondrelor

"That's not to say that Nvidia is always better, or that AMD isn't worth owning. But the fact remains that this forum is AMD biased." - @App4that

"I'd imagine there's exceptions to this trend - but just going on mine and my acquaintances' purchase history, we've found that budget cards often require you to turn off certain features to get slick performance, even though those technologies are previous gen and should be having a negligible impact" - ace42

"2K" is not 2560 x 1440 


1 hour ago, Zeeee said:

I don't like it at all. I don't seem to understand the GPU market at all: the GTX 970 released in 2014 and the 1070 in 2016, so why would we suddenly get an 1170 only a year later? That really makes me nervous, as I just bought a GTX 1070 and would honestly feel jipped; it's a long-term investment, not one that should be outdated by a newer product class in one year.

Progress is always good! Your 1070 doesn't drop in performance as soon as a new generation comes out; if you don't like it, it's purely psychological. Every high-end GPU is good for at least two years. It doesn't make much sense to upgrade yearly if you own a high-end GPU, because you overpay a lot for the performance you gain.

Connection200mbps / 12mbps 5Ghz wifi

My baby: CPU - i7-4790, MB - Z97-A, RAM - Corsair Veng. LP 16gb, GPU - MSI GTX 1060, PSU - CXM 600, Storage - Evo 840 120gb, MX100 256gb, WD Blue 1TB, Cooler - Hyper Evo 212, Case - Corsair Carbide 200R, Monitor - Benq  XL2430T 144Hz, Mouse - FinalMouse, Keyboard -K70 RGB, OS - Win 10, Audio - DT990 Pro, Phone - iPhone SE


1 hour ago, VanayadGaming said:

There are several methods of doing async compute. The thing is, Nvidia is able to do one of them, AMD all of them. Each has advantages and disadvantages.

I would not say AMD can do all of them based on the TimeSpy and Valve VR benchmarks.

 

1 hour ago, Zeeee said:

So apparently Nvidia is planning a Pascal refresh for 2017.

 

 

So this rumor makes a lot of sense: given the scarcity of GDDR5X on the 1000-series cards, it would make sense for it to ship in greater numbers next year, and apparently Samsung will be providing the 14nm FinFET process.

 

My opinion: I don't like it at all. I don't seem to understand the GPU market at all: the GTX 970 released in 2014 and the 1070 in 2016, so why would we suddenly get an 1170 only a year later? That really makes me nervous, as I just bought a GTX 1070 and would honestly feel jipped; it's a long-term investment, not one that should be outdated by a newer product class in one year.

 

http://www.game-debate.com/news/21446/nvidia-planning-14nm-pascal-refresh-with-higher-clock-speeds-in-2017

 

http://wccftech.com/nvidia-pascal-volta-gpu-leaked-2017-2018/

You bought at a bad time when APIs were switching. It's luck of the draw, like most things. Also, you could have bought a 390X.



2 hours ago, Zeeee said:

It's 100% async-compute-capable hardware; here, I'll link you to the best explanation.

Whether or not you want to read all that, the TL;DR is: Pascal has full async compute hardware, it just DOES it differently than AMD. That doesn't mean it doesn't work; also, if you check out Pascal's Time Spy results, you'll see that performance increases with async on versus off.

Pascal is not 100% compute capable; it requires some context switching, which adds latency. While not as bad as Maxwell, it's not as good as AMD's solution, which has no latency when switching between compute tasks. So it's not 100% capable, even though it supports async just fine.
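The latency cost being described can be sketched as a toy timeline. All numbers here are invented for illustration (a 1 ms urgent task arriving at t = 2 ms, a 1 ms context-switch cost, a 1.5x slowdown from sharing); this is not measured GPU behavior, just the shape of the trade-off.

```python
# Toy timeline (invented numbers): a 1 ms high-priority task arrives at t = 2 ms
# while a long graphics job occupies the whole GPU.

def preempt_then_run(arrival_ms, switch_cost_ms, urgent_ms):
    """Context-switch style: flush state to memory first, then run the urgent task."""
    return arrival_ms + switch_cost_ms + urgent_ms

def run_concurrently(arrival_ms, urgent_ms, sharing_slowdown=1.5):
    """ACE-style concurrent dispatch: start immediately, but share the GPU,
    so the urgent task itself runs somewhat slower."""
    return arrival_ms + urgent_ms * sharing_slowdown

print(preempt_then_run(2.0, 1.0, 1.0))  # urgent task done at 4.0 ms
print(run_concurrently(2.0, 1.0))       # urgent task done at 3.5 ms
```

Even with a sizable sharing penalty, the concurrent path finishes first because it never waits for the switch.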

hello!

is it me you're looking for?

ᴾC SᴾeCS ᴰoWᴺ ᴮEᴸoW

Spoiler

Desktop: X99-PC

CPU: i7 5820k

Mobo: X99 Deluxe

Cooler: Dark Rock Pro 3

RAM: 32GB DDR4
GPU: GTX 1080

Storage: 1TB 850 Evo, 1TB HDD, bunch of external hard drives
PSU: EVGA G2 750w

Peripherals: Logitech G502, Ducky One 711

Audio: Xonar U7, O2 amplifier (RIP), HD6XX

Monitors: 4k 24" Dell monitor, 1080p 24" Asus monitor

 

Laptop:

-Overkill Dell XPS

Fully maxed out early 2017 Dell XPS 15, GTX 1050 4GB, 7700HQ, 1TB nvme SSD, 32GB RAM, 4k display. 97Whr battery :x 
Dell was having a $600 off sale for the fully specced out model, so I decided to get it :P

 

-Crapbook

Fully specced out early 2013 Macbook "pro" with gt 650m and constant 105c temperature on the CPU (GPU is 80-90C) when doing anything intensive...

A 2013 laptop with a regular sized battery still has better battery life than a 2017 laptop with a massive battery! I think this is a testament to apple's ability at making laptops, or maybe how little CPU technology has improved even 4+ years later (at least, until the recent introduction of 15W 4 core CPUs). Anyway, I'm never going to get a 35W CPU laptop again unless battery technology becomes ~5x better than as it is in 2018.

Apple knows how to make proper consumer-grade laptops (they don't know how to make pro laptops though). I guess this mostly software power efficiency related, but getting a mac makes perfect sense if you want a portable/powerful laptop that can do anything you want it to with great battery life.

 

 


13 minutes ago, rattacko123 said:

Pascal is not 100% compute capable; it requires some context switching, which adds latency. While not as bad as Maxwell, it's not as good as AMD's solution, which has no latency when switching between compute tasks. So it's not 100% capable, even though it supports async just fine.

AMD's solution is only better in that it makes up for AMD's driver deficiencies. It's hotter, more power-hungry, and actually it still doesn't match the latency of Nvidia's solution as Timespy and Valve's VR benchmarks reveal.



3 hours ago, Zeeee said:

But Pascal isn't a refresh of Maxwell though. Its core-for-core IPC may be the same, but it is NOT the same architecture and definitely NOT a refresh; it adds a lot of things, like hardware async compute.

 

 

It does not feature hardware level Async compute scheduling.

Nvidia has NOT had hardware schedulers in their GPUs since Fermi, and is not likely to reintroduce them anytime soon.

 

Pascal has the same Async capability as Maxwell, however they tweaked the maxwell architecture so it would be much easier to utilize Async. For Maxwell to use Async, you need to code the game and the drivers perfectly, Pascal has been tweaked to be more lenient.

 

In practice, Pascal can barely run async; it's not really "async" because it cannot execute work out of order on its own. Instead it requires an extremely granular software scheduler to micromanage the workload on the fly. It CANNOT run any sort of compute + graphics without having a driver for it.

 

Pascal and Maxwell "Async" is JUST AS DRIVER DEPENDENT AS CROSSFIRE AND SLI IS OF HAVING A PROFILE

 

Pascal is a leap in the right direction, but you give it far too much credit for what it is.

Pascal is nothing more than a highly optimized and refined version of Maxwell. What they improved was mostly latency hotspots in the Maxwell pipeline design, by cutting down or modifying areas that caused latencies in the workflow.

They did add SOME new ICs, mostly aimed towards VR, such as SMP and some further tweaks to their CUDA cores.


3 hours ago, Zeeee said:

-snip-

 

The very meaning of "asynchronous" is out of sync. Context switching, no matter how accurate, is still synchronous, and at the moment it provides a performance boost but not a frame-time improvement; AMD's ACEs provide both. Not only that, but Nvidia's solution is very difficult and time-consuming to program for, and even the slightest mistake leads to massive performance and latency problems.


I don't understand what the big deal is about a refresh, as long as the new cards are a tier faster or so.

 

I mean, are we going to pretend this is some terrible decision by the Green Team while the Red Team constantly does this, and in fact still has Hawaii-refresh cards on the market right now in the form of the 390 and 390X, among others?

-------

Current Rig

-------


5 hours ago, Zeeee said:

What do you mean? To understand the underwhelming DX12 performance, you need the background first. In simple terms, here's how it went down: you know how there's CPU overhead in DX11, and that's why we needed DX12 to alleviate CPU-bound processing and offload it to the GPU? Well, the reason Nvidia doesn't benefit much from DX12 versus DX11 is that their DX11 overhead was already fantastic. AMD, on the other hand, had terrible DX11 drivers, and that's why their relative jump in DX12 is so big. Basically, it's like pairing a Titan X Pascal with an old FX processor and then upgrading to Skylake: you see a huge bottleneck alleviated, whereas Nvidia, for ease of explanation, is as if they always had a good CPU.

The Fury X doesn't gain 50% relative performance in DOOM from fucking CPU overhead. Get real.



5 hours ago, Zeeee said:

That really makes me nervous, as I just bought a GTX 1070 and would honestly feel jipped; it's a long-term investment, not one that should be outdated by a newer product class in one year.

GPUs don't get outdated by newer GPUs, but by software, i.e. games. A card that runs current games fine will continue to run them fine even if a new GPU is released daily for the next three months. It's only when games get heavier that GPUs gradually become outdated.

I think "outdated" is not the word you are looking for. 


6 hours ago, Zeeee said:

My opinion: I don't like it at all. I don't seem to understand the GPU market at all: the GTX 970 released in 2014 and the 1070 in 2016, so why would we suddenly get an 1170 only a year later? That really makes me nervous, as I just bought a GTX 1070 and would honestly feel jipped; it's a long-term investment, not one that should be outdated by a newer product class in one year.

FFS, a yearly release cadence is normal. The GeForce 600 series was 2012, the 700 series was 2013, the 900 series was 2014.

 

Also it's gypped, not jipped, and it's a racist term (refers to gypsies as crooks and thieves).


Trying to understand this:

 

It seems like the refresh is not referring to the Tis. So, for example, will there be a 1080, a 1080 (refresh), and a 1080 Ti? I consider the Ti versions a refresh themselves, so are they just going to stop selling the current 10XX series and replace it with higher-clocked versions under the same names? Or will it be the 11XX or 20XX series?

- ASUS X99 Deluxe - i7 5820k - Nvidia GTX 1080ti SLi - 4x4GB EVGA SSC 2800mhz DDR4 - Samsung SM951 500 - 2x Samsung 850 EVO 512 -

- EK Supremacy EVO CPU Block - EK FC 1080 GPU Blocks - EK XRES 100 DDC - EK Coolstream XE 360 - EK Coolstream XE 240 -


Pascal is based on Maxwell architecture. Volta is a new architecture.

If this rumor is true, then it's probably because Nvidia needs more time for Volta, as it won't make the estimated deadlines. New architectures always take a lot of time, and delays are very much possible.

 

