
Visual Studio CPU Hardware recommendations

Solved by igormp

Hello all,

 

I've got two developers on Dell OEM i9-10900 SFF systems that clearly aren't keeping up (I didn't buy them, nor would I have; that's all I'll say about that).

 

 

I'm looking to build DIY rigs to replace them, which I've already done for some workstations and servers. There's a mix of LGA 1700 and LGA 1718 (AM5) across those machines, so I'm not partial to either platform.

 

 

Where it gets complicated is with Visual Studio performance specifically, and whether there are any particular advantages to Radeon versus Intel iGPUs. Also, whether it's worth just getting an F SKU Intel CPU and dropping in an RTX 3050 6GB or similar.

 

I do see Visual Studio has a CUDA package, but I'm not sure if this is worth the extra complexity. 

 

Currently, I'm looking at the 14700k and the Asus Prime Z790M-Plus (though it doesn't have triple display outputs) in a standard mATX build, likely in a Fractal Design Define R5. I wouldn't be overclocking, and I'd likely TDP-limit them, but I'd rather have the binning and clock speed headroom of the K SKU over the non-K, especially since they currently cost the same.

 

I really like the Asus Pro CSM series boards, but there are no VRM heatsinks, which might be a problem for these systems. It's also a 125W TDP-limited board, which honestly isn't a problem; 125W is the sweet spot for Intel's performance/watt anyway. The 13700k is also a worthy contender at about $60 less than the 14700k for not much less performance.

 

Anyone strongly familiar with hardware recommendations for Visual Studio and some of the potential features? Would it even be worth looking at an RTX 4060 8GB if there are benefits to CUDA and RTX around the corner for VS?

 

 

Ryzen 7950x3D PBO +200MHz / -15mV curve CPPC in 'prefer cache'

RTX 4090 @133%/+230/+1000

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


Visual Studio is essentially a glorified text editor. What kind of performance issues are you facing?

 

If you're struggling with compiling whatever you develop there, then it will depend critically on what you are doing with it. I mean, a Python "Hello world" project done in VS will fly on potato-level hardware... Conversely, compiling an office suite from source will take a bit longer, with or without VS...


1 minute ago, SpaceGhostC2C said:

Visual Studio is essentially a glorified text editor. What kind of performance issues are you facing?

 

If you're struggling with compiling whatever you develop there, then it will depend critically on what you are doing with it. I mean, a Python "Hello world" project done in VS will fly on potato-level hardware... Conversely, compiling an office suite from source will take a bit longer, with or without VS...

It's not just large VS compiles but also a suite of other production applications on triple displays. VS is just the only one whose performance characteristics I'm unfamiliar with. A basic Google search regarding CUDA had something to show, but I'm not sure if that's worth exploring. My personal experience with VS is limited. The rest of the applications don't require any crazy rasterization performance and just need single/multicore CPU performance.

 

Regarding the current systems, it's the combination of the whole suite of applications (which includes VS) that's causing these OEM systems to scream constantly. It's quite obviously a CPU multithreading limitation, but I now have a reason to upgrade these machines to fully min-max for their needs.

 

If that involves a dGPU, then sure, but it doesn't seem necessary for VS. What I'm still unsure about is whether there's any difference between Radeon and Intel iGPUs that might matter.

 

So far, a 13700k at 125W TDP with the Asus Pro B760M CSM seems to be the best route. I've got the B660 version of this board in a workstation and absolutely love it. I'm just looking to confirm whether there's a better option regarding the iGPU/dGPU.

 

Pro B760M-CT-CSM|Motherboards|ASUS USA

Ryzen 7950x3D PBO +200MHz / -15mV curve CPPC in 'prefer cache'

RTX 4090 @133%/+230/+1000

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


What are they developing? And in what language?
If they do anything that leverages CUDA, NV is the way to go.
If compile times are through the roof, you want more cores. You could go workstation CPUs, but if you're staying in consumer chips, I'd recommend the 5/7950X

5950X/3080Ti primary rig  |  1920X/1070Ti Unraid for dockers  |  200TB TrueNAS w/ 1:1 backup


Just now, OddOod said:

What are they developing? And in what language?
If they do anything that leverages CUDA, NV is the way to go.
If compile times are through the roof, you want more cores. You could go workstation CPUs, but if you're staying in consumer chips, I'd recommend the 5/7950X

Is there no particular advantage to Intel Quicksync? I don't believe the 7950x has a dedicated encoder, if that's necessary.

 

Regarding VS and CUDA, that would involve the actual application being written being able to take advantage of CUDA hardware, if I'm understanding that correctly. I don't believe they write anything that uses CUDA, and I believe they only use C++ in VS.

 

I have a server that runs a 7950x and it's hilariously good. The same Asus Pro CSM board I like has an AM5 counterpart. The problem is that 7950x's are in excess of $650, and the next Ryzen offering down, the 7900x, is beaten out by the 13700k/14700k.

Ryzen 7950x3D PBO +200MHz / -15mV curve CPPC in 'prefer cache'

RTX 4090 @133%/+230/+1000

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


1 minute ago, Agall said:

particular advantage of Intel Quicksync

You'd have to look at the specific programs they are using for development. We're still pretty much in the dark here. 

5950X/3080Ti primary rig  |  1920X/1070Ti Unraid for dockers  |  200TB TrueNAS w/ 1:1 backup


16 minutes ago, Agall said:

Is there no particular advantage to Intel Quicksync? I don't believe the 7950x has a dedicated encoder, if that's necessary.

 

Regarding VS and CUDA, that would involve the actual application being written being able to take advantage of CUDA hardware, if I'm understanding that correctly. I don't believe they write anything that uses CUDA, and I believe they only use C++ in VS.

 

I have a server that runs a 7950x and it's hilariously good. The same Asus Pro CSM board I like has an AM5 counterpart. The problem is that 7950x's are in excess of $650, and the next Ryzen offering down, the 7900x, is beaten out by the 13700k/14700k.

Are you working with video? Quicksync is only used when encoding/decoding video, a task I don't see as common for most development work, but it really depends. AMD also has encoders/decoders on their iGPUs that do the same.

 


What is the usage reported in Task Manager when the programmers are working? Do you see high CPU, disk, memory, GPU, or other usage? I'd guess a 10900 is still a pretty good chip, and I don't think you're going to see a massive performance improvement going to current hardware.


11 minutes ago, OddOod said:

You'd have to look at the specific programs they are using for development. We're still pretty much in the dark here. 

They're all in-house applications developed in C# or C++. So nothing really crazy in terms of complexity, but they do get relatively large from what I've seen. Considering the background applications are already a struggle without compiling code in the foreground, you can imagine how slow a compile can be.

 

The i9-10900s they're using now are in SFF Dell OEM systems, so they're severely thermally and TDP limited, and on top of that anything 13th/14th gen is markedly more capable in both single- and multithreaded performance.

Ryzen 7950x3D PBO +200MHz / -15mV curve CPPC in 'prefer cache'

RTX 4090 @133%/+230/+1000

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


3 minutes ago, Electronics Wizardy said:

Are you working with video? Quicksync is only used when encoding/decoding video, a task I don't see as common for most development work, but it really depends. AMD also has encoders/decoders on their iGPUs that do the same.

 


What is the usage reported in Task Manager when the programmers are working? Do you see high CPU, disk, memory, GPU, or other usage? I'd guess a 10900 is still a pretty good chip, and I don't think you're going to see a massive performance improvement going to current hardware.

The i9-10900 in a fully featured rig would likely be fine, but these are Dell SFF systems that would probably struggle to handle even the i5 variant. In other words, they shouldn't have been purchased for users who are regularly in high CPU utilization scenarios. They've already cleaned out what little dust was in the machines, and they might need a repaste, but I'd rather upgrade those workstations and move these systems to less intense use cases.

Ryzen 7950x3D PBO +200MHz / -15mV curve CPPC in 'prefer cache'

RTX 4090 @133%/+230/+1000

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


47 minutes ago, Agall said:

I've got two developers on Dell OEM i9-10900 SFF systems that clearly aren't keeping up (I didn't buy them, nor would I have; that's all I'll say about that).

 

How much RAM do those systems have? I'd guess that they're just running out of RAM rather than CPU; keeping a close eye on how they use their systems would make it easier to help you with that.

48 minutes ago, Agall said:

and whether there are any particular advantages to Radeon versus Intel iGPUs. Also, whether it's worth just getting an F SKU Intel CPU and dropping in an RTX 3050 6GB or similar.

Not really. There's no need for a dedicated GPU; an integrated one will do the job unless they work with something GPU-related.

48 minutes ago, Agall said:

I do see Visual Studio has a CUDA package, but I'm not sure if this is worth the extra complexity. 

 

That's if you want to work with CUDA, not that VS will make use of CUDA.

49 minutes ago, Agall said:

Anyone strongly familiar with hardware recommendations for Visual Studio and some of the potential features?

Funny how you haven't mentioned RAM at any point 😛 

49 minutes ago, Agall said:

Would it even be worth looking at an RTX 4060 8GB if there are benefits to CUDA and RTX around the corner for VS?

I'd say no. There are no benefits for CUDA or RTX, it's just a text editor, not a Triple A game, video editor or 3D software lol

 

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


1 minute ago, igormp said:

How much RAM do those systems have? I'd guess that they're just running out of RAM rather than CPU; keeping a close eye on how they use their systems would make it easier to help you with that.

Not really. There's no need for a dedicated GPU; an integrated one will do the job unless they work with something GPU-related.

That's if you want to work with CUDA, not that VS will make use of CUDA.

Funny how you haven't mentioned RAM at any point 😛 

I'd say no. There are no benefits for CUDA or RTX, it's just a text editor, not a Triple A game, video editor or 3D software lol

 

I don't mention RAM because those systems already have 32GB of RAM and don't get close to utilizing all of it. The issue with the current systems comes down to severe thermal design limitations and therefore multicore performance. I'm mostly trying to confirm whether or not there are any reasons besides higher single/multicore and memory performance to add other features.

 

If the hardware acceleration features around encoding or CUDA only matter to programs that actually use them, and not to Visual Studio itself, then that confirms my suspicion that they simply need a more capable CPU.

 

I can easily get both their systems to 64GB with a 2x32GB kit instead to mitigate any future limitations, especially since it's only another $100 or so.

 

I don't think they'd be the ones experimenting with AI; that would likely be me. The suggested RTX 3050 6GB was part of that idea, and it could also serve as the display output if it made sense to skip the iGPU.

 

Does VS properly multicore when it compiles, or is it largely single threaded?

Ryzen 7950x3D PBO +200MHz / -15mV curve CPPC in 'prefer cache'

RTX 4090 @133%/+230/+1000

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


6 minutes ago, Agall said:

I don't mention RAM because those systems already have 32GB of RAM and don't get close to utilizing all of it.

If that's the case then it's all good. I asked because even 32GB might not be enough for some projects, while others can do fine with even 8~16GB.

7 minutes ago, Agall said:

If the hardware acceleration features around encoding or CUDA only matter to programs that actually use them, and not to Visual Studio itself, then that confirms my suspicion that they simply need a more capable CPU.

Yeah, GPU is going to be pretty much irrelevant.

8 minutes ago, Agall said:

I don't think they'd be the ones experimenting with AI; that would likely be me. The suggested RTX 3050 6GB was part of that idea, and it could also serve as the display output if it made sense to skip the iGPU.

I'd say to skip on a dGPU and use that money for a better CPU, such as the 7950x that was previously mentioned. That's going to be a way better investment.

8 minutes ago, Agall said:

Does VS properly multicore when it compiles, or is it largely single threaded?

Depends on how the project is set up, but it should spawn as many jobs as possible when compiling (speaking specifically about C++ here; I'm not really up to speed on the C# side).

Multithreaded performance usually matters more than single-threaded performance for such workloads.
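For reference, a quick way to sanity-check how well a given solution actually scales with core count is to time the same clean rebuild at different parallelism levels. This is only a sketch, assuming msbuild is on PATH and with a placeholder solution path; /m controls how many projects build in parallel, while the C++ /MP compiler switch (Project Properties > C/C++ > General > Multi-processor Compilation) controls parallelism within a single project.

```python
# Hypothetical sketch: time a clean rebuild at different MSBuild parallelism levels.
# Assumes msbuild.exe is on PATH; the solution path below is a placeholder.
import subprocess
import time

SOLUTION = r"C:\src\MySolution.sln"  # stand-in for the real solution

def timed_rebuild(max_cpu: int) -> float:
    """Clean, then rebuild the solution with /m:<max_cpu>, returning elapsed seconds."""
    subprocess.run(["msbuild", SOLUTION, "/t:Clean", "/nologo", "/v:quiet"], check=True)
    start = time.perf_counter()
    subprocess.run(["msbuild", SOLUTION, "/t:Build", f"/m:{max_cpu}", "/nologo", "/v:quiet"],
                   check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    for cores in (1, 4, 8, 16):
        print(f"/m:{cores}: {timed_rebuild(cores):.1f} s")
```

If the times stop improving after a few cores, the bottleneck is inside a single project and the /MP switch is the thing to check; if they keep scaling, a 16-core part will be put to good use.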

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


8 minutes ago, igormp said:

If that's the case then it's all good. I asked because even 32GB might not be enough for some projects, while others can do fine with even 8~16GB.

Yeah, GPU is going to be pretty much irrelevant.

I'd say to skip on a dGPU and use that money for a better CPU, such as the 7950x that was previously mentioned. That's going to be a way better investment.

Depends on how the project is set up, but it should spawn as many jobs as possible when compiling (speaking specifically about C++ here; I'm not really up to speed on the C# side).

Multithreaded performance usually matters more than single-threaded performance for such workloads.

I believe that gives me the information I was seeking. Their workload doesn't appear to benefit from any GPU-specific features.

 

I'll still consider the 7950x; it's just hard to argue for when it's still $650. We have a server I built whose application is entirely single-threaded per job but wants at least 16 threads (it basically processes jobs on one core per user), so I went with the 7950x. It sits at 5.5GHz constantly: it's got Windows Server 2019 installed and the application doesn't trigger P-state boosts on its own, so I had to set it to 'performance' mode to always have it boosting.

 

I hesitate on the 7900x since the 13700k/14700k are effectively better. I'll likely be transplanting the M.2 drives currently used in the systems, so I'd rather have fewer potential conflicts as well, i.e., sticking with Intel. I'm not looking for the absolute best, just somewhere close to it, so the 13700k falls into a sweet spot for multicore price/performance at $345 right now. The 7900x is $420 by comparison.

 

I can also get the B760 version of the Asus Pro CSM board I really like; I only worry about RAM compatibility, since I imagine these applications would benefit from having a nice CL30 6000MHz kit (which might not work on that board).

 

I obviously don't write code, knowing only enough to do basic stuff by following guides. I'm mostly a hardware/OS guy, and luckily we have developers who've been doing that job for as long as I've been alive. They're just as ignorant of hardware as I am of VS and coding, though.

Ryzen 7950x3D PBO +200MHz / -15mV curve CPPC in 'prefer cache'

RTX 4090 @133%/+230/+1000

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


10 minutes ago, Agall said:

I hesitate on the 7900x since the 13700k/14700k are effectively better.

Afaik, for compile jobs those are pretty similar, so it ends up being more of a pricing matter.

14 minutes ago, Agall said:

13700k falls into a sweet spot for multicore price/performance at $345 right now. The 7900x is $420 by comparison.

Yeah, there isn't much to discuss about that lol

14 minutes ago, Agall said:

I imagine these applications would benefit from having a nice CL30 6000MHz kit (which might not work on that board).

They don't. You could even just go with DDR4 and save some bucks/reuse the previous sticks.

Take a look at the first 3 benchmarks here:

https://www.phoronix.com/review/linux-ddr5-6000/3

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


10 minutes ago, igormp said:

Afaik, for compile jobs those are pretty similar, so it ends up being more of a pricing matter.

Yeah, there isn't much to discuss about that lol

They don't. You could even just go with DDR4 and save some bucks/reuse the previous sticks.

Take a look at the first 3 benchmarks here:

https://www.phoronix.com/review/linux-ddr5-6000/3

I think I like that B760 CSM board enough to just end up with a decent kit of 4800MHz RAM. The point of contention will be the specific Intel CPU I choose. The exception is if prices drop like crazy on the 7950x and I can get the B650 version of that board, in which case I might end up with that instead. This might be a few months out, but at least I know precisely what direction to take.

 

ASUS Pro B650M-CT-CSM AM5 Micro-ATX Motherboard PRO B650M-CT-CSM (bhphotovideo.com)

 

This also allows me to potentially use ECC, although I don't think that's necessary. Intel gatekeeps that behind their Q chipsets, which Asus does have a version of.

 

Pro Q670M-C-CSM|Motherboards|ASUS USA

 

If I were building another server, I'd go with one of these options. It's hilarious how much cheaper a DIY server like this is compared to an OEM Dell one, and how terrible the OEM's performance/$ is. There are some servers I will always buy OEM Dell for, simply for the security of it, but there are plenty that don't need that.

 

Their rigs run Quadro P620s, but there's no reason for them beyond the extra display outputs. None of their workload uses CUDA or any encoders, which I was able to confirm. The VS pages at least one of the developers was using were only taking up 400MB of RAM, but he was sitting at 18GB of RAM with his normal background tasks. I'll likely end up just getting 2x32GB kits.

 

CORSAIR Vengeance 64GB (2 x 32GB) 288-Pin PC RAM DDR5 4800 (PC5 38400) Desktop Memory Model CMK64GX5M2A4800C40 - Newegg.com

 

If memory performance isn't crazy important for VS, then it won't matter that much for the rest of their applications either. A CL40 4800MHz kit for only $170 is pretty great, imo.

Ryzen 7950x3D PBO +200MHz / -15mV curve CPPC in 'prefer cache'

RTX 4090 @133%/+230/+1000

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


22 minutes ago, Agall said:

This also allows me to potentially use ECC, although I don't think that's necessary.

Yeah, I don't think it's relevant for that scenario either.

24 minutes ago, Agall said:

It's hilarious how much cheaper a DIY server like this is compared to an OEM Dell one, and how terrible the OEM's performance/$ is. There are some servers I will always buy OEM Dell for, simply for the security of it, but there are plenty that don't need that.

You pay OEMs for the support and whatnot; the hardware cost itself is not really significant in the grand scheme of things, whereas downtime is.

 

25 minutes ago, Agall said:

The VS pages at least one of the developers was using were only taking up 400MB of RAM, but he was sitting at 18GB of RAM with his normal background tasks.

What does it look like when they fire up a build job?

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


15 hours ago, igormp said:

Yeah, I don't think it's relevant for that scenario either.

You pay OEMs for the support and whatnot; the hardware cost itself is not really significant in the grand scheme of things, whereas downtime is.

 

What does it look like when they fire up a build job?

The fun part about Dell support is that it doesn't really fix downtime, since you're still waiting for parts/service. It's the easiest argument against their desktops, since it's far faster to service a DIY desktop than a proprietary Dell machine. Overall, even if you had to replace every part, it would be cheaper to buy a whole set of spare parts than to buy a Dell desktop of matching performance.

 

The server argument is a bit more complicated, but troubleshooting a standard desktop is far more straightforward than troubleshooting a lot of these servers if you're running into severe issues. I feel like the goal is to never have to troubleshoot them, which is part of what you pay for.

 

Regarding the devs' workstations, I'll be collecting more data to better argue the need to spend the money (<$900 per machine, dramatically less than comparable OEM machines), but I believe I've found what I needed regarding hardware requirements.
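In case it's useful to anyone doing the same, here's a minimal sketch of how that data could be collected, assuming Python and the third-party psutil package are available on the dev boxes; just leave it logging while they work and during a few builds. The output filename is only a placeholder.

```python
# Hypothetical sketch: sample overall CPU and RAM usage to a CSV while the devs work.
# Assumes `pip install psutil`; LOGFILE is a placeholder path.
import csv
import time

import psutil

LOGFILE = "usage_log.csv"   # placeholder output path
INTERVAL_S = 5              # seconds between samples

with open(LOGFILE, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "cpu_percent", "ram_used_gb", "ram_percent"])
    while True:
        cpu = psutil.cpu_percent(interval=INTERVAL_S)   # averaged over the interval
        mem = psutil.virtual_memory()
        writer.writerow([time.strftime("%H:%M:%S"), cpu,
                         round(mem.used / 2**30, 1), mem.percent])
        f.flush()
```

Sustained CPU near 100% during builds with RAM comfortably under 32GB would back up the thermal/TDP argument; RAM creeping toward the ceiling would point at the 64GB upgrade instead.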

Ryzen 7950x3D PBO +200MHz / -15mV curve CPPC in 'prefer cache'

RTX 4090 @133%/+230/+1000

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


18 hours ago, Agall said:

Does VS properly multicore when it compiles, or is it largely single threaded?

It is arbitrarily threaded. If you want more than 32 parallel build jobs you do need to tick a box somewhere. 

5950X/3080Ti primary rig  |  1920X/1070Ti Unraid for dockers  |  200TB TrueNAS w/ 1:1 backup


1 hour ago, Agall said:

The fun part about Dell support is that it doesn't really fix downtime, since you're still waiting for parts/service.

The first place I worked as a real professional had a wicked good service contract with Dell. ~900 people on site, and if anything happened to our computers, they'd just swap the SSD into an identical SKU, then send the defective unit in for a 1:1 replacement.

5950X/3080Ti primary rig  |  1920X/1070Ti Unraid for dockers  |  200TB TrueNAS w/ 1:1 backup


1 hour ago, OddOod said:

The first place I worked as a real professional had a wicked good service contract with Dell. ~900 people on site, and if anything happened to our computers, they'd just swap the SSD into an identical SKU, then send the defective unit in for a 1:1 replacement.

Transplanting the boot drive is easily done; I've done it hundreds of times at this point. It's just as easy to have spare systems on hand to do that.

 

OEM systems were great about a decade ago, when they used standard ATX power supplies and standard ATX motherboards+cases. They don't do that now, and you can't just go snag a power supply off your local Best Buy's shelf to get the system up and running again.

Ryzen 7950x3D PBO +200MHz / -15mV curve CPPC in 'prefer cache'

RTX 4090 @133%/+230/+1000

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


5 hours ago, Agall said:

The fun part about Dell support is that it doesn't really fix downtime, since you're still waiting for parts/service. It's the easiest argument against their desktops, since it's far faster to service a DIY desktop than a proprietary Dell machine. Overall, even if you had to replace every part, it would be cheaper to buy a whole set of spare parts than to buy a Dell desktop of matching performance.

 

The server argument is a bit more complicated, but troubleshooting a standard desktop is far more straightforward than troubleshooting a lot of these servers if you're running into severe issues. I feel like the goal is to never have to troubleshoot them, which is part of what you pay for.

I guess that depends on your contract; if it's a simpler one, then it may not be worth it. If you already have an IT team on-site, then it's also not worth paying that premium.

 

But from another POV: if you don't have/need an IT team, you can almost solely rely on their support instead of swallowing the cost of an employee (which may be way larger than the OEM premium), so that's something to take into account.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga

