How do I speed up my software development workflow? I'm using Visual Studio 2019 to build, compile, and run tests for .NET C# Docker containers

I am a software developer, primarily working with C#, .NET, and Visual Studio 2019 with extensions like ReSharper enabled. Visual Studio with ReSharper is a memory hog!
 
I currently have the following PC at home, where I do my development work:
 
CPU - Ryzen 7 2700x
RAM - 2 x 16 GB 3000 MHz
Storage - 512 GB SSD - WD Blue
Motherboard - Asus Rog Strix B350M-i Gaming
GPU - Gigabyte GeForce Windforce GTX1080 8GB
OS: Win 10 Pro
 
With a few Firefox windows and a few instances of Visual Studio debugging Docker containers,
I hit 100% CPU utilization and 100% memory utilization for 3-4 minutes.
The machine slowed to a crawl; even the mouse wasn't moving smoothly.
 
The build-compile-debug-run-tests cycle could also do with faster speeds... it's not good for
productivity when I sit idle and watch the build progress! I would like to automatically run
the unit tests on every save, to make sure I haven't broken anything and to get a rapid
feedback loop. I can't run this effectively now because it slows down the build-compile-run-test flow.
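
For the run-tests-on-every-save idea, the .NET CLI's file watcher can do this outside Visual Studio. A minimal sketch (the project path and filter category are hypothetical placeholders):

```shell
# Re-run the unit tests whenever a source file changes.
# Run from the test project directory, or point --project at it.
dotnet watch test --project ./MyApp.Tests

# If the full suite is too slow for a save-time loop, watch only a
# fast subset and leave the rest to CI:
dotnet watch test --project ./MyApp.Tests --filter "Category=Fast"
```

This sidesteps the IDE entirely, so it doesn't compete with ReSharper for memory while you type.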
 
(I am also thinking of upgrading to 64GB RAM.)
 
The question I'm looking to answer:
does the Visual Studio build/compile/debug/run-tests workflow benefit more from multi-core CPUs,
or from higher single-core clock speeds?
 
How should these scores be interpreted for a workflow like mine?
 
The Ryzen 7 2700X has a score of 16,927.
The Ryzen 9 3900X has a score of 31,943 (almost twice the score).
The Ryzen 9 3950X has a score of 35,702 (more than double the 2700X, and around 11% more than the 3900X).
 
The Ryzen 9 3950X is 50% costlier than the Ryzen 9 3900X, but its CPU benchmark score is only about 11% higher.
Is that synthetic score not relevant for my workflow? Would the four extra cores (compared to the 3900X) help me a lot?
 
So would this mean that if my build/compile takes around 60s now, getting a Ryzen 3900X would bring it to near 30s?
 
Right now I have a SATA SSD (Western Digital Blue). Would getting an NVMe SSD help?
 
Thanks in advance people!!
 

I wish I had your system as my daily driver at work. My work laptop has a Haswell processor with 16GB RAM and a 1TB SSD, and I can have quite a few tabs open in FF, sometimes with a VM running in the background, plus Visual Studio Enterprise with a few plugins. I'm a professional software engineer with over 20 years of experience. I've worked with Visual Studio for nearly all of those 20 years, primarily with C++ and C#, with a little bit of PowerShell thrown in for good measure.

 

So let's get to the heart of your issue. First, core count and core speed both matter. Visual Studio will use multiple cores to build several projects at once where dependencies allow. Memory is your friend here as well, but unless you're building massive projects - one of my solutions at work has over 70 projects in it - your builds are unlikely to run into any kind of memory ceiling. We can happily build the solution I mentioned on a dual-core virtual machine with... 4GB RAM, I think.
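
To make sure you actually get that project-level parallelism on the command line too: MSBuild only builds in parallel when asked. A sketch (the solution name is a placeholder):

```shell
# /m (maxcpucount) with no number uses all logical cores to build
# independent projects in the solution concurrently.
msbuild MySolution.sln /m /p:Configuration=Release

# Or cap the core count, e.g. to leave headroom for Docker and the IDE:
msbuild MySolution.sln /m:6
```

Visual Studio exposes the same knob under Tools > Options > Projects and Solutions > Build and Run ("maximum number of parallel project builds").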

 

And if you're expecting upgrading to a Ryzen 9 to cut your build times in half because it benchmarks at double the score of your processor, prepare to be sorely disappointed. Things don't work that way. Your CPU, memory, and storage will all play a role. The newer CPU will help, don't get me wrong, but it won't be this spectacular reduction in build times. And there are quite a few reasons for this.
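
A quick Amdahl's-law sanity check shows why a doubled multi-core score doesn't halve the build time. Assuming, purely for illustration, that 60% of a build parallelizes well and the new CPU doubles throughput on that portion:

```latex
\text{speedup} = \frac{1}{(1-p) + p/s}
              = \frac{1}{0.4 + 0.6/2}
              \approx 1.43 \qquad (p = 0.6,\ s = 2)
```

So under those made-up numbers a 60 s build lands around 42 s, not 30 s; the serial parts (project dependency chains, linking, disk I/O) set the floor.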

 

On your storage, going with NVMe will help as well, but only so much. If you want an idea of what I'm talking about, copy a ton of small files (only a few kilobytes each) from one location on your SSD to another and watch the transfer speed. That is what a solution build is doing: opening and reading a ton of small files, and creating a bunch more small files. A small penalty is incurred every time a file is opened and closed, so moving the same amount of data as many small files pays that penalty far more often than moving it as a few large files. It's why transferring a bunch of small files from one location to another takes longer than transferring a few large ones.
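
If you want to see that per-file penalty yourself, here's a Unix-flavoured sketch (paths are arbitrary; on Windows the equivalent experiment is copying a folder of thousands of tiny files with Explorer or robocopy):

```shell
# Write ~4 MB as 1000 small files, then as one file, and compare the times.
mkdir -p /tmp/smallfiles
time sh -c 'i=0; while [ $i -lt 1000 ]; do
  head -c 4096 /dev/zero > /tmp/smallfiles/f$i
  i=$((i+1))
done'
time head -c 4096000 /dev/zero > /tmp/onefile.bin
# The many-small-files pass is dominated by open/close overhead rather
# than raw transfer speed -- the same overhead a solution build hits.
```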

 

Adding more RAM will help, especially since you're presumably running Docker containers on Windows. (Why?) If those containers aren't all that heavy - and they're Linux containers, or contain software that can happily run on Linux - consider moving them into a Linux VM. They'll require fewer resources there.

 

So, TL;DR:

 

  • Yes, the newer CPU will help, but it won't cut your build times in half. It will help spread the load on what you're running. Given all of what you're trying to do, I'd use that alone as the justification for the newer CPU.
  • Yes, more memory will be to your advantage here, especially given what you're trying to do, but it's unlikely to significantly help your build times either.
  • Yes, NVMe will help, but it's unlikely to be the significant performance boost to your build times that you're hoping for. It will help in a lot of other ways, though, so don't be too focused on your build times.

You're likely to get more bang for buck right now by upgrading your storage to NVMe, but keep the SATA SSD as secondary storage. Upgrade memory next - go to 32GB before deciding to go to 64GB.

Wife's build: Amethyst - Ryzen 9 3900X, 32GB G.Skill Ripjaws V DDR4-3200, ASUS Prime X570-P, EVGA RTX 3080 FTW3 12GB, Corsair Obsidian 750D, Corsair RM1000 (yellow label)

My build: Mira - Ryzen 7 3700X, 32GB EVGA DDR4-3200, ASUS Prime X470-PRO, EVGA RTX 3070 XC3, beQuiet Dark Base 900, EVGA 1000 G6


I essentially have the same workflow as you, and frankly, more cores AND faster cores are going to help you out a lot.

 

I run a 10900X currently, formerly a 7800X, and I saw a pretty decent uptick in performance for my workflow with the switch. I don't think you need to worry about your RAM unless you intend to start doing your work in VMs like I do, though. EDIT: For clarification, what I mean here is that I have separate IDE installs and source repos inside Hyper-V instances for my different client organizations.

 

If I didn't already have the X299 platform, I likely would have gone with a 3900X on an X570 board, and if it were my build I'd definitely migrate to an NVMe drive. For the relatively low cost of a 500GB NVMe drive, I'd definitely want to save the few seconds per compile/launch. It seems small, but it adds up, and that might be a few extra minutes a day for me to do something else. I've even gone so far as having three NVMe drives in my system: one for my OS and games, one for my Visual Studio install (among other IDEs) and source code, and one for my DBs and web server test environments.


Hi,

 

I've been wondering about this as well. I'm a software architect and have been working with Visual Studio for about 10 years. Over the past few years I've been building distributed systems, which generally results in fairly large (60+ project) Visual Studio solutions that (especially when using tools like ReSharper) tend to make for a really slow experience. With the recent release of AMD's latest CPUs, I can't help but wonder whether such CPUs would give a substantial performance increase, allowing us to work faster and spend less time waiting. Unfortunately I have not been able to find any benchmarks on this subject, and I'm also not in a position to test my workflow on such a machine. Any insights would be greatly appreciated! My current (Lenovo P51) specs are:

 

Intel Xeon E3-1505M v6

32GB RAM

m.2 SSD (SAMSUNG MZVLB512HAJQ-000L7)


 

Thanks!

Edited by wouterroos

@wouterroos Interestingly enough, if you watch some of the 3990X reviews that LTT, GamersNexus and Level1Techs did, you'll notice they had some trouble finding workloads to benchmark with, but one thing AMD themselves suggested was large code base compiles as a way of benchmarking so many cores.

 

If you take that thinking and go back down to a more reasonably priced CPU then it would make sense that a 3950X or a 10980XE would see a compilation performance gain over a 4, 6, or 8 core CPU.

 

I'm not sure if anyone has done extensive benchmarking for this specific use case across a range of CPUs. It would be hard for most reviewers to do, given that you would need access to a large repo of projects that you know beforehand are expensive to compile. A hundred random git repos, maybe? MS themselves do mention that the Visual Studio build process utilizes multi-core platforms: https://docs.microsoft.com/en-us/visualstudio/msbuild/using-multiple-processors-to-build-projects?view=vs-2019

 

It's nothing definitive, but I personally did notice a speed up in my VS workflow moving from 6 cores to 10.


@wouterroos The problem you're going to run into is the projects and solutions you're trying to build. I mentioned the dependencies above. Visual Studio (and msbuild by extension) can build the projects in a solution in parallel, up to the processor count of the machine. But how far VS and msbuild can parallelize the build depends on the individual projects and the project dependency tree within the solution.

 

So you may not see a substantial improvement over your existing build times. Visual Studio parallelizes the build by default, but msbuild does not, so on a build server check that you have it enabled (for msbuild, it's a command-line option). And for C++ projects, I know you have to turn on "Multi-processor Compilation" per project, as it isn't enabled by default. With one solution where I work, I enabled that on the VC++ projects and enabled parallel builds in msbuild, and observed an immediate 40% improvement in solution build time on the build agent. And if you aren't using Azure DevOps to set up a build agent, make sure whatever build server you have (e.g. Maven, etc.) is configured so projects and solutions are always built in parallel.
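
Those two settings are independent, which is easy to miss. A sketch of enabling both from the command line (the solution name is a placeholder, and passing MultiProcessorCompilation as a global property assumes your C++ projects don't override it):

```shell
# /m  -> build independent *projects* in the solution in parallel
#        (msbuild's default is a single project at a time)
# MultiProcessorCompilation -> compile source files *within* each C++
#        project in parallel (maps to the compiler's /MP switch)
msbuild MySolution.sln /m /p:MultiProcessorCompilation=true
```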

 

Any build time improvement in a parallel build, though, will see more improvement at the front of the build before the solution dependency tree really starts to kick in.

 

Will these processors result in a substantial improvement to your workflow? That really depends on what you're referring to. If you're talking build times, I've already said the improvement probably won't be all that impressive. But if you have several teams trying to build projects at the same time on a build server, it'll allow more builds to run simultaneously without throttling everything. First look to see whether there are ways to optimize your existing builds before upgrading the hardware.

 

And for individual developer machines: as I mentioned above, I'm currently running a Haswell i7 in my laptop. I would not expect going to a new-generation AMD or Intel CPU to result in substantial improvements to my workflow.


