Development build machine migration from RAM-disk to RAIDed SSDs?

Hello everyone.

In our current setup at work we use a 16 GB RAM-disk to run our C/C++ builds on, in order to get them through as fast as possible. A build currently takes about an hour.
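For reference, a RAM-disk like this on Linux is typically just a tmpfs mount, something along these lines (the mount point and size are the only interesting bits):

    sudo mkdir -p /mnt/ramdisk
    sudo mount -t tmpfs -o size=16g tmpfs /mnt/ramdisk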

Then I was reminded (iirc) of how Linus has a RAIDed SSD setup and thought: why not get some SSDs, RAID them up and use them to build on? Even the smaller ones, 32 or 64 GB, would give us more space than we have now.

As we are running on a server system, the RAM slots are limited and the modules are expensive. Naturally we need the RAM-disk size plus the normal amount for operation. A 64 GB SSD costs me about the same as a 4 GB ECC module.

 

Any ideas or experience?

Thanks!


Compilation seems like a task that would be much more constrained by CPU performance than by storage performance.

Anyway, from my experience: 1) setting up software RAID through mdadm or Webmin is easy as pie, and 2) servers tend not to have very many SATA ports and power connectors, and SAS SSDs are A LOT more expensive.
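To illustrate 1), a minimal RAID 0 sketch with mdadm; the device names and mount point are placeholders for whatever your system actually shows:

    sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    sudo mkfs.ext4 /dev/md0
    sudo mkdir -p /mnt/build && sudo mount /dev/md0 /mnt/build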


Umm, I have problems with this. DDR3-1333 can do around 10 GB/s. You'd need something like 20 SATA SSDs in a RAID 0 that theoretically scales perfectly just to match the speed of the RAM drive. How much speed are you willing to lose, given that your current solution is on paper about 20 times as fast as your plan? Or is there something funky going on with your RAM drive so that it doesn't perform that well?
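To spell out the arithmetic, assuming roughly 550 MB/s sequential per SATA III SSD: 10 GB/s / 0.55 GB/s per drive ≈ 18 drives, hence "like 20". And that is with perfect RAID 0 scaling, which you will not get in practice.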


Thanks for all the replies!

 

We are running on a slightly older dual-socket Xeon server. It is a dedicated slave build machine controlled by a central Jenkins server; all it does is build, again and again.

 

Full disclosure: I forgot to mention that we not only build the C/C++ code, we also build the embedded Linux on it. But I doubt that makes much of a difference.

 

Our initial move to the RAM-disk was to mitigate a file-access bottleneck. We check out from the repository (creating files), compile and test a lot of code (which creates and uses a lot of files), build the distribution (more files) and finally create the image ... you can see why we thought file access might be a problem.
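Before we spend money we will probably sanity-check that assumption with something crude along these lines, run once on the RAM-disk and once on a test SSD (paths are illustrative):

    mkdir -p /mnt/ramdisk/bench && cd /mnt/ramdisk/bench
    time sh -c 'seq 1 100000 | xargs touch'   # create 100k empty files
    time find . -type f -delete               # and delete them again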

 

Overall it is a question of what options we have and what gives the best bang for the buck. I'd hate to just throw money at the problem and find out later that it only made a marginal difference.

Going for RAIDed SSDs was just an idea we had, as it might be easier to extend than the ECC RAM. I can throw another SATA controller in, but the RAM slots are limited.


Compiling does not need a lot of bandwidth but fast response times, as you read and write a lot of small files. A RAID setup creates overhead and latency and is really only good for large files.

Adding a single NVMe SSD like the Samsung 950 Pro is way better than a RAID array, and potentially even better than a RAM disk, as it does not require CPU time.
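If you want actual numbers for that access pattern, a small random read/write test with fio against each candidate is a reasonable first measurement (directory, size and runtime are placeholders):

    fio --name=smallio --directory=/mnt/target --rw=randrw --bs=4k \
        --size=1G --ioengine=libaio --iodepth=16 --runtime=60 \
        --time_based --group_reporting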

