Running static analysis checks can help keep us on track with our standards, identify unused code, identify similar code blocks, and much more. Run manually, these tools cost time: time waiting for the run to complete, more time if we don’t abandon them after the first run and have to maintain a schedule, and even more time if someone has to keep a spreadsheet somewhere to compare the results from run to run. Add an automated build process, and we can get the same level of information and trending for a modest setup cost.

Up until now, all of the additions to my Continuous Delivery project have been focused on building, testing, and deploying the code. In this post we’ll discuss the potential benefits and costs of adding static analysis to the pipeline and then walk through the details of adding several of those checks into the sample project.

Note: The components outlined in this post have actually been part of the continuous delivery pipeline for a few months, driving changes and cleanup commits from December 21st forward as I continued to build out some of the later components.

Costs

Even with automation, there is still a cost to static analysis. These tools take time and resources to run. The larger our codebase gets, the more time and memory the tools will require to analyze the code and generate results. We have two dials we can use to help reduce the impact on our end-to-end delivery time: tool selection and process location.

The importance and usefulness of the data should drive which tools we use and where we configure them to run. The goal is to limit the impact on delivery time while selecting the right tools for our projects and development style. If we intend to use the data for standards enforcement or as process constraints, then the checks will be more effective as part of the pipeline; if the data is for informational purposes, a parallel step or a separate, scheduled build may be more appropriate. The location determines whether a task runs as part of the primary build chain*, in parallel to the main build chain, or in a separate build entirely, each with a higher or lower impact on that end-to-end delivery time.

* Note: If you map out your build process along with the amount of time each step takes and the amount of time spent waiting in between steps, you can total those numbers to calculate the delivery time for a defect-free commit. This is helpful when evaluating the cost of adding a sequential step, a parallel step, or no step at all.
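For example (with made-up numbers), if compiling takes 5 minutes, unit tests 10, and packaging and deployment 10, with roughly 5 minutes of queue time spread around the steps, a defect-free commit takes about 30 minutes to reach the end of the pipeline. A new 10-minute analysis step added sequentially pushes that to 40 minutes, while the same step run in parallel to a longer stage may add nothing at all.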

Benefits

Static analysis tools provide visibility into the style and structure of our code. There are tools to help capture TODO comments we’ve left in the code, tools to analyze the level of copy and paste, tools to measure code coverage, and even tools that offer so many measures they incorporate query engines (like NDepend). The ability to run them regularly without the ongoing manual cost is great, but by running them as part of our builds we also gain the ability to apply the results to the build quality, potentially failing the build if the code exhibits too much of a bad behavior.

When they’re used correctly, the ability to affect the build quality brings the greatest benefit. Failing the build when the code doesn’t meet our quality standards forces us to address the issues immediately. The changes we committed are still fresh in our minds, making this the fastest (and cheapest) point in time to clean them up.

To support this, we’re forced to define our standards for code quality. When a build fails, we are then making a correction to meet our guaranteed standards, which is easier for our company or project manager to accept than if we had taken it upon ourselves to clean up some code in the middle of a crunch.

Lastly, it helps prevent us from putting those fixes off and building up a mountain of technical debt that tends to bog down execution on so many projects. We are keeping the code more consistent and doing so at the cheapest time possible.

Choosing Some Tools

So there are some costs and some benefits, but what does it look like when these tools are wired in and running as part of the build? Let’s add them and find out.

For the purposes of the sample Continuous Delivery project, I am going to be adding:

  • Compiler Warnings – Scanning for compiler warnings in the build output; these are the cheapest and easiest issues to clean up
  • Open Tasks – Scanning for HACK, TODO, and REFACTOR tags in the source code
  • Duplicate Code – Scanning for duplicate code blocks that could be refactored
  • Rule Based Scanning – Scanning for situations that violate a set of style rules
  • Code Coverage – Evaluating the test coverage levels on our code to identify areas with low coverage

As we go through each one, I’ll identify the external tool and/or Jenkins plugins.

Several of the plugins below produce data that can be captured and displayed on the job summary screen as trends. They require the Analysis Collector Plugin as a prerequisite. There are no setup steps to this core plugin, so after installing it we can continue on to the real work below.

Compiler Warnings

Compiler warnings can be captured easily using the Warnings Plugin. After adding it to the Jenkins server, we can open the CI job and add the checks in the “post-build Actions” section:


Analysis – Compiler Warnings

The plugin has the ability to scan the build output and files in our workspace, with parsers for a number of different tools. In this case I’ve chosen to capture the MSBuild output, but I could just as easily capture output from something like JSLint (speaking of future projects). A trend of the warning count is added to the job summary screen and a summary of the warnings is displayed on each individual run:


Analysis – Compiler Warnings Summary

And drilling in we can see the details:


Analysis – Compiler Warnings Details

In this case, we’re actually looking at the warnings from 49 builds ago. Since the results are incorporated into the build summary, I can access past versions for as long as I keep the job history.
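For reference, the kind of line the parser picks out of the MSBuild console output looks something like this (the file, position, and warning here are purely illustrative):

Controllers\StoreManagerController.cs(52,30): warning CS0168: The variable 'ex' is declared but never used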

Open Tasks

Open tasks refer to those little TODO and HACK tags we litter throughout our code, always with the intent of someday coming back and doing something with them. Using the Task Scanner Plugin, the build can scan the code and add these comments to the trends and build details.

The plugin adds an entry to the “post-build Actions” section, which we can then configure to fit our specific needs:


Analysis – Open Tasks

In the job, we can specify which tags to capture from the code and the filename pattern to use for code files. There is also an advanced section where we can add thresholds. For instance, this is where I could define that 20 or more cases of the high priority tag should cause the build to fail until we get our HACKs back under control.

The output for this plugin is similar to the Compiler Warnings, with a trend on the main project page and a summary in the individual job:


Analysis – Open Tasks Summary

And a details page we can drill into:


Analysis – Open Tasks Details

The scan pulls out not just the presence of these tags, but also the content and location of the messages. This makes it easy to quickly scan them and pick one to try and knock off the list.

Duplicate Code

Code duplication is a code smell that adds a number of risks to your codebase, violating the DRY (Don’t Repeat Yourself) principle.

There are a number of tools out there to detect duplication in code; in this case I chose Simian, importing and displaying the results via the Violations Plugin, which is also used below for analyzing the code against common problems and style rules.

First, we need to run Simian against our codebase to produce a report. I’ve added a Windows batch command as part of the build steps to run it, placing it after the unit tests but before the test deployment.


Analysis – Simian Execution

I’ve told Simian to run against all *.cs files in my project directory, excluding the sample data file that is used to generate a sample Entity Framework model. At the tail end I force it to return an exit code of 0 so the job will continue to run the later steps and post-build analysis. Simian will helpfully return a failure exit code when it finds violations, but that’s not useful for this case.
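The exact batch step will depend on where Simian lives on the build server; the install path, version number, output filename, and excluded file below are assumptions for illustration, but the shape of the command is roughly this:

REM Run Simian across the C# files, excluding the generated sample data class, and write XML results
"C:\AdditionalScripts\Simian\simian-2.3.33.exe" -formatter=xml:"%WORKSPACE%\simian-results.xml" -excludes="%WORKSPACE%\MvcMusicStore\Models\SampleData.cs" -includes="%WORKSPACE%\**\*.cs"
REM Simian exits non-zero when it finds duplicates; force success so the later steps and post-build analysis still run
exit 0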

In the “post-build Actions” section, I’ve provided the path to the output file Simian creates:


Analysis – Simian – Post Build

We also have the ability to define thresholds for good, stormy, and unstable, based on the number of duplicate blocks found.

In the summary screen for the job and on each individual run screen, the Violations plugin adds a trend that displays the number of duplicate blocks found over time.


Analysis – Simian – Summary Trend

As we drill into the details, we get a list of the files where duplicates occur, and can then drill down into those to see the actual code that was found.


Analysis – Simian – Detail Trend

In this case it has found an interface and an implementation of that interface that have a block of 11 lines of code in common.

Code Standards

When it comes to code standards, the two main engines I had to choose from were Gendarme and FxCop. The Violations Plugin can handle either, so I opted for Gendarme (I believe it was faster for my specific scenario).

As with Simian above, I need to first run the executable against the codebase, then consume the generated report post-build to feed the trends and details.


Analysis – Gendarme – Batch Command

I’ve added the command for Gendarme after the unit tests and before the Simian command above, pointing it at the dll for the project and specifying I want XML output. As with Simian, I’ve forced the batch to return an exit code of 0 so the run will continue through to process the later steps.
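As a rough sketch (the Gendarme install path, output filename, and assembly location are assumptions; adjust them for your own server), the batch step looks something like this:

REM Run Gendarme against the compiled web project assembly and write the findings as XML
"C:\AdditionalScripts\Gendarme\gendarme.exe" --xml "%WORKSPACE%\gendarme-results.xml" "%WORKSPACE%\MvcMusicStore\bin\MvcMusicStore.dll"
REM Gendarme exits non-zero when it reports defects; force success so the remaining steps still run
exit 0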

Also like Simian, we specify the location of the XML file in the Violations section of the “post-build Actions”:


Analysis – Gendarme – Post Build

The Violations plugin overlays the Gendarme results on the chart we saw above:


Analysis – Gendarme – Summary Trend

And the detail results are also very similar:


Analysis – Gendarme – Detail Trend

However, once we drill in, we not only get the code view but can also hover over the identified warning locations for details on the rule Gendarme is reporting on.


Analysis – Gendarme – Code Details

The early downward trend above corresponds to the commits starting on December 21st, when I started cleaning up some of the worst offenders.

Code Coverage

Code coverage identifies the level of test coverage in our codebase by monitoring the tests as they run against our assemblies. For this step I used OpenCover combined with ReportGenerator to convert the output into an HTML report, and the HTML Publisher Plugin to incorporate the generated reports into the job results.

This one took more steps for setup:

  • Download or clone the OpenCover source from GitHub
  • Build it using the included Build.bat script
  • Copy the compiled binary to my build server
  • Copy the ReportGenerator folder to the server
  • Replace the existing MSTest batch command with a call that runs it through OpenCover and processes the results with ReportGenerator (below)
mkdir "%WORKSPACE%testresults"
"C:AdditionalScriptsOpenCoverOpenCover.Console.exe" -target:"C:Program Files (x86)Microsoft Visual Studio 10.0Common7IDEmstest.exe"  -targetargs:"/resultsfile:"%WORKSPACE%testresultsMyTests.Results.xml" /testcontainer:"%WORKSPACE%MvcMusicStoreTestsbinReleaseMvcMusicStoreTests.dll" /nologo"  -mergebyhash -filter:"+[MvcM*]*" -output:"%WORKSPACE%testresultsopencovertests.xml"
"C:AdditionalScriptsReportGeneratorReportGenerator.exe" "%WORKSPACE%testresultsopencovertests.xml" "%WORKSPACE%testresultshtml" HtmlSummary
"C:AdditionalScriptsReportGeneratorReportGenerator.exe" "%WORKSPACE%testresultsopencovertests.xml" "%WORKSPACE%testresultshtml" Html

This batch step (1) creates the test results folder, then (2) runs OpenCover with arguments giving the location of the MSTest executable, the flags to pass to MSTest, a filter to limit coverage to the main assemblies, and an output filename. The last lines run ReportGenerator against the results, producing (3) the HtmlSummary report and (4) the detailed HTML reports for each file.

Checking the “Publish HTML reports” option in “post-build Actions”, we can specify the folder, index page, and title to use for the generated reports.


Analysis – Coverage – PostBuild

This adds a sidebar link to the job summary, which links to the summary we generated and the detail pages associated with it.


Analysis – Coverage Link

Analysis – Coverage Summary Report

Each of the links has a details report behind it that includes code statistics, like cyclomatic complexity, and a copy of the code with markers to indicate covered vs uncovered lines.

Summary

I added these tools to my project as part of the CI build, placing them sequentially in the process. This works for a smaller project but needs to be evaluated more carefully on a larger one.

The results from these checks have helped me clean up my code, identify gaps in the test coverage when I accidentally created them, and build up a good amount of historical knowledge about the quality of the code itself. The setup time was not that expensive, and I can continue to take advantage of the benefits in every future build.