January 16, 2017

DevOps Digest 308: Code Coverage Analysis


In Issue 307 we got basic, extensible unit tests working, making it easy to add new tests and unit-test assemblies to the project over time. Our use of build scripting made it possible to add that functionality and leverage it in Jenkins with very little effort, importing test results and associating them with each build.

Seeing unit test reports is important and seeing the report with each build gives you confidence things work the way you intend. What they don’t tell you, however, is how much of your application is actually being exercised by those unit tests. A suite of passing unit tests doesn’t mean much if they cover only a fraction of your code. So let’s take things a step further with code coverage analysis.

Code Coverage Tools

Code coverage analysis works by “watching” the execution of software and matching the observed details against a database of symbols to report on which lines of code were touched. That’s an oversimplification, but the highlights for our purposes are that we’ll need to determine (1) what to execute and (2) which tools to leverage for analysis and reporting on the results.

As usual there are multiple alternatives, but we’ll choose two of the more popular for the .NET platform: OpenCover and ReportGenerator. The former does the “watching” and records observations, while the latter chews through that data and spits out helpful reports in various formats.

Installing OpenCover seems simple enough, insofar as its latest version may be downloaded easily from the project release page. But if you choose the installer, you’ll find it places the files into a user-specific folder: “%LOCALAPPDATA%\Apps\OpenCover” (i.e., “C:\Users\[userid]\AppData\Local\Apps\OpenCover”) by default. So you’ll either need to (1) use that installer and move the files, or (2) download the archive and unzip it manually. Either way, putting the files in a more accessible location (e.g., “C:\Program Files (x86)\OpenCover”) is important, so the tools are available to every account on the machine, including the one running your build agent.

The ReportGenerator tool has a similar project release page, so you need only download its archive and unzip it to an appropriate location. Again, for the sake of consistency, choose a broadly accessible location (e.g., “C:\Program Files (x86)\ReportGenerator”).

Deciding What to Execute

The best kind of code coverage analysis is one that is performed against the entire application, but that’s beyond the scope of our current focus on continuous integration. That kind of holistic coverage analysis is really part of “all up” or “integration” testing, which is usually far more involved and can burn a lot of time for significant applications.

Instead, we can reap significant value for our debug builds from what little investment we’ve already made in unit testing. Insofar as our unit tests exercise the components that comprise an application—yet another incentive for component-based development (CBD)—they also provide some tasty, low-hanging, code-coverage fruit just waiting to be picked. So let’s add some new build targets to our existing build script to conduct code coverage analysis via our unit tests, and generate a nice HTML-formatted report from the results.

Once finished, we can pull those results into Jenkins for another build-quality-reporting win. Speaking generally, any HTML output that’s meaningful to your build process can be easily imported by Jenkins to offer useful reports.

Setting Up for Code Analysis

First, let’s update our build script to include some new properties and targets. Here are the new properties we’ll be adding for code coverage analysis:

    <property name="opencover.console" value="C:/Program Files (x86)/OpenCover/OpenCover.Console.exe" />
    <property name="coverage.results.path" value="../CoverageResults" />
    <property name="report.generator.console" value="C:/Program Files (x86)/ReportGenerator/ReportGenerator.exe" />

If you understood the unit-testing properties we added last time, these should be pretty self-explanatory. The “opencover.console” property “points” to the OpenCover executable. The “coverage.results.path” property specifies where the code coverage data and reports should be stored. And finally, the “report.generator.console” property “points” to the ReportGenerator tool.

Generating Reports

Turning to targets, let’s start with one that cleans the code coverage results. As with unit testing, the following target simply deletes the entire results folder. Though it doesn’t appear in this article, I’ve also added this new target as a dependency of the “Clean” target, so we can be assured that a top-level clean will get rid of all the extraneous stuff.

    <target name="CleanCoverageResults" description="Cleans the code coverage test results">
        <delete dir="${coverage.results.path}" />
    </target>
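For illustration, a top-level “Clean” target with that dependency added might look like the following. This is a sketch; the other dependency names are assumptions based on earlier issues, not taken from this article:

```xml
<!-- Sketch: CleanCoverageResults added as a dependency of the top-level
     Clean target. The names CleanBuild and CleanTestResults are assumed
     examples, not targets shown in this article. -->
<target name="Clean" description="Removes all build outputs and results"
        depends="CleanBuild, CleanTestResults, CleanCoverageResults" />
```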

The next target is slightly more complicated, but should be familiar if you understand how unit testing works.

    <target name="RunCoverageTests" description="Executes OpenCover on unit tests for code coverage results" depends="CleanCoverageResults">
        <mkdir dir="${test.results.path}" />
        <mkdir dir="${coverage.results.path}" />
        <foreach item="Line" in="${test.assemblies.filename}" property="assemblytext">
            <exec program="${opencover.console}">
                <arg value='-register:user' />
                <arg value='-target:"${nunit.console}"' />
                <arg value='-targetargs:"--result:${test.results.path}/${path::get-file-name(assemblytext)}.xml;format=nunit2 ${assemblytext}"' />
                <arg value='-output:"${coverage.results.path}/${path::get-file-name(assemblytext)}.xml"' />
            </exec>
        </foreach>
    </target>

Note well that we’re leveraging that same list of assemblies used by our unit testing. This approach guarantees any new unit-test assembly added to that list will automatically be picked up by our code coverage tests, which helps keep things simple with only one place to look for that information.

Our target iterates over that file, executing the unit tests and producing coverage output for each assembly. Note, however, that the command-line arguments passed to NUnit duplicate those in the unit-testing target. It’s possible to eliminate that duplication with another property, but the syntax is ugly enough that we’ll live with the duplication here.
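For the curious, here is roughly what that solution might look like. This is a sketch, not code from the article; the property name “nunit.args” is invented, and NAnt’s dynamic="true" attribute defers evaluation so the property can reference the loop variable:

```xml
<!-- Sketch only: factoring the duplicated NUnit arguments into a
     dynamically evaluated property. "nunit.args" is an invented name. -->
<property name="nunit.args" dynamic="true"
          value='--result:${test.results.path}/${path::get-file-name(assemblytext)}.xml;format=nunit2 ${assemblytext}' />

<!-- Inside the foreach loop, the exec argument then shrinks to: -->
<arg value='-targetargs:"${nunit.args}"' />
```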

With a target to produce code coverage test results, we need only one more to produce a report. We could build this functionality into the coverage-testing target, but the additional freedom of an independent target can be helpful. If you keep data collection separate from report generation, you’re free to implement as many code coverage tests as you like; the report-generation step will simply amalgamate all the results into one comprehensive report.

    <target name="GenerateCoverageReport" description="Runs ReportGenerator to create an HTML report">
        <exec program="${report.generator.console}">
            <arg value='${coverage.results.path}/*.xml' />
            <arg value='${coverage.results.path}' />
            <arg value='Html' />
        </exec>
    </target>

Generating the coverage report requires nothing more than invoking the ReportGenerator tool. Pass it a couple of arguments for the input files and output folder, along with an argument specifying HTML-formatted output, and you’re set. We could put the actual report in a different folder or delete the original XML input files, but keeping everything in a single folder makes it easier to archive in Jenkins. Having made these changes to our build script, invoke the whole process, start to finish, with the following command:

nant -buildfile:Build\DevOpsSample.build Clean Build RunCoverageTests GenerateCoverageReport

Again, this makes it easy for any new developer on the project to leverage our work from the command line as desired. Plenty of developers know how to run unit tests, but not nearly as many know how to generate code coverage reports and review the results. This illustrates how embedding such functionality in build scripts “packages it up” for all to use easily.
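If you want to make that command even easier to remember, the whole sequence could be wrapped in a single convenience target. This is a sketch; the “CoverageReport” name is invented, not part of the article’s build script:

```xml
<!-- Sketch: one convenience target that chains the full coverage
     workflow. The name "CoverageReport" is an invented example. -->
<target name="CoverageReport"
        description="Cleans, builds, runs coverage tests, and generates the HTML report"
        depends="Clean, Build, RunCoverageTests, GenerateCoverageReport" />
```

With that in place, the earlier command shortens to `nant -buildfile:Build\DevOpsSample.build CoverageReport`.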

Importing Analysis Results into Jenkins

Now that we’ve got code coverage analysis and reporting in our build script, let’s update Jenkins to execute it and import the results. Because we already have the unit tests running, we need only change a single line of our pipeline script: where it invokes “RunUnitTests”, have it invoke “RunCoverageTests GenerateCoverageReport” instead.

To import the code coverage report, we’ll use the pipeline script generator. To invoke it, edit the project configuration and click “Pipeline Syntax”. On the resulting page, choose the “publishHTML” step option and fill in the desired values. Import the contents of the folder as a “Unit Test Coverage” report and specify the “index.htm” file (the default produced by GenerateCoverageReport when writing HTML output) as the root document.

[Screenshot: publishHTML step configuration]

Clicking the “Generate Pipeline Script” button produces the highlighted text at the bottom, which we then copy and paste into our project’s pipeline script. The final version should resemble the following:

[Screenshot: final pipeline script with the publishHTML step]

All that remains is to save those changes and kick off another manual build to enjoy the fruits of your labors. When the build completes, click the project name on the main Jenkins dashboard. By now you’ve probably noticed the “Test Result Trend” graph on your project page, which Jenkins updates from one run to the next. That’s handy for spotting regressions and generally enjoying the upward trend as more unit tests are added. New to the project page, however, is a “Unit Test Coverage” link on the left. Clicking that provides the following report:

[Screenshot: code coverage report summary page]

That top-level page provides header information and a list of the assemblies involved in the testing. You may click each item in the list for more details.

[Screenshot: per-assembly coverage detail]

The test code at the very bottom of the screenshot should be familiar from the previous article. The lines highlighted in green were exercised, with the hit count shown to the left. In this case, line twelve was executed only once. As you can see, a little work yields impressive, useful results.

This kind of code coverage analysis, while incomplete, provides developers with solid guidance for where more unit tests are needed. It also pushes developers toward developing and testing in isolation, which is almost always a win for simplicity and quality.

We're taking a short hiatus to run some QA testing of our own, so we'll see you back here on February 7 for a lesson in creating manually triggered release builds.

You Ask, We Answer

As previously mentioned, this is your roadmap to creating a successful DevOps pipeline. Don’t understand something? Just ask. Need to dive a little deeper? Send an email to info@perforce.com with your questions. Then, stay tuned for a live Q&A webinar at the end of this series.

Get DevOps Digest Sent to Your Inbox

You don’t need to remember to check back with us each week. Instead, get the digest delivered directly to your inbox. Subscribe to our 25-week DevOps Digest and we’ll get you where you need to go, one email at a time.

See Perforce Helix in Action!

Join us for a live demo every other Tuesday and see the best of Perforce Helix in 20 minutes. Save your spot!