February 7, 2017

DevOps Digest 309: Automatic Integration


Last time, we got code coverage analysis running on top of the unit tests from Issue 307. We could go on in this vein for quite a while because there are so many things that may usefully be added to debug builds. Findings from static analysis, complexity and refactoring metrics, and so forth can all catch issues early and provide immediate feedback to developers.

But we’re going to set those aside because we’ve got too many other fish to fry. Producing debug builds using Continuous Integration (CI) tools is still just the first step in our DevOps pipeline. If we’re going to deliver results in an automated fashion, we’re also going to need builds for testing and release candidates. Thus, today, we’ll talk about integrating code automatically and producing those builds.

Getting Reacquainted with Streams

Back in Issue 303, we created a “Development” stream beneath our “Main” stream to give developers their own place to work, isolated from other teams. In a real production environment, we’d probably have a more elaborate setup. For example, we’d have streams for different groups or teams between the “Main” stream and where daily development takes place, because those would provide a better place for quality assurance to verify units of work.

Key to trunk-based development (TBD) is the notion that stability always rises as you move up the hierarchy. Ideally, TBD lets us take a known-good release from “Main” at any moment. But for our sample application, we’ll simply deliver our “Development” code to “Main” and leave it at that. Toward that end, let’s look at the stream graph:

[Image: stream graph with “Development” beneath “Main”]

The green arrow pointing upward from the “Development” stream indicates that our work there is waiting to be delivered back to the “Main” stream, from which we’ll take our release builds for testing. In a real production environment, integrate contributions regularly in both directions to make sure everything comes together smoothly.

Integrating Your Code

Perforce Helix provides several options for carrying out this everyday task. Users working in the Helix Visual Client (P4V) can simply check their dashboard and click “Copy to Main”. The dashboard is an oft-overlooked yet surprisingly handy tool, which is why I keep plugging it. Alternatively, right-click the “Main” stream and choose the option to integrate to it. Both approaches will walk you through the process, creating a workspace if necessary.

[Image: integrating to “Main” in P4V]

Integrating Your Code Automatically

But of course, any method in P4V requires user interaction, while we’re after automation. So we’ll work from the command line. The most traditional method to accomplish that would involve using a workspace for the “Development” stream to merge down via a set of commands, then another workspace for the “Main” stream to copy up via another set of commands.
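For reference, that traditional two-workspace approach might look something like the following sketch. The stream depot path //DevOpsSample and the workspace names dev-ws and main-ws are assumptions for illustration; your depot and workspace names will differ:

```shell
# Merge down (Main -> Development), from a workspace bound to Development.
p4 -c dev-ws sync
p4 -c dev-ws merge -S //DevOpsSample/Development -r
p4 -c dev-ws resolve -as
p4 -c dev-ws submit -d "Merged Main down to Development."

# Copy up (Development -> Main), from a second workspace bound to Main.
p4 -c main-ws sync
p4 -c main-ws copy -S //DevOpsSample/Development
p4 -c main-ws submit -d "Copied Development up to Main."
```

That works, but it means provisioning and syncing two workspaces on the build machine.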

Let’s do something a little different since it offers an opportunity to illustrate how the DVCS features in Helix can give DevOps professionals a boost. We’re going to leverage DVCS features to clone the two streams we need to work with, then walk through the process of integrating work in both directions by switching streams in place.

Helix’s DVCS features are the industry’s most advanced in several respects, not least of which is the precision they bring to getting only the files you need to do your job. Other DVCS systems give you the “whole enchilada,” forcing you to clone entire repositories, but Helix lets you select only what you need for greater performance, efficiency, and a significantly smaller local footprint.

And again, Helix gives you choices, allowing you to clone by either file specification or what we call a remote specification. The former is great if you just need to grab a single folder and everything in it. We don’t want a single folder; what we want is to include two separate streams from the server when cloning a local repository.

To achieve that, we’re going to create a new remote spec to select the content we want. Execute the following command to create a new remote spec on your server:

p4 remote DevPromoteToMain

That invokes our editor with a sample remote-spec template already in place. Let’s customize it to include both the “Development” and “Main” streams. The following screenshot shows how to use Notepad to fill out the details.

[Image: remote spec opened in Notepad]

Notice the lines at the bottom, under “DepotMap”. Each line will map content into our local repository. Here, we’ve specified that we want to have two local streams, “development” and “main”, corresponding to our “Development” and “Main” streams on the server. With that new remote spec, we use the following command in an empty folder to limit the clone to the content we specified:
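If you’re filling out the spec in a plain editor, it looks roughly like this. The server’s stream depot name, //DevOpsSample, is an assumption here; local DVCS streams live under //stream by default:

```
RemoteID:    DevPromoteToMain
Address:     DevOps:1666
Description: Clone Development and Main for local integration.
Options:     unlocked nocompress
DepotMap:
    //stream/development/... //DevOpsSample/Development/...
    //stream/main/... //DevOpsSample/Main/...
```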

p4 clone -p DevOps:1666 -r DevPromoteToMain

With our local repository in place, the DVCS features in Helix make it simple to merge down and copy up. Because we’ve just fetched the source afresh, there’s no need to sync a workspace. We need only switch to our development stream and integrate any changes from main. To make the switch in place, we issue the following:

p4 switch development

That activates our local “development” stream, updating the contents of our local repo folder as needed. Executing the following command will then merge down any content from “main”:

p4 merge --from main

As indicated by the downward-facing arrow from “Main” in the earlier illustration, there is no work to be merged. So our next task is to switch back to our main stream and copy up changes from development. That requires the following two commands:

p4 switch main

p4 merge --from development

Unlike before, this time we’ll get all the content from the last seven changes we submitted in prior articles. The screenshot shows the messages to expect concerning how the various content needs to be resolved.

[Image: merge output showing content to be resolved]

Conflicts are common when you merge content from one stream to another. The Helix approach to reconciling those conflicts is its resolve process. If you follow best practices, such as separating developer tasks conceptually in the organization of your code and merging contributors’ work frequently, you can usually resolve conflicts automatically. We’ll attempt that with the following command:

p4 resolve -as

The “-as” argument tells the resolve engine to do its thing automatically whenever it’s “safe” to do so. (See more on merge resolution.) In this case, it tells us everything has been resolved successfully and ultimately leaves us with changes to commit to our main stream. We can review the actual set of changes to be submitted with a simple “p4 opened” command:

[Image: output of “p4 opened” listing the changes to be submitted]

The following two commands allow us to submit that work and push back to the server:

p4 submit -d "Merged development to main."

p4 push

When the results have finished scrolling off the screen, we can go back to P4V to see how our stream graph looks after delivering all outstanding work. Both arrows are now gray, indicating that our task is complete.

[Image: stream graph with both arrows gray]

The DVCS approach simplifies many tasks, particularly because Helix offers unparalleled selectivity in cloning. It’s great for any DevOps tasks requiring a fresh copy of source, and creating a shadow clone, or cloning only the head, would save even more time, bandwidth, and disk space.


Release Management Best Practices in Jenkins

Now that we’ve seen how easy it is to integrate streams, we’ll close with a few recommendations for Jenkins. Unfortunately, there is no single best practice to recommend. Doing debug builds is pretty straightforward, but different organizations approach release builds in a variety of ways.

Some places kick off a release build once a week on Sunday, so that when QA shows up Monday morning, a fresh build will keep them occupied for the rest of the week. Other places run a release build every night, or even after every successful debug build. It’s ultimately up to you.

Similarly, organizing release candidates and maintaining them varies from one shop to another. If you want to keep every release candidate build separate, make sure your Jenkins project creates a new folder for each release build and clones fresh source therein every time. If you prefer to have a single release candidate on hand, then re-use a single folder.

We always recommend that you start with a clean folder for each release candidate and clone fresh source. There are too many ways that having old artifacts lying around can mask dependency issues.
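A minimal sketch of that practice, assuming the BUILD_NUMBER variable Jenkins provides and the same server address and remote spec used above; the folder layout is illustrative:

```shell
# Create a clean folder for this release candidate and clone fresh source,
# so no stale artifacts from earlier builds can mask dependency issues.
RC_DIR="release-candidates/rc-${BUILD_NUMBER}"
mkdir -p "$RC_DIR"
cd "$RC_DIR"
p4 clone -p DevOps:1666 -r DevPromoteToMain
```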

By now you should be sufficiently familiar with Jenkins and its operations to encode the merge-down-copy-up mechanism we’ve used to integrate source code in an automated fashion. In rare cases when resolve errors crop up, your Jenkins project will fail. That’s when you involve the development team to figure it out.
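As a sketch only, the mechanism from this issue could be encoded as a Jenkins shell build step along these lines, run from a fresh clone. The error handling is an assumption: p4 reports “nothing to integrate” as an error, hence the tolerance on the merge down, and any files the safe resolve skips are surfaced so the job fails:

```shell
#!/bin/sh
# Fail the Jenkins build if any step errors out.
set -e

# Merge down: pick up anything new on main. If this opens files,
# they would need to be resolved and submitted on development first.
p4 switch development
p4 merge --from main || true   # tolerate "already integrated"

# Copy up: promote development's work to main.
p4 switch main
p4 merge --from development
p4 resolve -as

# Any files -as could not safely resolve remain unresolved;
# "p4 resolve -n" lists them, and we fail the build if any exist.
if p4 resolve -n 2>/dev/null | grep -q .; then
    echo "Unresolved merges need developer attention." >&2
    exit 1
fi

p4 submit -d "Automated merge of development to main."
p4 push
```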

Whatever your choices, you’ll be ready to easily leverage the existing framework we have in place to produce a release build with just one more tidbit of information. The “msbuild.configuration” property defined by our build script is set to “Debug” by default, but it may easily be overridden on the command line. For example, in place of the usual command line to kick off a build, use the following instead:

nant -buildfile:Build\DevOpsSample.build -D:msbuild.configuration=Release

That tells NAnt to override the normal value of “msbuild.configuration” and use “Release” instead, which the MSBuild system will recognize, subsequently producing a release build. One change in your new Jenkins project, and you’ll have a release build in no time.

That’s it for integrating streams automatically and producing release builds. In Issue 310, we’ll look at a few final common tasks associated with release candidates to conclude the Continuous Integration chapter of our DevOps journey.


You Ask, We Answer

As previously mentioned, this is your roadmap to creating a successful DevOps pipeline. Don’t understand something? Just ask. Need to dive a little deeper? Send an email to [email protected] with your questions. Then, stay tuned for a live Q&A webinar at the end of this series.

Get DevOps Digest Sent to Your Inbox

You don’t need to remember to check back with us each week. Instead, get the digest delivered directly to your inbox. Subscribe to our 25-week DevOps Digest and we’ll get you where you need to go, one email at a time.

See Perforce Helix in Action!

Join us for a live demo every other Tuesday and see the best of Perforce Helix in 20 minutes. Save your spot!