February 20, 2015

DevTalk - The Road to Continuous Delivery: Driving Lessons

Continuous Delivery

At Perforce, we don’t just preach Continuous Delivery, we practice it. In our latest webinar, Laurette Cisneros, our Engineering Tools Manager, talked about the challenges of implementing Continuous Delivery with codebases of different ages and sizes for both SaaS and on-premise products.

She shared lessons learned at the start of our journey to automation, including how we convinced the skeptics, how we determined the right route, and the hurdles we encountered along the way. She also gave tips for getting started in your own organization and answered a number of great questions from our online audience (see below).

If you missed it, watch the on-demand version HERE.

We received many excellent questions during the live broadcast and Laurette has taken the time to answer a few of them.

Q: I'm interested in your perspective on moving "configuration" and "binaries" into source control. Not a simple process.

A: It is all about having a plan: knowing where you are today, where you want to be in the future (and what that looks like), and putting a plan in place to get there. When we were setting up our build framework, we went down a few "box canyons" in our design until we hit on the idea that we could turn our "build-build" scripts into a build library. Then we make calls from the framework into the build libraries (we call it the Perforce Build System). These are all versioned. It's harder to version the code that lives inside the framework pieces, but we devised a way to dump that code out and version it every 5 minutes as well (in the case of Jenkins, it involves using our Git Fusion product, what I call "drinking our own champagne").
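
To make the idea concrete, here is a minimal sketch of what "dump the framework config every few minutes and version it" could look like for Jenkins. The host name, repo path, and credentials are placeholders, not a description of Perforce's actual scripts; the only real pieces assumed are the standard Jenkins config.xml endpoint and an ordinary Git push to a repo fronted by Git Fusion.

```python
# Hypothetical sketch: snapshot Jenkins job configs into a Git repo whose
# remote is served by Perforce Git Fusion, so each push becomes a changelist.
import subprocess
import requests

JENKINS_URL = "https://jenkins.example.com"     # placeholder
CONFIG_REPO = "/var/lib/jenkins-config-mirror"  # local clone of the Git Fusion repo

def dump_job_configs():
    # List all jobs via the Jenkins JSON API, then fetch each job's config.xml.
    jobs = requests.get(f"{JENKINS_URL}/api/json").json()["jobs"]
    for job in jobs:
        config = requests.get(f"{JENKINS_URL}/job/{job['name']}/config.xml").text
        with open(f"{CONFIG_REPO}/{job['name']}.xml", "w") as f:
            f.write(config)

def commit_and_push():
    # Commit any changes and push; Git Fusion records the push in Perforce.
    subprocess.run(["git", "-C", CONFIG_REPO, "add", "-A"], check=True)
    subprocess.run(["git", "-C", CONFIG_REPO, "commit", "-m", "Snapshot Jenkins config"],
                   check=False)  # commit simply fails harmlessly when nothing changed
    subprocess.run(["git", "-C", CONFIG_REPO, "push", "origin", "master"], check=True)

if __name__ == "__main__":
    dump_job_configs()
    commit_and_push()
```

Run on a 5-minute cron (or as a Jenkins job itself), this gives a versioned history of the framework configuration alongside the build libraries.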

For binaries, we have two Perforce servers we use for builds. The first is our build server, where all the Continuous Integration builds are initially stored. Some of the artifacts that get checked in are large OVA builds. We have cleanup algorithms in place for all the build artifacts on the build server, such as keeping only the last 5 OVAs in a non-release line. We also have our product server, where we keep everything. We have monitoring in place to ensure we keep an eye on disk space (and allocate more when needed).
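
As a rough illustration of a "keep the last 5 OVAs" policy, here is a small sketch using standard p4 commands. The depot path, the chronological naming assumption, and the use of p4 obliterate to reclaim space are illustrative choices, not Perforce's actual cleanup scripts.

```python
# Hypothetical sketch of an artifact-cleanup pass on the build server:
# keep the newest 5 OVAs in a non-release line, remove the rest.
import subprocess

DEPOT_PATH = "//builds/main/ova/*.ova"  # placeholder non-release line
KEEP = 5

def list_ovas(depot_path):
    # 'p4 files' lists depot files; lines look like:
    # //builds/main/ova/foo.ova#3 - add change 12345 (binary)
    out = subprocess.run(["p4", "files", depot_path],
                         capture_output=True, text=True, check=True).stdout
    return [line.split("#")[0] for line in out.splitlines() if line.strip()]

def cleanup(depot_path, keep=KEEP):
    files = list_ovas(depot_path)
    # Assumes file names sort chronologically (e.g. they embed the build number);
    # everything older than the newest 'keep' OVAs is removed to reclaim space.
    for old in sorted(files)[:-keep]:
        subprocess.run(["p4", "obliterate", "-y", old], check=True)

if __name__ == "__main__":
    cleanup(DEPOT_PATH)
```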

So having a plan is essential.

Q: For Continuous Delivery, we require the test suite to be fully automated. Are test cases automated for every new story being implemented (Test-Driven Development)? What percentage of tests are automated? How is test time reduced?

A: As mentioned in the presentation, we are at about 75% test coverage for regression tests, with more tests being written. In addition, as developers make changes, for example to the Perforce server (the Versioning Engine), each change gets checked in with an updated or new unit test. These test suites are triggered automatically when a new build completes (the build itself having been triggered by the changes). The QA teams own their test suites and their automation, and they use metrics to monitor and improve test times.
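
For readers wondering what "triggered automatically when a new build completes" can look like, here is one small sketch: a post-build step that kicks off a downstream regression job via the Jenkins parameterized-build endpoint. The job name, URL, and credentials are placeholders; the same effect is commonly achieved with built-in downstream-job triggers instead of a script.

```python
# Hypothetical sketch: start a QA regression suite for a just-finished CI build.
import requests

JENKINS_URL = "https://jenkins.example.com"   # placeholder
TEST_JOB = "p4d-regression-suite"             # placeholder downstream job

def trigger_tests(build_id):
    # POST to the parameterized-build endpoint so the suite knows which build to test.
    resp = requests.post(
        f"{JENKINS_URL}/job/{TEST_JOB}/buildWithParameters",
        params={"BUILD_ID": build_id},
        auth=("ci-bot", "api-token"),         # placeholder credentials
    )
    resp.raise_for_status()

if __name__ == "__main__":
    trigger_tests("p4d-main-1234")
```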

Q: In a CD environment, how do you convince developers to fix a broken build (or any of its related automated tests) with the same priority/urgency as their other assignments?

A: This starts to happen naturally. As I talked about in the webinar, when we had nightly builds there was a 20-24 hour turnaround (and they covered only a subset of the products on a subset of the platforms). Once the Continuous Integration builds were running, developers and QA could see the effects of their changes much faster.

In reference to the chart showing how build times for the P4V product suite dropped from 20 hours to 3 hours when we first set up the builds: we knew it was effective because the developers then started asking if we could make the builds faster (which we kept evaluating, and we made them 60% faster after that). That told us that the developers were counting on these builds. As they come to count on the builds, if a change breaks the build they also want to get it fixed. Peer pressure starts to work.

There are also notifications that go out when a build breaks. Builds are triggered every couple of minutes if there are changes to the defined source areas for that build, and a triggered build may contain one or more changes. The notifications go out to everyone who had a change in the broken build, and if the build stays broken, notifications continue to be sent. If a developer needs that build and it stays broken, they will know who contributed changes and will work to figure out which change it was and who needs to fix it.
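
To show how "notify everyone who had a change in the broken build" can be derived from the changelists themselves, here is a minimal sketch under stated assumptions: the depot path, mail host, and the way the last-good changelist is tracked are all placeholders, not the actual Perforce build system. The only real pieces assumed are the standard p4 changes output format and plain SMTP email.

```python
# Hypothetical sketch: find the owners of changes between the last good build
# and the broken one, and email them.
import smtplib
import subprocess
from email.message import EmailMessage

DEPOT_PATH = "//depot/p4v/main/..."      # placeholder source area for the build
MAIL_HOST = "smtp.example.com"           # placeholder

def owners_between(last_good, broken):
    # 'p4 changes' lines look like:
    # Change 12345 on 2015/02/20 by alice@workspace 'fixed the thing'
    out = subprocess.run(
        ["p4", "changes", f"{DEPOT_PATH}@{last_good + 1},{broken}"],
        capture_output=True, text=True, check=True).stdout
    return {line.split()[5].split("@")[0] for line in out.splitlines() if line.strip()}

def notify(owners, build_name):
    msg = EmailMessage()
    msg["Subject"] = f"Build {build_name} is broken"
    msg["From"] = "build-bot@example.com"
    msg["To"] = ", ".join(f"{owner}@example.com" for owner in owners)
    msg.set_content("Your change is in a broken build; please investigate.")
    with smtplib.SMTP(MAIL_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    notify(owners_between(12340, 12345), "p4v-main")
```

Re-running this each time a broken build completes reproduces the "notifications continue until it is fixed" behavior described above.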