November 5, 2014

2 Requirements for Easily Surviving Vulnerabilities

Another day, another vulnerability! It seems like only yesterday that sysadmins around the globe were stressfully reacting to Heartbleed. It was interesting to witness that while many were quite anxious over this vulnerability, some groups were not. I'd be willing to bet that those who weren't stressed out about Heartbleed were similarly nonchalant about the latest bash vulnerability (aka Shellshock).

What separates those who are fighting fires from those who can calmly and quickly patch their systems? I believe it comes down to two crucial factors:

  1. A single source of truth for artifacts
  2. Automation

To patch affected systems quickly, you need to know which version of a file is installed and where it is installed. You also need the ability to deploy the fix at scale, which is where automation comes into play.

If you are versioning everything, then your automation scripts and the files they deploy are all under version management. Since I recently attended PuppetConf, I'll use Puppet as an example.

If you have your Puppet scripts checked in and configured to pull the files they deploy directly from version control, you know what was deployed, how it was deployed, and when.
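
As a minimal sketch of what this looks like in practice, consider the Puppet file resource below. The module name and file path are hypothetical; the point is that the source URL resolves to a file in the module tree on the Puppet master, and that module tree lives in the same version-controlled repository as the manifest itself:

```puppet
# Minimal sketch (hypothetical module and path): deploy a file whose
# contents are served from the 'tls' module on the Puppet master.
# puppet:///modules/tls/server.pem maps to the tls module's
# files/server.pem, checked into the same repository as this manifest.
file { '/etc/ssl/certs/server.pem':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  source => 'puppet:///modules/tls/server.pem',
}
```

Because the manifest and the source file are both checked in, the repository history records exactly what landed on each node, how, and when.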

Rolling out a fix then becomes a simple matter: check in new versions of the files to be deployed, update and check in your Puppet scripts, test the updated scripts and files, and deploy the fix to your servers. Easy, right? This is over-simplified, but it is accurate for systems already managed by software such as Puppet.
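
To make that concrete, here is one way a Shellshock-style fix might look in a manifest. This is a hedged sketch, and the package version string is illustrative rather than a specific recommendation:

```puppet
# Hypothetical sketch of a fix rollout: pin the patched package version,
# check this change in, test it, and let Puppet converge every node.
# The version string below is illustrative only.
package { 'bash':
  ensure => '4.1.2-15.el6_5.2',
}
```

Once the change is committed, the fix itself is versioned: the commit records which patched version went out, and when.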

Compare this to a scenario in which environments are not versioned. Maybe your environments aren't being managed at all. Even if they are, perhaps your scripts are configured to always pull the latest component from a repository or file share. How do you know what was deployed, and when? How do you know which systems need to be updated?
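
To illustrate the problem, here is a sketch of that anti-pattern in Puppet terms (the package name is just an example):

```puppet
# Anti-pattern: 'latest' means the installed version depends on whatever
# the package repository held at run time, so the manifest's history no
# longer tells you which version is actually on each node.
package { 'openssl':
  ensure => latest,
}
```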

For systems that aren’t managed in an automated fashion, there is the non-trivial task of discovering what is deployed where. Fortunately there are solutions available for that, but that’s outside the scope of this blog post. Let’s focus on managed systems and how a single source of truth can help avoid this sort of scenario in the future.

There are many reasons why the bits necessary to reliably reproduce and update an environment aren’t in place. Two of the more common scenarios are:

  1. You’re too busy fighting fires to spend time on this right now.
  2. You need to store large binary files, but your version control system doesn't handle large binaries well, so you can't put everything under version control.

The first reason is a tough one. Of course it's less stressful to invest in improvements before they're critical, but who's got the time? Then again, who wants to explain to executive staff why Heartbleed took weeks to patch when others did it in hours, and on more servers than you have? Careful, this could be an "RGE," that is, a résumé-generating event! The good news is that it's now easier than ever to get started on infrastructure automation.

Storing large binary files in version control is indeed a real problem for many organizations. I struggled with it myself earlier in my career. Ideally, I would have liked everything in version control, but due to limitations of the tools I was using it just wasn’t feasible, and performance would have been a very real problem.

Better to keep calm and version on. Perforce has a long track record of handling files of any size and type, and offers apps for not only technical staff but business users, too. When it comes to versioning everything and automating your deployments, there’s no time like the present—before the next big vulnerability.