How to Use Docker Containers to Demo Customizable Products
In this age of mobile and web apps, the most common advice on demos is, “show the product a minute into the conversation, and demo the whole thing in under five minutes.” If the product is simple, the primary challenges are understanding the audience, confirming they are a member of the target market, and delivering a concise demo that doesn’t bore the audience with unnecessary details.
But what if the product solves numerous problems and can be infinitely customized? What do you do then? Docker containers can play a critical role in delivering demos for this type of product.
Before the Demo
Typically, before we demo, we try to gather as much information as possible about our prospects and their challenges. Our best prospects talk to us because they’re dissatisfied with at least one aspect of their current version management solutions.
Because of our experience in software development, we can often predict what their issues are — but we need to hear from them exactly why they feel this way. We use a discovery call before the demo as a scouting mission for the account rep. It’s a chance for the rep to ask questions that are meant to:
- Drill down into the problems the prospect wants to solve
- Determine how important the problems are to the prospect or customer
- Find out what other roles at the enterprise (in addition to our contact) care about the problems
- Test their motivation to change
- Uncover and resolve concerns they may have with the company or product
- Begin to build a technical checklist
Our customers represent numerous technology-based industries that have varying needs, such as embedded systems and semiconductors, game development, financial services, medical devices, and more. And the large organizations we typically serve contain multiple roles, each with different expectations for outcomes. We want to collect information about what the customer is doing now, what they want to do, and their desired future state.
Instead of delivering a demo that runs through the features in a generic way, we can tell a story that demonstrates how we solve their problem in the context of their business.
Determining if Perforce is the Right Fit
“We know we have the best product. It’s world class. We’re a global leader – the most robust, scalable, customizable, standards-based, and truly comprehensive solution in the market!”
Just kidding, we would never say this. Quite the contrary. Our prospects are highly technical, and we need them to decide if Perforce is a good fit for their business. We know we have great products, but they may not fit every situation, and discovery helps us make that determination for each business or use case. A tailored demonstration that addresses their pains, their workflow, or their business requirements will go a long way to satisfying the customer’s curiosity.
If there’s a fit between our product and the customer’s problem, then we want to show that through a well-defined demo. Containerizing our demonstration environment was key to making it easier for us to deliver on this promise. It was quite a journey to get there.
The Technical, Cookie-Cutter Demos
First, let’s step back to 2006, and think about how we used to set up and deliver demos. Our “Eval/Demo” group, which is what the Solution Engineering (SE) team was called back then, built a demo environment on a Windows box that ran the Perforce server with a demo database, which was pretty much the Perforce Sample depot with extensions.
A single, powerful PC ran both the Perforce server and the client software. It was simple and effective. We didn’t need much, because our needs and our customers’ needs were simpler then.
We had a dedicated room for demos, which we called the Eval Demo room. For most customers, we would run through a standard demo workflow, which included our command line, visual client, and server functionality. And because our depot was fairly small, we could show our functionality on a workstation class machine.
The Eval Demo team maintained several sample Perforce depots holding source code for a variety of languages and application types, so we could help prospects visualize their own development situation. Because the team was small, most customers and prospects would be booked into weekly live demos delivered via GoToMeeting. We also standardized the environment, so that new arrivals had a great place to start.
New customers moved to Perforce, and existing customers added new teams – sometimes with hundreds of thousands of seats. We also needed to give more SEs access to the demo environment.
At the same time, there was a surge in popularity of developers using Macs for their workstations. We moved the demo room setup to a Macintosh workstation. We ran the server and the P4V visual client on the Mac. And we continued delivering basically the same canned demo every week.
Since we own and maintain a world-class versioning system, we kept all the demo database and configuration assets in the same enterprise depot where we manage all of Perforce’s intellectual property. By doing this, the growing SE team could check out demo assets and run demos from their own computers – computers that had the Perforce server and clients installed on them.
Around this time, the SEs were issued laptops as their standard workstations, and we could carry the demo environment to our customers all around the world. We often used the same machines to deliver demos via webinars and web meetings.
Over the next few years, the complexity ramped up as we added many new features to Perforce. The most notable feature was our innovative Streams architecture, which made the sophisticated capabilities of our version control software more accessible to developers in fast-moving product development cycles. Just as importantly, interest in DevOps began to take off, and our mono repo capabilities became even more important to our large customers.
Our growth, and the growth of the product capabilities, made it more challenging to maintain consistent environments across a team of engineers doing demos on their individual laptops. Certainly, syncing from our Perforce server made this a little easier. But we needed a better way to manage our demo environments.
Fast forward to 2011. In a wonderful coincidence, Perforce IT implemented a VMware vSphere environment to support our application development needs. Since I had quite a bit of experience using virtualization at my previous job, I worked closely with the IT operations engineer running the VMware project, gained access to the cluster, and went to work building a new demo environment. The first thing I did was create a Linux VM that everyone could clone and copy for the demo. It basically duplicated the same demo environment we had deployed to the Macs.
Within about a year, VMs running Ubuntu had become the SE team’s standard environment, making life easier and better than it was with individual machines running installed software. And it was just in time, because Perforce continued to add new products – such as an added Git capability.
DevOps was becoming a seriously important business driver for our customers. This meant that Continuous Integration (CI) and Continuous Delivery (CD) were key initiatives for application and development IT teams. CI went on a hockey stick trajectory of adoption, and everyone wanted to explore CD capabilities.
To support the collaborative efforts among the members of an Agile or accelerated development environment, we created Helix Swarm, our code review tool that enabled large customers to implement truly global-scale CI. In this same timeframe, Jenkins experienced its own fast-rising trajectory, so we wanted to show that integration as well. These capabilities made our demos much more comprehensive.
So, in 2012, we added these new capabilities and tools to the existing VM template. We tied our Swarm installation to Jenkins, and we built two new VMs to show off our new Git code hosting solution and Jira integration. Now there were at least three VMs. It was awesome, but it made our demo environments harder to maintain, especially given the individual needs of each SE and the disparate release cadences of our products. Not to mention that running three VMs in VMware desktop tools (Workstation and Fusion) seriously taxed the resources on our laptops.
Showing the new capabilities was important to our business, but we now had people all over the world doing demos. And they were running the VMs on their own laptops, using VMware Fusion or Workstation, cloning and downloading as needed.
With these configurations, for a full demo, the engineer had to download two or three VMs that were more than 20 GB each. The Helix Core (P4D) demo was upwards of 45 GB. Some SEs started running the demos out of the vSphere cluster. This presented a bit of a challenge for SEs outside the U.S., in terms of international bandwidth and latency, so they continued to download but had a difficult time keeping up with changes.
The Infinitely Customizable, Customer-Specific Demo
As mentioned earlier, to get the benefits of the extensive demo prep, the SEs wanted to build their own customized demos based on what they learned from the customer. We needed demos to address the specific pain points articulated by the customer, and it had to be easy for the SEs to do this. We wanted a more “set it and forget it” environment.
With all the new technology supporting the demos, we were closer to being able to effectively do this with much less effort than before. But even the VM environment became hard to maintain because of product updates, new products, and changes in the standardized demo. Plus, because we’re an international company with offices around the world, it was painful for remote SEs to download >70 GB of virtual machines.
I think somewhere around 2014 I started looking at Docker because of some customer requests for Docker support. In the meantime, one of the European SEs was experimenting with Vagrant. Containers seemed like a great way to run multiple applications on a single computer without having to worry about resource contention and networking conflicts. My investigations led me to build a Docker container image with the Helix Core server (P4D) and put it in Docker Hub so customers could experiment with it. I did the same with Helix Swarm. But that only solved some basic issues with running multiple applications. Now if we could run all our demo applications on a single VM and deploy our demo data to it, that would be interesting.
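An image along these lines can be sketched with a Dockerfile. This is a minimal illustration, not the image actually published to Docker Hub; the base image, package-repository details, and paths are assumptions based on Perforce’s public apt packages:

```dockerfile
# Hypothetical sketch of a Helix Core (P4D) container image.
FROM ubuntu:22.04

# Install the Helix Core server from Perforce's package repository.
RUN apt-get update && apt-get install -y wget gnupg && \
    wget -qO - https://package.perforce.com/perforce.pubkey | apt-key add - && \
    echo "deb http://package.perforce.com/apt/ubuntu jammy release" \
      > /etc/apt/sources.list.d/perforce.list && \
    apt-get update && apt-get install -y helix-p4d

# P4D listens on port 1666 by default; keep the server root on a volume
# so the demo depot survives container restarts.
EXPOSE 1666
VOLUME /p4root

CMD ["/opt/perforce/sbin/p4d", "-r", "/p4root", "-p", "1666"]
```

With an image like this, a demo server is one `docker run -d -p 1666:1666` away, and the same pattern applies to Swarm.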
It was interesting, but I had set aside my Docker investigations because we had what we needed in the three VMs, and we were managing them reasonably well. Still, copying VMs across oceans remained a nagging problem.
Back in the day, we would actually send a physical disk drive to London or Australia; it seemed easier, and sometimes, it was faster. So, we needed a better way.
When we released our first Git code hosting solution, we had reached a critical mass with resource and TCP/IP contention on our demo systems. It was just getting too hard to add anything new without adding a new VM. So I picked up my Docker implementation again in earnest. It was especially important, since we were just about to release a new depot type in the server (the Graph Depot) and a new connection mechanism for using Git directly with the Helix Versioning Engine (P4D), called Git Connector.
I started with the base system of P4D and Swarm. Then, I added containers for the Git code hosting server and the Git Connector. I decided to implement name service on the Docker host using DNSmasq and created some automation around managing the hosts table for DNSmasq. Now, any new container that was deployed would be given an IP address that was reachable by any other container and could be forwarded by the host.
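The hosts-table automation can be sketched as a small shell script. This is an illustrative reconstruction, not the original scripts; the domain name, hosts-file path, and `refresh_hosts` helper are hypothetical:

```shell
#!/bin/sh
# Sketch: regenerate a dnsmasq addn-hosts file from running containers.
# HOSTS_FILE and DOMAIN are hypothetical defaults for illustration.
HOSTS_FILE=${HOSTS_FILE:-/etc/dnsmasq.hosts}
DOMAIN=${DOMAIN:-demo.local}

# Format one addn-hosts line: "<ip> <name>.<domain>"
hosts_line() {
    printf '%s %s.%s\n' "$1" "$2" "$DOMAIN"
}

# Rebuild the hosts file from every running container's name and IP,
# then signal dnsmasq (it re-reads addn-hosts files on SIGHUP).
refresh_hosts() {
    : > "$HOSTS_FILE"
    for name in $(docker ps --format '{{.Names}}'); do
        ip=$(docker inspect -f \
            '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$name")
        hosts_line "$ip" "$name" >> "$HOSTS_FILE"
    done
    pkill -HUP dnsmasq
}

# Call refresh_hosts after starting or stopping demo containers, e.g.:
#   docker run -d --name swarm ... && refresh_hosts
```

Each container then resolves its peers by a stable name like `p4d.demo.local`, regardless of which IP Docker hands out on a given run.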
Then, I used our previous VM management scripts to deploy our demonstration data to the server and configure all of the other applications. From there, I deployed Jira to a container and then set about creating some deployment automation using Bourne shell scripts. Now the team could deploy a Helix server with our demonstration database and all of the ancillary tools that we needed to show our enterprise versioning capabilities. The user could simply run a script to start the containers that they needed for their demo, complete with all of the required connections and DNS namespace available so that the containers could find each other.
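The shell scripts predate Docker Compose, but the same start-up automation could be expressed today as a compose file. The service names, images, and ports below are hypothetical stand-ins, not Perforce’s published images:

```yaml
# Hypothetical sketch of the demo stack as a compose file.
version: "3.8"
services:
  p4d:
    image: demo/helix-p4d        # Helix Core server with the demo depot
    ports: ["1666:1666"]
    volumes: ["p4root:/p4root"]
  swarm:
    image: demo/helix-swarm      # code review UI, talks to p4d
    ports: ["8080:80"]
    depends_on: [p4d]
  jira:
    image: atlassian/jira-software
    ports: ["8081:8080"]
volumes:
  p4root: {}
```

On a user-defined compose network, services resolve each other by service name, which covers much of what the dnsmasq setup provided by hand.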
We learned that the Docker environment was stable, reliable, and useful, and it provided us with a very flexible demo environment. Even though we could run all these containers on a single VM, only about half the SE team had adopted containers as their demo strategy. So we continued maintaining the old VM environments for the less intrepid.
Container Demo Nirvana
By 2017, it became apparent to us (and the rest of the world) that containers and automation are a great way to work. Even our software development and QA teams use containers for unit and application testing. Containers are especially helpful in an environment like demos — where we’re constantly changing and reconfiguring on demand.
We iterated frequently on our container system to make it more robust. Build and run scripts were created, best practices were established, and policies and procedures for updating were codified. Documentation was written, and adoption continues to grow. And, no matter where they were in the world, SEs could sync the small changes and easily apply them to their demo systems.
Perforce added several major new products to our portfolio in 2017. With our well-established container architecture, adding Helix ALM, Hansoft, and Helix TeamHub (our second-generation Git code hosting solution) to the demo mix was relatively simple. We expect this technology to support our needs in the future as we add additional products, too. And all indications are that we’ll be able to use Kubernetes and other clustering and management technologies to enhance how we use the demo containers.
Today, whether an SE runs a demo out of the vSphere cluster (now bigger and faster) in our data center, or from their own Mac or PC laptop, we have the flexibility to configure our many solutions to show the customer exactly how our technology will address their specific challenges.
The automation we’ve created makes the environment far easier to maintain and far easier for new users to get started with. And, because we keep all of our Docker configuration details in our corporate Helix server, we can update our Docker environment on the fly, and remote users can simply pick up changes by syncing a workspace, rather than sitting through a horrendously long download. The container demo is a big success, in my humble opinion.
I will end with only one caveat. We wrote this article to encourage experimentation and adoption of containers, and to share our journey. We aren’t sure if containers are appropriate for running a production Perforce server environment at scale. We haven’t researched or tested this. But in the fast-moving world of technology, we can’t rule out any application or deployment method, so stay tuned!