A theme of our recent MERGE conference in San Francisco was how DevOps is taking over the world. Related to that theme is the use of container technology, specifically Docker. We have been successfully using Docker in a variety of our internal projects at Perforce, and Ksenia Burlachenko gave a well-received presentation at MERGE on some of that work (her slides are available online). Because a couple of customers at MERGE mentioned a longer-term aim of “Dockerizing” all their applications, potentially including Helix Server (and/or replication instances), I decided to investigate how Docker might affect things. More to the point: while it is clear Docker has many benefits, are there any downsides?
Let me summarize briefly my high-level findings:
• The basic performance of p4d within a Docker container is very similar to outside a container when it comes to read/write of underlying db.* metadata files (from a directory shared with the host). See the “Branch Submit” benchmark results below.
• When using basic Docker network forwarding (from outside the container to the p4d inside the container), there can be a significant performance degradation, around 2x, which is due to the docker-proxy process (and not unexpected). An unexpected result, however, is that running the Docker container with “--net=host” (which uses host system network stack) does not significantly improve the performance. See the “Browse benchmark” results below.
For the gory details read on, or skip to the conclusion! But first a little digression regarding the background and architecture of Docker – and a few pointers to useful resources for those new to the topic.
Background - What is a Container?
The basic goal of a container is to package an application with its underlying dependencies, including code, runtime, system libraries and anything else it requires. The resulting package, or container, can then easily be shipped unmodified between different environments such as development, test, QA, pre-production, and production. For something that first saw the light of day only in 2013, it is amazing the momentum Docker enjoys, as well as the tremendous advances in its core technology. The building blocks of Docker, such as Linux containers and related technologies like cgroups and kernel namespaces, have been around for rather longer. But Docker found a sweet spot by combining those technologies in a way that was much easier to use and greater than the sum of its parts.
• Containers have similar resource isolation and allocation benefits to virtual machines, but a different architectural approach allows them to be much more portable and efficient.
• Docker users on average ship software 7x more frequently after deploying Docker in their environment.
• Docker containers spin up and down in seconds, making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin those containers down so you only use the resources you need, when you need them.
This is the promise, and while Docker delivers many advantages, there are of course some complexities to understand when implementing it—particularly for production systems.
Docker Architecture – Unified File System (Copy on write)
A Docker image is a static snapshot of a file system, based on a series of layers, each of which has a unique hash. Images are versioned and can be tagged (e.g. ubuntu:14.04 or ubuntu:latest). A Docker container is a running instance of an image. Docker uses a union file system such as AUFS (Advanced Multi-Layered Unification Filesystem), and each image layer is read-only. When the container is running, a writable top-most file system layer is created on top of the image layers.
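You can observe this copy-on-write behaviour directly: Docker can report exactly what a running container has changed in its writable top layer. A minimal sketch (the container and image names here are placeholders):

```shell
# List files added (A), changed (C) or deleted (D) in the container's
# writable layer, relative to the read-only image layers beneath it:
docker diff mycontainer

# Optionally freeze that writable layer into a new image layer,
# creating a new image that stacks on top of the original:
docker commit mycontainer myimage:snapshot
```

This is also a handy way to check that a containerized server is writing its persistent data where you expect (e.g., to a mounted volume rather than the writable layer).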
Let’s look at the history of our container which shows us the layers and their sizes:
~/benchmark$ docker history p4benchmark
IMAGE CREATED CREATED BY SIZE
bc29eb7387e8 44 hours ago /bin/sh -c #(nop) CMD ["/run_in_docker.sh"] 0 B
fd307858e6ed 44 hours ago /bin/sh -c #(nop) EXPOSE 1777/tcp 0 B
f0e6d83e59e3 46 hours ago /bin/sh -c #(nop) COPY file:1b62b51286d922508 151 B
7b93c7db720b 4 days ago /bin/sh -c #(nop) COPY file:dbaafa84747899a13 114 B
3dfae18d29da 4 days ago /bin/sh -c apt-get update;apt-get instal 65.29 MB
3a1326cf000a 4 days ago /bin/sh -c #(nop) ENV DEBIAN_FRONTEND=noninte 0 B
db005dd7ab5d 12 days ago /bin/sh -c #(nop) MAINTAINER Robert Cowham "r 0 B
b72889fa879c 13 days ago /bin/sh -c #(nop) CMD ["/bin/bash"] 187 MB
Most of the layers are very small, although one is 65 MB (when apt-get is used to install several packages) and the base layer is 187 MB. Note that the hash of the last (bottom) layer refers to the base Ubuntu image, as we can see with the following command.
~/benchmark$ docker history ubuntu:14.04
IMAGE CREATED CREATED BY SIZE
b72889fa879c 13 days ago /bin/sh -c #(nop) CMD ["/bin/bash"] 187 MB
This layering makes it easy to share common layers between images (and thus containers), which reduces data copying and makes it faster to start up multiple containers based on the same image.
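One consequence of layering is that an image's reported size is effectively the sum of its layer sizes, while shared layers (like the 187 MB Ubuntu base) are stored only once however many images use them. A quick check against the history output above:

```shell
# Sum the non-zero layer sizes from the p4benchmark history:
# 65.29 MB (apt-get layer) + 187 MB (ubuntu base) + 151 B + 114 B (COPYs).
awk 'BEGIN { printf "total image size: ~%.1f MB\n", 65.29 + 187 + (151 + 114) / 1000000 }'
# -> total image size: ~252.3 MB
```

So building p4benchmark on top of ubuntu:14.04 costs only ~65 MB of additional storage beyond the shared base layer.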
Running a Docker Container and Persisting Data
A container is typically started with the “docker run” command (examples are discussed in the detailed section below), and running containers can be listed with “docker ps”.
~/benchmark$ docker run -v /home/rcowham/benchmark/p4:/p4 p4benchmark /run_in_docker.sh
~/benchmark$ docker ps
CONTAINER ID IMAGE COMMAND STATUS
51a70c423936 p4benchmark "/run_in_docker.sh" Up 2 minutes
The container actually runs as a subprocess of the Docker daemon (server) on the host system, which typically makes it very fast to start. Normally, when you run your container the top-most file system layer is writable, but when the container is removed any changes in that layer are discarded. This means that any process whose written data you want to persist, such as a database, needs to be handled differently. The simplest way to do this is to mount a directory from the host system within the container. In the “docker run” example above we use the -v flag to mount a host directory within the container as /p4. See the documentation for further options: https://docs.docker.com/engine/userguide/containers/dockervolumes/
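As a sketch of the options (paths and names follow the example above; the volume name is a placeholder): data written to a mounted host directory survives the container, while data written anywhere else in the container's file system does not.

```shell
# Bind-mount a host directory into the container at /p4; anything the
# containerized p4d writes under /p4 lands directly on the host:
docker run -v /home/rcowham/benchmark/p4:/p4 p4benchmark /run_in_docker.sh

# Alternatively, use a named volume managed by Docker itself,
# which decouples the data from any particular host path:
docker volume create p4data
docker run -v p4data:/p4 p4benchmark /run_in_docker.sh
```

For a database like p4d, the bind-mount approach has the advantage that the db.* files remain directly visible (and backupable) on the host.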
Docker Networking Options
By default, the Docker server daemon connects the host network to the network within the container via a bridge. This allows easy forwarding of host ports to ports within the container; e.g., by passing “-p 2345:1666” to the “docker run” command, the host port 2345 is connected to port 1666 inside the container. As noted in the benchmark below, this is simple but has a performance cost.
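The two networking modes compared in the benchmarks below look like this on the command line (port numbers follow the example above; p4benchmark is the image built earlier):

```shell
# Default bridge networking with port forwarding: host port 2345 is
# forwarded (via the docker-proxy process) to port 1666 in the container.
docker run -p 2345:1666 p4benchmark

# Host networking: the container shares the host's network stack, so a
# p4d listening on port 1666 is reachable directly - no port forwarding.
docker run --net=host p4benchmark
```

Note that with --net=host there is no port mapping at all, so the container's ports must not clash with anything already listening on the host.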
For further information, the links I found useful are:
The Helix Server Benchmarks
To help address questions around how your server configuration (storage/RAM/operating system) might stack up against best-performing configurations for Perforce Helix, our Performance Lab has designed a set of benchmarks to measure the performance of critical Perforce operations. These are well documented in our Knowledge Base.
That article also links to our “Benchmarks Results” page, where you can post the results of your runs and compare and contrast against other people’s results. For the purposes of this article, I used both benchmarks:
• Branchsubmit – measures, among other things, the rate at which files can be committed to the Perforce server
• Browse – provides a method to evaluate the CPU performance and network utilization of your P4D server for lots of small client operations (fstat and filelog)
The README.html (the .md file is the source) describes the specifics. The KB article also lists the datasets you need to download from the FTP site (note that the checkpoint is 1.4 GB, and the resulting database needs 40 GB of free disk to run!).
Branch Submit Benchmark Results
This benchmark required very little customization, as it is configured to run entirely on the server machine. I was using a 16.1 p4d on Linux x86_64 with 64 GB of RAM. As noted in the benchmark docs, it is worth running the “setup” command once, and then “runme” two or three times to ensure the file system cache is used.
| | Native | Docker | Difference |
|---|---|---|---|
| submitCommitTime | 2,140 | 2,176 | 102% |
| submitCommitRate | 32,712 | 32,188 | 98% |
The commit rate for native p4d after file system cache was warmed up was very respectable, in the range of nearly 33,000 files per second. This result would put the server configuration on the first page of published benchmark results! And the results, when run inside Docker, were within a few percent. So no significant overhead for accessing the host filesystem from within the container.
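As a quick sanity check, the “Difference” column can be recomputed from the Native and Docker figures in the table (Docker result as a percentage of Native):

```shell
# submitCommitTime: Docker 2,176 vs Native 2,140
awk 'BEGIN { printf "commit time: %.0f%%\n", 100 * 2176 / 2140 }'
# -> commit time: 102%

# submitCommitRate: Docker 32,188 vs Native 32,712
awk 'BEGIN { printf "commit rate: %.0f%%\n", 100 * 32188 / 32712 }'
# -> commit rate: 98%
```

In other words, the container paid less than 2% on both metrics for this file-system-bound workload.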
Browse Benchmark Results
This was more interesting and took a little more configuration effort. The base script requires multiple hosts and a compiled test client to generate lots of small commands against the p4d. I modified the script to run both client and server on the same machine, as it was simpler in my environment. I did have to tweak a Linux setting, because the tests produce so many TCP connections so quickly that they exhausted the standard settings on the machine; I changed sysctl net.ipv4.ip_local_port_range from the default range “32768 61000” to “15000 61000”.
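To see why the port range matters: each closed client connection leaves a socket in TIME_WAIT (roughly 60 seconds on Linux), during which its ephemeral port cannot be reused for the same server address. The size of the range therefore caps the sustainable connection rate; a rough back-of-envelope sketch:

```shell
# Default range 32768-61000: 28,232 ephemeral ports. With ports held
# in TIME_WAIT for ~60s, the sustainable rate of short-lived
# connections to a single server address:port is roughly:
awk 'BEGIN { printf "default range: ~%.0f conn/s\n", (61000 - 32768) / 60 }'

# Widening to 15000-61000 gives 46,000 ports:
awk 'BEGIN { printf "widened range: ~%.0f conn/s\n", (61000 - 15000) / 60 }'
```

That works out to roughly 470 vs 770 connections per second, which is why a benchmark hammering p4d with many small fstat/filelog commands runs out of ports with the default setting.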
| | Docker | Difference to Native | Docker --net=host | Difference to Native |
Thus, we can see a significant performance penalty (middle column). Using “top” it was easy to see that the docker-proxy process was using around 50% of a CPU during the tests (with the p4d in the 10-20% range), and it is clear that the port forwarding from the host port to the p4d port inside the container is the cause. The final two columns show the same test with Docker configured to use the host networking stack (using the “--net=host” parameter to the “docker run” command). I expected this to make quite a difference over base Docker, but it clearly didn’t. It’s certainly possible that I misconfigured something – feedback welcome!
Conclusion
The benefits of Docker for creating Helix test environments are obvious. We are migrating various test and demo scenarios to use Docker, because what previously took multiple virtual machines can now be done with multiple containers. Startup times of minutes become seconds. The resource overhead of running multiple VMs on a single host is also significantly reduced – very useful for my laptop! For test/demo we will also be exploring having demo data inside a data container referenced by others in a “cluster” – lots of great possibilities!
The obvious issue with running a production Helix Server (or replica) inside a container is the networking overhead shown by the browse benchmark. I will be doing further work to analyze this and see if it can be reduced, or better explained. Meanwhile, whether containerizing makes sense in production will depend on the usage profile of your server; it may still be worthwhile for the ease of management.
This is a space that is advancing very rapidly. If you haven’t got your feet wet, then I would strongly advise it – and share your experiences!