February 10, 2012

The Replication Train is Rolling in Perforce 2012.1

Over the last several years, Perforce has steadily improved its data replication technology. That’s good news if you have teams working in different parts of the world, or want to take advantage of development tools that put a lot of load on version control, like build automation. In other words, if you’re anywhere near modern software development, Perforce replication is good stuff.

The latest and greatest

Perforce replication started out with only metadata (the Perforce database), and expanded in the 2010.2 release to include full file content. A full replica was great for supporting automated read-only activities, like automatic builds, and of course also made read-only operations much faster for users at remote sites. Replicas also made for an easy disaster recovery (DR) option. The 2011.1 release smoothed some of the rough edges off of replica configuration and maintenance.
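
For a sense of what a full replica involves under the hood, here's a minimal sketch of the configure-and-pull setup those releases use. The replica name (replica1), addresses, and paths are made up for illustration; check the Perforce replication documentation for the exact settings your release expects.

    # On the master, describe the replica's behavior (hypothetical names throughout)
    p4 configure set replica1#P4TARGET=master.example.com:1666
    p4 configure set "replica1#startup.1=pull -i 1"        # pull metadata (journal records)
    p4 configure set "replica1#startup.2=pull -u -i 1"      # pull versioned file content
    p4 configure set replica1#db.replication=readonly       # database is read-only on the replica
    p4 configure set replica1#lbr.replication=readonly      # archive files are read-only on the replica

    # (Seed the replica root from a checkpoint of the master first.)
    # Start the replica against its own root, under the name configured above
    p4d -r /p4/replica1 -In replica1 -p 1667 -d

    # From the replica, check that the pull threads are keeping up
    p4 -p replica-host:1667 pull -lj

Once the pull threads are running, read-only commands against the replica return the same answers as the master, without the trip to the master.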

The 2012.1 release is baking as we speak, and will offer a pretty significant improvement to the way replicas are deployed and used. Replicas will now be assigned roles that define their purpose and the operations they support.

  • A normal replica is read-only and is intended for DR purposes.
  • A smart proxy (formally known as a forwarding replica) is intended to support normal user activity. Like a regular proxy, it is transparent to the end user. It caches all file and database content locally, so read operations like viewing file history are very fast; write operations, like submitting files, are relayed to the main server. A smart proxy blends some of the features of traditional proxies and read-only replicas (there’s a short sketch of what this looks like just after the list).
  • A build server is intended to support automated builds and continuous integration. It can also host workspaces that are purely local to the replica, so syncing a build server’s workspace doesn’t require any data or communication from the main server. I’ll be writing more about these types of replicas in a future article.
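
To make the smart proxy’s transparency concrete, here’s a rough sketch of what it looks like from a user’s chair (the host name is hypothetical):

    # Point P4PORT at the local smart proxy instead of the distant main server
    export P4PORT=smartproxy-tokyo:1666

    p4 filelog //depot/main/...          # read: answered from the replica's local copy
    p4 sync //depot/main/...             # read: file content comes from the local copy
    p4 submit -d "Fix the build break"   # write: forwarded on to the main server

Nothing about the workflow changes; the replica decides which commands it can answer locally and which to forward.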

Besides adding a few nice features, defining these roles makes it easier to deploy and administer replicas in different situations.

Want more details? When the 2012.1 beta is available, start by looking at the information for the new p4 server command.
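
As a rough preview, and with the caveat that the exact field names and values may shift before the beta ships, a server spec for a smart proxy might look something like this (the IDs and address are made up):

    p4 server replica-tokyo

    ServerID:    replica-tokyo
    Type:        server
    Name:        replica-tokyo
    Address:     tokyo.example.com:1666
    Services:    forwarding-replica       # other roles: replica, build-server, standard
    Description:
            Smart proxy (forwarding replica) for the Tokyo office.

The Services field is where the role lives; the rest is bookkeeping that lets the main server and its replicas know about each other.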

The big picture

Taking a step back, Perforce is designing an architecture to solve some big problems:

  • Teams are getting larger and are often located at different sites around the world
  • The volume of data handled by teams is getting bigger
  • Modern practices in software development and other disciplines require a lot of automation

Simply put, we want to make sure that Perforce continues to scale up to meet the needs of the most advanced teams and products without making the sacrifices usually associated with distributed development, like isolating important data based on physical location. Being able to intelligently coordinate distributed development is just as important as providing good performance.

The federated architecture that we’re building consists of three basic tiers. I like to think of it as hub, spokes, and satellites.

  • The main server (hub) has the consistent view of all of the important metadata. Any user, anywhere, can see all the data they need to get the job done.
  • The replica servers (spokes) contain a copy of all of the main server’s data, plus a limited amount of purely local data that isn’t relevant to other sites. A user or an automated system can use these for a big chunk of their work without putting any load on the main server (there’s a client’s-eye sketch of this just after the list).
  • The satellites are local repositories like P4Sandbox. They’re fairly independent, but still connected to the overall system. These give individual users a lot of flexibility about how they work.
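
Seen from the client side, the tiers mostly show up as a choice of P4PORT. A rough sketch, with hypothetical host names:

    # Administrative work, and anything that must write authoritative metadata: the hub
    export P4PORT=hub.example.com:1666

    # Interactive users at a remote office: their local spoke (smart proxy)
    export P4PORT=smartproxy-tokyo.example.com:1666

    # Build automation: a build server, so busy syncs never touch the hub
    export P4PORT=buildfarm-tokyo.example.com:1666

    # P4Sandbox users work against their own local repository first,
    # and connect to the rest of the system when they need to share.

Each tier absorbs the work it’s best suited for, while the hub stays the single source of truth.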

[Figure: Federated Server Architecture]

And if the cloud makes sense for your deployment, we have Perforce server and proxy images available for Amazon EC2, with more hosted goodness coming later this year.

So if you manage remote teams, use automated processes, or have a big user base, keep an eye out for the 2012.1 release. It takes a few more steps down the path to Perforce’s federated server architecture, and makes replicas more useful and easier to manage.