January 16, 2018

Achieve Faster CI Build Performance With Partitioned Workspaces

Continuous Integration

As DevOps teams increasingly adopt a continuous integration approach, there is strong demand for lightweight build processes. Helix Core provides development teams with myriad techniques to streamline continuous integration: p4 sync -p, dedicated build replicas, partitioned client workspaces, and read-only workspaces. Read-only workspaces expedite the build process by not fragmenting the underlying database table, but they don’t offer the capability to write build artifacts and other files back to Perforce Helix. Partitioned client workspaces, on the other hand, enable build services not only to read files, but also to submit changes back to the server. Before delving into the benefits of partitioned client workspaces, let’s first examine client workspaces in general.


Part of the speed and power of a Perforce Helix server comes from the fact that client workspaces do not keep any hidden files on the user’s machine. Instead, the status of your workspace is stored on the central server in the db.have table. When a client workspace is updated via 'p4 sync', all the server does is compare the 'have' revisions against the 'head' revisions to determine which files need to be sent to the client.
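The comparison the server performs can be sketched roughly like this. This is a deliberately simplified illustration of the have-versus-head logic, not the actual Helix server implementation; the dictionaries merely stand in for the db.have table and the depot head revisions:

```python
# Simplified sketch: compare the 'have' revision recorded for each file
# against the depot 'head' revision, and send only the files that differ.
# Illustrative only -- not how the Helix server is actually implemented.

def files_to_sync(have, head):
    """Return the depot files whose head revision is newer than the
    revision the client workspace already has."""
    to_send = []
    for depot_file, head_rev in head.items():
        # A file missing from 'have' has never been synced to this client.
        if have.get(depot_file, 0) < head_rev:
            to_send.append(depot_file)
    return to_send

# Example: the workspace has rev 2 of main.c and has never synced util.c.
have = {"//depot/main.c": 2, "//depot/readme.txt": 5}
head = {"//depot/main.c": 3, "//depot/readme.txt": 5, "//depot/util.c": 1}
print(files_to_sync(have, head))  # ['//depot/main.c', '//depot/util.c']
```

Cheap as this lookup is per workspace, the db.have table behind it is shared by every client on the server, which is why its health matters so much.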

This works extremely well when client workspaces are reasonably stable and are not continuously deleted and recreated. In some build environments, however, this is not the case: builds get moved from node to node, and temporary client workspaces are continuously created and deleted, or wiped and refreshed.

With the move to continuous integration, the number and frequency of builds are going up. On many Perforce Helix servers, the highest load is generated not by the hundreds or thousands of developers and artists, but by the build farms in the background, constantly syncing or removing hundreds of thousands of files.

This affects the db.have table, which over time gets bloated and fragmented, and this in turn can have an impact on performance and maintenance operations such as checkpoints.

Read-only Workspaces

In a previous blog post I wrote about read-only workspaces as a great tool for a build master to create client workspaces that do not fragment the central db.have table. The effects of this feature are quite dramatic:

  • Fragmentation of the db.have table is greatly reduced.
  • Write contention to the table virtually disappears[1].
  • Journal file sizes grow at a much slower rate.

The latter point is interesting. When updating read-only workspaces, the changes are not journaled because these workspaces are deemed ephemeral (I love that word), and build processes can typically re-create the workspace easily.

We thereby reduce write contention on the journal file. As a nice side effect, replaying the journal for offline checkpoints becomes faster, and replication (which uses the journal file to keep the local databases up to date) is faster as well.

Partitioned Client Workspaces

Alas, there is one fly in the ointment: read-only workspaces are just that – read-only.

Sometimes a build process might want to write build artifacts and other files back to Perforce Helix. You can certainly create some clever build logic with two workspaces: one read-only workspace for the sources, and a read-write workspace for the artifacts. Many build systems are not geared up for this kind of trickery and expect a single workspace for reading and writing.

In 2016.2, Perforce introduced partitioned workspaces, which use the same logic as read-only workspaces but allow build processes to open files and submit the changes. Perforce Helix keeps track of the working state (e.g. files opened for edit) in the central tables db.working and db.locks.

Partitioned workspaces are created the same way as read-only workspaces, but with one major difference: you set the type to “partitioned”.
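Concretely, this is a single field in the client spec form. The fragment below is a minimal sketch; the workspace name, root, and depot paths are made-up examples:

```
Client: build-ws-1
Root:   /build/ws1
Type:   partitioned
View:
        //depot/main/...  //build-ws-1/...
```

Setting Type: readonly instead gives you a read-only workspace; leaving the field at its default gives you an ordinary writeable one.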

Most of the arguments for read-only workspaces hold true for partitioned workspaces as well, making them a good choice for build services that also need to submit changes.


Caveats

There are a few caveats you should be aware of before you start deploying either kind of workspace.


Since neither kind of workspace is journaled, any server crash that requires a full recovery from checkpoint and journal will not restore the read-only or partitioned workspaces. This is in line with the ephemeral nature (that word again) of these objects: it is assumed build machines can always restart the build by recreating a workspace from scratch, which is common practice for many build tools.

If syncing to your build workspace takes several hours and you are very careful about preserving the workspace’s state, neither read-only nor partitioned workspaces are for you.

Effects on Referential Integrity

There is a funny side-effect that partitioned workspaces have on servers where paranoid administrators[2] run referential integrity checks using an undocumented tool Perforce has hidden in the Helix server. One of the references this tool checks is the relationship between “working” and “have” records, but the tool does not go into each partitioned workspace to perform the same check. As a result, the tool can report false “working” records with no “have” records. If the administrator applies the resulting jnl.fix file without checking, the open file records will be removed, leaving orphaned db.locks records in their wake.

In any case, we always recommend speaking to Perforce Support before applying the jnl.fix file.

Failover to an HA Server

You might have an HA server deployed, that is, a Perforce Helix server configured as a replica of type standby or forwarding-standby. The purpose of this server is to fail over quickly at any time so that users can continue working (almost) uninterrupted.

Replicas use the journal file to update their own database, and, as you learned above, neither read-only nor partitioned workspaces are journaled. A failover will therefore interrupt all processes that use either type of workspace; these workspaces will need to be recreated afterwards.

Alternative: Edge Servers

If you find any of the drawbacks of partitioned workspaces unacceptable, you might find that an edge server gives you what you need. This is a separate server created as a replica but with its own have and working (and a few other) tables. Many of our customers use edge servers for the build services as well as for remote operations in high-latency environments.
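For the curious, an edge server is declared with a server spec on the commit server; the fragment below is only a minimal, hypothetical sketch (the ServerID edge-build-1 is a made-up example), and a real deployment also needs the replication configurables and a seed checkpoint as described in the Helix Core administration documentation:

```
ServerID:   edge-build-1
Type:       server
Services:   edge-server
Description:
        Edge server dedicated to the build farm.
```

You would then assign that ID to the edge instance with p4 serverid before starting it and letting it replicate from the commit server.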

Be aware, though, that edge servers are more complicated to set up than partitioned workspaces and require a separate server instance. Horses for courses, as they say.


I hope this post gave you some ideas for how to give your busy Perforce Helix server some reprieve from all those build processes, gaining you faster builds and more responsive access for your developers and artists.

As usual, reach out to me on Twitter @p4sven if you have any feedback or questions.

Happy hacking!

Sign Up For Helix Core For Free

Sign up for your free 30-day trial of Helix Core or register for our next live demo.

[1] Perforce introduced lockless reads back in 2013.3 (thereby eliminating most read lock contention), but there can still be write contention.

[2] Only paranoid administrators are good administrators, right?