Distributed Perforce

Introduction

The 2013.2 Perforce release introduces two new modes of operation for a Perforce service:

  • A commit server stores the canonical archives and permanent metadata. In working terms, it is similar to a Perforce master server, but may not contain all workspace information.

  • An edge server contains a replicated copy of the commit server data and a unique, local copy of some workspace and work-in-progress information. It can process read-only operations and operations like p4 edit that only write to the local data. In working terms, it is somewhat similar to a forwarding replica, but contains local workspace data and can handle more operations with no reliance on the commit server.

The combination of a commit server and one or more edge servers is called a distributed Perforce service.

Note

A distributed Perforce service builds upon Perforce replication technology. Read and understand Perforce Replication before attempting to deploy a distributed Perforce service.

Since an edge server can handle most routine operations locally, a distributed Perforce service offloads a significant amount of processing work from the commit server and reduces data transmission between commit and edge servers.

From a user's perspective, most typical operations until the point of submit are handled by an edge server. With a forwarding replica, read operations, such as obtaining a list of files or viewing file history, are local. But with an edge server, syncing, checking out, merging, resolving, and reverting files are all local operations.

Managing distributed installations

A distributed Perforce service offers significant performance improvements in certain scenarios, but every deployment requires additional management.

  • Each edge server maintains a unique set of workspace and work-in-progress data that must be backed up separately from the commit server.

  • Exclusive locks are global: establishing an exclusive lock requires communication with the commit server, which may incur network latency.

  • Shelving changes in a distributed environment typically occurs on an edge server. Shelving can occur on a commit server only while using a client workspace bound to the commit server. Normally, changelists shelved on an edge server are not shared between edge servers.

    The 2014.1 Perforce release allows changelists shelved on an edge server to be promoted to the commit server, making them available to other edge servers. See “Promoting shelved changelists” for details.

  • Auto-creation of users is not possible on edge servers.

Deploying commit and edge servers

You can deploy commit and edge servers incrementally. An existing master server may be reconfigured to act as a commit server, and serve in hybrid mode. The commit server continues to service all existing users, workspaces, proxies, and replicas with no change in behavior. The only immediate difference is that the commit server can now support edge servers.

Once a commit server is available, you can proceed to configure one or more edge servers. Deploying a single edge server for a pilot team is a good way to become familiar with edge server behavior and configuration.

Additional edge servers can be deployed periodically, giving you time to adjust any affected processes and educate users about any changes to their workflow.

Initially, running a commit server and edge server on the same machine can help achieve a full split of operations, which can make subsequent edge server deployments easier.

Hardware, sizing, and capacity

For an initial deployment of a distributed Perforce service, where the commit server acts in a hybrid mode, the commit server uses your current master server hardware. Over time you may see the performance load on the commit server drop as you add more edge servers. You can reevaluate commit server hardware sizing after the first year of operation.

An edge server handles a significant amount of work for the users connected to it. A sensible strategy is to repurpose an existing forwarding replica and monitor the performance load on that hardware; the steps are described in “Converting a forwarding replica to an edge server”.

As you deploy more edge servers, you have the flexibility to deploy a smaller number of edge servers with more powerful hardware, or a larger number of edge servers each using less powerful hardware to service a smaller number of users.

You can also take advantage of replication filtering to reduce the volume of metadata and archive content on an edge server.
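As a sketch, replication filtering is configured through fields on the edge server's server spec. The ServerID and depot path below are hypothetical:

```
ServerID:            edge-tokyo
Type:                server
Services:            edge-server
RevisionDataFilter:
        //depot/main/...
ArchiveDataFilter:
        //depot/main/...
```

With a spec like this, the edge server replicates revision metadata and archive content only for files under //depot/main/..., reducing its storage and replication load.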

Note

An edge server maintains a unique copy of local workspace metadata, which is not shared with other edge servers or with the commit server.

Filtering edge server content can reduce the demands for storage and performance capacity.

As you transition to a distributed Perforce service and the commit server is only handling requests from edge servers, you may find that an edge server requires more hardware resources than the commit server.

Backup and high availability / disaster recovery (HA/DR) planning

A commit server can use the same backup and HA/DR strategy as a master server. Edge servers contain unique information and should have a backup and HA/DR plan. Whether an edge server outage is as urgent as a master server outage depends on your requirements. Therefore, an edge server may have an HA/DR plan with a less ambitious Recovery Point Objective (RPO) and Recovery Time Objective (RTO) than the commit server.

If a commit server must be rebuilt from backups, each edge server must be rolled back to a backup prior to the commit server's backup. Alternatively, if your commit server has no local users, the commit server can be rebuilt from a fully-replicated edge server (in this scenario, the edge server is a superset of the commit server).

Backing up and recovering an edge server is similar to backing up and restoring a standard Perforce server. As long as the edge server's replication state file is included in the backup, the edge server can be restored and resume service. If the edge server was offline for a long period of time, it may need some time to catch up on the activity on the commit server.
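A minimal backup sketch, assuming a hypothetical edge server root of /p4/edge and the default replication state file name:

```
# take a checkpoint of the edge server's local database
p4d -r /p4/edge -jd /p4/backups/edge.ckp

# preserve the replication state file alongside the checkpoint
cp /p4/edge/state /p4/backups/edge.state
```

On restore, replay the checkpoint with p4d -r /p4/edge -jr and put the state file back in place before restarting the edge server, so it can resume replication from the correct journal position.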

As part of a failover plan for a commit server, make sure that the edge servers are redirected to use the new commit server.

Note

For commit servers with no local users, edge servers could take significantly longer to checkpoint than the commit server. You may want to use a different checkpoint schedule for edge servers than commit servers. Journal rotations for edge servers could be scheduled at the same time as journal rotations for commit servers.

Replacing existing proxies and replicas

If you currently use Perforce proxies, evaluate whether these should be replaced with edge servers. If a proxy is delivering acceptable performance then it can be left in place indefinitely. You can use proxies in front of edge servers if necessary. Deploying commit and edge servers is notably more complex than a master server and proxy servers. Consider your environment carefully.

Of the three types of replicas available, forwarding replicas are the best candidates to be replaced with edge servers. An edge server provides a better solution than a forwarding replica for many use cases.

Build replicas can be replaced if necessary. If your build processes need to issue write commands other than p4 sync, an edge server is a good option. But if your build replicas are serving adequately, then you can continue to use them indefinitely.

Read-only replicas, typically used for disaster recovery, can remain in place. You can use read-only replicas as part of a backup plan for edge servers.

Moving users to an edge server

As you create new edge servers, you assign some users and groups to use that edge server.

  • Users need the P4PORT setting for the edge server.

  • Users need to create a new workspace on the edge server or transfer an existing workspace to it. Transferring existing workspaces can be automated.

Promoting shelved changelists

The 2014.1 Perforce release allows changelists shelved on an edge server, which would normally be inaccessible from other edge servers, to be promoted to the commit server. Promoted shelved changelists are available to any edge server.

Promotion occurs when shelving a changelist by using the -p flag with the p4 shelve command.

For example, given two edge servers, edge1 and edge2, the process works as follows:

  1. Shelve and promote a changelist from edge1.

    edge1> p4 shelve -p -c 89
    
  2. The shelved changelist is now available to edge2.

    edge2> p4 describe -S 89
    
  3. Promotion is only required once.

    Subsequent p4 shelve commands automatically update the shelved changelist on the commit server, using server lock protection. For example, make changes on edge1 and refresh the shelved changelist:

    edge1> p4 shelve -r -c 89
    

    The updates can now be seen on edge2:

    edge2> p4 describe -S 89
    

Note

There is no mechanism to unpromote a shelved changelist; instead, delete the shelved files from the changelist.

Other considerations

As you deploy edge servers, give consideration to the following areas.

  • Labels

    In a distributed Perforce service, labels can be local to an edge server, or global.

  • Triggers

    Edge servers support new types of triggers for changelists and shelved changelists. If you enforce policy with triggers, evaluate whether a changelist or shelve trigger should execute on the commit server or the edge server.

    Similarly, edge servers are responsible for running form triggers on workspaces and some types of labels.

    You must ensure that all relevant trigger scripts and programs are deployed on each edge server.

    For more information on edge server triggers, see “Triggers”.

  • Exclusive Opens

    Exclusive opens (+l filetype modifier) are global: establishing an exclusive open requires communication with the commit server, which may incur network latency.

  • Integrations with third party tools

    If you integrate third party tools, such as defect trackers, with Perforce, evaluate whether those tools should continue to connect to the master/commit server or could use an edge server instead. If the tools only access global data, then they can connect at any point. If they reference information local to an edge server, like workspace data, then they must connect to specific edge servers.

    Build processes can usefully be connected to a dedicated edge server, providing full Perforce functionality while isolating build workspace metadata. Using an edge server in this way is similar to using a build farm replica, but with the additional flexibility of being able to run write commands as part of the build process.

  • Files with propagating attributes

    In distributed environments, the following commands are not supported for files with propagating attributes: p4 copy, p4 delete, p4 edit, p4 integrate, p4 reconcile, p4 resolve, p4 shelve, p4 submit, and p4 unshelve. Integration of files with propagating attributes from an edge server is not supported; depending on the integration action, target, and source, either the p4 integrate or the p4 resolve command will fail.

    If your site makes use of this feature, direct these commands to the commit server, not the edge server. Perforce-supplied software does not presently set propagating attributes on files and is not known to be affected by this limitation.

  • Logging and auditing

    Edge servers maintain their own set of server and audit logs. Consider using structured logs for edge servers, as they auto-rotate and clean up with journal rotations. Incorporate each edge server's logs into your overall monitoring and auditing system.

    In particular, consider the use of the rpl.checksum.* configurables to automatically verify database tables for consistency during journal rotation, changelist submission, and table scans and unloads. Regularly monitor the integrity.csv structured log for integrity events.
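For example, the rpl.checksum.* configurables might be set as follows; the chosen levels are illustrative, not a recommendation:

```
p4 configure set rpl.checksum.auto=1
p4 configure set rpl.checksum.change=2
p4 configure set rpl.checksum.table=1
```

Higher levels perform more aggressive verification at the cost of additional work during journal rotation and submission; consult the configurables documentation before choosing values for production.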

  • Unload depot

    The unload depot may have different contents on each edge server. Clients and labels bound to an edge server are unloaded into the unload depot on that edge server, and are not displayed by the p4 clients -U and p4 labels -U commands on other edge servers.

    Be sure to include the unload depot as part of your edge server backups. Since the commit server does not verify that the unload depot is empty on every edge server, you must specify p4 depot -d -f in order to delete the unload depot from the commit server.

  • Future upgrades

    Commit and edge servers should be upgraded at the same time.

  • Time zones

    Commit and edge servers must use the same time zone.

  • Perforce Swarm

    The initial release of Swarm can usefully be connected to a commit server acting in hybrid mode or to an edge server for the users of that edge server. Full Swarm compatibility with multiple edge servers will be handled in a follow-on Swarm release. For more detailed information about using Swarm with edge servers, please contact Perforce Support.

Validation

As you deploy commit and edge servers, you can focus your testing and validation efforts in the following areas.

Supported deployment configurations

  • Hybrid mode: commit server also acting as a regular master server

  • Read-only replicas attached to commit and edge servers

  • Proxy server attached to an edge server

Backups

Exercise a complete backup plan on the commit and edge servers. Note that journal rotation on an edge server is not allowed.

Authentication

If you use authentication triggers or single sign-on (SSO), install the relevant triggers on all edge servers and verify the authentication process.

Migration scenarios

Note

The guidance in this section is known to be incomplete. If you do not find the information you need, or you find any inaccuracies, contact Perforce support for assistance.

Note

Perforce recommends that you become familiar with the concepts described in Perforce Replication and establish a master server and one or more replica servers prior to attempting any of the migrations in this section.

Configuring a master server for hybrid mode

Scenario: You have a master server. You want to convert your master to a hybrid server, allowing it to act as both a master server and a commit server, in preparation for adding one or more edge servers.

  1. Choose a ServerID for your master server, if it doesn't have one already, and use p4 serverid to save it.

  2. Define a server spec for your master server or edit the existing one if it already has one, and set Services: commit-server.
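The two steps above can be sketched as follows, assuming a hypothetical ServerID of master-1:

```
p4 serverid master-1
p4 server master-1
```

In the server spec form opened by p4 server, set Services: commit-server and save. Existing users, proxies, and replicas are unaffected; the server simply gains the ability to support edge servers.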

Converting a forwarding replica to an edge server

Scenario: You currently have a master server and a forwarding replica. You want to convert your master server to a commit server and convert your forwarding replica to an edge server.

Depending on how your current master server and forwarding replica are set up, you may not have to do all of these steps.

  1. Have all the users of the forwarding replica either submit, shelve, or revert all of their current work, and have them delete their current workspaces.

  2. Stop your forwarding replica.

  3. Choose a ServerID for your master server, if it doesn't have one already, and use p4 serverid to save it.

  4. Define a server spec for your master server, or edit the existing one if it already has one, and set Services: commit-server.

  5. Use p4 server to update the server spec for your forwarding replica, and set Services: edge-server.

  6. Update the replica server with your central server data by doing one of the following:

    • Use a checkpoint:

      1. Take a checkpoint of your central server, filtering out the db.have, db.working, db.resolve, db.locks, db.revsh, db.workingx, db.resolvex tables.

        See the Perforce Server Reference, an appendix in the Perforce System Administrator's Guide, for flags that can be used to produce a filtered journal dump file, specifically the -k and -K flags.

      2. Restore that checkpoint onto your replica.

      3. Remove the replica's state file.

    • Use replication:

      1. Start your replica on a separate port (so local users don't try to use it yet).

      2. Wait for it to pull the updates from the master.

      3. Stop the replica and remove the db.have, db.working, db.resolve, db.locks, db.revsh, db.workingx, db.resolvex tables.

  7. Start the replica; it is now an edge server.

  8. Have the users of the old forwarding replica start to use the new edge server:

    1. Create their new client workspaces and sync them.

You are now up and running with your new edge server.
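The checkpoint option in step 6 above can be sketched as follows; the server roots and checkpoint name are hypothetical, and the exact flag usage is documented in the Perforce Server Reference:

```
# dump a checkpoint of the master, excluding workspace-local tables
p4d -r /p4/master -K "db.have,db.working,db.resolve,db.locks,db.revsh,db.workingx,db.resolvex" -jd edge.ckp
```

Restore edge.ckp onto the replica with p4d -r /p4/replica -jr edge.ckp, then remove the replica's state file before restarting it as an edge server.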

Converting a build server to an edge server

Scenario: You currently have a master server and a build server. You want to convert your master server to a commit server and convert your build server to an edge server.

Build servers already have locally-bound clients, and it is attractive to continue using those clients after the conversion from a build server to an edge server. There is one small detail:

  • On a build server, locally-bound clients store their have and view data in db.have.rp and db.view.rp.

  • On an edge server, locally-bound clients store their have and view data in db.have and db.view.

Therefore the process for converting a build server to an edge server is pretty simple:

  1. Define a ServerID and server spec for the master, setting Services: commit-server.

  2. Edit the server spec for the build server and change Services: build-server to Services: edge-server.

  3. Shut down the build server and do the following:

    1. rm db.have db.view db.locks db.working db.resolve db.revsh db.workingx db.resolvex

    2. mv db.have.rp db.have

    3. mv db.view.rp db.view

  4. Start the server; it is now an edge server and all of its locally-bound clients can continue to be used!

Migrating a workspace from a commit server or remote edge server to the local edge server

Scenario: You have a workspace on a commit or remote edge server that you want to move to the local edge server.

  1. Have all the workspace owners either submit or revert all of their current work and ensure that all shelved files are deleted.

  2. p4 unload -c workspace

    Execute this command against the Perforce service the workspace is being migrated from; in this case, the commit or remote edge server.

  3. p4 reload -c workspace -p protocol:host:port

    Execute this command against the local edge server, where the workspace is being migrated to. protocol:host:port refers to the commit or remote edge server the workspace is being migrated from.
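As a combined sketch, assuming a hypothetical workspace bruno_ws, a commit server at commit:1666, and a local edge server at edge1:1666:

```
# run against the server the workspace is migrating from
p4 -p commit:1666 unload -c bruno_ws

# run against the local edge server, pulling from the source server
p4 -p edge1:1666 reload -c bruno_ws -p commit:1666
```

After the reload completes, the workspace and its have list are local to the edge server.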

Triggers

In a distributed Perforce service, triggers might run either on the commit server, or on the edge server, or perhaps on both. For more information on triggers, see the Perforce System Administrator's Guide.

Trigger scripts can determine whether they are running on a commit or edge server using the following trigger variables:

Trigger Variable

Description

%peerip%

The IP address of the proxy, broker, replica, or edge server.

%clientip%

The IP address of the machine whose user invoked the command, regardless of whether connected through a proxy, broker, replica, or edge server.

%submitserverid%

For a change-submit, change-content, or change-commit trigger in a distributed installation, the server.id of the edge server where the submit was run. See p4 serverid in the Perforce Command Reference for details.

When a trigger is executed on the commit server, %peerip% will match %clientip%.

For more information on trigger variables, see the Perforce System Administrator's Guide.
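As a sketch, a trigger script could branch on these variables. The helper below is hypothetical; in a real deployment the triggers table would pass %peerip% and %clientip% as arguments to the script:

```shell
#!/bin/sh
# Hypothetical helper: decide where a trigger is running.
# The triggers table would pass %peerip% and %clientip% as arguments.
trigger_location() {
    peerip="$1"
    clientip="$2"
    if [ "$peerip" = "$clientip" ]; then
        # On the commit server, %peerip% matches %clientip%.
        echo "commit"
    else
        echo "edge"
    fi
}
```

A script can then skip or perform edge-specific work depending on the result.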

Edge triggers

Edge servers provide two trigger types:

Trigger Type

Description

edge-submit

Executes a pre-submit trigger on the edge server after the changelist has been created, but prior to file transfer from the client to the edge server. The files are not necessarily locked at this point.

edge-content

Executes a mid-submit trigger on the edge server after file transfer from the client to the edge server, but prior to file transfer from the edge server to the commit server. At this point, the changelist is shelved.

Triggers on the edge server are executed one after another when invoked via p4 submit -e. For p4 submit, edge-submit triggers run immediately before the changelist is shelved, and edge-content triggers run immediately after the changelist is shelved. As edge-submit triggers run prior to file transfer to the edge server, these triggers cannot access file content.

The following edge-submit trigger is an MS-DOS batch file that rejects a changelist if the submitter has not had his change reviewed and approved. This trigger fires only on changelist submission attempts that affect at least one file in the //depot/qa branch.

@echo off
rem REMINDERS
rem - If necessary, set Perforce environment vars or use config file
rem - Set PATH or use full paths (C:\PROGRA~1\Perforce\p4.exe)
rem - Use short pathnames for paths with spaces, or quotes
rem - For troubleshooting, log output to file, for instance:
rem - C:\PROGRA~1\Perforce\p4 info >> trigger.log
if not x%1==x goto doit
echo Usage is %0 [change#]
exit 1
:doit
p4 describe -s %1|findstr "Review Approved...\n\n\t" > nul
if errorlevel 1 echo Your code has not been reviewed for changelist %1
if errorlevel 1 exit 1

To use the trigger, add the following line to your triggers table:

sampleEdge   edge-submit //depot/qa/...   "reviewcheck.bat %changelist%"