What is replication?
Replication is the duplication of server data from one Perforce Server to another Perforce Server, ideally in real time. You can use replication to:
Provide warm standby servers
A replica server can function as an up-to-date warm standby system, to be used if the master server fails. Such a replica server requires that both server metadata and versioned files are replicated.
Reduce load and downtime on a primary server
Long-running queries and reports, builds, and checkpoints can be run against a replica server, reducing lock contention. For checkpoints and some reporting tasks, only metadata needs to be replicated. For reporting and builds, replica servers need access to both metadata and versioned files.
Provide support for build farms
A replica with local (non-replicated) storage for client workspaces (and their respective have lists) is capable of running as a build farm.
Forward write requests to a central server
A forwarding replica holds a readable cache of both versioned files and metadata, and forwards commands that write metadata or file content towards a central server.
Combined with a centralized authorization server (see Centralized authorization server (P4AUTH)), Perforce administrators can configure the Perforce Broker (see “The Perforce Broker”) to redirect commands to replica servers to balance load efficiently across an arbitrary number of replica servers.
Most replica configurations are intended for reading of data. If you require read/write access to a remote server, use either a forwarding replica, a distributed Perforce service, or the Perforce Proxy. See Configuring a forwarding replica, “Commit-edge Architecture” and “Perforce Proxy” for details.
- As a general rule, all replica servers must be at the same release level as the master server, or at a later release. Any functionality that requires an upgrade for the master requires an upgrade for the replica, and vice versa.
- All replica servers must have the same Unicode setting as the master server.
- All replica servers must be hosted on a filesystem with the same case-sensitivity behavior as the master server’s filesystem.
- p4 pull (when replicating metadata) does not read compressed journals. The master server must not compress journals until the replica server has fetched all journal records from older journals. Only one metadata-updating p4 pull thread may be active at one time.
- The replica server does not need a duplicate license file.
- The master and replica servers must have the same time zone setting. On Windows, the time zone setting is system-wide. On UNIX, the time zone setting is controlled by the TZ environment variable at the time the replica server is started (see the sketch below).
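For example, on UNIX you might export TZ in the script that starts the replica (a minimal sketch; the zone value is illustrative, and the server name, root, and port match the example configuration used later in this section):

export TZ=America/New_York   # must match the master server's time zone
p4d -r /p4/replica -In Replica1 -p replica:22222 -d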
Replication of Perforce servers depends upon several commands and configurables:
| Command or Feature | Typical use case |
| --- | --- |
| p4 pull | A command that can replicate both metadata and versioned files, and report diagnostic information about pending content transfers. A replica server can run multiple p4 pull commands against the same master server. |
| p4 configure | A configuration mechanism that supports multiple servers. |
| p4 server | A configuration mechanism that defines a server in terms of its offered services. In order to be effective, the ServerID: of the server spec must correspond with the server's server.id file (see p4 serverid). |
| p4 serverid | A command to set or display the unique identifier for a Perforce Server. On startup, a server takes its ID from the contents of a server.id file in its server root directory. |
| p4 verify -t | Causes the replica to schedule a transfer of the contents of any damaged or missing revisions. The command reports BAD! or MISSING! files with (transfer scheduled) at the end of the line. For the transfer to work on a replica with lbr.replication=cache, the replica must have at least one p4 pull -u thread configured. |
| Server names (P4NAME, -In) | Perforce Servers can be identified and configured by name. When you use p4 configure on your master server, you can specify a different set of configurables for each named server. |
| Service users (p4 user -f) | A new type of user intended for authentication of server-to-server communications. Service users have extremely limited access to the depot and do not consume Perforce licenses. To make logs easier to read, create one service user on your master server for each replica or proxy in your network of Perforce Servers. |
| Metadata access (-M, db.replication) | Replica servers can be configured to automatically reject user commands that attempt to modify metadata (the db.* files). |
| Metadata filtering | Replica servers can be configured to filter in (or out) data on client workspaces and file revisions. You can use the -T option to p4 pull, or the ClientDataFilter:, RevisionDataFilter:, and ArchiveDataFilter: fields of the p4 server form. |
| Depot file access (-D, lbr.replication) | Replica servers can be configured to automatically reject user commands that attempt to modify archived depot files (the "library"). These options can also be set using p4 configure. |
| P4TARGET | Identifies the master server from which a replica server retrieves its data. You can set P4TARGET explicitly or with p4 configure. |
| State file | Replica servers track the most recent journal position in a small text file that holds a byte offset. When you stop either the master server or a replica server, the most recent journal position is recorded on the replica in the state file. Upon restart, the replica reads the state file and picks up where it left off; do not alter this file or its contents. (When the state file is written, a temporary file is used and moved into place, which should preserve the existing state file if something goes wrong when updating it. If the state file is empty or missing, the replica server refetches from the start of its last used journal position.) By default, the state file is named state and resides in the replica server's root directory. |
| Perforce Broker | The Perforce Broker can be used for load balancing, command redirection, and more. See "The Perforce Broker" for details. |
Replication requires uncompressed journals. Starting the master using the p4d -jc -z command breaks replication; use the -Z flag instead to prevent journals from being compressed, as in the sketch below.
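For example, to compress the checkpoint without compressing the journal (the server root is illustrative):

p4d -r /p4/master -jc -Z

The -Z flag compresses only the checkpoint, leaving the rotated journal uncompressed so that replicas can continue to read its records.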
The p4 pull command
The p4 pull command provides the most general solution for replication. Use p4 pull to configure a replica server that:

- replicates versioned files (the ,v files that contain the deltas that are produced when new versions are submitted) unidirectionally from a master server.
- replicates server metadata (the information contained in the db.* files) unidirectionally from a master server.
- uses the startup.n configurable to automatically spawn as many p4 pull processes as required.
A common configuration for a warm standby server is one in which one (and only one) p4 pull process is spawned to replicate the master server's metadata, and multiple p4 pull -u processes are spawned to run in parallel, continually updating the replica's copy of the master server's versioned files.

The startup.n configurables are processed sequentially. Processing stops at the first gap in the numerical sequence; any commands after a gap are ignored, as the hypothetical configuration below illustrates.
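For example, in the following hypothetical configuration, startup.4 was never defined, so the command configured as startup.5 is silently ignored:

p4 configure set "Replica1#startup.1=pull -i 1"
p4 configure set "Replica1#startup.2=pull -u -i 1"
p4 configure set "Replica1#startup.3=pull -u -i 1"
p4 configure set "Replica1#startup.5=pull -u -i 1"

Processing stops after startup.3; to make the last pull thread take effect, renumber it as startup.4.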
Although you can run p4 pull from the command line for testing and debugging purposes, it is most useful when controlled by the startup.n configurables, and in conjunction with named servers, service users, and centrally-managed configurations.
The --batch option to p4 pull specifies the number of files a pull thread should process in a single request. The default value of 1 is usually adequate. For high-latency configurations, a larger value might improve archive transfer speed for large numbers of small files. (Use of this option requires that both master and replica be at version 2015.2 or higher.)
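For example, a pull thread with a larger batch size might be configured as follows (a sketch; the batch value of 100 is illustrative):

p4 configure set "Replica1#startup.4=pull -u -i 1 --batch=100"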
The rpl.compress configurable allows you to compress journal record data that is transmitted using p4 pull.
If you are running a replica with monitoring enabled and you have not configured the monitor table to be disk-resident, you can run the following command to get more precise information about what pull threads are doing (remember to set the monitor configurable first):

p4 monitor show -sB -la -L

Command output would look like this:

31701 B uservice-edge3 00:07:24 pull sleeping 1000 ms [server.locks/replica/49,d/pull(W)]
Server names and P4NAME
To set a Perforce server name, set the P4NAME environment variable or specify the -In command line option to p4d when you start the server. Assigning names to servers is essential for configuring replication. Assigning server names permits most of the server configuration data to be stored in Perforce itself, as an alternative to using startup options or environment values to specify configuration details. In replicated environments, named servers are a necessity, because p4 configure settings are replicated from the master server along with other Perforce metadata.
For example, if you start your master server as follows:
p4d -r /p4/master -In master -p master:11111
And your replica server as follows:
p4d -r /p4/replica -In Replica1 -p replica:22222
You can use p4 configure on the master to control settings on both the master and the replica, because configuration settings are part of a Perforce server's metadata and are replicated accordingly.

For example, if you issue the following commands on the master server:

p4 -p master:11111 configure set master#monitor=2
p4 -p master:11111 configure set Replica1#monitor=1

After the configuration data has been replicated, the two servers have different server monitoring levels. That is, if you run p4 monitor show against master:11111, you see both active and idle processes, because for the server named master, the monitor configurable is set to 2. If you run p4 monitor show against replica:22222, only active processes are shown, because for Replica1, the monitor configurable is set to 1.
Because the master (and each replica) is likely to have its own journal and checkpoint, it is good practice to use the journalPrefix configurable (for each named server) to ensure that their prefixes are unique:

p4 configure set master#journalPrefix=/master_checkpoints/master
p4 configure set Replica1#journalPrefix=/replica_checkpoints/replica

For more information, see the Perforce Knowledge Base.
Server IDs: the p4 server and p4 serverid commands
You can further define a set of services offered by a Perforce server by using the p4 server and p4 serverid commands.

Configuring the following servers requires the use of a server spec:
- Commit server: central server in a distributed installation
- Edge server: node in a distributed installation
- Build server: replica that supports build farm integration
- Depot master: commit server with automated failover
- Depot standby: standby replica of the depot master
- Standby server: read-only replica that uses p4 journalcopy
- Forwarding standby: forwarding replica that uses p4 journalcopy
The p4 serverid command creates (or updates) a small text file named server.id. The server.id file always resides in a server's root directory.

The p4 server command can be used to maintain a list of all servers known to your installation. It can also be used to create a unique server ID that can be passed to the p4 serverid command, and to define the services offered by any server that, upon startup, reads that server ID from a server.id file. The p4 server command can also be used to set a server's name (P4NAME).
Service users

There are three types of Perforce users: standard users, operator users, and service users. A standard user is a traditional user of Perforce, an operator user is intended for human or automated system administrators, and a service user is used for server-to-server authentication, as part of the replication process.
Service users are useful for remote depots in single-server environments, but are required for multi-server and distributed environments.
Create a service user for each master, replica, or proxy server that
you control. Doing so greatly simplifies the task of interpreting your
server logs. Service users can also help you improve security, by
requiring that your edge servers and other replicas have valid login
tickets before they can communicate with the master or commit server.
Service users do not consume Perforce licenses.
A service user can run only the following commands:

- p4 dbschema
- p4 export
- p4 login
- p4 logout
- p4 passwd
- p4 info
- p4 user
To create a service user, run the command:
p4 user -f service1
The standard user form is displayed. Enter a new line to set the new user's Type: to be service; for example:

User:     service1
Email:    email@example.com
FullName: Service User for Replica Server 1
Type:     service
By default, the output of p4 users omits service users. To include service users, run p4 users -a.
Tickets and timeouts for service users
A newly-created service user that is not a member of any groups is subject to the default ticket timeout of 12 hours. To avoid issues that arise when a service user's ticket ceases to be valid, create a group for your service users that features an extremely long timeout, or set the timeout to unlimited. On the master server, issue the following command:

p4 group service_users

Add service1 to the list of Users: in the group, and set the Timeout: and PasswordTimeout: values to a large value or to unlimited:

Group:           service_users
Timeout:         unlimited
PasswordTimeout: unlimited
Subgroups:
Owners:
Users:
    service1
Service users must have a ticket created with p4 login for replication to work.
Permissions for service users
On the master server, use p4 protect to grant the service user super permission. Service users are tightly restricted in the commands they can run, so granting them super permission is safe.
Server options to control metadata and depot access
When you start a replica that points to a master server with P4TARGET, you must specify both the -M (metadata access) and -D (depot access) options, or set the configurables db.replication (access to metadata) and lbr.replication (access to the depot's library of versioned files), to control which Perforce commands are permitted or rejected by the replica server.
Set P4TARGET to the fully-qualified domain name or IP address of the master server from which a replica server is to retrieve its data. You can set P4TARGET explicitly, specify it on the p4d command line with the -t protocol:host:port option, or use p4 configure to set a P4TARGET for each named replica server. See the table below for the available protocol options.

If you specify a target, p4d examines its configuration for startup.n commands: if no valid p4 pull commands are found, p4d runs and waits for the user to manually start a p4 pull command. If you omit a target, p4d assumes the existence of an external metadata replication source such as p4 replicate. See p4 pull vs. p4 replicate for details.
| Protocol | Behavior |
| --- | --- |
| tcp4: | Listen on/connect to an IPv4 address/port only. |
| tcp6: | Listen on/connect to an IPv6 address/port only. |
| tcp46: | Attempt to listen on/connect to an IPv4 address/port. If this fails, try IPv6. |
| tcp64: | Attempt to listen on/connect to an IPv6 address/port. If this fails, try IPv4. |
| ssl4: | Listen on/connect to an IPv4 address/port only, using SSL encryption. |
| ssl6: | Listen on/connect to an IPv6 address/port only, using SSL encryption. |
| ssl46: | Attempt to listen on/connect to an IPv4 address/port. If this fails, try IPv6. After connecting, require SSL encryption. |
| ssl64: | Attempt to listen on/connect to an IPv6 address/port. If this fails, try IPv4. After connecting, require SSL encryption. |
P4TARGET can be the host's hostname or its IP address; both IPv4 and IPv6 addresses are supported. For the listen setting, you can use the * wildcard to refer to all IP addresses, but only when you are not using CIDR notation.

If you use the * wildcard with an IPv6 address, you must enclose the entire IPv6 address in square brackets. For example, [2001:db8:1:2:*] is equivalent to [2001:db8:1:2::]/64. Best practice is to use CIDR notation, surround IPv6 addresses with square brackets, and to avoid the * wildcard.
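For example, assuming a master reachable at the (illustrative) hostname master.example.com, you could require SSL on the replica's connection to it:

p4 configure set Replica1#P4TARGET=ssl4:master.example.com:11111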
Server startup commands
You can configure a Perforce Server to automatically run commands at startup using p4 configure as follows:

p4 configure set "servername#startup.n=command"

where n represents the order in which the commands are executed: the command specified for startup.1 runs first, then the command for startup.2, and so on. The only valid startup command is p4 pull.
p4 pull vs. p4 replicate
Perforce also supports a more limited form of replication based on the p4 replicate command. This command does not replicate file content, but supports filtering of metadata on a per-table basis.

For more information about p4 replicate, see "Perforce Metadata Replication" in the Perforce Knowledge Base.
Enabling SSL support
To encrypt the connection between a replica server and its end users,
the replica must have its own valid private key and certificate pair in
the directory specified by its
P4SSLDIR environment variable.
Certificate and key generation and management for replica servers works
the same as it does for the (master) server. See
Enabling SSL support. The users' Perforce applications must be
configured to trust the fingerprint of the replica server.
To encrypt the connection between a replica server and its master, the replica must be configured so as to trust the fingerprint of the master server. That is, the user that runs the replica p4d (typically a service user) must create a P4TRUST file (using p4 trust) that recognizes the fingerprint of the master Perforce Server.

The P4TRUST variable specifies the path to the SSL trust file. You must set this environment variable in the following cases:
- for a replica that needs to connect to an SSL-enabled master server, or
- for an edge server that needs to connect to an SSL-enabled commit server.
Uses for replication
Here are some situations in which replica servers can be useful.
For a failover or warm standby server, replicate both server metadata and versioned files by running two p4 pull commands in parallel. Each replica server requires one or more p4 pull -u instances to replicate versioned files, and a single p4 pull to replicate the metadata.

If you are using p4 pull for both metadata and p4 pull -u for versioned files, start your replica server with p4d -t protocol:host:port -Mreadonly -Dreadonly. Commands that require read-only access to server metadata and to depot files will succeed. Commands that attempt to write to server metadata and/or depot files will fail gracefully.
For a detailed example of this configuration, see Configuring a read-only replica.
To configure an offline checkpointing or reporting server, only the master server's metadata needs to be replicated; versioned files do not need to be replicated.

To use p4 pull for metadata-only replication, start the server with p4d -t protocol:host:port -Mreadonly -Dnone. You must specify a target. Do not configure the server to spawn any p4 pull -u commands that would replicate the depot files.

In this configuration, commands that require read-only access to server metadata will succeed, and commands that attempt to write to server metadata or to access depot files will be blocked by the replica server.
Replication and protections
To apply the IP address of a replica user's workstation against the protections table, prepend the string proxy- to the workstation's IP address.
For instance, consider an organization with a remote development site
with workstations on a subnet of
192.168.10.0/24. The organization
also has a central office where local development takes place; the
central office exists on the
10.0.0.0/8 subnet. A Perforce service
resides in the
10.0.0.0/8 subnet, and a replica resides in the
192.168.10.0/24 subnet. Users at the remote site belong to the group
remotedev, and occasionally visit the central office. Each subnet also
has a corresponding set of IPv6 addresses.
To ensure that members of the
remotedev group use the replica while
working at the remote site, but do not use the replica when visiting the
local site, add the following lines to your protections table:
list   group remotedev 192.168.10.0/24              -//...
list   group remotedev [2001:db8:16:81::]/48        -//...
write  group remotedev proxy-192.168.10.0/24        //...
write  group remotedev proxy-[2001:db8:16:81::]/48  //...
list   group remotedev proxy-10.0.0.0/8             -//...
list   group remotedev proxy-[2001:db8:1008::]/32   -//...
write  group remotedev 10.0.0.0/8                   //...
write  group remotedev [2001:db8:1008::]/32         //...
The first line denies list access to all users in the remotedev group if they attempt to access Perforce without using the replica from their workstations in the 192.168.10.0/24 subnet. The second line denies access in identical fashion when access is attempted from the IPv6 [2001:db8:16:81::]/48 subnet.

The third line grants write access to all users in the remotedev group if they are using the replica and are working from the 192.168.10.0/24 subnet. Users of workstations at the remote site must use the replica. (The replica itself does not have to be in this subnet; for example, it could be at 192.168.20.0.) The fourth line grants write access in identical fashion when access is attempted from the IPv6 [2001:db8:16:81::]/48 subnet.

Similarly, the fifth and sixth lines deny list access to remotedev users when they attempt to use the replica from workstations on the central office's subnets (10.0.0.0/8 and [2001:db8:1008::]/32). The seventh and eighth lines grant write access to remotedev users who access the Perforce server directly from workstations on the central office's subnets. When visiting the local site, users from the remotedev group must access the Perforce server directly.
When the Perforce service evaluates protections table entries, the dm.proxy.protects configurable is also evaluated.

dm.proxy.protects defaults to 1, which causes the proxy- prefix to be prepended to all client host addresses that connect via an intermediary (proxy, broker, replica, or edge server), indicating that the connection is not direct.

Setting dm.proxy.protects to 0 removes the proxy- prefix and allows you to write a single set of protection entries that apply both to directly-connected clients as well as to those that connect via an intermediary. This is more convenient but less secure if it matters that a connection is made using an intermediary. If you use this setting, all intermediaries must be at release 2012.1 or higher.
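For example, to treat intermediary connections the same as direct connections (only advisable when all intermediaries are at release 2012.1 or higher):

p4 configure set dm.proxy.protects=0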
How replica types handle requests
One way of explaining the differences between replica types is to describe how each type handles user requests; whether the server processes them locally, whether it forwards them, or whether it returns an error. The following table describes these differences.
- Read-only commands include commands, such as p4 user -o, that do not modify server metadata.
- Work-in-progress commands include commands, such as p4 edit, that modify a user's open files or other workspace-related state.
- Global update commands include commands, such as p4 user, that modify metadata shared across the entire installation.
| Replica type | Read-only commands | p4 sync, p4 client | Work-in-progress commands | Global update commands |
| --- | --- | --- | --- | --- |
| Depot standby, standby, replica | Processed locally | Rejected | Rejected | Rejected |
| Forwarding standby, forwarding replica | Processed locally | Forwarded to the master | Forwarded to the master | Forwarded to the master |
| Edge server, workspace server | Processed locally | Processed locally | Processed locally | Forwarded to the commit server |
| Standard server, depot master, commit server | Processed locally | Processed locally | Processed locally | Processed locally |
Configuring a read-only replica
To support warm standby servers, a replica server requires an up-to-date copy of both the master server’s metadata and its versioned files.
Replication is asynchronous, and a replicated server is not recommended as the sole means of backup or disaster recovery. Maintaining a separate set of database checkpoints and depot backups (whether on tape, remote storage, or other means) is advised. Disaster recovery and failover strategies are complex and site-specific. Perforce Consultants are available to assist organizations in the planning and deployment of disaster recovery and failover strategies.
The following extended example configures a replica as a warm standby server for an existing Perforce Server with some data in it. For this example, assume that:
- Your master server is named Master and is running on a host called master, using port 11111, and its server root directory is /p4/master.
- Your replica server will be named Replica1 and will be configured to run on a host machine named replica, using port 22222, and its root directory will be /p4/replica.
- The service user name is service.
You cannot define P4NAME using the p4 configure command, because a server must know its own name to use values set by p4 configure.

You cannot define P4ROOT using the p4 configure command, to avoid the risk of specifying an incorrect server root.
Master server setup
To define the behavior of the replica, you enter configuration
information into the master server’s
db.config file using the
configure set command. Configure the master server first; its
settings will be replicated to the replica later.
To configure the master, log in to Perforce as a superuser and perform the following steps:
To set the server named Replica1 to use master:11111 as the master server from which to pull metadata and versioned files, issue the command:
p4 -p master:11111 configure set Replica1#P4TARGET=master:11111
Perforce displays the following response:
For server Replica1, configuration variable 'P4TARGET' set to 'master:11111'
To avoid confusion when working with multiple servers that appear identical in many ways, use the -u option to specify the superuser account and -p to explicitly specify the master Perforce server's host and port. These options have been omitted from this example for simplicity. In a production environment, specify the host and port on the command line.
Set the Replica1 server to save the replica server's log file using a specified file name. Keeping the log names unique prevents problems when collecting data for debugging or performance tracking purposes:

p4 configure set Replica1#P4LOG=replica1Log.txt

Set the Replica1 server configurable to 1, which is equivalent to specifying the -vserver=1 server startup option:

p4 configure set Replica1#server=1
To enable process monitoring, set Replica1's monitor configurable to 1:

p4 configure set Replica1#monitor=1
To handle the Replica1 replication process, configure the following three startup.n commands. (When passing multiple items separated by spaces, you must wrap the entire set value in double quotes.)

The first startup process sets p4 pull to poll once every second for journal data only:

p4 configure set "Replica1#startup.1=pull -i 1"

The next two settings configure the server to spawn two p4 pull threads at startup, each of which polls once per second for archive data transfers:

p4 configure set "Replica1#startup.2=pull -u -i 1"
p4 configure set "Replica1#startup.3=pull -u -i 1"

Each p4 pull -u command creates a separate thread for replicating archive data. Heavily-loaded servers might require more threads, if archive data transfer begins to lag behind the replication of metadata. To determine if you need more p4 pull -u processes, read the contents of the rdb.lbr table, which records the archive data transferred from the master Perforce server to the replica.
To display the contents of this table when a replica is running, run:

p4 -p replica:22222 pull -l

Likewise, if you only need to know how many file transfers are active or pending, use p4 -p replica:22222 pull -l -s.

If p4 pull -l -s indicates a large number of pending transfers, consider adding more p4 pull -u startup.n commands to address the problem.

If a specific file transfer is failing repeatedly (perhaps due to unrecoverable errors on the master), you can cancel the pending transfer with p4 pull -d -f file -r rev, where file and rev refer to the file and revision number, as in the sketch below.
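For example, to cancel a repeatedly failing transfer of revision 3 of a hypothetical file:

p4 -p replica:22222 pull -d -f //depot/main/bigfile.bin -r 3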
Set the db.replication (metadata access) and lbr.replication (depot file access) configurables to readonly:

p4 configure set Replica1#db.replication=readonly
p4 configure set Replica1#lbr.replication=readonly
Because this replica server is intended as a warm standby (failover) server, both the master server’s metadata and its library of versioned depot files are being replicated. When the replica is running, users of the replica will be able to run commands that access both metadata and the server’s library of depot files.
Create the service user:
p4 user -f service
The user specification for the service user opens in your default editor. Add the following line to the user specification:

Type: service

Save the user specification and exit your default editor.
By default, the service user is granted the same 12-hour login timeout as standard users. To prevent the service user's ticket from timing out, create a group with a long timeout on the master server. In this example, the Timeout: field is set to two billion seconds, approximately 63 years:

p4 group service_group

Users:   service
Timeout: 2000000000

For more details, see Tickets and timeouts for service users.
Set the service user protections to super in your protections table. (See Permissions for service users.) It is good practice to set the security level of all your Perforce Servers to at least 1 (preferably to 3, so as to require a strong password for the service user, and ideally to 4, to ensure that only authenticated service users may attempt to perform replica or remote depot transactions):

p4 configure set security=4

Set the serviceUser configurable for the Replica1 server:

p4 configure set Replica1#serviceUser=service

This step configures the replica server to authenticate itself to the master server as the service user; this is equivalent to starting p4d with the -u service option.
If the user running the replica server does not have a home directory, or if the directory where the default .p4tickets file is typically stored is not writable by the replica's Perforce server process, set the replica P4TICKETS value to point to a writable ticket file in the replica's Perforce server root directory:

p4 configure set "Replica1#P4TICKETS=/p4/replica/.p4tickets"
Creating the replica
To configure and start a replica server, perform the following steps:
Boot-strap the replica server by checkpointing the master server, and restoring that checkpoint to the replica:

p4 admin checkpoint

(For a new setup, we can assume the checkpoint file is named checkpoint.1.)

Move the checkpoint to the replica server's P4ROOT directory and replay the checkpoint:

p4d -r /p4/replica -jr $P4ROOT/checkpoint.1
Copy the versioned files from the master server to the replica.
Versioned files include both text (in RCS format, ending with ,v) and binary files (directories of individual binary files, each directory ending with ,d). Ensure that you copy the text files in a manner that correctly translates line endings for the replica host's filesystem.

If your depots are specified using absolute paths on the master, use the same paths on the replica. (Or use relative paths in the Map: field for each depot, so that versioned files are stored relative to the server's root.)
To create a valid ticket file, use p4 login to connect to the master server and obtain a ticket on behalf of the replica server's service user. On the machine that will host the replica server, run:

p4 -u service -p master:11111 login

Then move the ticket to the location that holds the P4TICKETS file for the replica server's service user.
At this point, your replica server is configured to contact the master server and start replication. Specifically:
- A service user (service) in a group (service_group) with a long ticket timeout
- A valid ticket for the replica server's service user (from p4 login)
- A replicated copy of the master server's db.config, holding the following preconfigured settings applicable to any server with a P4NAME of Replica1:
  - A specified service user (named service), which is equivalent to specifying -u service on the command line
  - A target server of master:11111, which is equivalent to specifying -t master:11111 on the command line
  - Metadata and depot access settings of readonly, which is equivalent to specifying -M readonly -D readonly on the command line
  - A series of p4 pull commands configured to run when the replica server starts
Starting the replica
To name your server, set P4NAME or specify the -In option and start the replica as follows:
p4d -r /p4/replica -In Replica1 -p replica:22222 -d
When the replica starts, all of the master server's configuration information is read from the replica's copy of the db.config table (which you copied earlier). The replica then spawns three p4 pull threads: one to poll the master server for metadata, and two to poll the master server for versioned files.

The p4 info command displays information about replicas and service fields for untagged output as well as tagged output.
Testing the replica
Testing p4 pull
To confirm that the p4 pull commands (specified in Replica1's startup.n configurations) are running, issue the following command:
p4 -u super -p replica:22222 monitor show -a

18835 R service 00:04:46 pull -i 1
18836 R service 00:04:46 pull -u -i 1
18837 R service 00:04:46 pull -u -i 1
18926 R super   00:00:00 monitor show -a
If you need to stop replication for any reason, use the p4 monitor terminate command:

p4 -u super -p replica:22222 monitor terminate 18837

process '18837' marked for termination
To restart replication, either restart the Perforce server process, or manually restart the replication command:
p4 -u super -p replica:22222 pull -u -i 1
Even if the p4 pull and/or p4 pull -u processes are terminated, read-only commands will continue to work for replica users as long as the replica server's p4d is running.
Testing file replication
Create a new file under your workspace view:
echo "hello world" > myfile
Mark the file for add:
p4 -p master:11111 add myfile
And submit the file:
p4 -p master:11111 submit -d "testing replication"
Wait a few seconds for the pull commands on the replica to run, then check the replica for the replicated file:
p4 -p replica:22222 print //depot/myfile

//depot/myfile#1 - add change 1 (text)
hello world
If a file transfer is interrupted for any reason, and a versioned file is not present when requested by a user, the replica server silently retrieves the file from the master.
Replica servers in -M readonly -D readonly mode will retrieve versioned files from master servers even if started without a p4 pull -u command to replicate versioned files to the replica. Such servers act as "on-demand" replicas, as do servers running in -M readonly -D ondemand mode or with their lbr.replication configurable set to ondemand.

Administrators: be aware that creating an on-demand replica of this sort can still affect server performance or resource consumption, for example, if a user enters a command such as p4 print //..., which reads every file in the depot.
Verifying the replica
When you copied the versioned files from the master server to the
replica server, you relied on the operating system to transfer the
files. To determine whether data was corrupted in the process, run
p4 verify on the replica server:
p4 verify //...
Any errors that are present on the replica but not on the master
indicate corruption of the data in transit or while being written to
disk during the original copy operation. (Run
p4 verify on a
regular basis, because a failover server’s storage is just as vulnerable
to corruption as a production server.)
Using the replica
You can perform all normal operations against your master server (p4 -p master:11111 ...). To reduce the load on the master server, direct reporting (read-only) commands to the replica (p4 -p replica:22222 ...). Because the replica is running in -M readonly -D readonly mode, commands that read both metadata and depot file contents are available, and reporting commands (such as p4 jobs) work normally. However, commands that update the server's metadata or depot files are blocked, as the illustrative commands below show.
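For example, read-only reporting commands such as these can be pointed at the replica rather than the master (the depot paths are illustrative):

p4 -p replica:22222 changes -m 10 //depot/project/...
p4 -p replica:22222 filelog //depot/project/README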
Commands that update metadata
Some scenarios are relatively straightforward: consider a command such as p4 sync. A plain p4 sync fails, because whenever you sync your workspace, the Perforce Server must update its metadata (the "have" list, which is stored in the db.have table). Instead, use p4 sync -p to populate a workspace without updating the have list:

p4 -p replica:22222 sync -p //depot/project/...@1234
This operation succeeds because it does not update the server’s metadata.
Some commands affect metadata in more subtle ways. For example, many Perforce commands update the last-update time that is associated with a specification (for example, a user or client specification). Attempting to use such commands on replica servers produces errors unless you use the -o option. For example, p4 client (which updates the Update: and Access: fields of the client specification) fails:

p4 -p replica:22222 client replica_client
Replica does not support this command.

However, p4 client -o works:

p4 -p replica:22222 client -o replica_client
(client spec is output to STDOUT)
If a command is blocked due to an implicit attempt to write to the server's metadata, consider workarounds such as those described above. (Some commands, like p4 submit, always fail, because they attempt to write to the replica server's depot files; these commands are blocked by the -D readonly option.)
Using the Perforce Broker to redirect commands
You can use the Perforce Broker with a replica server to redirect read-only
commands to replica servers. This approach enables all your users to
connect to the same
protocol:host:port setting (the broker).
In this configuration, the broker is configured to transparently redirect key commands to whichever Perforce Server is appropriate to the task at hand.

For an example of such a configuration, see "Using P4Broker With Replica Servers" in the Perforce Knowledge Base, and the sketch below.
For more information about the Perforce Broker, see “The Perforce Broker”.
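As a hedged sketch (not a complete broker configuration; all values are illustrative), a p4broker configuration file might redirect the read-only p4 changes command to the replica while passing everything else through to the master:

target      = master:11111;
listen      = 1666;
directory   = /p4/broker;
logfile     = broker.log;
debug-level = 1;

command: ^changes$
{
    # Send this read-only command to the replica.
    action      = redirect;
    destination = replica:22222;
}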
Upgrading replica servers
It is best practice to upgrade any server instance replicating from a master server first. If replicas are chained together, start at the replica that is furthest downstream from the master, and work upstream towards the master server. Keep downstream replicas stopped until the server immediately upstream is upgraded.
There has been a significant change in release 2013.3 that affects how metadata is stored in db.* files; despite this change, the database schema and the format of the checkpoint and journal files remain unchanged between 2013.2 and 2013.3.

Consequently, in this one case (of upgrades between 2013.2 and 2013.3), it is sufficient to stop the replica until the master is upgraded, but the replica (and any replicas downstream of it) must be upgraded to at least 2013.2 before a 2013.3 master is restarted.
When upgrading between 2013.2 (or earlier) and 2013.3 (or later), it is recommended to wait for all archive transfers to end before shutting down the replica and commencing the upgrade. You must manually delete the rdb.lbr file in the replica server's root before restarting the replica.

For more information, see "Upgrading Replica Servers" in the Perforce Knowledge Base.
Configuring a forwarding replica
A forwarding replica offers a blend of the functionality of the Perforce Proxy with the improved performance of a replica. The following considerations are relevant:
The Perforce Proxy is easier to configure and maintain, but caches only file content; it holds no metadata. A forwarding replica caches both file content and metadata, and can therefore process many commands without requesting additional data from the master server. This behavior enables a forwarding replica to offload more tasks from the master server and provides improved performance. The trade-off is that a forwarding replica requires a higher level of machine provisioning and administrative considerations compared to a proxy.
A read-only replica rejects commands that update metadata; a forwarding replica does not reject such commands, but forwards them to the master server for processing, and then waits for the metadata update to be processed by the master server and returned to the forwarding replica. Although users connected to the forwarding replica cannot write to the replica’s metadata, they nevertheless receive a consistent view of the database.
If you are auditing server activity, each of your forwarding replica
servers must have its own
P4AUDIT log configured.
Configuring the master server
The following example assumes an environment with a regular server named master, and a forwarding replica server named fwd-replica on a host called forward.

- Start by configuring a read-only replica for warm standby; see Configuring a read-only replica for details. (Instead of Replica1, use the name fwd-replica.)
- On the master server, configure the forwarding replica as follows:

p4 server fwd-1667

The following form is displayed:

ServerID:    fwd-1667
Name:        fwd-replica
Type:        server
Services:    forwarding-replica
Address:     tcp:forward:1667
Description:
    Forwarding replica pointing to master:1666
Configuring the forwarding replica
On the replica machine, assign the replica server a server ID:

p4 serverid fwd-1667

When the replica server with the serverID of fwd-1667 (which was previously assigned the Name of fwd-replica) pulls its configuration from the master server, it will behave as a forwarding replica.

On the replica machine, restart the replica server:

p4 admin restart
Configuring a build farm server
Continuous integration and other similar development processes can impose a significant workload on your Perforce infrastructure. Automated build processes frequently access the Perforce server to monitor recent changes and retrieve updated source files; their client workspace definitions and associated have lists also occupy storage and memory on the server. With a build farm server, you can offload the workload of the automated build processes to a separate machine, and ensure that your main Perforce server’s resources are available to your users for their normal day-to-day tasks.
Build farm servers were implemented in Perforce server release 2012.1. With the implementation of edge servers in 2013.2, we now recommend that you use an edge server instead of a build farm server. As discussed in “Commit-edge Architecture”, edge servers offer all the functionality of build farm servers and yet offload more work from the main server and improve performance, with the additional flexibility of being able to run write commands as part of the build process.
A Perforce Server intended for use as a build farm must, by definition:
- Permit the creation and configuration of client workspaces
- Permit those workspaces to be synced
One issue with implementing a build farm rather than a read-only replica is that under Perforce, both of those operations involve writes to metadata: in order to use a client workspace in a build environment, the workspace must contain some information (even if nothing more than the client workspace root) specific to the build environment, and in order for a build tool to efficiently sync a client workspace, a build server must be able to keep some record of which files have already been synced.
To address these issues, build farm replicas host their own local copies of certain metadata: in addition to the Perforce commands supported in a read-only replica environment, build farm replicas support the p4 client and p4 sync commands when applied to workspaces that are bound to that replica.
If you are auditing server activity, each of your build farm replica
servers must have its own
P4AUDIT log configured.
Configuring the master server
The following example assumes an environment with a regular server named master, and a build farm replica server named buildfarm1 on a host called builder.

- Start by configuring a read-only replica for warm standby; see Configuring a read-only replica for details. (That is, create a read-only replica named buildfarm1.)
- On the master server, configure the master server as follows:

p4 server master-1666

The following form is displayed:

# A Perforce Server Specification.
#
#  ServerID:    The server identifier.
#  Type:        The server type: server/broker/proxy.
#  Name:        The P4NAME used by this server (optional).
#  Address:     The P4PORT used by this server (optional).
#  Description: A short description of the server (optional).
#  Services:    Services provided by this server, one of:
#          standard: standard Perforce server
#          replica: read-only replica server
#          broker: p4broker process
#          proxy: p4p caching proxy
#          commit-server: central server in a distributed installation
#          edge-server: node in a distributed installation
#          forwarding-replica: replica which forwards update commands
#          build-server: replica which supports build automation
#          P4AUTH: server which provides central authentication
#          P4CHANGE: server which provides central change numbers
#
# Use 'p4 help server' to see more about server ids and services.

ServerID:    master-1666
Name:        master-1666
Type:        server
Services:    standard
Address:     tcp:master:1666
Description:
    Master server - regular development work
Create the master server’s
server.idfile. On the master server, run the following command:
p4 -p master:1666 serverid master-1666
Restart the master server.
On startup, the master server reads its server ID of
server.idfile. It takes on the
masterand uses the configurables that apply to a
Configuring the build farm replica
On the master server, configure the build farm replica server as follows:

p4 server builder-1667

The following form is displayed:

ServerID:    builder-1667
Name:        builder-1667
Type:        server
Services:    build-server
Address:     tcp:builder:1667
Description:
    Build farm - bind workspaces to builder-1667
    and use a port of tcp:builder:1667

Create the build farm replica server's server.id file. On the replica server (not the master server), run the following command:

p4 -p builder:1667 serverid builder-1667
Restart the replica server.
On startup, the replica build farm server reads its server ID of builder-1667 from its server.id file. Because the server registry is automatically replicated from the master server to all replica servers, the restarted build farm server takes on the P4NAME of buildfarm1 and uses the configurables that apply to a P4NAME setting of buildfarm1. In this example, the build farm server also acknowledges the build-server setting in the Services: field of its p4 server form.
Binding workspaces to the build farm replica
At this point, there should be two servers in operation: a master server named master, with a server ID of master-1666, and a build-server replica named buildfarm1, with a server ID of builder-1667.
Bind client workspaces to the build farm server.
Because this server is configured to offer the build-server service, it maintains its own local copy of the list of client workspaces (db.view.rp) and their respective have lists (db.have.rp).

On the replica server, create a client workspace with p4 client:

p4 -c build0001 -p builder:1667 client build0001

When creating a new workspace on the build farm replica, you must ensure that your current client workspace has a ServerID that matches that required by builder:1667. Because workspace build0001 does not yet exist, you must manually specify build0001 as the current client workspace with the -c clientname option and simultaneously supply build0001 as the argument to the p4 client command.

When the p4 client form appears, set the ServerID: field to builder-1667, as in the sketch below.
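A sketch of the resulting workspace form (the Root: and View: values are illustrative):

Client:   build0001
Root:     /home/builder/build0001
ServerID: builder-1667
View:
    //depot/... //build0001/...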
Sync the bound workspace
Because the client workspace build0001 is bound to builder-1667, users on the master server are unaffected, but users on the build farm server are not only able to edit its specification, they are also able to sync it:

p4 -c build0001 -p builder:1667 sync

The replica's have list is updated, and does not propagate back to the master. Users of the master server are unaffected.
In a real-world scenario, your organization’s build engineers would
re-configure your site’s build system to use the new server by resetting
P4PORT to point directly at the build farm server. Even in an
environment in which continuous integration and automated build tools
create a client workspace (and sync it) for every change submitted to
the master server, performance on the master would be unaffected.
In a real-world scenario, performance on the master would likely improve for all users, as the number of read and write operations on the master server’s database would be substantially reduced.
If there are database tables that you know your build farm replica does not require, consider using the -T filter option to p4 pull. Also consider specifying the ClientDataFilter:, RevisionDataFilter:, and ArchiveDataFilter: fields of the replica's p4 server form.
If your automation load should exceed the capacity of a single machine, you can configure additional build farm servers. There is no limit to the number of build farm servers that you may operate in your installation.
Filtering metadata during replication
As part of an HA/DR solution, one typically wants to ensure that all the metadata and all the versioned files are replicated. In most other use cases, particularly build farms and/or forwarding replicas, this leads to a great deal of redundant data being transferred.
It is often advantageous to configure your replica servers to filter in (or out) data on client workspaces and file revisions. For example, developers working on one project at a remote site do not typically need to know the state of every client workspace at other offices where other projects are being developed, and build farms don’t require access to the endless stream of changes to office documents and spreadsheets associated with a typical large enterprise.
The simplest way to filter metadata is by using the -T tableexcludelist option with the p4 pull command. If you know, for example, that a build farm has no need to refer to any of your users' have lists or the state of their client workspaces, you can exclude db.have and db.working entirely with p4 pull -T db.have,db.working, as in the sketch below.
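For example, a sketch of a metadata pull thread that excludes those two tables (the server name is illustrative):

p4 configure set "buildfarm1#startup.1=pull -i 1 -T db.have,db.working"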
Excluding entire database tables is a coarse-grained method of managing the amount of data passed between servers, requires some knowledge of which tables are most likely to be referred to during Perforce command operations, and furthermore, offers no means of control over which versioned files are replicated.
You can gain much more fine-grained control over what data is replicated by using the ClientDataFilter:, RevisionDataFilter:, and ArchiveDataFilter: fields of the p4 server form. These options enable you to replicate (or exclude from replication) those portions of your server's metadata and versioned files that are of interest at the replica site.
Example 1. Filtering out client workspace data and files.
If workspaces for users in each of three sites are named with the convention site-ws-username, a replica intended to act as partial backup for users at site1 could be configured as follows:

ServerID:    site1-1668
Name:        site1-1668
Type:        server
Services:    replica
Address:     tcp:site1bak:1668
Description:
    Replicate all client workspace data, except the states of
    workspaces of users at sites 2 and 3.
    Automatically replicate .c files in anticipation of user
    requests. Do not replicate .mp4 video files, which tend to
    be large and impose high bandwidth costs.
ClientDataFilter:
    -//site2-ws-*
    -//site3-ws-*
RevisionDataFilter:
ArchiveDataFilter:
    //....c
    -//....mp4
When you start the replica, your p4 pull metadata thread must specify the ServerID associated with the server spec that holds the filters:

p4 configure set "site1-1668#startup.1=pull -i 30 -P site1-1668"
In this configuration, only those portions of db.have that are associated with site1 are replicated; all metadata concerning workspaces associated with site2 and site3 is ignored.

All file-related metadata is replicated. All files in the depot are replicated, except for those ending in .mp4. Files ending in .c are transferred automatically to the replica when submitted.
To further illustrate the concept, consider a build farm replica scenario. The ongoing work of the organization (whether it be code, business documents, or the latest video commercial) can be stored anywhere in the depot, but this build farm is dedicated to building releasable products, and has no need to have the rest of the organization’s output at its immediate disposal:
Example 2. Replicating metadata and file contents for a subset of a depot.
Releasable code is placed into //depot/releases/... and automated builds are based on these changes. Changes to other portions of the depot, as well as the states of individual workers' client workspaces, are filtered out:

ServerID:    builder-1669
Name:        builder-1669
Type:        server
Services:    build-server
Address:     tcp:built:1669
Description:
    Exclude all client workspace data.
    Replicate only revisions in release branches.
ClientDataFilter:
    -//...
RevisionDataFilter:
    -//...
    //depot/releases/...
ArchiveDataFilter:
    -//...
    //depot/releases/...
To seed the replica you can use a command like the following to create a filtered checkpoint:
p4d -r /p4/master -P builder-1669 -jd myCheckpoint
The filters specified for builder-1669 are used in creating the checkpoint. You can then continue to update the replica using the p4 pull command.
When you start the replica, your p4 pull metadata thread must specify the ServerID associated with the server spec that holds the filters:

p4 configure set "builder-1669#startup.1=pull -i 30 -P builder-1669"
The p4 pull thread that pulls metadata for replication filters out all client workspace data (including the have lists) of all users.

The p4 pull -u thread(s) ignore all changes on the master except those that affect revisions in the //depot/releases/... branch, which are the only ones of interest to a build farm. The only metadata that is available is that which concerns released code. All released code is automatically transferred to the build farm before any requests are made, so that when the build farm performs a p4 sync, the sync is performed locally.
Verifying replica integrity
Tools to ensure data integrity in multi-server installations are accessed through the p4 journaldbchecksums command, and their behavior is controlled by three configurables: rpl.checksum.auto, rpl.checksum.change, and rpl.checksum.table.
When you run p4 journaldbchecksums against a specific database table (or the set of tables associated with one of the levels predefined by the rpl.checksum.auto configurable), the upstream server writes a journal note containing table checksum information. Downstream replicas, upon receiving this journal note, then proceed to verify these checksums and record their results in the structured log for integrity-related events.
These checks are also performed whenever the journal is rotated. In addition, newly defined triggers allow you to take some custom action when journals are rotated. For more information, see the section "Triggering on journal rotation" in Helix Versioning Engine Administrator Guide: Fundamentals.
Administrators who have one or more replica servers deployed should
enable structured logging for integrity events, set the
rpl.checksum.* configurables for their replica
servers, and regularly monitor the logs for integrity events.
Structured server logging must be enabled on every server, with at least
one log recording events of type
integrity, for example:
p4 configure set serverlog.file.8=integrity.csv
After you have enabled structured server logging, set the rpl.checksum.* configurables to the desired levels of integrity checking. Best practice for most sites is a balance between performance and log size:

p4 configure set rpl.checksum.auto=1 (or 2 for additional verification that is unlikely to vary between an upstream server and its replica.)

p4 configure set rpl.checksum.change=2 (this setting checks the integrity of every changelist, but only writes to the log if there is an error.)

p4 configure set rpl.checksum.table=1 (this setting instructs replicas to verify table integrity on scan or unload operations, but only writes to the log if there is an error.)
Valid settings for rpl.checksum.auto:

| rpl.checksum.auto | Database tables checked with every journal rotation |
| --- | --- |
| 0 | No checksums are performed. |
| 1 | Verify only the most important system and revision tables. |
| 2 | Verify all database tables from level 1, plus several additional tables. |
| 3 | Verify all metadata, including metadata that is likely to differ, especially when comparing an upstream server with a build-farm or edge-server replica. |
Valid settings for rpl.checksum.change:

| rpl.checksum.change | Verification performed with each changelist |
| --- | --- |
| 0 | Perform no verification. |
| 1 | Write a journal note when a p4 submit command completes. |
| 2 | Replica verifies changelist summary, and writes to the integrity log only if the changelist does not match. |
| 3 | Replica verifies changelist summary, and writes to the integrity log even when the changelist does match. |
Valid settings for rpl.checksum.table:

| rpl.checksum.table | Level of table verification performed |
| --- | --- |
| 0 | Table-level checksumming only. |
| 1 | When a table is unloaded or scanned, journal notes are written. These notes are processed by the replica and are logged to the integrity log only if there is a mismatch. |
| 2 | When a table is unloaded or scanned, journal notes are written, and the results of journal note processing are logged even if the results match. |
For more information, see
p4 help journaldbchecksums.
Warnings, notes, and limitations
The following warnings, notes, and limitations apply to all configurations unless otherwise noted.
On master servers, do not reconfigure replica settings such as startup.n, P4TARGET, and serviceUser while the replica is running.
- Be careful not to inadvertently write to the replica's database. This might happen by using an -r option without specifying the full path (and mistakenly specifying the current path), by removing db files in P4ROOT, and so on. For example, when using the p4d -r . -jc command, make sure you are not currently in the root directory of the replica or standby in which p4 journalcopy is writing journal files.
- Large numbers of Perforce password (P4PASSWD) invalid or unset errors in the replica log indicate that the service user has not been logged in or that the P4TICKETS file is not writable. In the case of a read-only directory or P4TICKETS file, p4 login appears to succeed, but p4 login -s generates the "invalid or unset" error. Ensure that the P4TICKETS file exists and is writable by the replica server.
- Client workspaces on the master and replica servers cannot overlap. Users must be certain that their P4PORT, P4CLIENT, and other settings are configured to ensure that files from the replica server are not synced to client workspaces used with the master server, and vice versa.
- Replica servers maintain a separate table of users for each replica; by default, the p4 users command shows only users who have used that particular replica server. (To see the master server's list of users, use p4 users -c.) The advantage of having a separate user table (stored on the replica in db.user.rp) is that after having logged in for the first time, users can continue to use the replica without having to repeatedly contact the master server.
- All server IDs must be unique. The examples in the section Configuring a build farm server illustrate the use of manually-assigned names that are easy to remember, but in very large environments, there may be more servers in a build farm than is practical to administer or remember. Use the command p4 server -g to create a new server specification with a numeric Server ID. Such a Server ID is guaranteed to be unique. Whether manually-named or automatically-generated, it is the responsibility of the system administrator to ensure that the Server ID associated with a server's p4 server form corresponds exactly with the server.id file created (and/or read) by the p4 serverid command.
- Users of P4V and forwarding replicas are urged to upgrade to P4V 2012.1 or higher. Perforce applications older than 2012.1 that attempt to use a forwarding replica can, under certain circumstances, require the user to log in twice to obtain two tickets: one for the first read (from the forwarding replica), and a separate one for the first write attempt (forwarded to the master). This confusing behavior is resolved if P4V 2012.1 or higher is used.
- Although replicas can be chained together as of Release 2013.1 (that is, a replica's P4TARGET can be another replica, rather than the central server), it is the administrator's responsibility to ensure that no loops are inadvertently created in this process. Certain multi-level replication scenarios are permissible, but pointless; for example, a forwarding replica of a read-only replica offers no advantage because the read-only replica will merely reject all writes forwarded to it. Please contact Perforce technical support for guidance if you are considering a multi-level replica installation.
- The rpl.compress configurable controls whether compression is used on the master-replica connection(s). This configurable defaults to 0. Enabling compression can provide notable performance improvements, particularly when the master and replica servers are separated by significant geographic distances. Enable compression with:

p4 configure set fwd-replica#rpl.compress=1