Perforce 2010.2: System Administrator's Guide



Chapter 10
What is Replication?
Replication is the duplication of server data from one Perforce Server to another Perforce Server, ideally in real time. Uses of replication include the following:
A replica server can function as an up-to-date warm standby system, to be used if the master server fails. Such a replica server requires that both server metadata and versioned files are replicated.
Long-running queries and reports, builds, and checkpoints can be run against a replica server, reducing lock contention. For checkpoints and some reporting tasks, only metadata needs to be replicated. For reporting and builds, replica servers will need access to both metadata and versioned files.
Perforce server version 2010.1 introduced support for the replication of metadata, but did not support the automated replication of versioned file data. Starting with the 2010.2 release, the Perforce Server now supports several new commands to simplify configuration and replication between Perforce servers of both metadata and versioned files.
When combined with a centralized authorization server (see Centralized authorization server), the Perforce Broker (see Chapter 11, The Perforce Broker) can be configured to redirect commands to read-only replica servers, balancing load efficiently across an arbitrary number of replica servers.
Replication is unidirectional, and replica servers are intended for read-only purposes. Bidirectional replication is not supported, because any changes made to a replica server can be overwritten by changes made to the master Perforce server. If you require read/write access to a remote server, use the Perforce Proxy. See Perforce Proxy for details.
System Requirements
To use p4 pull, the master and replica servers must be revision 2010.2 or higher.
To use p4 replicate, the master and replica servers must be revision 2009.2 or higher.
p4 replicate and p4 pull (when replicating metadata) do not read compressed journals. Therefore, the master server must not compress rotated journals until the replica server has fetched all journal records from older journals.
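For example, you can rotate the master's journal without compressing it until replicas have caught up; a sketch, with the server root path assumed for illustration:

```shell
# Rotate the live journal into a numbered journal file without checkpointing.
# Avoid compressed rotation (the -z flag) until all replica servers have
# fetched the records they need from the older journals.
p4d -r /p4/master -jj
```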
On UNIX, the time zone setting is controlled by the TZ environment variable at the time the replica server is started.
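Because TZ is read only when the replica starts, set it in the environment of the process that launches p4d; a sketch, with the root, name, and port assumed from the examples later in this chapter:

```shell
# Fix the replica's time zone before starting p4d; TZ is read at startup.
export TZ=UTC
date +%Z    # shows the zone now in effect for processes started from this shell

# Hypothetical replica startup inheriting the TZ setting:
# p4d -r /p4/replica -In Replica1 -p replica:22222 -d
```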
New commands and concepts
Release 2010.2 of Perforce introduces many new concepts, features, and commands intended for distributed and replicated environments. Among these are:
p4 configure
Because p4 configure stores its data on the master server, all replica servers automatically pick up any changes you make.
P4NAME
p4d -In name
When you use p4 configure on your master server, you can specify different sets of configurables for each named server. Each named server, upon startup, refers to its own set of configurables, and ignores configurables set for other servers.
p4d -u svcuser
A new type of user intended for authentication of server-to-server communications. Service users have extremely limited access to the depot and do not consume Perforce licenses.
To make logs easier to read, create one service user on your master server for each replica or proxy in your network of Perforce Servers.
p4d -M readonly
db.replication
Replica servers can be configured to automatically reject user commands that attempt to modify metadata (db.* files).
In -M readonly mode, the Perforce Server denies any command that attempts to write to server metadata. In this mode, a command such as p4 sync (which updates the server's have list) is rejected, but p4 sync -p (which populates a client workspace without updating the server's have list) is accepted.
p4d -D readonly
p4d -D none
lbr.replication
Replica servers can be configured to automatically reject user commands that attempt to modify archived depot files (the "library").
In -D readonly mode, the Perforce Server accepts commands that read depot files, but denies commands that write to them. In this mode, p4 describe can display the diffs associated with a changelist, but p4 submit is rejected.
In -D none mode, the Perforce Server denies any command that accesses the versioned files that make up the depot. In this mode, a command such as p4 describe changenum is rejected because the diffs displayed with a changelist require access to the versioned files, but p4 describe -s changenum (which describes a changelist without referring to the depot files in order to generate a set of diffs) is accepted.
As with the Perforce Proxy, you can use P4TARGET to specify the master server to which a replica server points when retrieving its data.
You can set P4TARGET explicitly, or you can use p4 configure to set a P4TARGET for each named replica server.
A replica server with P4TARGET set must have both the -M and -D flags, or their equivalent db.replication and lbr.replication configurables, correctly specified.
Use the startup.n (where n is an integer) configurable to automatically spawn multiple p4 pull processes on startup.
The p4 pull command
Perforce's p4 pull command provides the most general solution for replication. You can use p4 pull to configure a replica server that:
replicates versioned files (the ,v files that contain the deltas that are produced when new versions are submitted) unidirectionally from a master server.
replicates server metadata (the information contained in the db.* files) unidirectionally from a master server.
uses the startup.n configurable to automatically spawn as many p4 pull processes as required. A common configuration for a warm standby server is one in which a single p4 pull process is spawned to replicate the master server's metadata, and multiple p4 pull processes run in parallel, continually updating the replica's copy of the master server's versioned files.
Although you can run p4 pull from the command line for testing and debugging purposes, it's most useful when controlled by the startup.n configurables, and in conjunction with named servers, service users, and centrally-managed configurations.
The p4 replicate command
Metadata replication with p4 replicate works by using the same data as Perforce's backup and restore features. The p4 replicate command maintains transactional integrity while reading transactions out of an originating server and copying them into a replica server's database.
Unlike p4 pull, the p4 replicate command is capable of replicating only server metadata. While the p4 pull command, when applied to metadata, replicates all server metadata, p4 replicate can filter the master server's metadata on a table-by-table basis with the -T flag. If you need p4 replicate's filtering capabilities but also require access to the versioned files, you must provide your own independent mechanism (for example, p4 pull -u, or rsync) by which the replica can access the versioned files.
Replica servers are supplied with metadata from a master server by means of the p4 replicate command. The p4 replicate command polls the master server for new journal entries and outputs them on standard output or pipes the journal entries to a subprocess (such as p4d -jrc) specified on the p4 replicate command line. When p4 replicate is running, start a p4d for the replica Perforce Server that points to the database into which the journal records are being restored. Users can connect to the replica server the same way they connect to the master server.
To start a replica server, you must first run p4 replicate:
p4 replicate [replicate-flags] command [command-flags]
and then start the replica server:
p4d -u serviceuser -M readonly -D none -r replicaroot -p replicaport
In most cases, the command supplied to p4 replicate is a variation of p4d -jr, which reads the journal records from the master server into a set of database files used by the replica server.
Always specify the replica server's root by using the -r command line flag. The command line flags override any P4ROOT variable setting, reducing the risk of inadvertently starting a replica server that points to your master server's db.* files.
For further protection, run your replica server as a different userid (as defined by the operating system, not by Perforce) than the one used to run your master server. The userid that owns the replica server's process must not have write privileges to any of the directories managed by the master server.
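On UNIX, this separation might be arranged as follows; a sketch, in which the account name and paths are assumptions:

```shell
# Create a dedicated OS account for the replica and give it the replica root
useradd -m p4replica
chown -R p4replica /p4/replica

# Start the replica under that account; it has no write access to /p4/master
su p4replica -c "p4d -r /p4/replica -p replica:22222 -d"
```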
p4 replicate flags
A typical invocation of p4 replicate looks like this:
p4 replicate -s statefile -i interval [-k -x -R] [-J prefix] [-T tables] command
The following table describes commonly-used flags for the replicate command. For a complete list of flags, see the Perforce Command Reference.
-i interval
Specify the polling interval, in seconds. The default is two seconds. To disable polling (that is, to check once for updated journal entries and then exit), specify an interval of 0.
-J prefix
Specify a prefix for the master server's journal file names, if they use a prefix other than the default.
-k
Keep the pipe to the specified command subprocess open between polling intervals.
By default, p4 replicate shuts the pipe down between polling intervals. If you are using -k, you must use the -jrc option to check consistency. If you are not using -jrc, do not use -k to keep the connection open. (For details, see p4 replicate journal-processing commands.)
-s statefile
Specify the name of the state file. The state file is a one-line text file that determines where subsequent invocations of p4 replicate start reading data. The format is journalno/byteoffset, where journalno is the number of the most recent journal, and byteoffset is the number of bytes to skip before reading. If no byte offset is specified in the file, replication starts at the beginning of the specified journal file. If no state file exists, replication starts with the first available journal, and a state file is created.
-x
Configures p4 replicate to exit when journal rotation is detected. This option is typically used in offline checkpointing configurations. By default, p4 replicate continues to poll the master server until it is stopped by the user.
command
A command to process the journal records; see p4 replicate journal-processing commands.
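The state file's journalno/byteoffset format can be seeded and inspected with ordinary shell tools; a sketch, with the journal number and byte offset invented for illustration:

```shell
# Seed a state file: start reading journal 5 at byte offset 1024
# (illustrative values; p4 replicate maintains this file thereafter)
echo "5/1024" > state

# Read the two fields back out of the one-line file
IFS=/ read journalno byteoffset < state
echo "journal=$journalno offset=$byteoffset"
```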
p4 replicate journal-processing commands
A typical command supplied to p4 replicate looks like this:
p4d -r replicaroot -f -b 1 -jrc -
This invocation of p4d makes use of three replication-specific flags to p4d:
The -jrc flag instructs p4d to check for consistency when reading journal records. Batches of journal records are read in increasing size until p4d processes a marker which indicates that all transactions are complete. After the marker is processed, the affected database tables are locked, the changes are applied, and the tables are unlocked. Because the server always remains in a state of transactional integrity, it is possible for other users to use the replica server while the journal transactions are being applied from the master server.
The -b 1 flag refers to bunching journal records, sorting them, and removing duplicates before updating the database. The default is 5000 records per update, but in the case of replication, serial processing of journal records (a bunch size of 1) is required; hence, each line is read individually.
The -f flag supplied to p4d in conjunction with the -jr flag forces p4d to ignore failures to delete records. This flag is required for certain replication configurations because some tables on the replica server (depending on use cases) will differ from those on the master server.
Server names
To set a Perforce server name, set the P4NAME environment variable or specify the -In command line flag to p4d when you start the server. Assigning names to servers is essential for configuring replication. Assigning server names permits most of the server configuration data to be stored in Perforce itself, as an alternative to using startup flags or environment values to specify configuration details. In replicated environments, named servers are a necessity, because p4 configure settings are replicated from the master server along with other Perforce metadata.
For example, if you start your master server as follows:
p4d -r /p4/master -In master -p central:11111
And your replica server as follows:
p4d -r /p4/replica -In Replica1 -p replica:22222
You can use p4 configure on the master to control settings on both the master and the replica, because configuration settings are part of a Perforce server's metadata and are replicated accordingly.
For example, if you issue the following commands on the master server:
p4 -p master:11111 configure set master#monitor=2
p4 -p master:11111 configure set Replica1#monitor=1
After the configuration data has been replicated, the two servers have different server monitoring levels. That is, if you run p4 monitor show against master:11111, you see both active and idle processes, because for the server named master, the monitor configurable is set to 2. If you run p4 monitor show against replica:22222, only active processes are shown, because for the Replica1 server, monitor is set to 1.
Service users
A standard user is a traditional user record used for human or automated system access. A service user is used for server-to-server authentication, as part of the replication process. Creating a service user for each master, replica, or proxy server greatly simplifies the task of interpreting your server logs. Service users can also help you improve security, by requiring your replica servers to have valid login tickets before they can communicate with the master server. Service users do not consume Perforce licenses.
A service user can run only the following commands:
To create a service user, run the command:
p4 user -f service1
The standard user form is displayed. Enter a new line to set the new user's Type: to be service; for example:
User:      service1
Email:     services@example.com
FullName:  Service User for Replica Server #1
Type:      service
By default, the output of p4 users omits service users. To include service users, run p4 users -a.
Tickets and timeouts for service users
A newly-created service user that is not a member of any groups is subject to the default ticket timeout of 12 hours. To avoid issues that arise when a service user's ticket ceases to be valid, create a group for your service users that features an extremely long timeout. On the master server, issue the following command:
p4 group service_users
Add service1 to the list of Users: in the group, and set the Timeout: to a large value (in this example, 2,000,000,000 seconds, approximately 63 years):
Group:            service_users
Timeout:          2000000000
PasswordTimeout:  unset
Subgroups:
Owners:
Users:
        service1
Permissions for service users
On the master server, use p4 protect to grant the service user super permission. Service users are tightly restricted in the commands they can run, so granting them super permission is safe.
Server flags to control metadata and depot access
When you start a replica that points to a master server with P4TARGET, you must specify both the -M (metadata access) and -D (depot access) flags, or set the configurables db.replication (access to metadata) and lbr.replication (access to the depot's library of versioned files), to control which Perforce client commands are permitted or rejected by the replica server.
P4TARGET
Set P4TARGET to the fully-qualified domain name or IP address of the master server from which a replica server is to retrieve its data. You can set P4TARGET explicitly, specify it on the p4d command line with the -t host:port flag, or use p4 configure to set a P4TARGET for each named replica server.
If you specify a target, p4d examines its configuration for startup.n commands: if no valid p4 pull commands are found, p4d runs and waits for the user to manually start a p4 pull command. If you omit a target, p4d assumes the existence of an external metadata replication source such as p4 replicate.
Server startup commands
You can configure a Perforce Server to automatically run commands at startup using the p4 configure as follows:
p4 configure set "servername#startup.n=command"
Where n represents the order in which the commands are executed: the command specified for startup.1 runs first, then the command for startup.2, and so on. The only valid startup command is p4 pull.
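For example, to have a replica spawn one metadata pull thread and one archive pull thread at startup; a sketch, in which the server name Replica1 is an assumption:

```shell
p4 configure set "Replica1#startup.1=pull -i 1"
p4 configure set "Replica1#startup.2=pull -u -i 1"
```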
Uses for replication
Here are a few situations in which replica servers can be useful.
For a failover or warm standby server, replicate both server metadata and versioned files by running two p4 pull commands in parallel. Each replica server requires one or more p4 pull -u instances to replicate versioned files, and a single p4 pull to replicate the metadata.
If you are using p4 pull for both metadata and p4 pull -u for versioned files, start your replica server with p4d -t host:port -Mreadonly -Dreadonly. Commands that require read-only access to server metadata and to depot files will succeed. Commands that attempt to write to server metadata and/or depot files will fail gracefully.
For a detailed example of this configuration, see Configuring a Warm Standby Server.
p4 replicate only replicates metadata. If you are using p4 replicate and also require the replication of (or access to) archive files, the archive files need to be made available to the replica Perforce Server by independent means.
Use a network-mounted file system, or utilities such as rsync or SAN replication, and start your replica server with p4d -Mreadonly -Dreadonly; or use p4 pull -u to obtain the depot files, and start your replica with p4d -t host:port -Mreadonly -Dreadonly.
To configure an offline checkpointing or reporting server, only the master server's metadata needs to be replicated; versioned files do not need to be replicated.
If you are using p4 replicate for metadata-only replication, start the replica server with p4d -Mreadonly -Dnone. Omit the target. See Configuring a Reporting Replica.
If you are using p4 pull for metadata-only replication (that is, if you have no p4 pull -u commands configured to replicate depot contents), start the server with p4d -t host:port -Mreadonly -Dnone. You must specify a target.
In either scenario, commands that require read-only access to server metadata will succeed and commands that attempt to write to server metadata or attempt to access depot files will be blocked by the replica server.
Configuring a Reporting Replica
Offloading reporting and checkpointing tasks
This example illustrates the use of p4 replicate to replicate metadata between a master server and a replica server. The replica server will hold a replica of the master server's metadata, but none of the versioned files. This configuration is useful for offloading server-intensive report generation, and for performing offline checkpoints; more generalized scenarios require replicas based on the p4 pull command.
1.
2.
You could also use p4 -p master:1666 admin checkpoint.
3.
When the checkpoint is complete, copy the checkpoint file (checkpoint.nnn) to the replica server's server root and note the checkpoint sequence number (nnn).
4.
This saves time by creating an initial set of db.* files for the replica server to use; from this point forward, transfer of data from the master will be performed by p4 replicate.
5.
Install a license file in the P4ROOT directory on the replica server. Contact Perforce Technical Support to obtain a duplicate of your master server license file.
6.
p4 replicate keeps track of its state (the most recent checkpoint sequence number read, and a byte offset for subsequent runs) in a statefile.
The first time you run p4 replicate, you must set the checkpoint sequence number in the statefile as follows:
replica2$ echo nnn > state
7.
The specified user for the p4 replicate command requires super access on the master server (in this case, master:1666).
The default polling interval is 2 seconds. (In this example, we have used -i 10 to specify a polling interval of 10 seconds.)
8.
The replica server's metadata is now being updated by p4 replicate; you can now start the replica server itself:
Users should now be able to connect to the replica server on replica2:6661 and run basic reporting commands (p4 jobs, p4 filelog, and so on) against it.
The -M readonly flag ensures that commands that read metadata are accepted, but blocks commands that write to metadata. Because this server has no versioned files, commands that access depot files are blocked by -D none.
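Assembled from the steps above, a minimal reporting-replica setup might look like the following sketch; the replica's server root and port are assumptions taken from this example, and nnn is the checkpoint sequence number:

```shell
# On the master (master:1666): take a checkpoint
p4d -r /p4/master -jc

# On the replica host: seed the replica's database and record the state
p4d -r /p4/replica2 -jr checkpoint.nnn
echo nnn > /p4/replica2/state

# Start replication as a superuser, piping journal records into the
# replica's database (-k keeps the pipe open; -jrc checks consistency)
p4 -u super -p master:1666 replicate -s /p4/replica2/state -i 10 -k \
    p4d -r /p4/replica2 -f -b 1 -jrc -

# Start the metadata-only replica server itself
p4d -r /p4/replica2 -p replica2:6661 -M readonly -D none -d
```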
Using a metadata-only replica server
The replica2 server can be stopped, and checkpoints can be performed against it, just as they would be with the master server. The advantage is that the checkpointing process can now take place without any downtime on the master server.
Users connect to replica servers by setting P4PORT as they would with any other server.
Commands that require access to versioned file data (p4 sync, for example) fail, because this configuration replicates only the metadata but not the versioned files in the depot.
To learn more about p4 replicate, see "Perforce Metadata Replication" in the Perforce Knowledge Base:
http://kb.perforce.com/article/1099
 
Configuring a Warm Standby Server
To support warm standby servers, a replica server requires an up-to-date copy of both the master server's metadata and its versioned files.
Disaster recovery and failover strategies are complex and site-specific. Perforce Consultants are available to assist organizations in the planning and deployment of disaster recovery and failover strategies. For details, see:
The following extended example configures a replica as a warm standby server for an existing Perforce Server with some data in it. For this example, assume that:
Your master server is named Master and is running on a host called master, using port 11111, and its server root directory is /p4/master.
Your replica server will be named Replica1 and will be configured to run on a host machine named replica, using port 22222, and its root directory will be /p4/replica.
You cannot define P4NAME using the p4 configure command, because a server must know its own name to use values set by p4 configure.
You cannot define P4ROOT using the p4 configure command, to avoid the risk of specifying an incorrect server root.
Master Server Setup
To define the behavior of the replica, you enter configuration information into the master server's db.config file using the p4 configure set command. Configure the master server first; its settings will be replicated to the replica later.
To configure the master, log in to Perforce as a superuser and perform the following steps:
1.
To set the server named Replica1 to use master:11111 as the master server to pull metadata and versioned files, issue the command:
p4 -p master:11111 configure set Replica1#P4TARGET=master:11111
Perforce displays the following response:
For server 'Replica1', configuration variable 'P4TARGET' set to 'master:11111'
To avoid confusion when working with multiple servers that appear identical in many ways, use the -u flag to specify the superuser account and -p to explicitly specify the master Perforce server's host and port.
These flags have been omitted from this example for simplicity. In a production environment, specify the host and port on the command line.
2.
Set the Replica1 server to save the replica server's log file using a specified file name. Keeping the log names unique prevents problems when collecting data for debugging or performance tracking purposes.
p4 configure set Replica1#P4LOG=replica1Log.txt
3.
Set the Replica1 server configurable to 1, which is equivalent to specifying the "-v server=1" server startup flag:
p4 configure set Replica1#server=1
4.
p4 configure set Replica1#monitor=1
5.
To handle the Replica1 replication process, configure the following three startup.n commands. (When passing multiple items separated by spaces, you must wrap the entire set value in double quotes.)
The first startup process sets p4 pull to poll once every second for journal data only:
p4 configure set "Replica1#startup.1=pull -i 1"
The next two settings configure the server to spawn two p4 pull threads at startup, each of which polls once per second for archive data transfers.
p4 configure set "Replica1#startup.2=pull -u -i 1"
p4 configure set "Replica1#startup.3=pull -u -i 1"
Each p4 pull -u command creates a separate thread for replicating archive data. Heavily-loaded servers might require more threads, if archive data transfer begins to lag behind the replication of metadata. To determine if you need more p4 pull -u processes, read the contents of the rdb.lbr table, which records the archive data transferred from the master Perforce server to the replica. To display the contents of this table when a replica is running, run:
p4 pull -l
on the replica server.
If rdb.lbr indicates a large number of pending transfers (that is, many rows of records), consider adding more "p4 pull -u" startup.n commands to address the problem.
6.
Set the db.replication (metadata access) and lbr.replication (depot file access) configurables to readonly:
p4 configure set Replica1#db.replication=readonly
p4 configure set Replica1#lbr.replication=readonly
Because this replica server is intended as a warm standby (failover) server, both the master server's metadata and its library of versioned depot files are being replicated. When the replica is running, users of the replica will be able to run commands that access both metadata and the server's library of depot files.
7.
p4 user -f service
The user specification for the service user opens in your default editor. Add the following line to the user specification:
Type: service
Save the user specification and exit your default editor.
By default, the service user is granted the same 12-hour login timeout as standard users. To prevent the service user's ticket from timing out, create a group with a long timeout on the master server. In this example, the Timeout: field is set to two billion seconds, approximately 63 years:
p4 group service_group
Users: service
Timeout: 2000000000
For more details, see Tickets and timeouts for service users.
8.
Set the service user protections to super in your protections table. (See Permissions for service users.) Set the security level of all your Perforce Servers to at least 1 (ideally, to 3), and set a strong password for the service user.
p4 configure set security=3
p4 passwd
9.
Set the Replica1 configurable for the serviceUser to service.
p4 configure set Replica1#serviceUser=service
This step configures the replica server to authenticate itself to the master server as the service user; this is equivalent to starting p4d with the -u service flag.
10.
If the user running the replica server does not have a home directory, or if the directory where the default .p4tickets file is typically stored is not writable by the replica's Perforce server process, set the replica P4TICKETS value to point to a writable ticket file in the replica's Perforce server root directory:
p4 configure set "Replica1#P4TICKETS=/p4/replica/.p4tickets"
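If archive replication later lags behind metadata replication (see step 5), you can check the backlog and, if needed, add another pull thread; a sketch, assuming the server names and ports used in this example:

```shell
# Show pending archive-file transfers recorded in the replica's rdb.lbr table
p4 -u super -p replica:22222 pull -l

# If many rows are pending, configure an additional archive pull thread
p4 -u super -p master:11111 configure set "Replica1#startup.4=pull -u -i 1"
```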
Creating the replica
To configure and start a replica server, perform the following steps:
1.
p4 admin checkpoint
(For a new setup, we can assume the checkpoint file is named checkpoint.1)
2.
Move the checkpoint to the replica server's P4ROOT directory and replay the checkpoint:
p4d -r /p4/replica -jr $P4ROOT/checkpoint.1
3.
Versioned files include both text (in RCS format, ending with ",v") and binary files (directories of individual binary files, each directory ending with ",d"). Ensure that you copy the text files in a manner that correctly translates line endings for the replica host's filesystem.
If your depots are specified using absolute paths on the master, use the same paths on the replica. (Or use relative paths in the Map: field for each depot, so that versioned files are stored relative to the server's root.)
4.
Contact Perforce Technical Support to obtain a duplicate of your master server license file. Copy the license file for the replica server to the replica server root directory.
5.
p4 -u service -p master:11111 login
Then move the ticket to the location that holds the P4TICKETS file for the replica server's service user.
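The login and ticket placement might look like the following sketch; the home-directory ticket location and the replica's P4TICKETS path (set in step 10 of the master setup) are assumptions:

```shell
# Log in to the master as the service user to obtain a ticket
p4 -u service -p master:11111 login

# Copy the resulting ticket file to the location named by the
# replica's P4TICKETS configurable
cp ~/.p4tickets /p4/replica/.p4tickets
```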
At this point, your replica server is configured to contact the master server and start replication. Specifically:
A service user (service) in a group (service_users) with a long ticket timeout
A replicated copy of the master server's db.config, holding the following preconfigured settings applicable to any server named Replica1:
A specified service user (named service), which is equivalent to specifying -u service on the command line
A target server of master:11111, which is equivalent to specifying -t master:11111 on the command line
Both db.replication and lbr.replication set to readonly, which is equivalent to specifying -M readonly -D readonly on the command line
A series of p4 pull commands configured to run when the master server starts
Starting the replica
To name your server Replica1, set P4NAME or specify the -In flag and start the replica as follows:
p4d -r /p4/replica -In Replica1 -p replica:22222 -d
When the replica starts, all of the master server's configuration information is read from the replica's copy of the db.config table (which you copied earlier). The replica then spawns three p4 pull threads: one to poll the master server for metadata, and two to poll the master server for versioned files.
Testing the replica
Testing p4 pull
To confirm that the p4 pull commands (specified in Replica1's startup.n configurations) are running, issue the following command:
p4 -u super -p replica:22222 monitor show -a
18835 R service 00:04:46 pull -i 1
18836 R service 00:04:46 pull -u -i 1
18837 R service 00:04:46 pull -u -i 1
18926 R super 00:00:00 monitor show -a
If you need to stop replication for any reason, use the p4 monitor terminate command:
p4 -u super -p replica:22222 monitor terminate 18837
** process '18837' marked for termination **
To restart replication, either restart the Perforce server process, or manually restart the replication command:
p4 -u super -p replica:22222 pull -u -i 1
If the p4 pull and/or p4 pull -u processes are terminated, read-only commands continue to work for replica users as long as the replica server's p4d is running.
Testing file replication
Create a new file under your workspace view:
echo "hello world" > myfile
Mark the file for add:
p4 -p master:11111 add myfile
And submit the file:
p4 -p master:11111 submit -d "testing replication"
Wait a few seconds for the pull commands on the replica to run, then check the replica for the replicated file:
p4 -p replica:22222 print //depot/myfile
//depot/myfile#1 - add change 1 (text)
hello world
If a file transfer is interrupted for any reason, and a versioned file is not present when requested by a user, the replica server silently retrieves the file from the master.
Replica servers in -M readonly -D readonly mode will retrieve versioned files from master servers even if started without a p4 pull -u command to replicate versioned files to the replica. Such servers act as "on-demand" replicas.
Administrators: be aware that creating an on-demand replica of this sort can affect server performance or resource consumption, for example, if a user enters a command such as "p4 print //...", which reads every file in the depot.
Verifying the replica
When you copied the versioned files from the master server to the replica server, you relied on the operating system to transfer the files. To determine whether data was corrupted in the process, run p4 verify on the replica server:
p4 verify //...
Any errors that are present on the replica but not on the master indicate corruption of the data in transit or while being written to disk during the original copy operation. (Run p4 verify on a regular basis, because a failover server's storage is just as vulnerable to corruption as a production server.)
Using the replica
You can perform all normal operations against your master server (p4 -p master:11111 command). To reduce the load on the master server, direct reporting (read-only) commands to the replica (p4 -p replica:22222 command). Because the replica is running in -M readonly -D readonly mode, commands that read both metadata and depot file contents are available, and reporting commands (such as p4 annotate, p4 changes, p4 filelog, p4 diff2, p4 jobs, and others) work normally. However, commands that update the server's metadata or depot files are blocked.
Commands that update metadata
Some scenarios are relatively straightforward: consider a command such as p4 sync. A plain p4 sync fails, because whenever you sync your workspace, the Perforce Server must update its metadata (the "have" list, which is stored in the db.have table). Instead, use p4 sync -p to populate a workspace without updating the have list:
p4 -p replica:22222 sync -p //depot/project/...@1234
This operation succeeds because it doesn't update the server's metadata.
Some commands affect metadata in more subtle ways. For example, many Perforce commands update the last-update time that is associated with a specification (for example, a user or client specification). Attempting to use such commands on replica servers produces errors unless you use the -o flag. For example, p4 client (which updates the Update: and Access: fields of the client specification) fails:
p4 -p replica:22222 client replica_client
Replica does not support this command.
However, p4 client -o works:
p4 -p replica:22222 client -o replica_client
(client spec is output to STDOUT)
Techniques such as these enable automated systems (such as continuous build servers) to sync using a read-only replica server.
If you are using a replica server as the foundation of a build farm, the client workspace specifications must be created on the master Perforce server first, so that they can be replicated and available to users of the replica.
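A build-farm workflow along these lines might look like the following sketch; the hostnames, client name, and spec file are assumptions.

```shell
# 1. Create the build client on the MASTER so that the spec replicates
#    to the replica (the replica cannot write metadata itself):
p4 -p master:11111 client -i < build_client.spec

# 2. Once the spec has replicated, sync from the replica without
#    updating the have list:
p4 -p replica:22222 -c build_client sync -p //depot/project/...
```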
If a command is blocked due to an implicit attempt to write to the server's metadata, consider workarounds such as those described above. (Some commands, like p4 submit, always fail, because they attempt to write to the replica server's depot files; these commands are blocked by the -D readonly flag.)
Using the Perforce Broker to redirect commands
You can use the P4Broker with a replica server to redirect read-only commands to replica servers. This approach enables all your users to connect to the same host:port setting (the broker). In this configuration, the broker is configured to transparently redirect key commands to whichever Perforce Server is appropriate to the task at hand.
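As a rough illustration, a broker configuration might redirect a read-only command such as p4 changes to the replica; the addresses and paths below are assumptions, and the configuration file format is described in Chapter 11.

```
target      = master:11111;
listen      = 1667;
directory   = /p4/broker;

command: ^changes$
{
    action      = redirect;
    destination = replica:22222;
}
```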
For an example of such a configuration, see "Using P4Broker With Replica Servers" in the Perforce Knowledge Base:
http://kb.perforce.com/article/1354
For more information about the Perforce Broker, see Chapter 11, The Perforce Broker.
Upgrading Replica Servers
In a replicated environment, the master server and replica server(s) must be at the same release level: whenever you upgrade a master server, you must upgrade all replica servers.
Upgrading a p4 replicate-based replica
Because p4 replicate relies on journal records to transfer metadata, and because it supports filtering of database tables, the process of upgrading a replica server in a p4 replicate environment can be somewhat complex.
To upgrade a server in a p4 replicate environment:
1.
2. When the replica server has processed the last records from the master server and the system is in a quiescent state, terminate the p4 replicate command.
3.
4.
5. On the replica server, with journaling disabled (-J off), upgrade the replica server:
6. By upgrading the replica server first, you guarantee that any journal entries created on the master (during the replica server's upgrade) are in the previous version's format. The upgraded replica can process these entries.
7. Take a checkpoint of the master server and back up its versioned files. When you take the master server's checkpoint, use the same journal prefix that you use in the production environment, so that the replica server can resume replication correctly.
8.
9. On the master server, with journaling disabled (-J off), upgrade the master server.
You must take this checkpoint before upgrading the master server. Because journaling is temporarily disabled during this step, the only way to recover from a failure during the upgrade is to restore the checkpoint you took in step 7.
Journaling is disabled on the master server during the upgrade because, by default, the upgrade process (p4d -xu) creates journal records reflecting the work it performs, and certain configurations might require that these journal entries not be replicated.
10.
11. If you are using p4 replicate, first run:
and then start the upgraded replica server with -M readonly and -D none:
At this point, the replica server (having completed its upgrade) receives any new transactions that completed on the master server between the replica server's shutdown and restart. Because journaling was disabled during the master server's upgrade, the replica server does not receive redundant journal records that pertain only to the master server's upgrade. Because the replica server was upgraded first, it can process records from both the pre-upgrade and post-upgrade master server.
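Pulling the journaling-related steps together, the upgrade commands might look like this sketch; the server roots are assumptions, while the -J off and -xu flags come from the steps above.

```shell
# Upgrade the replica first (step 5), then the master (step 9),
# with journaling disabled so the upgrade itself writes no journal records:
p4d -r /p4/replica -J off -xu
p4d -r /p4/master  -J off -xu
```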
Warnings, Notes, and Limitations
Large numbers of "Perforce password (P4PASSWD) invalid or unset" errors in the replica log indicate that the service user has not been logged in or that the P4TICKETS file is not writable.
In the case of a read-only directory or P4TICKETS file, p4 login appears to succeed, but p4 login -s generates the "invalid or unset" error. Ensure that the P4TICKETS file exists and is writable by the replica server.
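The following checks can help confirm the ticket state; the service user name and port are hypothetical.

```shell
# Log the service user in, then confirm the resulting ticket is valid:
p4 -p master:11111 -u service login
p4 -p master:11111 -u service login -s

# The tickets file must exist and be writable by the user
# that owns the replica's p4d process:
ls -l "$P4TICKETS"
```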
Client workspaces on the master and replica servers cannot overlap. Users must be certain that their P4PORT, P4CLIENT, and other settings are configured to ensure that files from the replica server are not synced to client workspaces used with the master server, and vice versa.
In a p4 replicate environment, you can prevent submission of changes by setting permissions on the replica's versioned files to read-only (relative to the userid that owns the replica p4d process). To enforce this restriction, you can use trigger scripts on both the master and the replica server, written to succeed on the master server and fail on the replica server.
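One way to implement such a trigger is a script deployed identically on both servers that keys off a marker file present only on the replica; the script name, marker path, and environment variable here are assumptions, not part of the product.

```shell
#!/bin/sh
# submit_guard.sh (hypothetical): a change-submit trigger script.
# Create the marker file only on the replica host, so the same trigger
# entry succeeds on the master and fails on the replica.
MARKER="${REPLICA_MARKER:-/p4/this_is_a_replica}"
if [ -f "$MARKER" ]; then
    echo "Submits are not permitted against a replica server."
    exit 1      # non-zero exit rejects the changelist
fi
exit 0          # on the master (no marker file), the submit proceeds
```

Because the trigger table is replicated along with the rest of the metadata, registering the script once on the master (as a change-submit trigger in p4 triggers) gives both servers the same entry; only the presence of the marker file differs.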
In an environment based solely on p4 pull and p4 pull -u, this restriction is automatically enforced: the settings of the -D and -M flags (or the corresponding lbr.replication and db.replication configurables) ensure that replica servers reject commands that attempt to modify the metadata or depot files.
Copyright 1997-2011 Perforce Software.