Disk space shortages, hardware failures, and system crashes can corrupt any of the Perforce server's files. That's why the entire Perforce root directory structure (both your versioned files and your database) should be backed up regularly.
As mentioned earlier, versioned files are stored in subdirectories beneath your Perforce server root, and can be restored directly from backups without any loss of integrity.
The files making up the Perforce database, on the other hand, may not have been in a state of transactional integrity at the moment they were copied to the system backups. Restoring the db.* files from system backups may result in an inconsistent database. The only way to guarantee the integrity of the database after it's been damaged is to reconstruct the db.* files from Perforce checkpoint and journal files.
- A checkpoint is just a snapshot or copy of the database at a particular moment in time.
- A journal is a log that records updates made to the database since the last snapshot was taken.
The checkpoint file is often much smaller than the original database, and can be made smaller still by compressing it. The journal file, on the other hand, can grow quite large; it is truncated whenever a checkpoint is made, and the older journal is renamed. The older journal files can then be backed up offline, freeing up more space locally.
Both the checkpoint and journal are text files, and have the same format. A checkpoint and, if available, its subsequent journal, can restore the Perforce database.
!Warning!
|
Checkpoints and journals archive only the Perforce database files, not the files in the depot directories! You must always back up the depot files (your versioned files) with the standard OS backup commands after checkpointing.
|
Because the information stored in the Perforce database is as irreplaceable as your versioned files, checkpointing and journaling are an integral part of administering a Perforce server, and should be performed regularly.
Checkpoint files
A checkpoint is a file that contains all information necessary to recreate the metadata in the Perforce database. When you create a checkpoint, the Perforce database is locked, allowing you to take an internally-consistent snapshot of that database.
Versioned files are backed up separately from checkpoints. This means that a checkpoint does not contain the contents of versioned files, and as such, you cannot restore any versioned files from a checkpoint. You can, however, restore all changelists, labels, jobs, etc., from a checkpoint.
To guarantee database integrity upon restoration, the checkpoint must be as old as, or older than, the versioned files in the depot. This means that the database should be checkpointed, and the checkpoint generation must be complete, before the backup of the versioned files starts.
Regular checkpointing is important to keep the journal from getting too long. Making a checkpoint immediately before backing up your system is good practice.
Creating a checkpoint
Checkpoints are not created automatically; someone or something must run the checkpoint command on the Perforce server machine. You can create a checkpoint by invoking the p4d program with the -jc (journal-create) flag:
p4d -jc
This command can be run while the Perforce server (p4d) is running.
To make the checkpoint, p4d locks the database and then dumps its contents to a file named checkpoint.n, where n is a sequence number. Before it unlocks the database, p4d also copies the journal file to a file named journal.n-1, and then truncates the current journal. This guarantees that the last checkpoint (checkpoint.n) combined with the current journal (journal) will always reflect the full contents of the database at the time the checkpoint was created.
(The sequence numbers reflect the roll-forward nature of the journal; to restore databases to older checkpoints, match the sequence numbers. That is, the database reflected by checkpoint.6 can be restored by restoring the database stored in checkpoint.5 and rolling forward the changes recorded in journal.5. In most cases, you're only interested in restoring the current database, which is reflected by the highest-numbered checkpoint.n rolled forward with the changes in the current journal.)
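To make the roll-forward described above concrete, restoring an older checkpoint can be sketched as a single p4d -jr invocation; the checkpoint and journal numbers here are purely illustrative:

```shell
# Sketch: rebuild the database state captured by checkpoint.6
# from the previous checkpoint plus its journal (the numbers are
# illustrative; use your own highest-numbered files).
p4d -r $P4ROOT -jr checkpoint.5 journal.5
```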
You can specify a prefix for the checkpoint and journal filenames by using the -jc option. That is, if you create a checkpoint with:
p4d -jc prefix
your checkpoint and journal files will be named prefix.ckp.n and prefix.jnl.n respectively, where prefix is as specified on the command line and n is a sequence number. If no prefix is specified, the default filenames checkpoint.n and journal.n will be used.
Note
|
The treatment of the argument to -jc has changed in Release 99.2!
Prior to Release 99.2, the files created with p4d -jc prefix would have been prefix.n (for the checkpoint) and journal.n (for the old journal).
The behavior in 99.2 is a change from that in previous releases; if you have scripts which rely on the old behavior, you may have to modify them.
|
As of Release 99.2, if you need to take a checkpoint but are not on the machine running the Perforce server, you can create a checkpoint remotely with the p4 admin command. Use
p4 admin checkpoint [prefix]
to take the checkpoint and optionally specify a prefix to the checkpoint and journal files. (You must be a Perforce superuser to use p4 admin.)
A checkpoint file may be compressed, archived, or moved onto another disk. At that time or shortly thereafter, the files in the depot subdirectories should be archived as well.
When recovering, the checkpoint must be at least as old as the files in the depots; that is, the versioned files can be newer than the checkpoint, but not the other way around. As you might expect, the shorter this time gap, the better.
You can set up an automated program to create your checkpoints on a regular schedule. Be sure to always check the program's output to ensure that the checkpoint creation was successful. The first time you need a checkpoint is not a good time to discover your checkpoint program wasn't working.
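A minimal nightly checkpoint script might look like the following sketch. The paths, prefix, and notification address are illustrative assumptions; the essential point is that the exit status of p4d -jc is checked, and failures are reported rather than silently ignored:

```shell
#!/bin/sh
# Hypothetical nightly checkpoint script (all paths and addresses
# are illustrative assumptions; adjust them for your site).
P4ROOT=/usr/perforce/root
export P4ROOT

# Create the checkpoint; this also truncates and renames the journal.
if p4d -r "$P4ROOT" -jc nightly > /var/log/p4-checkpoint.log 2>&1
then
    echo "checkpoint OK: `date`" >> /var/log/p4-checkpoint.log
else
    # Never ignore a checkpoint failure; see the warning above.
    mail -s "Perforce checkpoint FAILED" admin@example.com \
        < /var/log/p4-checkpoint.log
    exit 1
fi
```

A script like this can be run from cron; checking its output (or mailing failures, as here) is what keeps a broken checkpoint from going unnoticed.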
If the checkpoint command itself fails, contact Perforce Technical Support immediately. Checkpoint failure is usually a symptom of a resource problem (disk space, permissions, etc.) that can put your database at risk if not handled correctly.
Journal files
The journal is the running transaction log that keeps track of all database modifications since the last checkpoint. It's the bridge between two checkpoints. If you have Monday's checkpoint and the journal that was collected from then until Wednesday, those two files (Monday's checkpoint plus the accumulated journal) contain the same information as a checkpoint made Wednesday. If a disk crash were to cause corruption in your Perforce database on Wednesday at noon, for instance, you could still restore the database even though Wednesday's checkpoint hadn't yet been made.
!Warning!
|
By default, the current journal file name is journal and it resides in the P4ROOT directory. However, if a disk failure corrupts that root directory, your journal file will be inaccessible too.
We strongly recommend that you set up your system so that the journal is written to a filesystem other than the P4ROOT filesystem. You can specify this from the command line, or set P4JOURNAL before starting the Perforce server to tell it where to write the journal.
|
To restore your database, you only need to keep the most recent journal file accessible, but it doesn't hurt to archive old journals with old checkpoints, should you ever need to restore to an older checkpoint.
Enabling journaling
For NT, if you used the installer (perforce.exe) to install a Perforce server or service, journaling will be turned on for you.
For UNIX Perforce server installations, or if you installed the server manually on NT, journaling will not be automatically enabled. In these cases, you should make a checkpoint soon after installing the Perforce server so that journaling is turned on as soon as possible. To enable journaling on such an installation, do one of the following:
- Create an empty file named journal in the server root directory, then start p4d, or:
- Set the P4JOURNAL environment variable to point to the desired location of the file, create an empty file with this name, then start p4d, or:
- Start p4d with the -J journalfile flag and ensure that subsequent checkpoints specify the same journalfile.
Be sure to create a new checkpoint with p4d -jc (and -J journalfile if required) immediately after enabling journaling. Once journaling is enabled, you'll need to start making regular checkpoints to control the size of the journal file. An extremely large current journal is a sign that a checkpoint is needed.
Every checkpoint after your first checkpoint starts a new journal file and renames the old one. The old journal is renamed to journal.n, (or prefix.jnl.n for Release 99.2 or later) where n is a sequence number, and a new journal file is created.
By default, the journal is written to the file journal in the server root directory (P4ROOT). Since there is no sure protection against disk crashes, the journal file and the Perforce server root should be located on different filesystems, ideally on different physical disk drives. The name and location of the journal can be changed by specifying the name of the journal file in the environment variable P4JOURNAL, or by providing the -J filename flag to p4d.
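For example, either of the following approaches places the journal on a filesystem separate from P4ROOT; the journal path is an illustrative assumption:

```shell
# Option 1: set P4JOURNAL before starting the server.
P4JOURNAL=/disk2/p4journal/journal
export P4JOURNAL
p4d -r $P4ROOT &

# Option 2: name the journal on the command line with -J.
# Subsequent checkpoints must then use the same -J argument.
p4d -r $P4ROOT -J /disk2/p4journal/journal &
```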
!Warning!
|
If you create a journal file with the -J filename flag, make sure that subsequent checkpoints use the same file, or the journal will not be properly renamed.
|
Whether you use P4JOURNAL or the -J journalfile option to p4d, the journal file name can be provided either as an absolute path, or as a path relative to the server root.
Disabling journaling
To disable journaling, stop the server, remove the existing journal file (if it exists), unset the environment (or registry, for NT) variable P4JOURNAL, and restart p4d without the -J flag.
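On UNIX, those steps might look like this sketch (paths are illustrative; on NT, delete the P4JOURNAL registry variable instead of unsetting the environment variable):

```shell
# Sketch: disabling journaling (UNIX; paths are illustrative).
p4 admin stop            # stop the server (requires superuser)
rm -f $P4ROOT/journal    # remove the existing journal, if any
unset P4JOURNAL          # on NT, delete the registry variable instead
p4d -r $P4ROOT &         # restart p4d without the -J flag
```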
Versioned files
Your checkpoint and journal files are used to reconstruct the Perforce database files only. Your versioned files are stored in directories under the Perforce server root, and must be backed up separately.
Versioned file formats
Versioned files are stored in subdirectories beneath your server root. Text files are stored in RCS format, with filenames of the form filename,v. There is generally one RCS-format (,v) file per text file. Binary files are stored in full in their own directories named filename,d. Depending on the Perforce file type selected by the user storing the file, there may be one or more archived binary files in each filename,d directory. If more than one file resides in a filename,d directory, each one refers to a different revision of the binary file, and is named 1.n, where n is the revision number.
As of Release 99.2, Perforce also supports the AppleSingle file format for Macintosh. On the server, these files are stored in full in the AppleSingle format and compressed, just like other binary files; if need be, they can be copied directly from the server root, uncompressed, and used as-is on a Macintosh.
Because Perforce uses compression in the depot files, a system administrator should not rely on the compressibility of the data when sizing backup media. Both text and binary files are either compressed by the Perforce server (denoted by the .gz suffix) before storage, or are stored uncompressed. At most installations, if any binary files in the depot subdirectories are being stored uncompressed, they were probably incompressible to begin with. (e.g., images stored in a compressed format, video streams, etc.)
Back up after checkpointing
In order to ensure that the versioned files reflect all the information in the database after a post-crash restoration, the db.* files must be restored from a checkpoint that is at least as old as (or older than) your versioned files. For this reason, you should create the checkpoint before backing up the versioned files in the depot directory or directories.
While your versioned files can be newer than the data stored in your checkpoint, it is in your best interest to keep this difference to a minimum; in general, you'll want your backup script to back up your versioned files immediately after successfully completing a checkpoint.
To back up your Perforce server, perform the following steps as part of your nightly backup procedure:
- Verify the integrity of your server and add file signatures to any new files:
p4 verify //...
p4 verify -u //...
You may wish to pass the -q (quiet) option to p4 verify. If called with the -q option, p4 verify will produce output only when errors are detected.
The first command (p4 verify) will recompute the MD5 signatures of all of your archived files and compare them with those stored when p4 verify -u was first run on them. It will also ensure that all files known to Perforce actually exist in the depot subdirectories; a disk-full condition that results in corruption of the database or archived files during the day can be detected by examining the output of these commands.
The second command (p4 verify -u) will update the database with MD5 signatures for any new file revisions for which checksums have not yet been computed.
By running p4 verify -u before the backup, you ensure that you create and store checksums for any files new to the depot since your last backup, and that these checksums are stored as part of the backup you're about to take.
The use of p4 verify is optional, but is good practice not only because it allows you to spot any server corruption before a backup is made, but it also gives you the ability, following a crash, to detect whether or not the files restored from your backups are in good condition.
Note
|
If your site is very large, p4 verify may take some time to run; you may wish to perform this step on a weekly basis rather than on a daily basis. For more about the p4 verify command, see "File verification by signature" on page 27
|
- Make a checkpoint by invoking p4d with the -jc (journal-create) flag, or by using the p4 admin command. Use one of:
p4d -jc
or (as of Release 99.2 or higher):
p4 admin checkpoint
Because p4d locks the entire database when making the checkpoint, you do not generally have to stop your Perforce server during any part of the backup procedure.
Note
|
If your site is very large (e.g. several GB of .db files), creating a checkpoint may take a considerable length of time. Under such circumstances, you may wish to defer checkpoint creation and journal truncation until times of low system activity. You might, for instance, archive only the journal file in your nightly backup, and only create checkpoints and roll the journal file on a weekly basis.
|
If you are using the -z flag to create a gzip-compressed checkpoint, the checkpoint file will be named as specified. If you want the compressed checkpoint file to end in .gz, you should explicitly specify the .gz on the command line.
- Ensure that the checkpoint has been created successfully before backing up any files. (After a disk crash, the last thing you want to discover is that the checkpoints you've been backing up for the past three weeks were incomplete!)
You can tell that the checkpoint command has completed successfully by examining the error code returned from p4d -jc, or by observing the truncation of the current journal file.
- Once the checkpoint has been created successfully, back up the checkpoint file, the old journal file, and your versioned files.
(If you don't require an audit trail, you don't actually need to back up the journal. It is, however, usually good practice to do so.)
Note
|
There are rare instances (e.g., users obliterating files during backup, or submitting files on Windows NT during the file backup portion of the process) in which your depot files may change during the interval between the time the checkpoint was taken and the time at which the depot files get backed up by the backup utility.
Most sites will not be affected by these issues; having the Perforce server available on a 24/7 basis is generally a benefit worth this minor risk, especially if backups are being performed at times of low system activity.
If, however, the reliability of every backup is of paramount importance, consider stopping the Perforce server before checkpointing, and restarting it after the backup process has completed. Doing so will eliminate all risk of the system state changing during the backup process.
|
You never need to back up the db.* files. Your latest checkpoint and journal contain all the information necessary to re-create them. More significantly, a database restored from db.* files is not guaranteed to be in a state of transactional integrity; a database restored from a checkpoint is.
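Putting the steps above together, a nightly backup might be scripted roughly as follows. The depot path, tape device, and checkpoint prefix are illustrative assumptions:

```shell
#!/bin/sh
# Rough nightly backup sketch (all paths are illustrative assumptions).
set -e                        # stop at the first failed step

p4 verify -q //...            # report any existing corruption
p4 verify -u //...            # checksum revisions new since last run

p4d -r "$P4ROOT" -jc nightly  # checkpoint; truncates/renames journal

# Only after the checkpoint succeeds, back up the checkpoint, the
# rotated journal, and the versioned files. The db.* files are
# deliberately excluded; the checkpoint and journal re-create them.
tar cf /dev/tape \
    "$P4ROOT"/nightly.ckp.* \
    "$P4ROOT"/nightly.jnl.* \
    "$P4ROOT"/depot
```

Because of set -e, a failed verify or checkpoint aborts the script before any files are written to tape, which satisfies the "ensure the checkpoint succeeded before backing up" rule above.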
NT
|
On Windows NT, if you make your system backup while the Perforce server is running, you must ensure that your backup program doesn't attempt to back up the db.* files.
If you try to back up the db.* files with a running server, NT will lock them while the backup program backs them up. During this brief period, the Perforce server will be unable to access the files; if a user attempts to perform an operation that would update the file, the server may fail.
If your software doesn't allow you to exclude the db.* files from the backup process, you should stop the server with p4 admin stop before backing up, and restart the server after the backup process.
|
If the database files become corrupted or lost, whether because of disk errors, a hardware failure such as a disk crash, or a system crash, the database can be recreated from your stored checkpoint and journal.
There are many ways in which systems can fail; while this guide cannot deal with all of them, it can at least provide a general guideline for recovery from the two most common situations, specifically:
- corruption of your Perforce database only, without damage to your versioned files, and
- corruption to both your database and versioned files.
The recovery procedures for each failure are slightly different, and are discussed separately in the following two sections.
If you suspect corruption in either your database or versioned files, contact Perforce technical support.
Database corruption, versioned files unaffected
If only your database has been corrupted (for example, your db.* files were on a disk volume that crashed, but you were using symbolic links to store your versioned files on a separate physical disk), you need only re-create your database.
You will need:
- The last checkpoint file, which should be available from the latest P4ROOT directory backup.
- The current journal file, which should be on a separate filesystem from your P4ROOT directory, and which should therefore have been unaffected by any damage to the filesystem where your P4ROOT directory was held.
You will not need:
- Your backup of your versioned files; if they weren't affected by the crash, they're already up to date.
To recover the database
- Stop the current instance of p4d:
p4 admin stop
(You must be a Perforce superuser to use p4 admin.)
- Rename (or move) the corrupt database (db.*) files:
mv your_root_dir/db.* /tmp
The corrupt db.* files aren't actually used in the restoration process, but it's safe practice not to delete them until you're certain your restoration was successful.
- Invoke p4d with the -jr (journal-restore) flag, specifying your most recent checkpoint and current journal. If you explicitly specify the server root ($P4ROOT), the -r $P4ROOT argument must precede the -jr flag:
p4d -r $P4ROOT -jr checkpoint_file journal_file
This will recover the database as it existed when the last checkpoint was taken, and then apply the changes recorded in the journal file since the checkpoint was taken.
Note
|
If you're using the -z (compress) option to compress your checkpoints upon creation, you'll have to restore the uncompressed journal file separately from the compressed checkpoint.
That is, instead of using:
p4d -r $P4ROOT -jr checkpoint_file journal_file
you'll use two commands:
p4d -r $P4ROOT -jr -z checkpoint_file.gz
p4d -r $P4ROOT -jr journal_file
You must explicitly specify the .gz extension yourself when using the -z flag, and ensure that the -r $P4ROOT argument precedes the -jr flag.
|
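Taken together, the database-only recovery steps above might look like this sketch; the checkpoint filename and paths are illustrative assumptions:

```shell
# Sketch: recover the database only; versioned files are intact.
p4 admin stop                             # stop the server
mv $P4ROOT/db.* /tmp                      # set corrupt db.* files aside
p4d -r $P4ROOT -jr checkpoint.11 journal  # restore, then roll forward
p4d -r $P4ROOT &                          # restart the server
```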
Check your system
Your restoration is complete. See "Ensuring system integrity after any restoration" on page 22 to make sure your restoration was successful.
Your system state
The database recovered from your most recent checkpoint, after you've applied the accumulated changes stored in the current journal file, will be up to date as of the time of failure.
After recovery, both your database and versioned files should reflect all changes made up to the time of the crash; no data should have been lost.
Both database and versioned files lost or damaged
If both your database and your versioned files were corrupted, you need to restore both the database and your versioned files, and you'll need to ensure that the versioned files are no older than the restored database.
You will need:
- The last checkpoint file, which should be available from the latest P4ROOT directory backup.
- Your versioned files, which should be available from the latest P4ROOT directory backup.
You will not need:
- Your current journal file. The journal contains a record of changes to the metadata and versioned files that occurred between the last backup and the crash; because you'll be restoring a set of versioned files from a backup taken before that crash, the checkpoint alone contains the metadata useful for the recovery, and the information in the journal is of limited or no use.
To recover the database
- Stop the current instance of p4d:
p4 admin stop
(You must be a Perforce superuser to use p4 admin.)
- Rename (or move) the corrupt database (db.*) files:
mv your_root_dir/db.* /tmp
The corrupt db.* files aren't actually used in the restoration process, but it's safe practice not to delete them until you're certain your restoration was successful.
- Invoke p4d with the -jr (journal-restore) flag, specifying only your most recent checkpoint:
p4d -r $P4ROOT -jr checkpoint_file
This will recover the database as it existed when the last checkpoint was taken, but not apply any of the changes in the journal file. (The -r $P4ROOT argument must precede the -jr flag.)
The database recovery without the roll-forward of changes in the journal file will bring the database up to date as of the time of your last backup. In this scenario, you do not want to apply the changes in the journal file, because the versioned files you restored reflect only the depot as it existed as of the last checkpoint.
To recover your versioned files
- After recovering the database, you will then need to restore the versioned files according to your system's restoration procedures (e.g. the UNIX restore(1) command) to ensure that they are as new as the database.
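The whole sequence for this scenario might be sketched as follows; the checkpoint filename is illustrative, and the depot restore step depends on your OS backup tools:

```shell
# Sketch: both the database and the versioned files were lost.
p4 admin stop                      # stop the server
mv $P4ROOT/db.* /tmp               # set corrupt db.* files aside
p4d -r $P4ROOT -jr checkpoint.11   # checkpoint only; no journal replay
# Restore the depot subdirectories from the same backup set with
# your OS restore tools (e.g. restore(1)), then restart the server.
p4d -r $P4ROOT &
```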
Check your system
Your restoration is complete. See "Ensuring system integrity after any restoration" on page 22 to make sure your restoration was successful.
Note that files submitted to the depot between the time of the last system backup and the disk crash will not be present in the depot.
Note
|
Although "new" files (submitted to the depot but not yet backed up) will not appear in the depot after restoration, it's possible (indeed, highly probable!) that one or more of your users will have up-to-date copies of such files present in their client workspaces.
Your users can find such files by using Perforce to examine how files in their client workspaces differ from those in the depot. If they run:
p4 diff -se
...they'll be provided with a list of files in their workspace which differ from the files Perforce believes them to have. After verifying that these files are indeed the files you wish to restore, you may wish to have one of your users open these files for edit and submit them to the depot in a changelist.
|
Your system state
After recovery, your depot directories may not contain the newest versioned files (i.e., files submitted after the last system backup but before the disk crash may have been lost).
- In most cases, the latest revisions of such files can be restored from the copies still residing in your users' client workspaces.
- In a case where only your versioned files were lost (and not the database, which may have resided on a separate disk unaffected by the crash), you may also be able to make a separate copy of your database and apply your journal to it in order to examine recent changelists and track down files submitted between the last backup and the disk crash.
In either case, contact Perforce technical support for further assistance.
Ensuring system integrity after any restoration
After any restoration, it's wise to run p4 verify to ensure the versioned files are at least as new as the database:
p4 verify -q //...
This command will verify the integrity of the versioned files. Because the -q (quiet) option has been selected, the only output will be error conditions. Ideally, this command should produce no output.
If any versioned files are reported as MISSING by the p4 verify command, you'll know that there is information in the database concerning files that didn't get restored. The usual cause is that you restored from a checkpoint and journal made after the backup of your versioned files. (i.e. that your backup of the versioned files was older than the database.)
If (as recommended) you've been using p4 verify -u to generate and store MD5 signatures for your versioned files as part of your backup routine, you can run p4 verify on the server after restoration to reassure yourself that your restoration was successful.
If you have any difficulties restoring your system after a crash, contact Perforce Technical Support for assistance.