While Perforce is a VERY fast SCM system, the fastest available in my experience, there are things you can do to maintain that fast performance even on a very large system.
Hardware items to consider for performance:
- Always choose the fastest processor and bus speeds available to you, and upgrade your system more frequently than the normal upgrade cycle if your server is growing rapidly.
- The Linux OS with the XFS file system provides the best performance possible, typically up to 30% better than Windows and 10-20% better than Solaris on the same hardware.
- Ideally, set your system up with three physical volumes: one for metadata (RAID 10), one for the journal and logs (RAID 10), and one for the archive data (RAID 5).
- Using a separate controller for the metadata volume will provide the highest level of performance.
- Giving the system enough RAM to cache all of the metadata will provide the highest level of performance, but very acceptable performance can be achieved by providing enough RAM to cache roughly half of the size of the overall metadata files. This is a rough estimate, and will vary with site usage patterns.
- On an extremely busy Perforce server, you may consider putting your metadata on a solid-state device (SSD), not a RAM disk carved out of main memory, to provide the highest level of performance.
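As a rough sketch of the three-volume layout described above, you can point p4d at separate locations for the server root (metadata), journal, and log at startup. The mount points below are purely illustrative; adjust them to your own hardware:

```shell
# Hypothetical mount points:
#   /p4/metadata  - RAID 10 volume holding the db.* files (server root)
#   /p4/journal   - RAID 10 volume for the journal and server log
#   /p4/archive   - RAID 5 volume for the versioned file archives

# Start p4d with the root, journal, and log each on its own volume:
p4d -r /p4/metadata -J /p4/journal/journal -L /p4/journal/log -p 1666 -d
```

Archive files live under the server root by default; to place them on the RAID 5 volume, one approach is to set each depot's Map field (via `p4 depot`) to a path under /p4/archive.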
Usage and maintenance items to consider for performance:
- Use proxy servers at remote sites to improve sync times for remote users.
- Maintain your database to keep the size of the db files to a minimum. The smaller the database files are, the more of them will fit into memory. (See my post on database maintenance.)
- Restore from a new checkpoint after doing database maintenance to reduce the size of the db files.
- Put active Perforce servers on dedicated hardware. Running more than one Perforce server on a single machine causes the system to swap the metadata files for each instance back and forth in the cache. On less active servers this swapping may not be noticeable, but on an active server it will slow things down.
- Avoid allowing users to access remote depots. Remote depots were designed to allow an admin to integrate files from a remote server into the local server, and for the users to access the local branch of those files, not the remote depot directly.
- Avoid the use of exclusionary mappings in your protect table. It is better to have a longer table that grants specific access than a short one that uses a lot of exclusionary mappings.
- Avoid using multiple wildcards in client/workspace views and in the protection table mappings.
- Make sure users are using narrow workspace views. In other words, only map what you need to work on into your workspace view, not the entire depot. Perforce command scope is limited by your workspace view.
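The checkpoint-and-restore step mentioned above can be sketched as follows. The paths and the checkpoint number are illustrative, and you should keep the old db.* files until you have verified the restored database:

```shell
# Stop the server before touching the db.* files.
p4 admin stop

# Take a checkpoint; this writes a numbered checkpoint file
# and truncates the journal.
p4d -r /p4/metadata -jc

# Move the old db.* files aside rather than deleting them outright.
mkdir /p4/metadata/db.old
mv /p4/metadata/db.* /p4/metadata/db.old/

# Replay the new checkpoint; the rebuilt db.* files come back compacted.
# (checkpoint.42 is a placeholder for your latest checkpoint file.)
p4d -r /p4/metadata -jr checkpoint.42

# Restart the server.
p4d -r /p4/metadata -J /p4/journal/journal -p 1666 -d
```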
In addition to all the items above, you may consider using sparse branching if you branch frequently and your branches typically involve a large number of files. Basically, sparse branching means branching only what you need to modify instead of the whole source branch, and using your workspace view to pull the two branches together. For example:
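A workspace view for this kind of sparse branch might look like the following (the depot paths and client name are illustrative):

```
//depot/project/...          //myclient/project/...
//depot/projectbranch/...    //myclient/project/...
```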
Assuming you only branched the needed directories from //depot/project/... to //depot/projectbranch/..., the mapping above would pull the branched directories from projectbranch and all of the other directories from project, combining them in a single area called project on your hard drive. The order of the lines matters, since Perforce overrides earlier mappings with later mappings in the workspace view.