- Helix Core 2022.1 (May 2022)
- Helix Core 2021.2 (November 2021)
- Helix Core 2021.1 (May 2021)
- Helix Core 2020.2 (November 2020)
- Helix Core 2020.1 (April 2020)
- Helix Core 2019.2 (November 2019)
- Helix Core 2019.1 (May 2019)
- Helix Core 2018.2 (November 2018)
- Helix Core 2018.1 (March 2018)
- Helix Core 2017.2 (October 2017)
- Helix Core 2017.1 (May 2017)
- Helix Core 2016.2 (November 2016)
- Helix Core 2016.1 (July 2016)
Now you can easily inspect the root causes of server issues as the issues are actively occurring.
Admins can now “turn up” the monitor level of their server, and the new level applies to all running processes. This should lead to less unplanned downtime, as admins can identify p4 commands that have been holding locks on specific database tables for long periods and see which other p4 commands are waiting for those locks to be released. When the root cause of the problem has been resolved, admins “turn down” the monitor level. Customers can pair this enhancement with real-time monitoring (released in Helix Core 2021.1): real-time monitoring raises the alarm, and real-time debugging helps efficiently identify the source of the problem.
To enable this, set a new server configurable, rt.monitorfile, from the command line interface. Once this configurable is set, any change to the existing server configurable, monitor, that results in a value of 10 or higher (10 or 25) will unlock the full potential of the enhancement.
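As a minimal sketch, enabling real-time debugging involves the two configurables described above (the monitor-file path below is an assumed example):

```shell
# Point the server at a lock-state file for real-time debugging
# (the path /p4/1/monitorfile is an assumed example).
p4 configure set rt.monitorfile=/p4/1/monitorfile

# Raise the monitor level to 10 or 25 to track all running processes.
p4 configure set monitor=10

# Once the root cause has been resolved, turn monitoring back down.
p4 configure set monitor=1
```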
Reuse components across projects by isolating the “common component” into its own stream.
No longer do you have to manually keep stream views in sync with one another. Instead, you can now define component relationships between streams and test these relationships in isolation before checking in changes to the stream spec.
You can define other streams as components in the new “Components” section of the stream spec. Defining a component requires you to specify a:
- “Component type”,
- “Component folder”, and
- “Component stream”.
For this initial release, the only valid “component type” is “read-only”. This means that consuming streams cannot submit updates to the component streams via their import+ paths. The “component folder” is the directory prefix for each component view file. The “component stream” is the name of the stream that you are defining as a component. This can be specified as a specific change revision using either @change or @label (automatic label). Component nesting is also supported (ex: stream A defines stream B as a component and stream B defines stream C as a component). Circular component relationships are disallowed, however.
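As a sketch, a stream spec with a component defined might look like the following; the stream and folder names are hypothetical, and the exact field layout should be checked against the stream spec reference:

```shell
# Excerpt of a hypothetical stream spec (viewed with: p4 stream -o //Products/AppA)
#
# Stream:     //Products/AppA
# ...
# Components:
#     readonly common_lib //Components/LibX@1234
#
# "readonly" is the component type, "common_lib" is the component
# folder (the directory prefix in the consuming stream's view), and
# //Components/LibX@1234 pins the component stream to a changelist.
```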
Failback of Planned Failover
Save time on the process of failback and failover with a new sequence of commands that eliminate reseeding.
Reseeding is no longer a required manual step! A new sequence of commands makes it simpler to orchestrate the process of failing over, swapping server roles, and eventually restoring the original server roles.
After failover (using `p4 failover`), run `p4d -Fm`, which will reconfigure the former master and get it ready to be the standby. When you are ready for failback, run `p4 failback` on that standby to restore the old server. Then run `p4d -Fs` to reconfigure the former standby to become the standby again.
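The sequence above can be sketched as follows; arguments such as server IDs are omitted, and the exact invocations depend on your topology:

```shell
# 1. Fail over from the master to the standby.
p4 failover

# 2. On the former master, reconfigure it to serve as the new standby.
p4d -Fm

# 3. When ready to fail back, run failback against the current standby
#    (the former master) to restore the old server.
p4 failback

# 4. Reconfigure the former standby to become the standby again.
p4d -Fs
```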
Other Notable Enhancements
Changes to the 'p4 print' command: The command now includes 'offset' and 'size' flags to print out partial contents of a file instead of the entire file contents. This enhancement is compatible with both text and binary file types.
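For example, printing a slice of a large file might look like this; the long-form flag spellings are assumed from the description, so check `p4 help print` for the exact syntax:

```shell
# Print 4 KB of //depot/big.bin starting 1 MB into the file.
p4 print --offset 1048576 --size 4096 //depot/big.bin
```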
Changes to text files: To improve network and replication performance, text files stored in compressed form now remain compressed through the transfer phase to the client. The client is now responsible for the decompression. This feature applies to clients 2022.1 and later and does not apply to +k type text files.
Changes to `p4 sync/revert/clean/integ/copy/merge/undo/submit` commands: These now have a '-K' flag that will suppress the expansion of keywords in +k type files.
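For example, to sync without expanding RCS keywords (a minimal sketch):

```shell
# Sync +k files without expanding keywords such as $Id$ and $Date$.
p4 sync -K //depot/src/...
```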
Improvements to stream deletion handling: Stream deletion is now tied to a changelist, making it possible to find the history of stream deletions as well as view the stream spec of a deleted stream @change. It is also now possible to obliterate a deleted stream’s metadata.
It has never been easier to understand how changes to an asset can impact existing streams!
Streams now supports a key traceability feature named “view match”. Simply pass in a depot path — which could represent a folder or a file — and see all streams that are generating a view that includes that depot path. You can optionally filter the results by criteria such as path type (i.e., only show streams that are importing a specific depot path). This feature helps you better understand the ramifications of updating an asset. For example, when you need to fix a bug in a source file, you can see all streams that may be importing that source file.
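A minimal sketch of a view match query; the depot path is hypothetical:

```shell
# List all streams whose generated view includes this depot path.
p4 streams --viewmatch //depot/common/src/util.c
```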
Streams now also supports one-to-many mapping functionality (i.e., import&). This means a depot path can map to more than one location in a workspace. For example, you can import two different revisions of the same file into a separate location within a workspace. This can be useful when working with images, where you may want two different revisions of the same image imported into a project. Files imported via one-to-many mapping are read-only.
Global Lock Visibility from Edge Servers
This big change to global locks lets you stop throwing away work and increase your development velocity.
Users connected to edge servers can now understand if another user has an asset globally locked via a different edge (or commit) server, regardless of file type and file type modifiers. This prevents the scenario where two different users are connected to different edge servers but are working on the same non-mergeable file – which can result in the need to throw away one set of changes. Reporting of such locks when users open files can be enabled with a new server configurable `dm.open.show.globallocks` and the p4 fstat command has a new `-OL` flag for requesting the global lock information.
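A sketch of enabling and querying global lock visibility, using the configurable and flag named above (the file path is hypothetical):

```shell
# Enable reporting of global locks when users open files.
p4 configure set dm.open.show.globallocks=1

# Request global lock information for a specific file.
p4 fstat -OL //depot/art/level1.psd
```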
Server Topology Reporting (Tech Preview)
A new p4 command has been added to make it easier to train new Helix Core administrators, help Perforce Support troubleshoot customer issues, and engage with our Perforce Consulting team for topology reviews.
Keeping track of your Helix Core servers can be a challenging task, especially as you deploy new servers when you scale. A new command, p4 topology, can be initiated by super and operator users to list all Helix Core servers that are directly and indirectly connected to the server on which the command is run. This includes standard, commit, replica (including edge), proxy, and broker servers.
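Running the command is straightforward; it requires super or operator access:

```shell
# List every Helix Core server directly or indirectly connected
# to the server this command is run against.
p4 topology
```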
Please reach out to us with feedback so we can improve the feature and remove the “tech preview” label from it in a future release.
Great news for admins: you will have fewer questions from end users about trusting new server fingerprints.
P4TRUST is no longer required for SSL connections where the server provides a certificate that's not self-signed and can be verified by the client. Clients based on the 2021.2 C/C++ P4API (including derived APIs) will now attempt to verify the server's SSL certificate against the local system's CA certificate store. One major benefit of this enhancement is that it will now be much easier to roll certificates or change server IP addresses. For customers with thousands of clients this is a game changer since clients will not be prompted to trust the new fingerprint.
P4TRUST can still be used in conjunction with certificates, which is likely desired when the certificate hasn’t been issued by a Certificate Authority, and P4TRUST can be set based on hostname in this use case.
Ever wanted an easier way to spot a potential problem prior to hearing about it from your end users? Real-time monitoring is now a possibility within Helix Core!
With added integration with leading tools like Prometheus, you can monitor metrics like the number of commands waiting on a lock, the number of current client connections to a given server, and the number of bytes of journal a replica is behind by — all without having to rely on retroactive log output. Check our blog for more detail on implementing real-time monitoring.
Our Professional Services team can help you install a comprehensive Prometheus/Grafana monitoring solution for your Helix Core topology, with dashboards and alerting based on the new real-time metrics as well as other key metrics.
In addition, for those using Prometheus, a new utility, p4mon-prometheus-exporter, is also now available for your convenience on the Linux x86_64 platform. Look for additional monitoring metrics in future releases based on your feedback.
Files residing within task streams can now be obliterated as desired with the newly added “-T” flag to the existing p4 obliterate command. This is helpful for reclaiming disk space or for cleaning up mistakes made by users who create file hierarchies in the wrong place.
Stream switching across depots is now officially supported as well, with the most common use case being a task stream that lives in a different depot than its parent stream. Look for added P4V support for these stream enhancements soon.
The p4 verify command is used to ensure server archives (depot files) are complete and without corruption. A new --only filter flag, where the actual filter values can be either BAD or MISSING, can be used to return only a subset of results from the verify command. The MISSING filter will avoid checksum calculations during the verification process for added efficiency. These filters can also be used with the -t flag to transfer only those matching files.
You can also now use a -R flag to repair missing files if identical content is found in an existing shelf. Lastly, optimizations have been added to the -z flag to improve memory usage. Gain more trust in the state of your archives in less time!
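The new filters and repair flag can be combined as sketched below; flag spellings follow the description above, so confirm against `p4 help verify`:

```shell
# Report only missing archives, skipping checksum calculation.
p4 verify --only MISSING //depot/...

# Transfer just the files that failed verification.
p4 verify -t --only BAD //depot/...

# Repair missing archives when identical content exists in a shelf.
p4 verify -R //depot/...
```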
Integration Engine Enhancements
Multiple integration scenarios have been improved in this release of Helix Core. The first is when a file is moved or renamed and a second file is then added under the first file's original name. The readded file can now be resolved after all other integrations have been resolved, and all resolves are now submitted together.
In addition, a new configurable named dm.resolve.ignoredeleted has been added to change the default behavior of the scenario where a file has been deleted in branch A but not in branch B.
Lastly, it is now possible to move or rename a file where either the target path is a substring of the source path, or vice-versa.
License by MAC Address
With more and more customers deploying at least parts of their Helix Core infrastructure to the cloud, it is now possible to issue licenses tied to a MAC address instead of an IP address. As IP addresses tend to be more fluid in the cloud, a MAC address will alleviate the need to request duplicate licenses over time.
Stream Spec Integration
Configuration as code is an inspirational mantra for many of our customers. Just like you’ve been able to integrate file changes across Streams, you can now integrate stream spec changes (for propagatable field values) across Streams as well.
In fact, you can even integrate file changes and stream spec changes in the same atomic operation if desired. The integration status indicator (istat) can now report what needs to be integrated across Streams: file changes only, stream spec changes only, or both.
Stream spec integration can be turned on or off globally via a new server configurable. Don’t reinvent the wheel, manage your configuration as code with Helix Core!
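A sketch of propagating stream spec changes; the stream name is hypothetical, and the -As flag (restricting the operation to the stream spec) is our reading of the release, so verify against `p4 help merge`:

```shell
# Check what needs integrating between a stream and its parent.
p4 istat //Products/dev

# Merge only the stream spec changes, then submit.
p4 merge -As -S //Products/dev
p4 submit -d "Propagate stream spec changes"
```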
Controlled Stream Views
Historically, child streams automatically inherited the views (paths, remapped, and ignored field values) of their ancestor (parent) streams. If you are using component-based development (CBD), this complicated matters for release streams that you may want to freeze in time, for example.
A new setting, Parent View, now exists on stream specifications to control view inheritance. Existing streams can be converted from “inherit” (the longstanding behavior) to “noinherit” (and vice versa). In addition, when converting from “inherit” to “noinherit”, you can have the system add comments into the stream spec detailing where the views originated from. Customers can control the initial parent view setting based on a new server configurable. Easily employ CBD best practices with Helix Core today!
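A sketch of what the new field looks like in a stream spec; the stream name is hypothetical:

```shell
# Excerpt of a hypothetical stream spec with inheritance disabled:
#
# Stream:      //Products/release-1.0
# ParentView:  noinherit
#
# Converting an existing stream from "inherit" to "noinherit" can
# optionally annotate the spec with comments showing where each
# view line originated.
```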
Shelved File Storage Optimizations
Shelve operations involving files will now only create new archive content when necessary. If a given file already exists via another shelf, instead of duplicating the archive content, the new shelf will simply reference the existing archive content.
This can result in substantial file storage savings if you frequently employ the shelve operation (ex: using Helix Swarm for code reviews) or if you are shelving very large files. This enhancement applies to new shelves created post-upgrade only. Scaling with Helix Core just got even easier!
Minimal Downtime Upgrade Enhancements
A new “p4 upgrades” command now allows you to monitor the status of an upgrade (ex: pending, running, completed, etc.) for a particular server or across all upstream servers in a multi-server environment. In addition, all upgrade steps introduced in versions 2019.2+ will now execute in the background, which can improve server availability and replication performance during an upgrade event. Easily stay up-to-date with new major Helix Core releases.
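Checking upgrade status is a single command:

```shell
# Show the status (pending, running, completed) of upgrade steps on
# this server and, in a multi-server environment, on upstream servers.
p4 upgrades
```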
P4 Failover Enhancements
If a failover attempt does not succeed for any reason, any journalcopy or pull threads that were stopped will now be automatically restarted for you. Additionally, a failover event now displays pertinent details about the source and target of the failover process at the start of the command, including in a preview mode. This gives you a necessary sanity check before invoking a failover event. Fail over with confidence using built-in functionality instead of having to write your own custom scripts!
SSO with Helix Authentication Service (HAS)
Simplify and standardize your authentication process with Helix Authentication Service (HAS). It enables you to integrate Helix Core (including clients and plugins), Helix ALM, and Surround SCM with your organization's Identity Provider (IdP). When used in conjunction with Helix Core, it requires the use of the Helix Authentication Extension.
HAS currently supports the OpenID Connect and SAML 2.0 authentication protocols. This service is internally certified with Microsoft Azure Active Directory (AAD), Okta, and Google Identity.
The P4 Protections Table has been optimized to allow you to more easily manage permissions. If you have a complex network topology, you can now assign multiple IP filters to a single protects entry.
If you have federated architecture and have a large number of protections entries, the replication overhead has been reduced. This reduces overall network traffic and wait time for file access in certain scenarios. Also included in this release is the ability to check protections based on a particular host.
P4 Failover Enhancements
Monitor the status of a target server via the new p4 heartbeat command. This helps you proactively identify conditions in which a manual p4 failover event should be considered. New triggers and extensions are also included.
New configurables are also available to tailor heartbeat monitoring to your needs. These include setting the desired interval time between heartbeat requests, defining the number of consecutive missed heartbeats to be declared dead, and more.
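A sketch of starting heartbeat monitoring and tuning it; the target address is hypothetical, and the configurable names are assumptions based on the description, so check `p4 help configurables`:

```shell
# Start monitoring a target server.
p4 heartbeat -t ssl:target-server:1666

# Tune the interval between heartbeat requests (milliseconds) and the
# number of consecutive misses before the target is declared dead.
p4 configure set net.heartbeat.interval=2000
p4 configure set net.heartbeat.missing.count=5
```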
Stream Spec Permissions
You can set permissions for configuration changes just like files. Permission options on stream specs include read, open, and write. These permissions are also fully supported by P4Admin — the GUI for administrating Helix Core connections, depots, users, and groups.
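As a sketch, stream spec permissions are granted through the protections table; the group and stream names below are hypothetical, and the permission spellings follow the read/open/write levels described above, so confirm them against the p4 protect reference:

```shell
# Hypothetical protections table entries for stream spec access:
#
#   =readstreamspec  group readers *  //Products/dev
#   =openstreamspec  group devs    *  //Products/dev
#   =writestreamspec group leads   *  //Products/dev
```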
Global Logging IDs
When working with structured logs, you can now correlate activity between two servers. Using the global logging ID, you can link activity on one server — for example, an edge server — with activity on another server, like a commit server. This makes it easier to troubleshoot issues across your topology.
Customize Stream Spec Definitions
Admins can now add additional fields to a stream spec and have custom field IDs automatically assigned. These fields can store additional metadata related to a stream, and convey information about your particular framework.
This is especially useful if you have implemented component based development (CBD) or IP re-use for your team/organization. As an added bonus, these custom fields will also be visible in P4V. Check out the new p4 streamspec command for more information.
Independently Upgrade Your Servers
Keep your teams working during major version upgrades with improved upgrade performance. Now it is possible to achieve minimal downtime for end users when upgrading a federated installation of servers.
Update commit and edge servers separately without having to take all servers offline at the same time. This applies only when upgrading from 19.1 to 19.2 and beyond. It saves you time and ensures your teams remain productive.
Structured Log Versioning
Now you can control when you want to move to an updated schema to take advantage of new improvements. With this release, structured logs have a formal, versioned schema that you can use and test before upgrading.
This decreases your risk of experiencing unanticipated “breaking changes” to existing internal processes or monitoring that is connected to structured logs. This release also:
- Enables easier integration with external log monitoring solutions.
- Adds previously missing database statistics.
Version Server Configurables
When a server config variable is changed, a versioned history will now be stored on the server. View the server configurables history with the new p4 configure history command.
This gives you a better understanding of the changes that have been made over time. And you can more easily troubleshoot issues that might arise from changed server settings.
Verify Files More Quickly
Verify archived files more easily with the new “-Z” flag for p4 verify, which is built on top of db storage. It allows you to quickly search through your system without needing to look through lazily copied files.
Obliterate in One Step
Purge and archive in a single step with the new “-p” flag for p4 obliterate. Now you can remove versioned files and keep the metadata with just one command.
Find Orphaned, Archived Files
Scan archives to find files that were left over from failed submits or archive-skipping obliterates. Orphaned archive file detection utilizes db.storage, which was introduced in 2019.1. Look for the new “-l” and “-d” flags for the p4 storage command.
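A sketch of an orphan scan; the subcommand forms are assumptions based on the flags described, so check `p4 help storage` for the exact syntax:

```shell
# Start a background scan for orphaned archive files under a path.
p4 storage -l start //depot/...

# Once the scan completes, delete the orphaned files it found.
p4 storage -d //depot/...
```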
Increase Performance for p4 obliterate
The p4 obliterate operation is now faster and less resource-intensive. The set of affected files is computed in a fraction of the time compared with previous versions. Dramatic improvements have been noted in benchmarking. Contact your account team to learn more.
Improve Productivity With p4 submit Option
A new option improves developer productivity for p4 submit. Admins can configure p4 submit to initially commit only metadata from the edge server to the commit server. The actual file content transfer is scheduled in the background for reverse replication. Developers do not need to wait for the files to transfer. This option can be set as the default transfer method. Depending on file sizes and submit frequency, the performance improvement can be dramatic.
Customize Workflows and Extend Product Functionality
Extensions are a new way for administrators to customize workflows and extend the functionality of the server. Triggers and extensions perform similar functions. Extensions are written in the Lua language, and are versioned and managed in their own central depot as self-contained bundles of code, metadata, and other assets. They have a closer integration with tools, and include a programmatic API. The extension code runtime is embedded within the server executable, so you no longer need to update external language systems on each of your servers.
NOTE: This functionality is currently unavailable for the Windows platform.
Gain Flexibility With Private Editing of Streams Spec
Stream spec editing gains new flexibility. Developers can privately edit stream specs, modifying only their own workspace without impacting other users or the project. Then code changes and/or stream spec changes can each be committed atomically.
Enhance Visibility for Git LFS Locking
Helix Core 2019.1 supports Git LFS locks. Locks can be created and are visible using Git clients or the p4 command line. Using Perforce clients, Git LFS locks are seen when you try to open the locked file. With Git clients, the lock is displayed during a commit.
Simpler Command to Execute Failover
The new P4 failover command simplifies the process of initiating failover from a master to a standby server. This command consolidates several discrete tasks into one command line with arguments, and enhances administrative control for planned and unplanned service interruptions. You can also designate an optional “mandatory” standby server to better prepare your failover strategy.
Support for SAML 2.0 Authentication
Integrate your 2018.2 Helix Core server and clients with Helix SAML to authenticate users via the command-line or client using popular solutions, such as Ping Identity, Okta, and others.
The 2018.1 release provides reliable and consistent replication of standby servers in a disaster recovery (DR) setup. Failover to a standby server in a DR scenario is significantly faster due to multiple steps being removed from the process via automation and a new replication method. This improved operational efficiency will result in reduced time and cost to recover service from a failure.
Admins can now hide the following sensitive server information from appearing within “p4 info”: server name, server address, server uptime, and server license IP address. For example, the license entry in “p4 info” will show “licensed” or “unlicensed.”
Improved Streams Usability
The 2018.1 release improves the usability of Streams by no longer requiring users to write long specs. Now, you can use wildcards to represent which folder and/or files are included in the path name of a Stream view.
Speed Up Remote Operations Even More with WAN Accelerators
The 2017.2 release allows you to use WAN acceleration technologies. Because WAN acceleration speeds up replication — even with extremely large files — it dramatically increases the likelihood that remote sites stay in sync at all times with central servers in a Perforce federated architecture deployment.
Boost Stability and Performance
Parallel sync operations are one of several techniques Perforce employs to make Helix Core the fastest VCS server on the planet. We’ve improved server resilience under load to support a greater number of simultaneous requests.
Faster File Transfers. Much Faster.
Transfer files over high latency networks up to 16 times faster than in 2016.2, even in federated deployments of Helix Core. See similar performance improvements for any communication between any server type — replicas, edge servers, commit servers, and proxies.
Support Git at Scale for Distributed Teams and CI
Helix Versioning Engine now supports Git at scale for distributed teams and multiple repos. Helix4Git speeds up builds by 40 to 80 percent and reduces storage by up to 18 percent with different mirroring options. For specifics, see Helix4Git.
Change Your File's Name or Location in a Single Command
Gone are the days when a “p4 move” command would require you to perform a “p4 edit” first. The 2017.1 release allows you to change the name or location of your files with a single command.
Identify and Sync Code Changes across Multiple DVCS Servers
Commands like “p4 files” and “p4 sync” now support using changelist identity so that it’s faster and easier to search, browse, and take actions on code changes that are shared across multiple Helix DVCS servers with different datasets.
Protect Your Servers from Outdated TLS Versions
Outdated cryptographic protocols compromise communications security over a computer network. This release enables greater flexibility to achieve tighter security by allowing admins to specify the minimum and maximum allowed TLS versions, preventing clients using other versions from connecting.
Submit Build Artifacts from Partitioned Clients
We gave you read-only clients, and you wanted more. We listened. Now you can submit build artifacts from partitioned clients to Helix while keeping new workspaces light and easily disposable for all your DevOps needs.
Turn Back Without Destroying History
Everybody makes mistakes. Easily undo an entire change list in a single operation, while retaining audit trails for enhanced security and compliance. Helix forgives, but doesn’t forget.
Expand your circle of trust and delegate permissions over a specific depot or path to a specific group or user so you don’t have to carry the weight of system administration on your shoulders.
What's New in Helix Core 2016.1?
Increase Collaboration Efficiency Among Distributed Team Members
Speed up your DVCS collaboration workflow. Now you can work locally, propose a change, create a new shelf for it, and push it to a remote server for a quick review before submitting it to the mainline. It’s faster and takes up less space in storage.
Ditto Mapping for Greater Flexibility
Stop jumping through hoops to make workspace mapping comply with your needs. Ditto mapping grants programmers the freedom to work on blocks of complex code individually, working from shared libraries without months of build-failure frustration.
Customize Your P4 Aliases
Unleash your inner James Bond and give your most-used commands p4 aliases that make logical sense to you. Or, go into stealth mode and string several p4 commands together to trigger behind-the-scenes processes that simplify workflows.
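As a sketch, aliases live in a plain-text file (commonly ~/.p4aliases, with the location configurable via P4ALIASES); the alias names below are just illustrations:

```shell
# Example p4 aliases file:
#
# Give a familiar name to a common command:
#   co = edit
#
# Chain several commands into one behind-the-scenes workflow:
#   update = sync && resolve -am
```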
Boost Automation with 'p4 reshelve'
Tool builders will love boosting automation in Helix with the p4 reshelve command.