High Availability Setup

High Availability can be applied to a Helix TeamHub cluster installation. The benefits of high availability include: on-demand scalability, zero-downtime maintenance, and maximum availability of Helix TeamHub.

Step 1: Before you begin

Before applying High Availability to your Helix TeamHub cluster setup, make sure that you have completed the prerequisite steps in Cluster Setup.

Step 2: Scaling up with Load Balancer

As mentioned in HA Deployment, an SSL load balancer is required to decrypt SSL connections and balance requests across the Helix TeamHub Web servers.

The Helix TeamHub package does not include a load balancer, so it must be installed separately. If no load balancer exists yet, set one up before continuing.
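As one possible sketch, an HAProxy configuration for SSL termination and round-robin balancing could look like the following. The certificate path and backend addresses are placeholders and must be replaced with your own values; TeamHub Web servers are assumed to listen for plain HTTP on port 80:

```
frontend hth_https
    bind *:443 ssl crt /etc/haproxy/certs/teamhub.pem
    default_backend hth_web

backend hth_web
    balance roundrobin
    # Hypothetical addresses of the Helix TeamHub Web servers
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```

The `check` option enables health checks so that requests are only routed to nodes that are up.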

Step 3: Mounting Shared Storage

With a load balancer, user requests are distributed randomly across the cluster nodes, so the data on the nodes would immediately fall out of sync. To prevent this, attach the same shared storage to each Helix TeamHub Web server. If no storage with a clustered file system is available, contact the Support team for further help.

After the shared storage is available, stop Helix TeamHub, mount the storage to /var/opt/hth/shared, and sync the existing data back. Then bring Helix TeamHub back online:

sudo hth-ctl stop
sudo mv /var/opt/hth/shared /var/opt/hth/shared.bak
# Mount storage to /var/opt/hth/shared and sync back the data
sudo rsync -av /var/opt/hth/shared.bak/ /var/opt/hth/shared/
sudo rm -rf /var/opt/hth/shared.bak
sudo hth-ctl start
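To make the mount persistent across reboots, the shared storage can be added to /etc/fstab. The entry below is a hypothetical example for an NFS export; the server name, export path, and mount options depend on your storage setup:

```
# /etc/fstab — hypothetical NFS export; replace server and export path with your own
storage.example.com:/export/hth  /var/opt/hth/shared  nfs  defaults,_netdev  0  0
```

The `_netdev` option delays mounting until the network is up, which avoids boot failures when the storage is network-attached.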

Step 4: Synchronizing SSH Host Keys

Because the SSH host keys differ between the cluster nodes, they must be synchronized. The Helix TeamHub configuration process can use an ssh directory on the shared storage and copy the SSH host keys to the usual /etc/ssh location. This enables every new Helix TeamHub Web server added to the cluster to have the same SSH host keys. On the first Helix TeamHub Web server, create the directory and copy the SSH host keys:

sudo mkdir -p /var/opt/hth/shared/ssh
sudo cp /etc/ssh/ssh_host_* /var/opt/hth/shared/ssh/
sudo chown root:root /var/opt/hth/shared/ssh/*
sudo chmod 600 /var/opt/hth/shared/ssh/*
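To confirm that every node ends up serving identical host keys, you can compare key fingerprints with ssh-keygen. The sketch below generates a throwaway key pair in a temporary directory to stand in for the shared keys (the paths are hypothetical); on a real node you would run ssh-keygen -lf against /var/opt/hth/shared/ssh/*.pub on each Web server and compare the output:

```shell
#!/bin/sh
# Illustration only: generate a throwaway host key pair in a temp directory,
# then print its fingerprint. Identical fingerprints across nodes mean the
# host keys are in sync.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/ssh_host_ed25519_key"

# On a real node, point this at /var/opt/hth/shared/ssh/*.pub instead:
ssh-keygen -lf "$tmp/ssh_host_ed25519_key.pub"

rm -rf "$tmp"
```

If the fingerprints match on every node, SSH clients will not see host-key warnings when the load balancer routes them to a different Web server.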

Step 5: Adding More Helix TeamHub Web Servers

After you have performed the steps above, additional Helix TeamHub Web servers can be added to the cluster. Because the Helix TeamHub configuration file is stored on the shared partition /var/opt/hth/shared, simply install the Helix TeamHub Web package and reconfigure it; Helix TeamHub automatically picks up the needed configuration.

Note

The Helix TeamHub Web machines must also meet the prerequisites, see Step 1: Before you begin in Cluster Setup.

Install using repositories

Install the package itself as root (recommended). If you have downloaded the TeamHub package, see Manually install from a downloaded TeamHub package.

RHEL and CentOS

  1. Configure the Perforce repository if you have not already done so, see Configure the Perforce repository.
  2. Run the following command to install the TeamHub package:

    sudo yum install hth-web

Ubuntu

  1. Configure the Perforce repository if you have not already done so, see Configure the Perforce repository.
  2. Run the following commands to install the TeamHub package:

    sudo apt-get update
    sudo apt-get install hth-web

Manually install from a downloaded TeamHub package

Upload the hth-web package to the server designated for the Web application role, then install the package as root.

RHEL and CentOS

rpm -ivh hth-web-X.X.X-stable.el7.x86_64.rpm

Ubuntu

dpkg -i hth-web_X.X.X_amd64.deb

Step 6: Changing Hostname

At this stage everything should be up and running, and requests should be distributed across all Helix TeamHub Web servers. However, Helix TeamHub is still bootstrapped with the hostname of the first Helix TeamHub Web server that was installed. To fix this, go to Helix TeamHub Admin and change the hostname to that of the load balancer.