High availability setup

High availability can be applied to a Helix TeamHub cluster installation. The benefits of high availability include on-demand scalability, zero-downtime maintenance, and maximum availability of TeamHub.

Step 1: Before you begin

Before applying high availability to your Helix TeamHub cluster setup, make sure that you have completed the cluster setup and that every machine meets its prerequisites. See Cluster setup.

Step 2: Scaling up with a load balancer

As mentioned in HA Deployment, an SSL load balancer is required. It decrypts SSL connections and balances requests across the TeamHub Web servers.

The TeamHub package does not include a load balancer, so one must be installed separately. For instructions on installing and configuring a load balancer, see Setting up HAProxy.
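For orientation, below is a minimal sketch of what such an HAProxy configuration might look like. The certificate path, backend names, and server addresses are placeholders, not values from this guide; see Setting up HAProxy for the supported configuration.

sudo tee /etc/haproxy/haproxy.cfg > /dev/null <<'EOF'
global
    daemon

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# Terminate SSL at the load balancer and forward plain HTTP
# to the TeamHub Web servers.
frontend hth_frontend
    bind *:443 ssl crt /etc/haproxy/certs/teamhub.pem
    default_backend hth_web

backend hth_web
    balance roundrobin
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
EOF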

Step 3: Mounting shared storage

With a load balancer in place, user requests are distributed across the cluster nodes at random, so data stored locally on each node would immediately fall out of sync. To prevent this, attach the same shared storage to each TeamHub Web server. If you do not have existing storage with a clustered file system, contact the Support team for further help.

When shared storage is available, stop TeamHub, mount the storage to /var/opt/hth/shared, and sync back the existing data. Then bring TeamHub back online:

sudo hth-ctl stop
sudo mv /var/opt/hth/shared /var/opt/hth/shared.bak
# Mount storage to /var/opt/hth/shared and sync back the data
sudo rsync -av /var/opt/hth/shared.bak/ /var/opt/hth/shared/
sudo rm -rf /var/opt/hth/shared.bak
sudo hth-ctl start
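How you mount the storage depends on your environment. As one illustration, assuming an NFS export at storage.example.com:/exports/hth (a hypothetical server and path), the mount step in the sequence above might look like this:

# Recreate the mount point (it was moved aside above) and mount the
# hypothetical NFS export; requires the NFS client tools to be installed
sudo mkdir -p /var/opt/hth/shared
sudo mount -t nfs storage.example.com:/exports/hth /var/opt/hth/shared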

Step 4: Synchronizing SSH host keys

Because the SSH host keys differ between the cluster nodes, they need to be synchronized. The TeamHub configuration process can use an ssh directory on the shared storage, copying the SSH host keys stored there into the /etc/ssh directory. This way, every new TeamHub Web server added to the cluster gets the same SSH host keys. On the first TeamHub Web server, create the directory and copy the SSH host keys:

sudo mkdir -p /var/opt/hth/shared/ssh
sudo cp /etc/ssh/ssh_host_* /var/opt/hth/shared/ssh/
sudo chown root:root /var/opt/hth/shared/ssh/*
sudo chmod 600 /var/opt/hth/shared/ssh/*
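To check later that the nodes ended up with identical host keys, you can compare fingerprints on each TeamHub Web server. The key file name below is one example; your servers may have additional key types:

# Print the fingerprint of the RSA host key; it should match on every node
ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub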

Step 5: Adding more Helix TeamHub Web servers

After you have performed the steps above, you can add more TeamHub Web servers to the cluster. Because the TeamHub configuration file is stored on the shared partition /var/opt/hth/shared, you only need to install the TeamHub Web package on each new server and reconfigure it (see the sketch after the installation instructions below); TeamHub automatically picks up the required configuration.

Note

The TeamHub Web machines must also meet the prerequisites. See Step 1: Before you begin in Cluster setup.

Install using repositories

Installing the package as root from the Perforce repositories is the recommended method. If you have downloaded the TeamHub package instead, see Manually install from a downloaded Helix TeamHub package.

RHEL and CentOS

  1. Configure the Perforce repository if you have not already. See Configuring the Perforce repository; a rough sketch follows after these steps.
  2. Run the following command to install the TeamHub package:

    sudo yum install hth-web
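As a rough sketch, the repository definition for step 1 can look like the following. The baseurl and GPG key location shown here are assumptions; take the authoritative values from Configuring the Perforce repository.

# Import the Perforce package signing key (assumed location)
sudo rpm --import https://package.perforce.com/perforce.pubkey
# Define the yum repository (assumed baseurl; adjust for your OS release)
sudo tee /etc/yum.repos.d/perforce.repo > /dev/null <<'EOF'
[perforce]
name=Perforce
baseurl=https://package.perforce.com/yum/rhel/7/x86_64
enabled=1
gpgcheck=1
EOF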

Ubuntu

  1. Configure the Perforce repository if you have not already. See Configuring the Perforce repository.
  2. Run the following commands to install the TeamHub package:

    sudo apt-get update
    sudo apt-get install hth-web

Manually install from a downloaded Helix TeamHub package

Upload the hth-web package to the server designated for the Web application role, and install the package as root.

RHEL and CentOS

rpm -ivh hth-web-X.X.X-stable.el7.x86_64.rpm

Ubuntu

dpkg -i hth-web_X.X.X_amd64.deb
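After the hth-web package is installed on a new Web server, trigger a configuration run so the server picks up the shared configuration from /var/opt/hth/shared. The reconfigure subcommand shown here is an assumption based on the hth-ctl tooling used earlier in this guide; consult the hth-ctl help output if it differs:

# Re-run the TeamHub configuration process on the new Web server
# (assumed subcommand of the hth-ctl tool used in earlier steps)
sudo hth-ctl reconfigure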

Step 6: Changing hostname

At this stage, everything should be up and running, and requests should be distributed across all Helix TeamHub Web servers. However, TeamHub is still bootstrapped with the hostname of the first TeamHub Web server that was installed. To fix this, go to TeamHub Admin and change the hostname to that of the load balancer.
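Once the hostname points at the load balancer, a quick smoke test is to request the site through it and confirm an HTTP success code. The hostname teamhub.example.com below is a placeholder for your load balancer's hostname:

# Request the site through the load balancer and print the HTTP status code
curl -sk -o /dev/null -w '%{http_code}\n' https://teamhub.example.com/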