
Initial setup

Assumption: You already have a setup resembling Figure 1, with a GoCD Server that uses an external Postgres database. To know more, please read our documentation on Enabling GoCD server to use an external Postgres database.

Enable replication on the primary Postgres instance

The recommended replication setup is Postgres’ streaming replication combined with log shipping. In this setup, the two Postgres servers, called “primary” and “standby”, are configured so that the standby continuously replicates the primary. Log shipping is set up alongside this; it requires a network drive shared between the two Postgres servers, and allows replication to continue even if one of the Postgres servers has to be restarted briefly.

  1. As log shipping needs a shared drive, it is assumed that you have a shared drive mounted at /share, on both the Postgres server hosts. This acts as a bridge between the two.
  2. On the primary Postgres instance, enable a replication user by running this as superuser:

    CREATE USER rep REPLICATION LOGIN ENCRYPTED PASSWORD 'rep';

    In the example above, the replication user, “rep”, has a password “rep”.

  3. Then, give the replication user enough permission to log in to the primary Postgres instance from the standby Postgres instance. This is done by adding this to pg_hba.conf:

    # pg_hba.conf
    host  replication  rep  <ip_address_of_standby_postgres_server>/32  md5
  4. The primary Postgres server is nearly ready. It now needs to be set up to allow replication. Update postgresql.conf with these options:

    archive_mode = on
    archive_command = 'test ! -f /share/primary_wal/%f && (mkdir -p /share/primary_wal || true) && cp %p /share/primary_wal/%f && chmod 644 /share/primary_wal/%f'
    archive_timeout = 60
    max_wal_senders = 1
    hot_standby = on
    wal_level = hot_standby
    wal_keep_segments = 30

    Learn more about these options at Archiving WAL files and Replication.

  5. Restart the primary Postgres server.
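The archive_command above can be exercised locally to see what Postgres runs for each completed WAL segment. The following sketch simulates it with temporary directories standing in for %p (the finished WAL file) and /share/primary_wal; the segment name is made up for illustration.

```shell
# Simulate archive_command using temp dirs: WAL_SRC stands in for pg_wal,
# ARCHIVE for /share/primary_wal. The segment name is illustrative only.
WAL_SRC=$(mktemp -d)
ARCHIVE="$(mktemp -d)/primary_wal"
SEG=000000010000000000000001

echo "wal-data" > "$WAL_SRC/$SEG"   # fake a completed WAL segment (%p)

# Same pipeline as archive_command: never overwrite an archived segment,
# create the archive directory if needed, copy, then make it world-readable.
test ! -f "$ARCHIVE/$SEG" \
  && (mkdir -p "$ARCHIVE" || true) \
  && cp "$WAL_SRC/$SEG" "$ARCHIVE/$SEG" \
  && chmod 644 "$ARCHIVE/$SEG"

ls "$ARCHIVE"
```

Because of the leading `test ! -f`, re-running the command for an already archived segment fails instead of silently overwriting it, which is the safe behaviour Postgres expects from an archive_command.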

Setup a standby Postgres instance for replication

Given that the primary Postgres instance has been set up for replication, the standby Postgres instance needs to be initialised with a base backup of the primary instance, and then configured to continuously replicate from the primary.

  1. Ensure that the version of the Postgres instance on the standby is the same as the version of that on the primary.
  2. Choose an empty directory to serve as the data directory for the new instance, and create a base backup from the primary Postgres instance. This is how a base backup is taken:

    pg_basebackup -h <ip_address_of_primary_postgres_server> -U rep -D <empty_data_directory_on_standby>

    At this point, you should also review the connection and authentication settings of the standby Postgres server. For example, you might have to alter postgresql.conf on the standby server and change the listen_addresses property so that it reflects the standby node’s host.

  3. Set up the standby instance to replicate from the primary instance. Create a file called recovery.conf in the Postgres data directory (the one used in pg_basebackup above) and populate it with:

    On Linux:

    standby_mode = on
    primary_conninfo = 'host=<ip_address_of_primary_postgres_server> port=5432 user=rep password=rep'
    restore_command = 'cp /share/primary_wal/%f %p'
    trigger_file = '/path/to/postgresql.trigger.5432'

    On Windows:

    standby_mode = on
    primary_conninfo = 'host=<ip_address_of_primary_postgres_server> port=5432 user=rep password=rep'
    restore_command = 'copy \\sharedDrive\primary_wal\%f %p'
    trigger_file = '\path\to\postgresql.trigger.5432'

    You may optionally set up archive cleanup. This keeps clearing WAL files from the archive location as the changes are successfully replicated to the standby Postgres server. Just append the lines below to recovery.conf:

    On Linux:

    archive_cleanup_command = 'pg_archivecleanup /share/primary_wal %r'

    On Windows:

    archive_cleanup_command = 'pg_archivecleanup \\sharedDrive\primary_wal %r'

    References for these options are at: Recovery Configuration.

  4. Restart the standby Postgres server.
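Steps 2 and 3 above can be scripted on a Linux standby. In this sketch, PRIMARY_IP is a placeholder address and a temporary directory stands in for the empty data directory; the pg_basebackup call is shown commented out, since it needs a reachable primary.

```shell
PRIMARY_IP=192.0.2.10   # placeholder: address of the primary Postgres server
DATA_DIR=$(mktemp -d)   # stands in for the empty data directory on the standby

# Step 2 (requires a reachable primary, so commented out in this sketch):
# pg_basebackup -h "$PRIMARY_IP" -U rep -D "$DATA_DIR"

# Step 3: write recovery.conf into the restored data directory (Linux form),
# reading archived WAL from the /share mount described in the initial setup:
cat > "$DATA_DIR/recovery.conf" <<EOF
standby_mode = on
primary_conninfo = 'host=$PRIMARY_IP port=5432 user=rep password=rep'
restore_command = 'cp /share/primary_wal/%f %p'
trigger_file = '/path/to/postgresql.trigger.5432'
EOF

grep standby_mode "$DATA_DIR/recovery.conf"
```

After writing the file, restarting the standby Postgres server (step 4) puts it into continuous recovery against the primary.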

Setup a standby (secondary) GoCD Server

Given that a standby Postgres instance has been set up for replication, we can now set up the standby GoCD Server to use that standby Postgres instance. Since that Postgres instance will be in read-only mode, the standby GoCD Server needs to be told to start itself in a read-only mode as well.

  1. Ensure that the version of the GoCD Server on the standby is the same as the version of that on the primary.
  2. Create a file named business-continuity-token in the GoCD server config directory (usually /etc/go/ on Linux and <GoCD server installation dir>/config/ on Windows) on the primary server. Put a plain-text business continuity token in this file; it will only be used for Business Continuity sync and login.

    Sample file:

        user = password
  3. Add the business continuity addon jar to the <GoCD installation folder>/addons folder.

  4. Get a base backup of the primary GoCD Server.

    * On Linux: Copy over entire config directory /etc/go/ and file /etc/default/go-server from primary server to the standby server.

    * On Windows: Copy over entire config directory <GoCD server installation dir>/config from primary server to the standby server.

  5. Set up the database configuration to point to the standby Postgres instance. Usually this configuration is very similar to the one in /etc/go/ (on Linux) or <GoCD server installation dir>/config/ (on Windows) on the primary GoCD Server, with the database host changed to point to the standby Postgres instance.

  6. Start up the standby GoCD Server in passive state, by setting the system property go.server.mode to the value standby and the system property bc.primary.url to the base URL of the primary GoCD Server (for instance, https://primarygo:8154). So, your standby GoCD Server instance should be started with arguments such as:

    -Dgo.server.mode=standby -Dbc.primary.url="https://primarygo:8154"

    Edit the file wrapper-properties.conf on your GoCD server and add the following options (replace primarygo with the IP of your primary GoCD server). The location of the wrapper-properties.conf can be found in the installation documentation of the GoCD server.

    # We recommend that you begin with the index `100` and increment the index for each system property
    wrapper.java.additional.100=-Dgo.server.mode=standby
    wrapper.java.additional.101=-Dbc.primary.url=https://primarygo:8154

    If you’re running on docker using one of the supported GoCD server images, set the environment variable GOCD_SERVER_JVM_OPTIONS, replacing primarygo with the IP of your primary GoCD server:

    docker run -e GOCD_SERVER_JVM_OPTIONS="-Dgo.server.mode=standby -Dbc.primary.url=https://primarygo:8154" ...
  7. After you have completed all of the aforementioned steps and restarted the standby GoCD server, log in to the standby dashboard using the business continuity token you set up in the business-continuity-token file in Step 2 above.

  8. On successful login, you will be presented with a screen like this. You can also visit https://<sec-server>:<port>/go/add-on/business-continuity/admin/dashboard to check the sync status:

    Figure 5: Standby GoCD Server - Done!

    This is the standby GoCD Server dashboard. It tells you about the state of the sync and automatically updates every few seconds.
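Creating the business-continuity-token file from step 2 takes only a couple of shell commands. In this sketch, a temporary directory stands in for /etc/go, and "user"/"password" mirror the sample file shown above.

```shell
CONFIG_DIR=$(mktemp -d)   # stands in for /etc/go (Linux) in this sketch

# Write the token in the "user = password" form shown in the sample file,
# and keep it readable only by the GoCD server's user:
printf 'user = password\n' > "$CONFIG_DIR/business-continuity-token"
chmod 600 "$CONFIG_DIR/business-continuity-token"

cat "$CONFIG_DIR/business-continuity-token"
```

These are the same credentials you later use to log in to the standby dashboard in step 7.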


1. Files which get synced

Notable files which don’t get synced are:

2. Other options and ideas

DNS setup for the virtual IP

If you have control over your organization’s DNS server, or can persuade an administrator with the right privileges to help, it is recommended to set up a DNS record pointing to the virtual IP. That way, any switch of the virtual IP from a primary GoCD Server to a standby GoCD Server will work seamlessly for all users.

Since the “value” of the virtual IP never changes, the DNS record does not need to have a low TTL (time to live).

Setup to ease changing of GoCD Server from standby to primary

Just like the Postgres recovery trigger file, you can set up a trigger file that controls whether a GoCD Server starts in an active state or in standby mode (the go.server.mode system property). In a startup file such as /etc/default/go-server, you can have a few lines such as:

if [ -e "/etc/go/" ]; then
  export GOCD_SERVER_JVM_OPTIONS="$GOCD_SERVER_JVM_OPTIONS -Dgo.server.mode=standby -Dbc.primary.url=https://primarygo:8154"
fi

This ensures that the GoCD Server starts in standby mode only if the file under /etc/go/ exists. When you want this GoCD Server to become the primary instead, remove the file; upon restart, the GoCD Server will no longer start in standby mode.
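The effect of that conditional can be demonstrated locally. In this sketch, MARKER is a placeholder for whichever trigger file under /etc/go/ you choose:

```shell
MARKER=$(mktemp)           # placeholder for the trigger file under /etc/go/
GOCD_SERVER_JVM_OPTIONS=""

# Marker present: the server would start with the standby flags appended.
if [ -e "$MARKER" ]; then
  export GOCD_SERVER_JVM_OPTIONS="$GOCD_SERVER_JVM_OPTIONS -Dgo.server.mode=standby -Dbc.primary.url=https://primarygo:8154"
fi
echo "with marker:$GOCD_SERVER_JVM_OPTIONS"

# Remove the marker (the manual failover step) and re-evaluate:
# no standby flags are added, so a restart would bring the server up as primary.
rm "$MARKER"
GOCD_SERVER_JVM_OPTIONS=""
if [ -e "$MARKER" ]; then
  export GOCD_SERVER_JVM_OPTIONS="$GOCD_SERVER_JVM_OPTIONS -Dgo.server.mode=standby -Dbc.primary.url=https://primarygo:8154"
fi
echo "without marker:$GOCD_SERVER_JVM_OPTIONS"
```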