Operational Documentation

This is a replication for the IHTSDO GitHub wiki - https://github.com/IHTSDO/ops-docs/wiki

This site contains documentation for deploying the release management system, together with general operational documentation.



The build and release process is broken into a number of distinct stages.

Release stages


The build process is run by Jenkins and Maven, with Jenkins triggering the build and controlling progress through the stages, and Maven doing the actual build.

Source is stored at GitHub. Most repositories are public, with a few exceptions. Jenkins needs write access to the repositories, especially for the release process.

As well as the usual collection of jars, native Debian packages are produced using the jdeb Maven plugin.

The resulting artifacts are deployed to Sonatype Nexus. Nexus has an apt plugin, which generates apt metadata from the deb packages uploaded to its repositories.
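For example, a Debian client can then consume such a repository through a standard apt source entry, since Nexus serves the generated metadata over HTTP(S). This is a sketch only; the host, repository and distribution names below are hypothetical, not the real IHTSDO endpoints:

```shell
# /etc/apt/sources.list.d/nexus.list -- host, repository and distribution names are hypothetical
deb https://nexus.example.org/repository/debian-releases stable main
```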

Packaging sequence

Code analysis

A job should exist that runs SonarQube against each code base every night, producing a code quality report.
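Under the hood, such a job typically amounts to a Maven invocation along these lines. The server URL and token variable are placeholder assumptions, not the actual IHTSDO settings; the Jenkins job would inject the real values:

```shell
# Hypothetical SonarQube server; the Jenkins job supplies the real URL and credentials.
mvn clean verify sonar:sonar \
  -Dsonar.host.url=https://sonarqube.example.org \
  -Dsonar.login="$SONAR_TOKEN"
```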

Build Lifecycle

The broad life cycle runs as follows:

  • Configure the project master pom.xml to use jgitflow.
  • Configure GitHub repository to allow the 'Cloud Build Servers' access.
  • Configure Maven to produce .deb files via the jdeb plugin. See the SNOMED Release Service API pom.xml for an example.
  • Create the src/deb tree containing the supervisor.conf to start the app, and the control directory containing the control file and the pre and post scripts. Again, see the SNOMED Release Service API for examples.
  • These two combined should produce a working, self-contained .deb file which listens on a given port.
  • Set up SNAPSHOT and RELEASE jobs in Jenkins by cloning existing jobs and editing. Be sure to check the advanced configuration options, as they may be hidden in the default view. GitHub hooks will be automatically configured.
  • Upon running the jobs, the deb packages should be produced and all artifacts uploaded to the Nexus Repository.
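Before relying on the Jenkins jobs, the resulting package can be sanity-checked locally. A sketch, with an illustrative artifact path rather than a real one:

```shell
# Inspect the package metadata produced by the jdeb plugin (path is illustrative).
dpkg-deb --info target/my-service-1.0.0.deb

# List the contents, e.g. to confirm supervisor.conf and the control scripts made it in.
dpkg-deb --contents target/my-service-1.0.0.deb | grep supervisor
```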

Deploy life cycle

  • In the IHTSDO ansible repository and associated inventory, copy an existing role to a new folder and edit it for the new application (in particular, check defaults and naming).
  • Add new groups to the inventory files with appropriate configuration.
  • On the Ansible Jenkins server clone an existing job and edit as required.
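The role-copy step above can be sketched as follows. The repository layout, role names and variables here are assumptions for illustration; the real IHTSDO roles will differ:

```shell
# Sketch of copying an existing Ansible role as the starting point for a new application.
cd "$(mktemp -d)"                         # demo workspace; in practice, the ansible repo root

# Stand-in for an existing role in the repository.
mkdir -p roles/existing-app/defaults roles/existing-app/tasks
echo "app_port: 8080" > roles/existing-app/defaults/main.yml

# Copy the role to a new folder for the new application...
cp -r roles/existing-app roles/new-app

# ...then edit its defaults and naming for the new application.
echo "app_port: 8081" > roles/new-app/defaults/main.yml

cat roles/new-app/defaults/main.yml       # prints: app_port: 8081
```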

Adding a new server

  • Create the server in Digital Ocean's control panel, adding your SSH key to the server.
  • Edit the appropriate ansible inventory files to add the new server
  • Run: ansible-playbook -i inventory/INVENTORY_FILE system_setup.yml -u root
  • Run the appropriate playbook, e.g. ansible-playbook -i inventory/INVENTORY_FILE snomed_release_service.yml
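As a quick sanity check before running the full playbook, connectivity to the new server can be confirmed with Ansible's ping module. NEW_HOST_OR_GROUP is a placeholder for whatever entry was added to the inventory:

```shell
# NEW_HOST_OR_GROUP stands in for the host or group added to the inventory file.
ansible -i inventory/INVENTORY_FILE NEW_HOST_OR_GROUP -m ping -u root
```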

Some notes on Google Compute vs Digital Ocean

For the first deployment on a Google Compute instance, some issues were encountered. If these turn out to be a regular occurrence, the work-arounds could be automated in the system_setup.yml playbook above.

  • Digital Ocean allows users to log on as root by adding SSH keys to /root/.ssh; in contrast, Google Compute creates individual users, e.g. ansible. Because this newly created user conflicts with the user that Ansible attempts to add, it is necessary to log onto the box and change the uid:
sudo usermod -u 4012 ansible
#...and chown their home directory if required:  
cd /home
sudo chown -R ansible:ansible ansible
  • The Debian OS image loaded in Google Compute did not appear to support HTTPS apt sources out of the box, which is a problem for picking up packages. The following command should also be run:
sudo apt-get install apt-transport-https
  • Additionally, because Google Compute does not set up access to the root account, the /root/.ssh folder is missing, which causes the Ansible setup script to fail; create it first:
mkdir /root/.ssh
chmod 700 /root/.ssh

Further investigation is needed as to what happens to the keys here if users attempt to ssh in as root directly.

  • Not specific to Google Compute, but there is a first-time 'gotcha': SSH connections require an additional step of accepting the RSA fingerprint of a server before they will connect automatically (the user must respond 'yes'). If these setup steps are ever automated, SSH connections to known servers should be initialised in advance. An example of this is the SSH tunnel to the ID-Gen Service used by the SRS.
ssh-keyscan -H someIpOrHostname >> ~/.ssh/known_hosts


