Ansible + Github Webhooks: Automating App Deployment

Executing an Ansible Playbook on a server via Github Webhooks

A totally automated deployment process for web applications can be achieved with Ansible playbooks used in conjunction with Github Webhooks. This article walks through the process of setting up such a solution to build and deploy a Javascript project on a remote production server, where the project is cloned, installed, and built before replacing the live app.

In the previous article we created a playbook for exactly this purpose, run locally on a Mac, as an introduction to using Ansible. The playbook environment used in this article will be slightly different, optimised to run on a remote server and building upon the setup we covered previously. That article will be linked where it is referred to throughout this piece.

The high level workflow of our entirely automated solution starts when you perform a git push to your master branch, described as follows:

# automated deployment process

-> perform a git push (to master branch)
-> `push` event webhook delivery to deployment server
-> ansible deployment playbook is run:
   -> connects to production servers
   -> fetches latest commit and builds project
   -> moves build to live HTTP directory
-> end of playbook

Every time a push is made to the master branch, details of the event will be delivered to our deployment server, which will trigger an Ansible playbook to run. The playbook will be configured to connect via SSH to your production servers, fetch the latest master commit from Github, build the project, and move that build into your live directory.

This process will be broken down and documented with the following tasks:

  • Setting up your Github personal access keys, to connect to the Github API as an individual account
  • Installing an Express server on a “deployment” server to handle Webhook payloads
  • Programmatically creating and testing Webhooks, relying on the Github API with the @octokit/rest package, a REST API client for Github. We’ll also cover how to use the Github UI to set up Webhooks
  • Generating an SSH key on the deployment server and adding it to your Github SSH and GPG Keys. This key will be forwarded by Ansible to the remote hosts as the playbook is run. We also need to add your production servers to your known_hosts, and perform ssh-copy-id for logging in without the need for inputting authentication
  • Setting up a (Python) virtual environment and installing Ansible, configuring the environment and including the playbook to be run. We’ll also use an Ansible Vault to store the production server’s SSH password. At this stage we can test the playbook in the Terminal to ensure it runs without errors
  • Now we can include an exec() call within the Express server’s Webhook route, that will execute a command to run the Ansible playbook. Within exec() we will set an environment variable specifying our own Ansible configuration file, as well as use absolute paths to necessary configurations

Note: you may wish to have a specific branch for automated deployments, such as a production branch, but for this article we will stick with master.

For this setup we will be adopting multiple VPSs / Instances / Droplets — whichever VPS solution you are comfortable using.

One VPS will be the deployment server, that will receive Github Webhooks and host the Ansible playbook and SSH key. When this playbook runs, it will connect to your production servers to perform its tasks:

# server communication

github
  | webhook payload
deployment server
  | playbook via SSH
production server(s)
  -> clone github repository
  -> install dependencies
  -> build app
  -> replace live app build

It will be assumed that you already have a VPS deployed for the deployment role, along with other production VPS instances (if not, check out the previous article for insight on updating a production server via a playbook).

Setting up the Deployment Server

There is a bit of setup required to get all the cogs of the deployment server working together. Nonetheless, all will be documented and explained, consisting of:

  • Getting your Github credentials
  • Setting up an express server that will process Webhooks from Github, along with an Nginx reverse proxy to route those requests to it
  • Ansible virtual environment setup
  • SSH key setup and adding remote hosts as known hosts

Remember, this is our deployment server, whose role is to orchestrate the deployment process to your remote hosts.

Visit the Personal Access Tokens section on Github (under Settings -> Developer Settings), and generate a new access token. This will be used with @octokit/rest further down to make authenticated requests to the Github API.

We will be utilising an express server to serve Github Webhook HTTP requests. We will not need to change much from the default express generator boilerplate, only needing one route to handle Webhook payloads. Requests will be reverse-proxied via an Nginx server configuration, that will also be documented.

Note: The below commands utilise yum in a CentOS server environment, be sure to use what is suitable for your chosen Linux distribution.

# setting up deployment server

# install nginx
sudo yum install nginx
# install sshpass: SSH utility for non-interactive SSH login
sudo yum install sshpass
# install express
yarn add express
yarn global add express-generator@4
# create folder for deployment
sudo mkdir /var/deployment
# change permission to your user
sudo chown -R <your_username> /var/deployment
# generate express server & install dependencies
express /var/deployment/server && cd /var/deployment/server
#install octokit/rest (Github API client)
yarn add @octokit/rest
# start and enable nginx
sudo service nginx start && sudo systemctl enable nginx

I have opted for the /var/deployment directory to store our project files, but ultimately this location does not matter too much. Our Express server is sitting in /var/deployment/server.

Great, most dependencies relating to server setup are installed already. Let’s define an Nginx reverse proxy now to route requests to the Express server over an encrypted SSL connection. Placeholders are shown in angle brackets; copy the following and replace those values with your own:

Note: This configuration file will be placed in the conf.d directory of Nginx, just like any other server configuration.

# /etc/nginx/conf.d/github.deployment.conf

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name <your domain>;
    ssl_certificate <your_crt>;
    ssl_certificate_key <your_key>;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate <your_ca_bundle>;

    location /github-api {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:3001/;
    }
}

Restart Nginx to apply the updated configuration:

sudo service nginx restart

We’ll be needing our Express server running in the background, reliably. This means auto restarts when a crash occurs, auto reloading as code changes come in, and auto boot when the server restarts. We can use PM2 for this, which can be installed as another global NPM package:

# install pm2 and run express server in background
yarn global add pm2
# load on boot
pm2 startup
> a startup command will be output to the terminal - execute this command now
# start express server as a process on a dedicated port
PORT=3001 pm2 start /var/deployment/server/bin/www --watch --name 'Github Deployment'
# save pm2 list
pm2 save
# verify server is running
pm2 list

Note: The --watch flag has been added to the pm2 start command, meaning the process will reload if changes are made to the code. The PORT environment variable has been defined beforehand too so our server listens on this port.

The server is now running and listening on port 3001. Why this port? Well, port 3000 is the default port for a range of services related to Javascript (CRA, Express, SocketIO, etc…), and choosing a different one ensures there is less likelihood of conflicts in the future.

All that’s left to do is configure our Webhook route, at routes/index.js. We’ll do this once the Ansible playbook is in place.

Now let’s create a virtual environment directory where our Ansible playbook will reside. The following commands install Python 3.6 along with Pip, before installing virtualenv and initialising it:

# install python and pip
sudo yum install python36 python-pip
sudo pip install -U pip
sudo pip install -U virtualenv
# initialise virtualenv
virtualenv --system-site-packages -p python3.6 /var/deployment/ansible
# go to directory & activate environment
cd /var/deployment/ansible && source bin/activate
# install ansible
pip install ansible
# create directory for storing playbook files
mkdir playbook

Great, our deployment server is now looking a lot more capable, with our Express server running and dedicated Python environment for Ansible initiated. We now have the following setup:

# deployment folder structure

/var/deployment
├── server      # express server
└── ansible     # virtual environment
    └── playbook    # playbook specific files will go here

Let’s now move on to setting up some SSH.

SSH plays a big role in Ansible automation. We’ll use SSH as a means of connecting to remote production servers, as well as connecting to Github to clone the repository of the project to be deployed.

I documented the Github SSH key setup process in detail in the previous article, so I’ll just summarise the process here. Generate a new key now and open the resulting key file to copy its contents:

# generate ssh key for github
ssh-keygen -t rsa -b 4096 -C "<your_github_email_address>"
# no passphrase
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
# open the key file
less ~/.ssh/

Note: I have opted to leave out a key passphrase here, as Ansible will not be configured to handle this prompt when authenticating with the key. However, it is worth stressing that Ansible is absolutely capable of handling arbitrary prompts (check out the expect module) that may occur throughout playbook execution, that you can explicitly predict and handle within the playbook itself.
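For illustration, had we chosen a passphrase-protected key, an expect task might look like the following sketch. The module name and its options are real, but the path, prompt pattern, and passphrase are placeholders, and the module requires the pexpect Python package on the target:

```yaml
# illustrative only: answering an interactive passphrase prompt
- name: Add passphrase-protected key to the agent
  ansible.builtin.expect:
    command: ssh-add /home/<user>/.ssh/id_rsa
    responses:
      'Enter passphrase.*': '<key_passphrase>'
```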

Now navigate to your Github SSH and GPG Keys page and visit Add SSH Key to add the key. Ansible will later on utilise SSH agent forwarding to use this key for all production servers — there is no need to generate more keys for each of them.

Now add each of your production servers as known hosts, ensuring no “trusted host” dialogues show up when attempting to connect to them when the playbook runs:

ssh-keyscan -H <production_server_ip> >> ~/.ssh/known_hosts

For the purpose of logging in to your production servers without password prompts, we’ll also utilise OpenSSH to copy your public key. Do this for each of your production servers:

# execute on deployment server
ssh-copy-id <user>@<ip_address>
> input remote host password
# test connection
ssh <user>@<ip_address>

Upon testing your connections you should be logged in within a split second, verifying all is working.

Adding Github Webhook

Setting up the Webhook is simple enough: it can be done within the Settings -> Webhooks section of any repository. We can either do this via the Github UI, or do it programmatically with @octokit/rest.

  • Your payload URL should be an HTTPS URL, with the SSL Verification option turned on. Our Nginx configuration earlier defined a location of /github-api. If you’re following along, your payload URL would be https://<your_domain>/github-api
  • Ensure the content type is application/json. Express will handle this data in request.body
  • Include a long random string as a Secret. This adds an additional mechanism of verifying that Github is indeed sending requests to us
  • Ensure active is checked. We’re ready to go with this Webhook

What if we wanted to create a Webhook programmatically using the Github API? I’m glad you asked. I have created a Github Gist to do just this, available here. You authenticate using your personal access token, before calling octokit.repos.createHook:

// snippet from create-webhook.js Gist - view full Gist here
...

async function createPushWebhook (conf) {
  const octokit = new Octokit({
    auth: personalAccessToken
  });

  const res = await octokit.repos.createHook({
    owner: conf.owner,
    repo: conf.repo,
    name: 'web',
    config: {
      url: conf.url,
      content_type: 'json',
      secret: webhookSecret,
      insecure_ssl: 0
    },
    events: ['push']
  }).catch(e => {
    console.log(e);
  });

  console.log('success. Hook id: ' +;
}

Store the script at /var/deployment/server and run with node to create the Webhook:

# create webhook programmatically
node create-webhook.js

I have also created another Gist to test the Webhook that can be found here. This script supports a hook_id argument so you can easily test any Webhook at the command level. The API method used is octokit.repos.testPushHook:

// snippet from test-webhook.js Gist - view full Gist here
...

const res = await octokit.repos.testPushHook({
  owner: conf.owner,
  repo: conf.repo,
  hook_id: conf.hook_id
}).catch(e => {
  console.log(e);
});

To use this script, execute it with a hook_id argument:

node test-webhook.js --hook_id '123456'

With an active Webhook deployed we can focus now on the Ansible playbook.

Setting up Ansible Playbook

In this section we will be taking the playbook written in the previous article (that also covers Ansible Vault) and applying it to a server environment — being our deployment server. We’ll be storing 4 files within our virtual environment playbook folder set up earlier:

# ansible file structure

/var/deployment/ansible
└── playbook
    ├── ansible.cfg
    ├── group_vars
    │   └── all.yml
    ├── inventory.yml
    └── playbook.yml

We’ve separated all Ansible related files within the playbook folder. Let’s break down these files.

An Ansible Configuration file, named ansible.cfg. This file simply overwrites some defaults to enable SSH agent forwarding:

# ansible.cfg

[defaults]
transport = ssh
ssh_args = -o ForwardAgent=yes

A group_vars folder with all.yml within. Ansible recognises group_vars as a directory to store variables on a per host basis, the filename being the host name. In our case, all.yml reflects that the variables are available for all our defined hosts. Within this folder we will store an encrypted Ansible Vault generated password for your production SSH passwords:

# generating encrypted password with Ansible Vault

# store vault passphrase
echo '<vault_passphrase>' > ~/.ansible-vault-pw
# encrypt production ssh password
ansible-vault encrypt_string \
  --vault-id user@~/.ansible-vault-pw \
  '<ssh_password>' \
  --name 'production_server1_password'
# encrypted password will be output
> production_server1_password: !vault |

Replace the bolded text with your own Vault passphrase, username and SSH password respectively. I’ve named the password variable production_server1_password, but this can be changed to what is suitable for your setup.

Copy the entire encrypted password output by ansible-vault and paste it in group_vars/all.yml. This is the only group variable we’ll need.
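For reference, all.yml then contains nothing but that variable. Its structure looks roughly like this, where the encrypted blob is a placeholder for your own ansible-vault output:

```yaml
# group_vars/all.yml (structure only)
production_server1_password: !vault |
          $ANSIBLE_VAULT;1.2;AES256;user
          <encrypted_contents_output_by_ansible-vault>
```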

Every playbook requires an inventory file that lists the remote hosts to connect to. Our inventory file consists of one host — your production app server:

# inventory.yml
all:
  hosts:
    production_server1:
      ansible_connection: ssh
      ansible_host: "<server_ip_address>"
      ansible_user: <user>
      ansible_password: "{{ production_server1_password }}"

Note: Remember earlier that we have added this server to known_hosts, and have used ssh-copy-id to set up automated authentication to the server.

playbook.yml is exactly the same playbook we discussed in the previous article. For convenience, I have created a Gist to refer to:

# Github -> Production Deployment
# Change vars to point to your own folders
# These folders should exist on your production servers
- name: React App Deployment from Github Repository
connection: ssh
gather_facts: false
hosts: all

# full gist available here
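As a rough sketch of what those tasks amount to, the play clones, builds, and swaps in the new build. The module names below are real, but the folder paths, repository URL, and exact task list are placeholders; refer to the Gist for the actual playbook:

```yaml
  tasks:
    - name: Clone latest master commit
        repo: '<git_repo_ssh_url>'
        dest: /home/<user>/app
        version: master

    - name: Install dependencies and build
      ansible.builtin.shell: yarn install && yarn build
        chdir: /home/<user>/app

    - name: Replace live build
      ansible.builtin.shell: rm -rf /var/www/app && cp -r /home/<user>/app/build /var/www/app
```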

Now you will be able to run the playbook in the Terminal from your deployment server. Run it now to ensure the playbook executes as expected, with a high level of verbosity to pinpoint any issues:

# test Ansible playbook

# ensure virtual environment is activated
cd /var/deployment/ansible && source bin/activate
# go to playbook directory
cd playbook
# run playbook
ansible-playbook -i inventory.yml --vault-id user@/home/user/.ansible-vault-pw playbook.yml -vvv

Note: I have deliberately used absolute file paths here. Make sure your Vault passphrase is pointing to the correct user folder.

All that is left to do now is to run this playbook automatically as a Webhook comes in. Let’s visit this next.

Programatically Executing Playbook via Express Route

Jumping back to our Express server, we are interested in amending the index route, at routes/index.js:

// /var/deployment/server/routes/index.js'/', async function (req, res, next) {
  ...
});

This is where all Webhooks will be routed to. Let’s first expand this to process only push events from the master branch:

if (req.body.ref !== 'refs/heads/master') {
  console.log('not master branch. ignore');
  return res.json({ received: true });
}

In order to execute the Ansible command, include the following packages, which bring exec() into scope and wrap it in a Promise-based function so we can call it asynchronously:

var express = require('express');
var router = express.Router();
const util = require('util');
const exec = util.promisify(require('child_process').exec);

This will allow the playbook to run asynchronously as a promise, allowing the Webhook to resolve immediately — letting Github know the Webhook was successfully received. This also removes the possibility of a timeout in the event a playbook executes synchronously and takes a long time to complete.

We will now be able to run the ansible-playbook with exec(). This is the command in its entirety, firstly defining an ANSIBLE_CONFIG environment variable, before calling the ansible-playbook binary from our virtual environment:

// run Ansible script
console.log('executing deployment...');

exec('ANSIBLE_CONFIG=/var/deployment/ansible/playbook/ansible.cfg /var/deployment/ansible/bin/ansible-playbook -i /var/deployment/ansible/playbook/inventory.yml --vault-id user@/home/user/.ansible-vault-pw /var/deployment/ansible/playbook/playbook.yml');
  • When ansible-playbook is called, it will look for the ANSIBLE_CONFIG environment variable for any configuration files. Without defining this, our ansible.cfg file would not be picked up
  • We are calling ansible-playbook from within our virtual environment. The binary is located at /var/deployment/ansible/bin/ansible-playbook
  • All other arguments are given on an absolute level. This is required as there will be no folder context from our PM2 process
  • Note that user has been used again for the Vault user — this can be amended based on your user setup

Finally, we could wrap everything in a try catch block, and return a response to let Github know the Webhook was successfully received. This is the full implementation of the route:

var express = require('express');
var router = express.Router();
const util = require('util');
const exec = util.promisify(require('child_process').exec);'/', async function (req, res, next) {
  try {
    if (req.body.ref !== 'refs/heads/master') {
      console.log('not master branch. ignore');
      return res.json({ received: true });
    }

    // run Ansible script
    console.log('executing deployment...');
    exec('ANSIBLE_CONFIG=/var/deployment/ansible/playbook/ansible.cfg /var/deployment/ansible/bin/ansible-playbook -i /var/deployment/ansible/playbook/inventory.yml --vault-id user@/home/user/.ansible-vault-pw /var/deployment/ansible/playbook/playbook.yml');
  } catch (e) {
    console.log(e);
  }

  res.json({ received: true });

module.exports = router;


This concludes our deployment server setup, which now processes Webhooks from Github to trigger an Ansible playbook that deploys an updated app on your production servers.

From here you may wish to look into other mechanisms to expand this concept. These are a few areas to look into:

  • A queueing mechanism, whereby push events that happen in quick succession can be queued and executed in order
  • Allow the playbook to set up your production server repository folder structure, for cloning and building. Right now we are relying on those folders already existing on the production servers
  • Expand the playbook to support multiple repositories, and other branches apart from master. Your list of repositories could be grouped in group_vars, along with the corresponding remote hosts
  • A logging system to record which requests were successfully executed, and which fail for whatever reason

There are many other ways Ansible playbooks can facilitate your app ecosystem — this was just one example. I hope this article has given you enough insight to generate other ideas for automating your workflow!
