
Serving Multiple Domains with SSL Protocol Using Docker, LetsEncrypt & HAProxy on Ubuntu 14.04

Why Docker?

It is relatively simple to load balance Docker containers with a proxy. You can version and store Docker images in a fashion similar to git. Docker images are completely portable: the bare metal can change, the OS can change, and there is no VM overhead. Node-based servers, NGINX, or Apache consume far fewer resources in a container than on the host OS directly. With CoreOS, clustered apps and the fallbacks an enterprise requires can be automated and monitored. The underlying technology is tried and true, with origins in UNIX jails.

Purpose

A walk-through of setting up a static website, a WordPress site, and a Ghost-driven blog on one server, each with its own domain. Instead of running a server/proxy combination on the host (e.g., Apache/NGINX), which takes up considerable resources, we are going to run our web servers in Docker containers and run a powerful load balancer and proxy -- HAProxy -- on the host OS. These are very low-traffic sites; for a major production site or production apps/APIs, I would consider exploiting HAProxy's more advanced load-balancing functionality, clustering, and the AWS cloud platform.

Server Setup

Let's get started and create a DigitalOcean server. I chose One-Click Apps and Docker, but you could choose Ubuntu 14.04 and install Docker on it.

Droplet Settings

Let's go ahead and follow the directions from DigitalOcean for initial setup on our new server.

When you add your user to the sudo group, repeat the step to add the user to the docker group as well.

Ubuntu 14.04 Initial Server Setup

We are going to avoid repetitive typing of docker commands, so we'll need Docker Compose. We will be using version 1 syntax, although the version 2 file format is supported on newer Docker Engine releases. First we need python-pip.

sudo apt-get -y install python-pip  

Now install Docker Compose.

sudo pip install docker-compose  

If you want to check that it's working and learn some basic docker-compose commands, see:

Install Docker Compose on Ubuntu 14.04

Existing Source Code

Since I was moving existing sites, I used git to pull my source into a subdirectory of my home directory on the droplet. I then set up a source (src) subdirectory, being careful to remove the .git directory, since we are going to create Docker volumes from the existing code, which the containers will use to serve content.
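The layout step can be sketched as follows. To keep the sketch self-contained, a temp directory stands in for ~/src and dummy files stand in for a real clone; on the droplet you would git clone into ~/src and then remove the .git directory the same way.

```shell
set -e
# SRC stands in for ~/src on the droplet.
SRC=$(mktemp -d)
# Simulate a freshly cloned site: content plus git metadata.
mkdir -p "$SRC/yourdomain.net/.git"
echo '<h1>Hello</h1>' > "$SRC/yourdomain.net/index.html"
# Strip the version-control metadata so it never lands in the served volume.
rm -rf "$SRC/yourdomain.net/.git"
ls -A "$SRC/yourdomain.net"
```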

There is a good introductory overview of Docker Containers here: Docker Containers - Mark DuBois Weblog

Docker Container Configuration and Launch

For the static HTML/CSS/JavaScript site I chose the official Apache container (httpd) from Docker Hub. I created a Docker Compose file to handle configuration and pulling of the image. I did this in yet another subdirectory, ~/docker.

Any other files in a build directory get pulled into the image upon creation with Docker, so my ~/docker subdirectory contains only one file, the Compose file, named docker-compose.yml.

cd ~/docker  
vim docker-compose.yml  

Insert the following code.

Pay attention to your whitespace since it is YAML.

apache:
  image: httpd
  container_name: 'yourdomain.net'
  ports:
    - '127.0.0.1:80:80'
  environment:
    - 'VIRTUAL_HOST=yourdomain.net,www.yourdomain.net'
  volumes:
    - '~/src/yourdomain.net/:/usr/local/apache2/htdocs/'
  restart: always

Launch the container in detached mode.

docker-compose up -d  

Now stop the container and delete it as well with one command.

This is a static site so data persistence isn't a concern yet. Obviously, completely destroying a container like this permanently loses any data stored in a database within that container.

docker-compose down  

Add this to the docker compose file.

ghost:  
  image: ghost
  container_name: 'ghost-blog'
  ports:
    - '127.0.0.1:2368:2368'
  environment:
    VIRTUAL_HOST: 'blog.info,www.blog.info'
    PUBLIC_URL: 'https://blog.info'
    NODE_ENV: production
  volumes:
    - '~/src/ghost-blog/content:/usr/src/ghost/content'
  restart: always

Notice I am just mapping container ports to localhost. The second '2368' after the colon is the internal port the container exposes by default. We can get the IP addresses and ports of our Docker containers with docker inspect container_id (reference the container by its ID, which we get from docker ps for running containers). But since we might take containers up and down, we don't know what IP a recreated container will be assigned, so we map each one to a fixed localhost port instead.

We are using the volumes directive to inject our files from the host environment into the docker container.

Also note we are dealing with another domain.

For the WordPress site, we'll need a MySQL database (MariaDB is a MySQL-compatible server image we can pull for this). We'll want it in a separate container, since other sites might use the same server for their own databases. We'll also want phpMyAdmin for database management and backups. We would never run PMA over an unencrypted connection -- and realize there is no web server or proxy running on the host OS right now. So here's the next addition to the docker-compose file.

We'll also need to add this to our wp-config.php file.

define('FORCE_SSL_ADMIN', true);

if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}

This will be required to avoid a redirect loop on the wp-admin login page later when running the proxy.

If you are moving a WordPress site from http to https, you might consider building the container from an image that also includes WP-CLI. Also see this for general considerations: Moving to HTTPS on Wordpress

To make things easier, at the cost of some flexibility with the MySQL server, there is a solid Docker image which bundles WordPress, MySQL, and WP-CLI: docker-wordpress-cli. I stay away from images that aren't official releases, and only use images I've composed from official releases, in order to keep my security paranoia in check. Composition of images is achieved via Dockerfiles. For more information you can read about them here: Dockerfile Reference -- also, browsing the source of others' Dockerfiles is a great way to learn and to keep your images efficient (minimize layers on the container's filesystem).
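As a minimal illustration of composing your own image from an official release, a Dockerfile for the static site might look like this (a sketch only; the COPY source path is hypothetical, and the pattern mirrors the httpd image's documented usage of baking content into the image instead of mounting a volume):

```dockerfile
# Build on the official Apache release.
FROM httpd:2.4
# Bake the site content into the image at the documented htdocs path.
COPY ./src/yourdomain.net/ /usr/local/apache2/htdocs/
```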

Next, take down the running containers.

docker-compose down  

Back to our Docker Compose file. We need to append the following code.

wp:  
  image: wordpress
  container_name: 'wp'
  ports:
    - '127.0.0.1:82:80'
  links:
    - wordpress_db:mysql
  environment:
    - 'VIRTUAL_HOST=wpdomain.net,www.wpdomain.net'
  volumes:
    - '~/src/wpdomain.net:/var/www/html'

wordpress_db:  
  image: mariadb
  container_name: 'mysql'
  ports:
    - '127.0.0.1:3306:3306'
  volumes:
    - '/some/path/to/data/on/host:/var/lib/mysql'
  environment:
    MYSQL_ROOT_PASSWORD: secret

phpmyadmin:  
  image: 'phpmyadmin/phpmyadmin'
  container_name: 'pma'
  links:
    - wordpress_db:mysql
  ports:
    - '127.0.0.1:8080:80'
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: ******
    PMA_HOST: mysql
    PMA_ABSOLUTE_URI: https://pma.wpdomain.net
  restart: always

You can also replace PMA_HOST with PMA_ARBITRARY: 1 in the environment section under phpmyadmin, and then you can choose among linked hosts on the PMA login page. According to the docker image documentation on Docker Hub, we need to specify the PMA_ABSOLUTE_URI environment variable if we are behind a reverse proxy, which we will be soon enough.
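As a sketch of that variant (same service definition, only the environment changes; PMA_ARBITRARY is documented on the image's Docker Hub page):

```yaml
phpmyadmin:
  environment:
    # Present a server field on the login page instead of pinning PMA_HOST.
    PMA_ARBITRARY: 1
    PMA_ABSOLUTE_URI: https://pma.wpdomain.net
```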

You get the information on exposed container ports and environment variables from the image's documentation on Docker Hub. You can always inspect the directory structure -- such as the document root the web server is serving from -- by attaching a bash shell to the container like so:

docker exec -it <mycontainer_name> bash  

So here is our complete docker-compose.yml file to bring up our sites on different domains:

apache:
  image: httpd
  container_name: 'yourdomain.net'
  ports:
    - '127.0.0.1:80:80'
  environment:
    - 'VIRTUAL_HOST=yourdomain.net,www.yourdomain.net'
  volumes:
    - '~/src/yourdomain.net/:/usr/local/apache2/htdocs/'
  restart: always

ghost:  
  image: ghost
  container_name: 'ghost-blog'
  ports:
    - '127.0.0.1:2368:2368'
  environment:
    VIRTUAL_HOST: 'blog.info,www.blog.info'
    PUBLIC_URL: 'https://blog.info'
    NODE_ENV: production
  volumes:
    - '~/src/ghost-blog/content:/usr/src/ghost/content'
  restart: always

wp:  
  image: wordpress
  container_name: 'wp'
  ports:
    - '127.0.0.1:82:80'
  links:
    - wordpress_db:mysql
  environment:
    - 'VIRTUAL_HOST=wpdomain.net,www.wpdomain.net'
  volumes:
    - '~/src/wpdomain.net:/var/www/html'

wordpress_db:  
  image: mariadb
  container_name: 'mysql'
  ports:
    - '127.0.0.1:3306:3306'
  volumes:
    - '/some/path/to/data/on/host:/var/lib/mysql'
  environment:
    MYSQL_ROOT_PASSWORD: secret

phpmyadmin:  
  image: 'phpmyadmin/phpmyadmin'
  container_name: 'pma'
  links:
    - wordpress_db:mysql
  ports:
    - '127.0.0.1:8080:80'
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: ******
    PMA_HOST: mysql
    PMA_ABSOLUTE_URI: https://pma.wpdomain.net
  restart: always

We bring everything up and down with docker-compose up -d and docker-compose down, as long as we are in the same directory as the docker-compose.yml file. We don't want any other files in that directory besides the YAML file, because they would end up in the build context (and get injected into an image) if we ever build images there.

DNS A records

At this point, let's change each associated domain's A records to point to our DigitalOcean droplet's public IP address. This allows time for propagation while we set up the proxy and SSL support.

Installing Let's Encrypt

Conveniently, Let's Encrypt left beta a couple of days ago. Huzzah! Let's get to work in our SSH session. First we'll get the LE source and place it in the /opt directory.

sudo apt-get update

sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt  

We'll need three standalone certificates, one for each of our domains. LE spins up a small temporary web server to validate your server's identity, so we must temporarily take down the static site listening on port 80. We don't need to destroy it, or take everything down, to achieve this. We can just stop it by whatever container name you assigned -- in this example the container's name was defined as 'yourdomain.net' in the Compose file.

docker stop yourdomain.net  
cd /opt/letsencrypt  
./letsencrypt-auto certonly --standalone

Follow the prompts for the first domain. Don't forget to include a www version and any subdomains that might be required.

Repeat for each of the other two domains.

Let's check out the files. There will be four files for each domain.

sudo ls /etc/letsencrypt/live/the_domain_name  

  • cert.pem > domain certificate
  • chain.pem > LE chain certificate
  • fullchain.pem > cert.pem and chain.pem concatenated
  • privkey.pem > private key for the domain's certificate

These are symbolic links to the actual files stored in /etc/letsencrypt/archive. I suggest making a backup.
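One way to make that backup is to archive the entire /etc/letsencrypt tree, which captures the real files under archive/ while preserving the live/ symlinks as symlinks. The sketch below uses stand-in directories so it runs anywhere; on the server you would point tar at /etc/letsencrypt with sudo.

```shell
set -e
# LE_DIR stands in for /etc/letsencrypt.
LE_DIR=$(mktemp -d)
mkdir -p "$LE_DIR/archive/yourdomain.net" "$LE_DIR/live/yourdomain.net"
echo 'dummy cert' > "$LE_DIR/archive/yourdomain.net/cert1.pem"
# live/ entries are symlinks into archive/, just like Let's Encrypt's layout.
ln -s ../../archive/yourdomain.net/cert1.pem "$LE_DIR/live/yourdomain.net/cert.pem"
# Archive the whole tree; the real files and the symlinks are both captured.
BACKUP="$LE_DIR.tar.gz"
tar -czf "$BACKUP" -C "$LE_DIR" .
ls -l "$BACKUP"
```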

HAProxy requires a single concatenated file with the contents of fullchain.pem and privkey.pem. So after creating the certs directory, we'll run the concatenation command once per domain, substituting each of our actual domains for 'yourdomain.net'.

sudo mkdir -p /etc/haproxy/certs

DOMAIN='yourdomain.net' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'

sudo chmod -R go-rwx /etc/haproxy/certs  
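Since the same concatenation has to happen for every domain, it can also be looped. The sketch below demonstrates the pattern on throwaway directories so it is self-contained; on the server the source would be /etc/letsencrypt/live, the target /etc/haproxy/certs, and you would run it with sudo. The domain names are the example ones from this article.

```shell
set -e
LIVE=$(mktemp -d)   # stand-in for /etc/letsencrypt/live
CERTS=$(mktemp -d)  # stand-in for /etc/haproxy/certs
for DOMAIN in yourdomain.net blog.info wpdomain.net; do
  mkdir -p "$LIVE/$DOMAIN"
  echo "fullchain for $DOMAIN" > "$LIVE/$DOMAIN/fullchain.pem"
  echo "privkey for $DOMAIN"   > "$LIVE/$DOMAIN/privkey.pem"
  # HAProxy wants fullchain + private key in a single .pem per domain.
  cat "$LIVE/$DOMAIN/fullchain.pem" "$LIVE/$DOMAIN/privkey.pem" \
    > "$CERTS/$DOMAIN.pem"
done
ls "$CERTS"
```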

Installing HAProxy

First we need the repository. Then we can use our package manager to install.

sudo add-apt-repository ppa:vbernat/haproxy-1.6

sudo apt-get update

sudo apt-get install haproxy  

Open haproxy.cfg.

sudo vim /etc/haproxy/haproxy.cfg  

In the global section add:

maxconn 2048  
tune.ssl.default-dh-param 2048  

In the defaults section add:

option forwardfor  
option http-server-close  

Now at the end of the file we need to define our frontend and backend connections for HTTPS traffic.

frontend www-http  
       bind <Public IP Address>:80
       reqadd X-Forwarded-Proto:\ http
       # default_backend www-backend
       acl host_static hdr(host) -i yourdomain.net www.yourdomain.net
       acl host_ghost hdr(host) -i blog.info www.blog.info
       acl host_wp hdr(host) -i wpdomain.net www.wpdomain.net
       acl host_pma hdr(host) -i pma.wpdomain.net
       use_backend static if host_static
       use_backend ghost if host_ghost
       use_backend wp if host_wp
       use_backend pma if host_pma


frontend www-https  
   bind <Public IP address>:443 ssl crt /etc/haproxy/certs/
   reqadd X-Forwarded-Proto:\ https
   acl letsencrypt-acl path_beg /.well-known/acme-challenge/
   use_backend letsencrypt-backend if letsencrypt-acl
   #default_backend www-backend
    acl host_static hdr(host) -i yourdomain.net www.yourdomain.net
    acl host_ghost hdr(host) -i blog.info www.blog.info
    acl host_wp hdr(host) -i wpdomain.net www.wpdomain.net
    acl host_pma hdr(host) -i pma.wpdomain.net
   use_backend static if host_static
   use_backend ghost if host_ghost
   use_backend wp if host_wp
   use_backend pma if host_pma

backend static  
 balance roundrobin
 option forwardfor
 option httpclose
 timeout queue 500000
 timeout server 500000
 timeout connect 500000
 redirect scheme https if !{ ssl_fc }
 server static 127.0.0.1:80

backend ghost  
 balance roundrobin
 option forwardfor
 option httpclose
 timeout queue 500000
 timeout server 500000
 timeout connect 500000
 redirect scheme https if !{ ssl_fc }
 server ghost 127.0.0.1:2368

backend wp  
 balance roundrobin
 option forwardfor
 option httpclose
 timeout queue 500000
 timeout server 500000
 timeout connect 500000
 redirect scheme https if !{ ssl_fc }
 server wp 127.0.0.1:82

backend pma  
 balance roundrobin
 option forwardfor
 option httpclose
 timeout queue 500000
 timeout server 500000
 timeout connect 500000
 redirect scheme https if !{ ssl_fc }
 server pma 127.0.0.1:8080

backend letsencrypt-backend  
   server letsencrypt 127.0.0.1:54321

Don't forget about the static site we want exposed on localhost port 80; we stopped its container to free the port for Let's Encrypt's temporary server. Start it again, then start HAProxy.

docker start yourdomain.net  
sudo service haproxy start  

Test all three domains here. The goal is an "A" final score.

SSL Server Test

Certificate Auto Renewal

cd /opt/letsencrypt

./letsencrypt-auto certonly --agree-tos --renew-by-default --standalone-supported-challenges http-01 --http-01-port 54321 -d yourdomain.net -d www.yourdomain.net

Include any subdomains you have set up as well.

We need to regenerate the combined certificate for HAProxy.

DOMAIN='yourdomain.net' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'

sudo service haproxy reload  

Create a LE configuration file.

sudo cp /opt/letsencrypt/examples/cli.ini /usr/local/etc/le-renew-haproxy.ini

sudo vim /usr/local/etc/le-renew-haproxy.ini

rsa-key-size = 4096

email = you@yourdomain.net

domains = yourdomain.net, www.yourdomain.net, blog.info, www.blog.info, wpdomain.net, www.wpdomain.net, pma.wpdomain.net

Uncomment the standalone-supported-challenges line, and replace its value with http-01. It should now look like below.

standalone-supported-challenges = http-01  

Let's now script the renewals and schedule them with cron.

Every week, cron will run our shell script, which requests renewal when expiration is less than 30 days away.

sudo curl -L -o /usr/local/sbin/le-renew-haproxy https://gist.githubusercontent.com/thisismitch/7c91e9b2b63f837a0c4b/raw/700cfe953e5d5e71e528baf20337198195606630/le-renew-haproxy

sudo chmod +x /usr/local/sbin/le-renew-haproxy

sudo crontab -e

30 2 * * 1 /usr/local/sbin/le-renew-haproxy >> /var/log/le-renewal.log

Backing up source and databases (issue: data persistence)

You can copy files from a container to a host directory like this:
docker cp <containerId>:/file/path/within/container /host/path/target

Source Stack Overflow

PMA can be used to make database backups. This is not an ideal workflow in my opinion, but it is not overwhelming either. Using data volumes and an image repository overcomes this, as we will see in the following section.

Backing up Images and Committing/Pushing to Docker Repository on Docker Hub

Stop the running container you want to snapshot to an image, then list all containers to get the container ID.

docker stop <container name>  
docker ps -a  

You'll see your container with a status of Exited. Note the container ID, and realize it will tab-autocomplete, so you don't have to type the whole ID out.

docker commit <container id> deathwishcoffee/stuff:image1  

Where image1 is the tag

Create an account on Docker Hub. You get one free private repository. However, we can stack all our images into one repository like so:

Source Stack Overflow

docker login

# Commit the first container to a new image
docker commit <container1 id> deathwishcoffee/stuff:image1
# Push to the Docker Hub repo
docker push deathwishcoffee/stuff:image1

# Commit the second container to a new image
docker commit <container2 id> deathwishcoffee/stuff:image2
# Push to the same repo with a different tag
docker push deathwishcoffee/stuff:image2

Image Management

List images:

docker images  

Remove an image:

docker rmi the_image  

Where the_image = repository:tag

Kill and remove all containers:

docker rm $(docker kill $(docker ps -aq))  

Gracefully stop and remove all containers:

docker rm $(docker stop $(docker ps -aq))  

Conclusion

If everything went well, you should have a backup of your source code and data, your updated source code in a git repository, a Docker Hub repository with several images you could pull to restore things to normal, and an encryption security score of "A" on the Qualys SSL Labs Server Test for all domains and subdomains.

Possible technologies to explore further:

  • Using Cloudflare for DNS and taking advantage of SPDY
  • Running HAProxy in a docker container
  • Utilizing HAProxy's more advanced load balancing capabilities
  • Running on AWS since it is a more enterprise feasible solution
  • Clustering with CoreOS, another enterprise level technique
  • Familiarizing ourselves with Docker Compose version 2 syntax and converting our YAML files using the Compose file reference
  • Creating a privately hosted registry and repository for Docker images

Recommended Reading

This book is reasonably current as of this article posting and is a great reference, not only for functionality but also conceptual understanding of Docker: Using Docker -- O'Reilly Media


L. Ball

Father. Developer. Coffee Connoisseur. Amateur Guitarist.