So I have a Ghost blog running on top of Docker on a DigitalOcean cloud server. One fine day, I got an email from Ghost announcing that Ghost v2 is out. Since I'm a sucker for shiny new things, I attempted to upgrade to see what all the fuss is about. But that's not the main point.
As you may have guessed from the title, I 1-upped the difficulty by attempting to add HTTPS support as well. I had tried to do so when setting up the blog initially, but decided that getting the blog running was plenty of work and called it a day. Furthermore, as Chrome doesn't wanna play nice with non-HTTPS sites in the near future, there couldn't have been a better time to implement HTTPS. This time around, I finally managed to get HTTPS running; otherwise this blog post wouldn't exist, right?
If you don't really care about how it works and just want to set it up, the files and setup instructions are up on my GitHub.
Ghost and HTTPS
So the way Ghost supports HTTPS is via nginx, acting as a reverse proxy: nginx "redirects" a request to the corresponding web service by looking at the host URL. For example, you could have example1.com and example2.com hosting different services on the same machine, pointing to the same IP address, and nginx would direct each request to the right service depending on the host URL of the request. It's almost like magic!
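To make the idea concrete, here's a hand-written sketch of what such a reverse-proxy config looks like. This is illustrative only: the domains and ports are made up, and nginx-proxy will generate the real config for us automatically later on.

```nginx
# Requests with "Host: example1.com" go to the service on port 2368
server {
    listen 80;
    server_name example1.com;

    location / {
        proxy_pass http://127.0.0.1:2368;
        proxy_set_header Host $host;
    }
}

# Requests with "Host: example2.com" go to a different service on port 3000
server {
    listen 80;
    server_name example2.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
```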
The Barebones Docker + Ghost
To start off, we would use the official Docker image for Ghost. Since entering all the container parameters on the command line every time is a pain in the ass, we'll utilize Docker Compose by creating a docker-compose.yml file:
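The file could look something like this (a minimal sketch; the image tag and the container-side content path are assumptions based on the official image's defaults):

```yaml
version: "3"
services:
  ghost:
    image: ghost:2
    ports:
      # Map the container's port 2368 to the host's port 2368
      - "2368:2368"
    volumes:
      # Persist Ghost's content (posts, images, themes) on the host
      - ./content:/var/lib/ghost/content
```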
This basically starts the Ghost container, connects the container's port 2368 to the host machine's port 2368, and mounts a content folder from the host machine's working directory to the container's Ghost content folder, so any data created persists even after the container is destroyed.
To spin up the instance, just run docker-compose up, and when it's done you should be able to access it at http://localhost:2368.
nginx - The reverse proxy handler
Next, we're going to set up nginx. Since we are using Docker, we might as well containerize nginx too. jwilder's nginx-proxy is a very useful nginx image that detects all the running Docker containers and automagically configures the nginx reverse proxy. All we need to do is define environment variables on the containers we want nginx to serve.
In order to set up nginx, we add the following to our existing docker-compose.yml file:
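Roughly along these lines (a sketch; nginx-proxy discovers containers by reading the Docker socket, so it needs that mount):

```yaml
  nginx_proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # Read-only access to the Docker socket, so nginx-proxy can watch
      # containers start and stop and regenerate its config accordingly
      - /var/run/docker.sock:/tmp/docker.sock:ro
```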
In addition, we make some modifications to the existing Ghost container:
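The key change is the VIRTUAL_HOST environment variable, which nginx-proxy uses to route requests to this container (blog.example.com is a placeholder for your own domain):

```yaml
  ghost:
    ...
    environment:
      # The public URL Ghost should use when generating links
      - url=http://blog.example.com
      # Tells nginx-proxy which host URL maps to this container
      - VIRTUAL_HOST=blog.example.com
```

With this in place, the host-to-port mapping from earlier is no longer strictly needed, since nginx talks to Ghost over the Docker network.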
If everything goes well, rerun docker-compose up and you should be able to connect to the Ghost instance from the Docker host by executing the following command:
$ curl -H "Host: blog.example.com" localhost:80
If you see a bunch of HTML tags, then great! Time to set up HTTPS.
Managing HTTPS certificates
In order to set up HTTPS, we need to obtain a certificate from a trusted provider. Let's Encrypt provides free certificates for HTTPS, which is what we will be using. Unfortunately, the certs provided have a pretty short lifespan, but with the magic of Docker we can spin up a container to automate renewal as well. jrcs' letsencrypt-nginx-proxy-companion, as the name suggests, complements the nginx-proxy image by issuing HTTPS certs that nginx can use.
NOTE: You'll need an existing domain that supports CAA.
So, we add another container in our existing docker-compose.yml file:
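Something along these lines (a sketch; the companion writes issued certs into the same certs directory that nginx reads, and shares nginx's vhost.d and webroot directories via volumes_from):

```yaml
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      # The companion also watches the Docker socket
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Write access to the certs directory that nginx reads
      - ./ssl_certs:/etc/nginx/certs:rw
    volumes_from:
      # Share /etc/nginx/vhost.d and /usr/share/nginx/html with nginx
      - nginx_proxy
```

Note that for HTTPS to be reachable, the nginx_proxy container also needs to publish port 443 (i.e. add "443:443" under its ports).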
In addition, we need to make changes to our existing containers as well:
```yaml
  ghost:
    ...
    environment:
      - url=https://blog.example.com
      - VIRTUAL_HOST=blog.example.com
      # Host you would like to use, typically same as VIRTUAL_HOST
      - LETSENCRYPT_HOST=blog.example.com
      # A valid email so Let's Encrypt can notify you when your certs
      # are expiring and auto-renewal has failed
      - LETSENCRYPT_EMAIL=email@example.com

  nginx_proxy:
    ...
    labels:
      # Allow the letsencrypt container to identify the nginx_proxy container
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      # Allow nginx to read generated certs
      - ./ssl_certs:/etc/nginx/certs:ro
      # Directories that the letsencrypt container needs to access
      - /etc/nginx/vhost.d
      - /usr/share/nginx/html
```
Time to see the final results! Rerun docker-compose up -d, allow the containers to work their magic, and if everything goes well you should be able to access your Ghost instance in your web browser over HTTPS via the host URL you defined earlier.
On the server where I host my Docker instances, I also have other projects hosted without using Docker. Hence, I have Apache running natively on the server to direct incoming requests to the correct destination. This means requests have to go through two different reverse proxies to reach my Ghost instance.
This works fine for HTTP on port 80 but not for HTTPS on port 443. As a workaround, I disabled HTTPS on Apache and allowed HTTPS connections to connect directly to the nginx container. A proper fix seems to be defining the certs in the Apache install instead of nginx, as mentioned in the Ubuntu forums. It seems workable, and I'd like to make it happen soon…