Let's Encrypt, NGINX and Ghost

As I mumbled in my last post, I've got this site running over HTTPS thanks to an SSL certificate from Let's Encrypt. I used this rather splendid guide over at Digital Ocean to get things going, but until today I hadn't finished things off: I'd still have to renew the certificate manually and, more importantly, remember to do it.

Like many others running Node apps on the open web, I've got NGINX acting as a gatekeeper for Ghost. This is a well-trodden path, and it means you can have best-of-breed server technology at the outside edge, passing requests through to Node. NGINX provides a far simpler means of supporting HTTPS than trying to handle it in Node, and it can farm the secure requests out to the locally running app over plain HTTP. The Node app can then be set to respond only to localhost on whatever port it fancies (for Ghost, port 2368 is the default), and NGINX proxies to it.
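
As a quick sanity check (assuming the default setup, with Ghost bound to localhost on port 2368), you can confirm from the server itself that the app answers on the loopback interface and isn't listening on a public one:

    # Ghost should respond locally...
    curl -I http://127.0.0.1:2368
    # ...and should only be bound to 127.0.0.1, not 0.0.0.0
    sudo ss -tlnp | grep 2368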

To manage the running of Ghost, I'm using PM2, which is flipping brilliant; I recommend it for any Node app you want to keep running. What's particularly good about this is that I can keep Ghost running under my own account without having to mess about with root. I only need elevated privileges for configuring NGINX, which is fine.
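
For what it's worth, getting Ghost under PM2 looks something like this (the install path is illustrative; index.js is the entry point here):

    cd /path/to/ghost
    NODE_ENV=production pm2 start index.js --name ghost
    pm2 save       # remember the process list for later
    pm2 startup    # prints the command needed to resurrect it on boot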

Now then, automatic certificate renewal. Let's Encrypt verifies a domain by making an HTTP request to a URL agreed with the command-line client. The server that handles that request could be internal to the Let's Encrypt client, but that means stopping NGINX to release port 80, which I didn't want to do. The alternative is to let Let's Encrypt drop a file at a known location in your webroot, which would be fine if all traffic weren't getting pushed to Ghost. On top of that, I've got all HTTP traffic being redirected straight to HTTPS, because frankly I can't see any reason to leave insecure HTTP in place. So, what to do?

Well, I found that Let's Encrypt wanted to look for something in the /.well-known directory, so I'd need to provide a mechanism for the client to write its temporary file there. Rather than mess about trying to achieve this in Ghost, I decided to get NGINX to exclude this folder from its proxying (but only on HTTP). This took a bit of playing: NGINX configuration isn't my forte, and with a family to think of my time for figuring things out came in small pieces, but I finally managed it.

Behold, my NGINX configuration!

server {  
    listen 443 ssl;

    server_name twindx.com www.twindx.com;

    add_header X-Clacks-Overhead "GNU Terry Pratchett";

    ssl_certificate /etc/letsencrypt/live/twindx.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/twindx.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    root /usr/share/nginx/html;
    index index.html index.htm;

    client_max_body_size 10G;

    # everything goes to Ghost, which is listening locally on port 2368
    location / {
        proxy_pass http://localhost:2368;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
    }
}

server {  
    listen 80;
    server_name twindx.com www.twindx.com;

    # serve the Let's Encrypt client's challenge files straight from the webroot
    location /.well-known {
        root /usr/share/nginx/html;
    }

    # everything else gets pushed to HTTPS
    location / {
        rewrite ^/(.*)$ https://twindx.com/$1;
    }
}

So, what've we got? The first server section (listen 443 ssl;) is all about the secure setup, and is pretty standard for using NGINX with Node (bar the X-Clacks-Overhead header). The second server section (listen 80;) originally pushed everything straight to HTTPS, and that's still happening. Now, though, we're also checking for /.well-known URLs and letting NGINX serve them straight from the webroot rather than proxying them to Ghost.
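
With that exclusion in place, the client can verify the domain against the webroot while NGINX carries on serving everything else. The exact invocation depends on which version of the client you've got, but it's something along these lines, with the webroot path matching the root in the config above:

    sudo certbot certonly --webroot -w /usr/share/nginx/html \
        -d twindx.com -d www.twindx.com
    # check the config and reload NGINX to pick up the renewed certificate
    sudo nginx -t && sudo nginx -s reload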

BOOM! Now the mechanism described in the Digital Ocean article works perfectly, and I can let things tick over (which is good, given how bad I am at running a blog nowadays).
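
For completeness, the renewal itself is just a scheduled job. A minimal sketch, assuming a client with a renew subcommand, is a root crontab entry along these lines:

    # weekly check; certificates are only renewed when they're close to expiry,
    # and NGINX is reloaded afterwards to pick up the new files
    0 4 * * 1 certbot renew --quiet && nginx -s reload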