As mentioned before, I’ve been doing a bunch of work recently with Caddy and a number of web apps. These are my notes on using Caddy, and why I think I now prefer it over nginx. I also share a sample Caddyfile config showing how to use it in place of nginx.
First of all, what is Caddy?
Caddy is a relatively new web server that was first released in 2014, but over the last few years it has become a really, really nice alternative to Nginx, Apache, Traefik and so on.
It’s written in Golang, and is a simple-to-download single binary with no dependencies. Both the developer experience and the documentation are absolutely stellar.
It’s quite a bit newer than other well known web servers, and it has a number of quality of life features that make working with it a pleasure.
Ok, why would I consider using it over tried and tested tools like nginx and the rest?
To give an idea of why, I think it’s useful to look at the Caddyfile – the equivalent of the nginx config you might have seen in action elsewhere.
Here’s a production-ready config set up to serve a directory full of files over HTTPS, with automatically renewing TLS certificates.
{
    cert_issuer acme
    email my@emailaddress.com
}

my.website {
    file_server
}
Out of the box, you get nice HTTPS support (that auto-renews!), good instrumentation, sensible logging, and a nice fast server.
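If you want to see what those quality-of-life bits look like in practice, a couple of extra directives are enough. This is just a sketch of the sort of thing I mean, not something you have to add:

my.website {
    # turn on access logging for this site
    log
    # compress responses on the fly
    encode gzip zstd
    file_server
}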
Because it’s written in Golang, it’s plenty fast for most cases, and it’s memory safe, so it’s less likely to crash under load or start behaving weirdly. A nice side effect of the language is that it’s designed from the get-go to make use of all the cores modern machines have, so you can tap into all the computing power available by default. This means you don’t need to do much tweaking for a server set up with it to work well.
Not convinced?
Let’s see how to do a common task – serving a site behind a reverse proxy, so:
- there’s a nice, easy-to-read hostname, like https://my.awesomesite.com, but really,
- the responses are handled by a server in a language of your choice, listening on a high-numbered port on localhost – something like http://localhost:9000.
This is normally a bit of a pain, but assuming you already have the global config block above to renew HTTPS certificates, here’s what you add:
my.awesomesite.com {
    log
    reverse_proxy localhost:9000
}
By comparison, a more or less equivalent nginx config file is likely to look like this:
# establish an upstream for a server to listen on the high port
upstream my_app {
    server localhost:9000;
}

# then set up a server listening on https
server {
    listen 443 ssl;
    server_name my.awesomesite.com;

    access_log /var/log/nginx/my.awesomesite.com-access.log;
    error_log /var/log/nginx/my.awesomesite.com-error.log;

    ssl_certificate /etc/letsencrypt/live/my.awesomesite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/my.awesomesite.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    keepalive_timeout 5;
    client_max_body_size 128M;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_pass http://my_app;
    }
}

# don't forget to handle redirects for requests going to http://my.awesomesite.com
server {
    if ($host = my.awesomesite.com) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    server_name my.awesomesite.com;
    return 404;
}
On top of that, with something like nginx we also need a separate cronjob running certbot every few weeks to renew the short-lived TLS certificate. This is easy to forget, and frequently people never get around to automating it.
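As an aside, Caddy’s reverse_proxy already sends X-Forwarded-For and X-Forwarded-Proto upstream for you, and the HTTP-to-HTTPS redirect happens automatically. If you do want the equivalent of the proxy_set_header and client_max_body_size lines above, a sketch might look like this (the X-Real-IP header and the 128MB limit are just illustrations, not something you have to set):

my.awesomesite.com {
    log

    # roughly the client_max_body_size equivalent
    request_body {
        max_size 128MB
    }

    reverse_proxy localhost:9000 {
        # Caddy already sends X-Forwarded-For and X-Forwarded-Proto;
        # header_up is only needed for extra or overridden headers
        header_up X-Real-IP {remote_host}
    }
}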
This looks a bit contrived – what does a working django config look like?
If you’re using something like whitenoise to handle serving static files, the second example really is ok to use. See why I like it now?
It’s fairly common with django apps to need to serve static files separately (as nginx is often set up to do), or to serve uploaded media files from the same machine. To do that, your config would look something like this:
my.awesomesite.com {
    handle_path /static/* {
        root * ./staticfiles/
        file_server
    }

    handle_path /media/* {
        root * ./media/
        file_server
    }

    reverse_proxy 127.0.0.1:8000
}
Here we’ve added two new blocks containing a few handy directives. The first is handle_path, which works a bit like nginx’s location, except that it also strips the matched prefix from the request path.
You define a path you want to match against with handle_path, and then, for anything matching that, you define what you want to happen inside it. In this case, we point the root directive at a directory we have previously filled with all the collected static files generated by django’s collectstatic command, so a request for /static/css/site.css is served from ./staticfiles/css/site.css.
Finally, we serve them using the file_server directive.
We also need to serve uploaded media files from /media, so we follow largely the same steps again, but using media instead of static.
Here’s a lightly annotated example of a Caddyfile config file.
my.awesomesite.com {
    # serve the compiled, collected static assets that `collectstatic`
    # put into ./staticfiles, from the URL path /static/
    handle_path /static/* {
        root * ./staticfiles/
        file_server
    }

    # serve uploaded media files from /media/
    handle_path /media/* {
        root * ./media/
        file_server
    }

    # proxy the rest of the requests to gunicorn
    reverse_proxy 127.0.0.1:8000
}
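One optional extra, and not something from the config above: if your static files have hashed filenames – for example when they’ve been processed by whitenoise or Django’s ManifestStaticFilesStorage – you can tell browsers to cache them aggressively with the header directive. A sketch:

handle_path /static/* {
    root * ./staticfiles/
    # assumes filenames contain a content hash, so it's safe to
    # cache them for a long time
    header Cache-Control "public, max-age=31536000, immutable"
    file_server
}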
Another reason – a push towards proprietary versions of existing open source tools
One trend we’ve seen over the last decade has been an explosion of VC-backed startups forming around open source projects. And increasingly, when you end up searching for information on how to do common tasks with the open source project, you’ll come across information for the proprietary version of the tool, like Nginx Plus versus Nginx, and so on.
Lots of the newer features you’d really want are now only available in the commercial versions, or where they do exist, the official documentation for the open source version suffers as time and money is poured into nudging people towards the paid enterprise versions instead, to help recoup the original VC investment.
This is one thing that I haven’t seen with Caddy – the open source version has all the nice features you’d want, and if you need something specific to your use case that isn’t supported, there’s a clear path to getting support and consulting help where you need it.
The downsides of using Caddy
Caddy is newer, and much less well known than Nginx, Apache or other servers, and while the documentation is stellar, I didn’t find a ready made dead-simple example of a Caddyfile config for serving a django site, with static and media files.
Which is partly why I ended up writing this post, really – I know I’ll come back to it in 6 months when I’ve forgotten how to set it up on a new project.
If you really, really, really are doing huge amounts of traffic, it might be worth knowing that in absolute terms, Nginx is still quite a bit faster than Caddy, as this exhaustive performance comparison suggests.
But for the rest of us, or if you’ve ever lost an hour wrestling with Nginx and Apache config files and value ease of maintenance over absolute maximum performance, Caddy is pretty good, and a worthwhile addition to the devops toolbox.
Update: I learned some new things about Django, Caddy and HTTPS when updating an application recently. If you use them together, it might be helpful – see the link to the TIL.