Minify, Compress, and Containerize Your Astro Site
In this note we’ll walk through the steps to minify, compress, and containerize your static Astro site. We’ll pre-compress the static files that Astro generates with brotli, the current gold standard in web compression, and we’ll use a build of nginx with the brotli module pre-bundled to serve the site as a Docker container.¹
why minify and pre-compress?
Speed, obviously. If you see a recommendation in your site’s Lighthouse report to “enable text compression,” gzip or brotli compression is how you address it. Many servers, including nginx, can compress static files on the fly, but pre-compression reduces the server’s workload and means clients never have to wait for files to be compressed on demand.²
Minification generally has a smaller impact on performance, especially in conjunction with compression, but it’s still a good idea. It’s easy enough to implement in Astro static builds that there’s no reason not to do it.
step 1: install & configure minification
@playform/compress is an Astro plugin that minifies HTML, CSS, SVG, images, and JS. Despite the name, the package only really compresses images; every other file type it handles is just minified, which is to say that whitespace and comments are stripped to reduce file size.
Install the integration with `astro add` (using your package manager of choice):
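For example, with npm (substitute pnpm or yarn as you prefer):

```shell
npx astro add @playform/compress
```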
Ensure that this integration appears LAST in the `integrations` array in your `astro.config.ts` file. Here are the settings that I use and some comments to help you customize them:
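As a sketch, the configuration might look like the following. The option names come from the plugin’s documented API; the values shown are illustrative rather than a verbatim copy of the original settings:

```typescript
// astro.config.ts
import { defineConfig } from "astro/config";
import playformCompress from "@playform/compress";

export default defineConfig({
  integrations: [
    // ...your other integrations go first...
    playformCompress({
      CSS: true,        // minify CSS
      HTML: true,       // minify HTML (strips whitespace and comments)
      Image: true,      // the one true compression step: re-encode images
      JavaScript: true, // minify JS
      SVG: true,        // minify SVG markup
    }),
  ],
});
```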
step 2: install & configure file compression
astro-compressor is a plugin for Astro that applies brotli and gzip compression to the static file outputs of your Astro build.
Install the integration with `astro add`:
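Again with npm as an example:

```shell
npx astro add astro-compressor
```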
Ensure this integration appears LAST in the `integrations` array in your `astro.config.ts` file (AFTER `@playform/compress`, since we want to minify before compressing). Here are the settings that I use:
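A sketch of what this looks like: the `gzip` and `brotli` flags are the plugin’s documented options, and the rest of the config is illustrative:

```typescript
// astro.config.ts
import { defineConfig } from "astro/config";
import playformCompress from "@playform/compress";
import compressor from "astro-compressor";

export default defineConfig({
  integrations: [
    // ...other integrations...
    playformCompress({ /* minification settings */ }),
    // astro-compressor goes last so it compresses the minified output
    compressor({ gzip: false, brotli: true }),
  ],
});
```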
The nginx docker image that I use doesn’t serve pre-compressed gzip files, only pre-compressed brotli files, so I’ve disabled gzip compression. Brotli compresses about 20% better than gzip and is supported by all modern browsers.
The only common scenario where you might need gzip instead of brotli is hosting a site on a local network without SSL/TLS, since browsers only accept brotli-encoded responses over HTTPS connections. If a client or connection doesn’t support brotli, nginx will still fall back to on-the-fly gzip compression, which is fine for a development/preview environment.
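For reference, a sketch of the nginx directives involved, assuming a build with the ngx_brotli module loaded (your image’s bundled config may already set these):

```nginx
# inside the http or server block of nginx.conf
brotli_static on;  # serve pre-compressed .br files to clients that accept them
gzip on;           # fall back to on-the-fly gzip for clients without brotli
gzip_types text/css application/javascript application/json image/svg+xml;
```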
step 3: build your docker image
This step assumes some familiarity with docker. I’m using a multi-stage build, with a base stage for building the static files and a runtime stage for serving them with nginx.
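As a sketch, a Dockerfile along these lines would work. The base images are assumptions, not a record of the original setup: `node:20-alpine` stands in for whatever node version you build with, and `fholzer/nginx-brotli` for whichever brotli-enabled nginx image you prefer:

```dockerfile
# base stage: install dependencies and build the static site
FROM node:20-alpine AS base
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build   # emits minified, pre-compressed files to ./dist

# runtime stage: serve the build output with a brotli-enabled nginx
FROM fholzer/nginx-brotli AS runtime
COPY --from=base /app/dist /usr/share/nginx/html
```

Build and run with something like `docker build -t my-astro-site .` followed by `docker run -p 8080:80 my-astro-site`, adjusting the container port to whatever your nginx image listens on.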
conclusion & next steps
My setup is undeniably niche; running a static site from a docker container doesn’t make sense unless you already have a server that you’re using to host more complex full-stack applications. And once you do have your now-optimized docker container, you still need to integrate it with additional infrastructure to handle routing and SSL termination if you want to serve your blog over HTTPS at a domain name of your choosing.
My next post outlines how I serve my blog from a docker container at aaronjbecker.com using docker compose and traefik, as well as how I deploy it using a bare-bones bash script with SSH agent forwarding.
Footnotes
¹ I can’t speak to how relevant these techniques are if you deploy on a platform like Vercel or Netlify. A quick search suggests that they already compress static files served from their CDNs, but that all of this compression is handled on the fly.
² In many cases pre-compression also lets you opt for a higher compression level than would be practical for on-the-fly compression, which can dramatically improve performance for very large files. I’ve seen a 32 MB JSON file compress to 2.5 MB at brotli level 11, a job you would never wait for on the fly since it takes ~3 minutes. As an added bonus, more aggressively compressed files aren’t just smaller; they also decompress faster on the client side.