The big red zones were poorly configured caching headers in the HTTP responses, no compression, and non-progressive, uncompressed images.
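For the record, the first two of those are cheap fixes at the server level. A sketch of what that might look like in Apache httpd, assuming mod_expires and mod_deflate are available (the exact lifetimes here are illustrative, not a recommendation):

```apache
# Far-future cache headers for static assets that rarely change.
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType image/png  "access plus 1 month"
    ExpiresByType text/css   "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>

# gzip-compress text-based responses on the fly.
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
```

The progressive-image problem is an asset problem rather than a server one: the images themselves need re-encoding.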
Since then, I’ve learned a surprising amount about website speed optimisation, thanks to the big push towards a faster, more reliable Web. There is a range of particularly useful techniques I could use, given that I essentially have a static website.
The first is, obviously, to find a CDN and use it as a hosting service. I’m oddly tempted by this option, though it’s rather fraught with complications. Many people suggest pushing just the static assets into a CDN, but I’m almost tempted to push the whole site in, just to see what happens. My bet: a moderate improvement.
Assuming I don’t want to pay for a CDN (and I don’t), there are more options. The first, of course, is to find a better web server. Nginx is the main contender for me: it’s easier to configure and theoretically faster out of the box, whereas Apache’s baroque codebase takes a while to lumber up to speed.
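For a purely static site, the nginx configuration would be pleasingly short. A hypothetical server block (domain and paths are placeholders, not my actual setup):

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/site;

    # Compress text-based responses on the fly.
    gzip on;
    gzip_types text/css application/javascript;

    # Long-lived cache headers for static assets.
    location ~* \.(jpg|png|css|js)$ {
        expires 30d;
    }
}
```

That brevity is a large part of the appeal: the equivalent Apache setup is spread across modules and directives.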
I could also add a caching proxy such as Squid or Varnish, but they would essentially be overkill: with no dynamic content to accelerate, they’d be doing the same job as a conventional web server, and if anything would probably perform worse.
There is a range of knobs that could easily be applied to Apache httpd, or indeed any other web server. I’m currently digging into setting up Google’s mod_pagespeed, which intelligently compresses and/or rewrites web content for speed. I’m also investigating enabling SPDY (which I always seem to mistype as spdf), which is surprisingly well supported.
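As a sketch of what the mod_pagespeed setup might look like: the filter names below are from the module’s documented rewriting filters, and conveniently target the image problems flagged earlier, but this isn’t a tested configuration on my part.

```apache
<IfModule pagespeed_module>
    ModPagespeed on
    # Recompress images and convert JPEGs to progressive encoding.
    ModPagespeedEnableFilters rewrite_images,convert_jpeg_to_progressive
    # Combine and minify CSS and JavaScript.
    ModPagespeedEnableFilters combine_css,rewrite_javascript
</IfModule>
```

The attraction is that it rewrites content at serve time, so the assets on disk don’t need to be touched.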
Nonetheless, I have a few ideas I can potentially use, so we’ll see how we go.