Migrating the Cushion website to S3 to avoid dealing with SSL
I have off today, so I took a break from work to work on my side projects—a classic Jonnie move. One item that’s been on my list for a while was upgrading Certbot on Cushion’s marketing site server, so it doesn’t stop renewing the SSL certificate in July when Let’s Encrypt sunsets ACME v1. I was naive to believe it would be as easy as `apt-get install certbot`, but this sent me down a rabbit hole that involved my SSH public key on my new computer not working with the server, discovering that the server is actually Debian (not Ubuntu), and inevitably deciding it would be more worthwhile to let CloudFront handle the SSL.
Murphy’s Law is in full effect today, as CloudFront won’t let me point a distribution at the server’s IP address, so I create an A record for the IP and point CloudFront at the domain name instead, which does work when visiting the CloudFront URL. When I point the Cushion domain record to the CloudFront distribution, though, it throws a “Too many redirects” error. I spin my wheels for another hour and decide it’d be easier to migrate everything to a static site on S3. My “day off” is now half over and I still need to migrate everything.
I discover a useful binary called httrack, which can scrape any website to a local folder. Again, I naively think it’d be as easy as advertised, but it’s an uphill battle because httrack converts all routes without `.html` to HTML files with the route name as the base name, which won’t work with S3 because it needs those to be `index.html` files within a directory of the route name. Digging through ancient docs and forums, I finally discover a way to make it work (`-N "%h%p/%n/index.html" --preserve`), but this ends up applying to all files, so I need to exclude non-HTML files and pull those down in a separate step.
I finally get the entire site in a local directory, so I upload that to my S3 bucket and create the CloudFront distribution. Losing patience, I hastily point the A record at CloudFront too soon, which results in it not working when it should. I remember this being a thing, so I remove the A record and recreate it to double-check. It now works, but the root shows an `Access Denied` error. The S3 URL works, but CloudFront doesn’t. After digging through a handful of semi-relevant Stack Overflow answers, I realize I forgot to include `index.html` as the “Default Root Object” in my CloudFront distribution. Everything finally works as expected, but I now realize I have a few dozen redirects I forgot to migrate over as well. Of course.
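Before I deal with the redirects, here’s roughly what the working setup looks like (the bucket name is made up, and I set the Default Root Object in the CloudFront console rather than the CLI):

```sh
# Sync the scraped site into the bucket (bucket name is hypothetical).
aws s3 sync ./site s3://cushion-marketing-site --delete

# The piece I missed: the CloudFront distribution needs "index.html" set as
# its Default Root Object, otherwise the bare domain returns Access Denied.
```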
Looking into creating redirects in S3, I remember how painful it is to create redirects in S3. In typical AWS fashion, redirects are handled through an XML syntax that lets you map a “key” to a replacement key. This is fine for most of the redirects, but then I get to the wildcard redirects, which are simply not possible in S3. Cool. Looking closer at the redirects, I realize that the wildcards are pointing to another S3 bucket, so I move those files over to the new bucket and it just works. I go to test the redirects, and while they do redirect, they replace the domain name with the long S3 host name. Luckily, I’m only missing a `<HostName>` node in the redirect syntax.
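For anyone (read: future me) who needs these again, the routing rules look roughly like this. The keys here are made up, but the `<HostName>` node is the piece I was missing:

```xml
<RoutingRules>
  <RoutingRule>
    <Condition>
      <!-- hypothetical old path prefix -->
      <KeyPrefixEquals>old-blog/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <Protocol>https</Protocol>
      <!-- without HostName, S3 redirects to its own long website hostname -->
      <HostName>cushionapp.com</HostName>
      <ReplaceKeyPrefixWith>journal/</ReplaceKeyPrefixWith>
      <HttpRedirectCode>301</HttpRedirectCode>
    </Redirect>
  </RoutingRule>
</RoutingRules>
```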
It’s now 6:47pm. Cushion’s site has been successfully migrated to S3, with its SSL certificate expected to renew automatically from now on. In the end, I sacrificed my day off to lift a lingering weight off my shoulders, one I’d eventually have to deal with anyway. I do think it was worth it, but I don’t think I’ll ever graduate from these hidden time sucks.