A New Website Design is Coming

Yes, a new website design is coming.

Recently, I’ve been working on a better approach to my online teaching efforts, outside of the work that I do for Twilio.

To put that into context, if this is the first article of mine that you’ve read: over the last 10+ years, I’ve started a number of efforts, all with the intention of being the best teacher that I could be. These are:

  • This site
  • Web Dev with Matt
  • Two podcasts (Free the Geek and The Web Dev with Matt podcast)
  • Two LinkedIn accounts
  • Two Twitter accounts
  • An Instagram account
  • Several Facebook pages

In addition to those, I’ve written a set of books and authored a series of courses for places such as Pluralsight.

These efforts started off with the best of intentions, in the sincere belief that I could maintain them all, consistently, to a high professional standard. However, it wasn’t to be.

Along with a full-time role at Twilio (and, before that, running a business as a freelance software engineer), I didn’t fully appreciate just how much time and effort all of these endeavours would require.

Given that, I’ve been doing a lot of thinking about it all of late, and have accepted that:

  • There is significant overlap/duplication
  • I don’t have the time or the desire to maintain them all
  • A lot of them have shown only a small return for the effort invested

So, recently, I’ve started closing quite a number of them (or marking them as closed). My intent is to have a clear, singular focus, so that I can do one thing well rather than a number of things partly well.

One of the things marked for removal is https://webdevwithmatt.com. This was hard to accept, as I felt that I was giving up on a project that had barely gotten off the ground. However, after realising that, done correctly, none of the effort would be wasted, it became easier to accept.

I started making a list of the tasks that I estimated would need to be completed to merge the two sites into one. Despite what turned out to be a reasonably long list, I still figured that it wouldn’t take all that much effort to achieve. Oh, how wrong I was!

Why? Well, matthewsetter.com is built with Hugo (an excellent static site generator, if not the best). Web Dev with Matt, however, is a custom site built with PHP’s Mezzio framework.

“What’s so hard?” I hear you say. “Just extract the styling and HTML from Web Dev with Matt and build a new Hugo theme with it. Done!”

Well, that was my first challenge. I’ve long wanted a software development project that I could continuously work on over an extended period of time; something relevant and meaningful to me, even if I wasn’t earning anything from it. I’ve had some over the years, but they were always short-lived.

It would have been quicker to keep using Hugo; if teaching were my only goal, and I had other projects that I could use to keep my skills up to date, I would have taken that approach. But I saw this, perhaps incorrectly, as an opportunity to combine both teaching and skills maintenance. What’s more, I’ve loved building Web Dev with Matt in Mezzio, in my most-used language, PHP, and felt that this could be my long-term project.

However, if I extracted the theme from Web Dev with Matt into a new Hugo theme and then got rid of the site, I didn’t know when I’d find another dev project to work on. So, after a lot of thought, I decided to use the code behind Web Dev with Matt and stop using Hugo (for the time being).

Again, it didn’t seem like there should be much to do. All I thought I’d have to do was:

  • Update the site’s name in all the relevant locations and templates
  • Port the content from matthewsetter.com into it

Sadly, no; there was so much more involved than that. Here’s why.

A lot of the recent content on this site has been written in AsciiDoc. PHP doesn’t have an AsciiDoc parser that I know of, so that content had to be ported to Markdown. (There are other markup languages, but Markdown is the most widely used and supported, and I’d already written a small Markdown-based blog module.) Porting the AsciiDoc content to Markdown turned out to be pretty trivial, using a combination of asciidoctor and pandoc, along with some Bash glue code, which you can see in the following snippet.

for i in *.adoc; do
    # Convert the AsciiDoc file to DocBook (an intermediate format),
    # convert that DocBook file to Markdown, then remove both the
    # intermediate DocBook file and the original AsciiDoc source.
    asciidoctor -b docbook "$i" \
        && pandoc -f docbook -t markdown --wrap=preserve "${i%.*}.xml" -o "${i%.*}.md" \
        && rm "${i%.*}.xml" "$i"
done

The snippet uses a glob expression (*.adoc) to find all of the AsciiDoc files. It then iterates over them and:

  • Uses asciidoctor to convert each file to DocBook format, as an intermediate step between AsciiDoc and Markdown
  • Uses pandoc to convert the DocBook representation of the file to Markdown, preserving line wrapping as-is
  • Names the new Markdown file the same as the original file, but with a .md file extension
  • Removes the intermediate DocBook file, along with the original AsciiDoc source

That did the lion’s share of the migration work. However, my small Markdown blog module was pretty strict about what the YAML front-matter of each blog post file could contain, so I had to update the front-matter keys to match what the module expects. Using PhpStorm and some regular expressions in its global find and replace made that task pretty trivial.

But then things kept getting trickier, because I learned just how simplistic my blog module and my application’s routing table were. I had to:

  • Update the routing table so that it accepted all of the characters used in the existing articles’ slugs
  • Reorder the routing table, so that fixed routes weren’t overshadowed by any article/tutorial routes
  • Add support for retrieving articles matching a given tag or category
  • Add support for retrieving articles related to a given article, so that the site could render a related-articles list at the bottom of each one
  • Add pagination support; otherwise, some 200+ articles would have been listed on a single page
  • Add missing post images; this isn’t super important, but not having them detracts from the overall presentation and professionalism of the site
  • Fix a poorly implemented “tagged with” feature
  • Update the site’s styling so that it worked equally well across desktops, tablets, and phones
  • Change the code syntax highlighter to Prism.js; this wasn’t strictly necessary, but it does a better job than what I had before, for very little effort

Plus a bit more. And that’s just in the code. When it came to deployment, I had even more work to do.

Before I dive into that: none of this was a bother to me, as it was exactly why I’d wanted an ongoing software development project in the first place; to keep my skills and knowledge active and growing, so that they wouldn’t atrophy. So, while merging the two sites into one has taken notably more time than I expected, I’ve had the opportunity to learn so much in the process.

Now, the final aspect of the process was perhaps one of the most rewarding of all, because I got to spend more time refining and improving the deployment pipeline. The current pipeline is fine, but it could do with some improvements. The trouble started when I pushed to the GitHub repo’s main branch and one of the generated images couldn’t be pushed to DigitalOcean’s Container Registry because, as I found out, the image was too big (and I’d already exceeded my storage quota). How big was the image, you may be asking? 825 MB. That’s how big.

That seemed completely strange to me, until I started looking into why. To do that, I ran the docker history command on the image, and found that one layer, the one that copied all of the source files into the image, was the main culprit. That layer was about 800 MB in size on its own. How could that be, I thought? Well, running du -hSc on the project directory told me all I needed to know. The images directory, /public/images, was about 650 MB in size.
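If you’d like to run the same kind of investigation, the two commands look roughly like the following sketch; note that my-site:latest is a placeholder image name, not the site’s actual one.

# List each layer in the image along with its size; the COPY layer
# that pulls the source files into the image is the one to watch for.
docker history --format '{{.Size}}\t{{.CreatedBy}}' my-site:latest

# Summarise disk usage per directory (human-readable, with a grand
# total), sorted so the largest entries appear last.
du -hSc . | sort -h | tail -n 10

This opened up some very fun thinking: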

  1. Did I have to include the images in the container image?
  2. Were there images that were no longer used and, if so, how many?

In answer to question 1, I figured that there was no need to store the images in the container image, as they could just as easily be stored in a public S3 Bucket or, for example, in a private S3 Bucket sitting behind a CloudFront distribution. By removing them from the container image, the image would be built, pushed, and (during development) pulled a lot quicker. What’s more, I’d be saving on storage costs and could store a lot more images without hitting the size limit.

How would I do it, though? I wasn’t going to manually sync the images to the S3 Bucket; it had to be an automated part of the deployment workflow. I’ve not quite worked this part out yet, but I feel that I’m close.
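If I end up using the AWS CLI for that step, the heart of it will likely be little more than an s3 sync call in the pipeline (presumably paired with adding the images directory to the project’s .dockerignore file, so they stay out of the container image). The following is only a sketch of what I have in mind; the bucket name is hypothetical.

# Sync the images directory to the bucket, removing any remote files
# that no longer exist locally. "example-images-bucket" is a
# placeholder, not a real bucket.
aws s3 sync public/images s3://example-images-bucket/images --delete

So, anyway, here’s a look at what’s coming, just as soon as I figure out the remaining issues with the deployment pipeline.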