Tom Bates is a designer and developer, based in London, with a hunger to build things.

Taking an idea and turning it into something is what I do best. This year, I'm focused on designing frictionless developer experiences, building thoughtful interfaces, and learning to be a better writer.

Tracking page views with Fathom and Next.js

Everyone likes to see metrics, even if it's just vanity. It might not be real validation of my writing, but it's nice to know at least people are visiting my tiny corner of the web. Reading Damian Bradfield's book about big data and privacy, The Trust Manifesto, got me thinking: how much do I really need to track and know about my audience?

I knew I was going to need a solution to help me track visitors, but which one? My search began with the usual suspects such as Google, Heap, and Mixpanel. They're all great products, but they follow you around a lot. I thought all was lost until I came across Fathom, a simple and private website analytics platform. I started reading a few of their posts and found the one describing how they handle anonymisation particularly interesting. Their views on privacy, coupled with the fact that they're a small company, sold me.

At first, setting up Fathom was as simple as adding their snippet. However, once I pushed my site live, I noticed page views weren't getting tracked accurately. Server renders got logged correctly, but page changes on the client were missing (unless no-one was browsing more than one page). The problem was pretty obvious: Fathom's snippet didn't account for single-page applications, and the solution on their site didn't seem to work with Next.js.

The solution was pretty simple: use Fathom's snippet to log all server renders from _document.js, and track all subsequent client-side route changes in _app.js via Next.js's router events.

// _document.js
import Document, { Html, Head, Main, NextScript } from "next/document";

// Fathom's classic embed snippet, inlined so it runs on every server render.
const __html = `(function(f, a, t, h, o, m){
  a[h] = a[h] || function(){ (a[h].q = a[h].q || []).push(arguments) };
  o = f.createElement('script'), m = f.getElementsByTagName('script')[0];
  o.async = 1; o.src = t; o.id = 'fathom-script';
  m.parentNode.insertBefore(o, m);
})(document, window, '//cdn.usefathom.com/tracker.js', 'fathom');
fathom('set', 'siteId', '<YOUR_TRACKING_CODE>');
fathom('trackPageview');`;

export default class MyDocument extends Document {
  render() {
    return (
      <Html lang="en">
        <Head />
        <body>
          <Main />
          <NextScript />
          {/* Logs a page view for every server render */}
          <script dangerouslySetInnerHTML={{ __html }} />
        </body>
      </Html>
    );
  }
}

// _app.js
import React from "react";
import Router from "next/router";

// Log a page view, guarding against running on the server and against
// the Fathom script being blocked or not yet loaded.
const trackPageView = () => {
  if (typeof window !== "undefined" && typeof window.fathom !== "undefined") {
    window.fathom("trackPageview");
  }
};

// Fires after every client-side route change completes.
Router.events.on("routeChangeComplete", trackPageView);

export default ({ Component, pageProps }) => {
  return <Component {...pageProps} />;
};

And that's it.

Adding individual pages with SWR

As I add more content to the site, I'd like to be able to share direct links to posts. It's a pretty simple change to add individual changelog and blog pages. To start, I first had to decide on the URL structure, which had a natural, best-practice choice. I ended up with the following: {resource}/{slug}. With Next.js, all it takes to create dynamic routing is creating a new file inside the pages directory. For my changelog articles, the path looks like pages/changelog/[slug].tsx.
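To make the convention concrete, a dynamic segment like [slug] is essentially pattern matching on the path. Here's a toy sketch of that idea (my own illustration, not Next.js's actual router implementation):

```typescript
// Toy sketch of how a dynamic route like pages/changelog/[slug].tsx
// maps an incoming path to a query object. Purely illustrative.
const matchRoute = (
  pattern: string,
  path: string
): Record<string, string> | null => {
  const patternParts = pattern.split("/").filter(Boolean);
  const pathParts = path.split("/").filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;

  const query: Record<string, string> = {};
  for (let i = 0; i < patternParts.length; i++) {
    const dynamic = patternParts[i].match(/^\[(.+)\]$/);
    if (dynamic) {
      query[dynamic[1]] = pathParts[i]; // dynamic segment, e.g. [slug]
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // static segment didn't match
    }
  }
  return query;
};

// /changelog/my-first-post → { slug: "my-first-post" }
console.log(matchRoute("/changelog/[slug]", "/changelog/my-first-post"));
```

The matched query object is what the page receives in getInitialProps as query.slug.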

Then it's just a case of hooking up the data. Below is the skeleton for the page minus the details.

import React from "react";

// Stub for the real data fetch; resolves immediately for the skeleton.
const fetchData = async slug => {
  return Promise.resolve({ title: slug });
};

const ChangelogPage = ({ article }) => {
  if (!article) {
    return <div>Article not found!</div>;
  }

  return <article>{article.title}</article>;
};

ChangelogPage.getInitialProps = async ({ query }) => {
  const article = await fetchData(query.slug);
  return { article };
};

export default ChangelogPage;

Once I had the primary setup for the page in place, I wanted to look at using a brilliant library from Zeit called SWR, even if it's a little bit of overengineering. SWR is a React Hooks library for remote data fetching. It has a bunch of exciting features, including caching and fast page navigation. However, the feature I was most interested in was revalidation on focus, which allows me to edit my content in Contentful and instantly see updates when returning to my site.

Getting SWR set up is a piece of cake. All you need is one function call and to hook up the returned data.

import React from "react";
import useSWR from "swr";

// Stub for the real data fetch; resolves immediately for the example.
const fetchData = async slug => {
  return Promise.resolve({ title: slug });
};

const ChangelogPage = ({ article, slug }) => {
  // Start with the server-fetched article, then revalidate on the client.
  const { data } = useSWR(slug, fetchData, {
    initialData: article
  });

  if (!data) {
    return <div>Article not found!</div>;
  }

  return <article>{data.title}</article>;
};

ChangelogPage.getInitialProps = async ({ query }) => {
  const article = await fetchData(query.slug);

  return {
    article,
    slug: query.slug
  };
};

export default ChangelogPage;

And that's it. We're all set up with individual pages with almost live updates using SWR. You can find more examples of SWR here.

Searching for associative trails

"Consider a future device … in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory." – Vannevar Bush

I first read about Vannevar Bush's Memex almost ten years ago. I was shocked. With all the advances in personal computing, how could an idea from over 50 years ago still be so relevant and seem so unsolved? Flash forward to today, and realistically, it's at least partially solved, or perhaps made less relevant, by the vast number of products we've created. Just in the last decade or so, we've seen the rise of tools like Dropbox, Elastic, Notion, and Airtable. All of which augment our ability to store, search, distribute, and genuinely understand information.

We've developed an abundance of ways to accumulate, organise, share, and search through millions and millions of pieces of information in seconds. If you need an answer to a question, you can quickly ask one of your connected devices and have it answered within seconds. I've wondered, on occasion, how the world we live in today would've changed Vannevar Bush's ideas about personal knowledge devices.

Although we have more information than ever at our fingertips, it's not personal to us, it's not our own thoughts we're combing through. Everyone has their own ideas, beliefs, opinions, processes, and more importantly, ways in which they connect them all together. Vannevar Bush's Memex described a tool that allowed somebody to pack all their books, records and discussions into one device that can be quickly and flexibly consumed. Sound familiar? It's safe to say that we've built our fair share of both hardware and software to help with this, but that wasn't the part of Memex that really captivated me all these years ago.

The part that really captured my attention described the ability to create connections between individual pieces of information within a more extensive system. Bush described these connections as associative trails, with each trail attaching the necessary context to promote the understanding of an individual's thought process, mimicking the human brain's own ability to create mental associations.
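To make the idea a little more tangible, here's a toy sketch of associative trails as a data structure: items of information joined into named trails, with a note on each link preserving the context of the association. The names and shapes here are entirely my own invention, not anything Bush specified.

```typescript
// A toy model of Bush's associative trails. Purely illustrative.
interface Item {
  id: string;
  content: string;
}

interface TrailLink {
  from: string; // Item id
  to: string;   // Item id
  note: string; // why these two items are connected
}

class Memex {
  private items = new Map<string, Item>();
  private trails = new Map<string, TrailLink[]>();

  addItem(item: Item) {
    this.items.set(item.id, item);
  }

  // Append a link to a named trail, recording the reasoning as we go.
  connect(trail: string, from: string, to: string, note: string) {
    const links = this.trails.get(trail) ?? [];
    links.push({ from, to, note });
    this.trails.set(trail, links);
  }

  // Replay a trail: the path from A to B, including all the detours.
  follow(trail: string): string[] {
    return (this.trails.get(trail) ?? []).map(
      ({ from, to, note }) => `${from} -> ${to}: ${note}`
    );
  }
}
```

The interesting part isn't the storage, which we've long since solved, but the note on each link: the trail carries the "why" alongside the "what".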

I've always struggled with how to describe my process to others and, at times, myself. How did I get from point A to point B via point X? I don't want to pretend that my own flawed communication skills don't play a part, but I feel like there is more to it. I frequently find it challenging to explain my disordered, non-linear process in a coherent, linear way. It feels like trying to jam a square peg into a round hole. That's why the idea of the Memex was so appealing to me.

Just imagine it: a piece of software that allows you to follow an individual's thought process. Not only seeing how they went from A to B, but all the little detours too. Thinking about having this superpower at my disposal, I was hooked. If I had this tool, I could not only understand my own thought process instantly but explain it to others.

I've set out on an adventure to build my own personal Memex countless times. Unfortunately, I'm still Memex-less, waiting for someone to release my mythical dream product. It's okay though, I'm patient, I can wait. Until then, I've developed ways to keep my mess of connected thoughts in check.

Building the foundations

When you're building something, you usually want to start with strong foundations, but time-proof foundations weren't my focus for a personal website. Instead, I focused on creating something easy to use with little overhead.

Initially, I wanted to use little to no JavaScript because a simple blog really doesn't need it. I thought about using a framework like Jekyll, Middleman, or 11ty, something bare-bones that outputs a few files I can serve from anywhere. However, in the end, I opted for my usual go-to stack of Next.js, TypeScript, and Now. I like to avoid the overengineering that usually comes with frontend development (if you're not careful), and Next.js makes that ridiculously easy.

I'm not losing any of the speed static serving gives you, because Next.js allows me to render a React application directly from the server. No loading indicators or waiting for scripts to execute. For the most part, it's the same experience for the person reading, with or without JavaScript. There are a lot of other great things about Next.js, including fantastic documentation and heaps and heaps of examples, but I'm not going to list them all here.

So, once I had the base of the foundations, the next step was figuring out the content process. To start, I wanted to make it as simple as possible and set out to use MDX (React-flavoured Markdown), allowing me to create a simple Markdown file for each of my blog posts. However, this meant that each time I wanted to post a new article, I would need to add it to the repo on GitHub. Not ideal if I wanted to be able to post on the fly. I was going to need something a little more dynamic.

Enter Contentful, an API-first Content Management System. Contentful allows me to create all my content in one place and fetch it from my site regardless of the technology used. There are a few similar products available, one being Prismic. I honestly didn't spend any time evaluating the best product to use and chose the one that was more familiar to me.
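As a rough sketch of what that looks like in practice: the Contentful JavaScript SDK fetches entries over the API, and each entry wraps its data in a fields object. The content type id and field names below are my own assumptions for illustration; they depend entirely on your content model.

```typescript
// Fetching a changelog article from Contentful might look like this
// (commented out because it needs real credentials):
//
// import { createClient } from "contentful";
// const client = createClient({
//   space: process.env.CONTENTFUL_SPACE_ID,
//   accessToken: process.env.CONTENTFUL_ACCESS_TOKEN
// });
// const entries = await client.getEntries({
//   content_type: "changelog", // assumed content type id
//   "fields.slug": slug,
//   limit: 1
// });

// Contentful wraps each entry's data in a `fields` object; a small
// mapper keeps the rest of the site ignorant of that shape.
interface ChangelogArticle {
  slug: string;
  title: string;
  body: string;
}

const toArticle = (entry: {
  fields: { slug: string; title: string; body: string };
}): ChangelogArticle => ({
  slug: entry.fields.slug,
  title: entry.fields.title,
  body: entry.fields.body
});
```

Because the CMS is only ever touched through the fetch-and-map step, swapping Contentful for something like Prismic later would stay a contained change.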

With all the decisions made, I was able to pull together the initial version of the site in under a day and have my first two blog posts live. I used a couple of other interesting libraries and learnt a few things along the way, but I'd like to write about those in their own technical posts.

Building in the open

At the end of every new year, we get bombarded with "2019: My Year in Review" type articles. Work is winding down before the holiday period, and we're able to find the time to reflect on the year passed and make plans for the year to come. Seeing others reflecting on their year encourages us to look more closely at our growth and achievements. I'm not going to write a yearly review, although I have been doing a lot of reflection over the last few weeks. Instead, I'd like to explain a little about why I've rebuilt my website.

Towards the end of 2019, I began to realise that I enjoy writing a lot, even though I'm not the best writer. When I'm writing, I feel focused and driven. If I'm struggling to get to grips with a topic, merely writing about it gives me a deeper understanding of it. It's a great feeling, and so one of my goals for 2020 is to write more, share more, and generally be a little louder.

I've thrown together this very bare-bones site. It doesn't support too much, not even many headings. As the needs arise, I'm going to build in more features and write about them from both technical and design viewpoints. The idea is to learn how to communicate more efficiently, write better, and improve my design and development skills. I haven't had a blog in a long, long time, so bear with me while I get to grips with it again.

Now, I just need to build the habit of writing more.