Jeremy Keith

Making websites. Writing books. Speaking at conferences. Living in Brighton. Working at Clearleft. Playing music. Taking photos. Answering email.


Sunday, May 28th, 2017

Checked in at Kapnikarea cafe. I hear a bouzouki.

Saturday, May 27th, 2017

Friday, May 26th, 2017


Checked in at Ατλαντικός. Late night seafood feast — with Jessica

Traintimes.org.uk performance notes

I love, love, love traintimes.org.uk—partly because it’s so useful, but also because it’s so fast. I know public transport is the clichéd use-case when it comes to talking about web performance, but in this case it’s genuine: I use the site on trains and in airports.

Matthew gives a blow-by-blow account of the performance optimisations he’s made for the site, including a service worker. The whole thing is a masterclass in performance and progressive enhancement. I’m so glad he took the time to share this!

Checked in at Το Μπακαλομάγαζο. Having a late lunch.

Thursday, May 25th, 2017

Checked in at Η Παράνγκα του Σωτήρη. Let the eating begin! 🇬🇷

Wednesday, May 24th, 2017

The Apophenic Machine — Real Life

To navigate the web is to beat a path through a labyrinth of links left by others, and to thereby create associative links yourself, unspooling them like a guiding thread onto a floor already carpeted with such connections. Each thread of connection is unique, individualized: everyone draws their own map of the network as they navigate it.

A workshop on evaluating technology

After hacking away at Indie Web Camp Düsseldorf, I stuck around for Beyond Tellerrand. I ended up giving a talk, stepping in for Ellen. I was a poor substitute, but I hope I entertained the lovely audience for 45 minutes.

After Beyond Tellerrand, I got on a train to Nuremberg …along with a dozen of my peers who were also at the event.

All aboard the Indie Web Train from Düsseldorf to Nürnberg.

I arrived right in the middle of Web Week Nürnberg. Among the many events going on was a workshop that Joschi arranged for me to run called Evaluating Technology. The workshop version of my Beyond Tellerrand talk, basically.

This was an evolution of a workshop I ran a while back. I have to admit, I was a bit nervous going into this. I had no tangible material prepared: no slides, no handouts, nothing. Instead the workshop is a collaborative affair. In order for it to work, the attendees needed to jump in and co-create it with me. Luckily for me, I had a fantastic and enthusiastic group of people at my workshop.

Evaluating Technology

We began with a complete braindump. “Name some tools and technologies,” I said. “Just shout ‘em out.” Shout ‘em out, they did. I struggled to keep up just writing down everything they said. This was great!

Evaluating Technology

The next step was supposed to be dot-voting on which technologies to cover, but there were so many of them, we introduced an intermediate step: grouping the technologies together.

Evaluating Technology

Once the technologies were grouped into categories like build tools, browser APIs, methodologies etc., we voted on which categories to cover, only then diving deeper into specific technologies.

I proposed a number of questions to ask of each technology we covered. First of all, who benefits from the technology? Is it a tool for designers and developers, or is it a tool for the end user? Build tools, task runners, version control systems, text editors, transpilers, and pattern libraries all fall into the first category—they make life easier for the people making websites. Browser features generally fall into the second category—they improve the experience for the end user.

Looking at user-facing technologies, we asked: how well do they fail? In other words, can you add this technology as an extra layer of enhancement on top of what you’re building or do you have to make it a foundational layer that’s potentially a single point of failure?

For both classes of technologies, we asked the question: what are the assumptions? What fundamental philosophy has been baked into the technology?

Evaluating Technology

Now, the point of this workshop is not for me to answer those questions. I have a limited range of experience with the huge amount of web technologies out there. But collectively all of us attending the workshop will have a good range of experience and knowledge.

Interesting, then, that the technologies people voted for were:

  • service workers,
  • progressive web apps,
  • AMP,
  • web components,
  • pattern libraries and design systems.

Those are topics I actually do have some experience with. Lots of the attendees had heard of these things and were really interested in finding out more about them, but they hadn’t necessarily used them yet.

And so I ended up doing a lot of the talking …which wasn’t the plan at all! That was just the way things worked out. I was more than happy to share my opinions on those topics, but it was a shame that I ended up monopolising the discussion. I felt for everyone having to listen to me ramble on.

Still, by the end of the day we had covered quite a few topics. Better yet, we had a good framework for categorising and evaluating web technologies. The specific technologies we covered were interesting enough, but the general approach provided the lasting value.

All in all, a great day with a great group of people.

Evaluating Technology

I’m already looking forward to running this workshop again. If you think it would be valuable for your company, get in touch.

Checked in at trading post coffee roasters. Starting the day right.

Tuesday, May 23rd, 2017

Went outside to watch the ISS fly over—it was by far the brightest object in the sky.

You’ve got a beautiful ship, @AstroPeggy.

Going offline at Indie Web Camp Düsseldorf

I’ve just come back from a ten-day trip to Germany. The trip kicked off with Indie Web Camp Düsseldorf over the course of a weekend.

IndieWebCamp Düsseldorf 2017

Once again the wonderful people at Sipgate hosted us in their beautiful building, and once again Aaron and I helped facilitate the two days.

IndieWebCamp Düsseldorf 2017

Saturday was the BarCamp-like discussion day. Plenty of interesting topics were covered. I led a session on service workers, and that’s also what I decided to work on for the second day—that’s when the talking is done and we get down to making.

IndieWebCamp Düsseldorf 2017

I like what Ethan is doing on his offline page. He shows a list of pages that have been cached, but instead of just listing URLs, he shows a title and description for each page.

I’ve already got a separate cache for pages that gets added to as the user browses around my site. I needed to figure out a way to store the metadata for those pages so that I could then display it on the offline page. I came up with a workable solution, and interestingly, it involved no changes to the service worker script at all.

When you visit any blog post, I put metadata about the page into localStorage (after first checking that there’s an active service worker):

if (navigator.serviceWorker && navigator.serviceWorker.controller) {
  window.addEventListener('load', function() {
    var data = {
      "title": "A minority report on artificial intelligence",
      "description": "Revisiting Spielberg’s films after a decade and a half.",
      "published": "May 7th, 2017",
      "timestamp": "1494171049"
    };
    // Store the metadata, keyed by this page's URL
    localStorage.setItem(window.location.href, JSON.stringify(data));
  });
}

In my case, I’m outputting the metadata from the server, but you could just as easily grab some from the DOM like this:

var data = {
  "title": document.querySelector("title").innerText,
  "description": document.querySelector("meta[name='description']").getAttribute("content")
};

Meanwhile in my service worker, when you visit that same page, it gets added to a cache called “pages”. Both localStorage and the cache API are using URLs as keys. I take advantage of that on my offline page.
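
For context, the part of a service worker that populates a cache like that might look something like this (a simplified sketch, not my actual script, which does a few more checks, like making sure the request is for an HTML page):

// In the service worker: stash a copy of each fetched page
// in the "pages" cache, keyed by the request URL.
self.addEventListener('fetch', event => {
  let request = event.request;
  event.respondWith(
    fetch(request)
    .then( response => {
      // Clone the response: the original gets returned to the browser
      let copy = response.clone();
      caches.open('pages')
      .then( cache => {
        cache.put(request, copy);
      });
      return response;
    })
    // If the network fails, fall back to the cached copy (if any)
    .catch( () => caches.match(request) )
  );
});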

The nice thing about writing JavaScript on my offline page is that I know the page will only be seen by modern browsers that support service workers, so I can use all sorts of fancy features from ES6, or whatever we’re calling it now.

I start by looping through the keys of the “pages” cache (that’s right—the cache API isn’t just for service workers; you can access it from any script). Then I check to see if there is a corresponding localStorage key with the same string (a URL). If there is, I pull the metadata out of local storage and add it to an array called browsingHistory:

const browsingHistory = [];
caches.open('pages')
.then( cache => {
  cache.keys()
  .then( keys => {
    keys.forEach( request => {
      let data = JSON.parse(localStorage.getItem(request.url));
      if (data) {
        data['url'] = request.url;
        browsingHistory.push(data);
      }
    });

Then I sort the list of pages in reverse chronological order:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I loop through each page in the browsing history list and construct a link to each URL, complete with title and description:

let markup = '';
browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

Finally I dump the constructed markup into a waiting div in the page with an ID of “history”:

let container = document.getElementById('history');
container.insertAdjacentHTML('beforeend', markup);

All those steps need to be wrapped inside the then clause attached to caches.open('pages') because the cache API is asynchronous.
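
In outline, the nesting looks like this:

caches.open('pages')
.then( cache => {
  cache.keys()
  .then( keys => {
    // 1. build the browsingHistory array from the keys and localStorage
    // 2. sort it in reverse chronological order
    // 3. construct the markup
    // 4. insert the markup into the page
  });
});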

There you have it. Now if you’re browsing and your network connection drops (or my server goes offline), you can choose from a list of pages you’ve previously visited.

The current situation isn’t ideal though. I’ve got a clean-up operation in my service worker to limit the number of items stored in my “pages” cache. The cache never gets bigger than 35 items. But there’s no corresponding clean-up of metadata stored in localStorage. So there could be a lot more bits of metadata in local storage than there are pages in the cache. It’s not harmful, but it’s a bit wasteful.
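
That clean-up is a recursive affair, something along these lines (a sketch rather than my exact code):

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
  .then( cache => {
    cache.keys()
    .then( keys => {
      // If the cache has grown past the limit,
      // delete the oldest entry and check again
      if (keys.length > maxItems) {
        cache.delete(keys[0])
        .then( () => trimCache(cacheName, maxItems) );
      }
    });
  });
}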

I can’t do a clean-up of localStorage from my service worker because service workers can’t access localStorage. There’s a very good reason for that: the localStorage API is synchronous, and everything that happens in a service worker needs to be asynchronous.

Service workers can access indexedDB: it’s asynchronous. I could use indexedDB instead of localStorage, but I’m not a masochist. My best bet would be to use the localForage library, which wraps indexedDB in the simple syntax of localStorage.
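
The change would be small: localForage mirrors the localStorage syntax but returns promises, and it can store objects directly, so no JSON stringifying and parsing. Something like this, assuming the localforage script is available to both the page and the service worker:

// Storing metadata when a page loads
localforage.setItem(window.location.href, data)
.then( () => {
  // stored asynchronously (in indexedDB under the hood)
});

// Reading it back on the offline page
localforage.getItem(request.url)
.then( data => {
  if (data) {
    data['url'] = request.url;
    browsingHistory.push(data);
  }
});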

Maybe I’ll do that at the next Homebrew Website Club here in Brighton.