
Push without notifications

On the first day of Indie Web Camp Berlin, I led a session on going offline with service workers. This covered all the usual use-cases: pre-caching; custom offline pages; saving pages for offline reading.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

The Push API is what makes push notifications possible on the web. There are a lot of moving parts—browser, server, service worker—and, frankly, it’s way over my head. But I’m familiar with the general gist of how it works. Here’s a typical flow:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker creates a notification linking to the URL, interrupting the user, and generally adding to the weight of information overload.

Here’s what Sebastiaan wanted to investigate: what if that last step weren’t so intrusive? Here’s the alternate flow he wanted to test:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker fetches the contents of the URL provided in the push message and caches the page. Silently.

It worked.
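
I wasn’t privy to the details of Sebastiaan’s code, but the gist of such a push handler is short. Here’s a rough sketch—the cache name and the shape of the push payload are my assumptions:

addEventListener('push', pushEvent => {
  // Assuming the payload is JSON like {"url": "/articles/new-article"}
  let data = pushEvent.data.json();
  pushEvent.waitUntil(
    // Fetch the new URL and cache it—no notification is created
    caches.open('pages')
    .then( cache => cache.add(data.url) )
  );
});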

I think this could be a real game-changer. I don’t know about you, but I’m very, very wary of granting websites the ability to send me push notifications. In fact, I don’t think I’ve ever given a website permission to interrupt me with push notifications.

You’ve seen the annoying permission dialogues, right?

In Firefox, it looks like this:

Will you allow name-of-website to send notifications?

[Not Now] [Allow Notifications]

In Chrome, it’s:

name-of-website wants to

Show notifications

[Block] [Allow]

But in actual fact, these dialogues are asking for permission to do two things:

  1. Receive messages pushed from the server.
  2. Display notifications based on those messages.

There’s no way to ask for permission just to do the first part. That’s a shame. While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.
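
That coupling is baked right into the Push API. When a site subscribes to push messages in Chrome, it has to promise that every push will be user-visible—there’s no way to subscribe for silent pushes alone. The subscription call looks like this (vapidPublicKey is an assumed variable holding the server’s public key):

navigator.serviceWorker.ready
.then( registration => {
  return registration.pushManager.subscribe({
    userVisibleOnly: true, // Chrome refuses the subscription if this is false
    applicationServerKey: vapidPublicKey
  });
});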

Think of the use cases:

  • I grant push permission to a magazine. When the magazine publishes a new article, it’s cached on my device.
  • I grant push permission to a podcast. Whenever a new episode is published, it’s cached on my device.
  • I grant push permission to a blog. When there’s a new blog post, it’s cached on my device.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

There’s plenty of opportunity for abuse—the cache could get filled with unwanted content. But websites can already do that without being granted any permission: just by visiting a website, you allow it to add multiple files to a cache.

So it seems that the reason for the permissions dialogue is all about displaying notifications …not so much about receiving push messages from the server.

I wish there were a way to implement this background-caching pattern without requiring the user to grant permission to a dialogue that contains the word “notification.”

I wonder if the act of adding a site to the home screen could implicitly grant permission to allow use of the Push API without notifications?

In the meantime, the proposal for periodic synchronisation (using background sync) could achieve similar results, but in a less elegant way; periodically polling for new content instead of receiving a push message when new content is published. Also, it requires permission. But at least in this case, the permission dialogue should be more specific, and wouldn’t include the word “notification” anywhere.

Service workers and videos in Safari

Alright, so I’ve already talked about some gotchas when debugging service worker issues. But what if you don’t even realise the problem has anything to do with your service worker?

This is not a hypothetical situation. I encountered this very thing myself. Gather ’round the campfire, children…

One of the latest case studies on the Clearleft site is a nice write-up by Luke of designing a mobile app for Virgin Holidays. The case study includes a lovely video that demonstrates the log-in flow. I implemented that using a video element (with a poster image). Nice and straightforward. Super easy. All good.

But I hadn’t done my due diligence in browser testing (I guess I didn’t even think of it in this case). Hana informed me that the video wasn’t working at all in Safari. The poster image appeared just fine, but when you clicked on it, the video didn’t load.

I ducked, ducked, and went, uncovering what appeared to be the root of the problem. It seems that Safari is fussy about having servers support something called “byte-range requests”.

I had put the video in question on an Amazon S3 server. I came to the conclusion that S3 mustn’t support these kinds of headers correctly, or something.

Now I had a diagnosis. The next step was figuring out a solution. I thought I might have to move the video off of S3 and onto a server that I could configure a bit more.

Luckily, I never got ’round to even starting that process. That’s good. Because it turns out that my diagnosis was completely wrong.

I came across a recent post by Phil Nash called Service workers: beware Safari’s range request. The title immediately grabbed my attention. Safari: yes! Video: yes! But service workers …wait a minute!

There’s a section in Phil’s post entitled “Diagnosing the problem”, in which he says:

I first thought it could have something to do with the CDN I’m using. There were some false positives regarding streaming video through a CDN that resulted in some extra research that was ultimately fruitless.

That described my situation exactly. Except Phil went further and nailed down the real cause of the problem:

Nginx was serving correct responses to Range requests. So was the CDN. The only other problem? The service worker. And this broke the video in Safari.

Doh! I hadn’t even thought about service workers!

Phil came up with a solution, and he has kindly shared his code.

I decided to go for a dumber solution:

if ( request.url.match(/\.(mp4)$/) ) {
  return;
}

That tells the service worker to just step out of the way when it comes to video requests. Now the video plays just fine in Safari. It’s a bit of a shame, because I’m kind of penalising all browsers for Safari’s bug, but the Clearleft site isn’t using much video at all, and in any case, it might be good not to fill up the cache with large video files.

But what’s more important than any particular solution is correctly identifying the problem. I’m quite sure I never would’ve been able to fix this issue if Phil hadn’t gone to the trouble of sharing his experience. I’m very, very grateful that he did.

That’s the bigger lesson here: if you solve a problem—even if you think it’s hardly worth mentioning—please, please share your solution. It could make all the difference for someone out there.

Service workers and browser extensions

I quite enjoy a good bug hunt. Just yesterday, myself and Cassie were doing some bugfixing together. As always, the first step was to try to reproduce the problem and then isolate it. Which reminds me…

There’ve been a few occasions when I’ve been trying to debug service worker issues. The problem is rarely in reproducing the issue—it’s isolating the cause that can be frustrating. I try changing a bit of code here, and a bit of code there, in an attempt to zero in on the problem, but with no luck. Before long, I’m tearing my hair out staring at code that appears to have nothing wrong with it.

And that’s when I remember: browser extensions.

I’m currently using Firefox as my browser, and I have extensions installed to stop tracking and surveillance (these technologies are usually referred to as “ad blockers”, but that’s a bit of a misnomer—the issue isn’t with the ads; it’s with the invasive tracking).

If you think about how a service worker does its magic, it’s as if it’s sitting in the browser, waiting to intercept any requests to a particular domain. It’s like the service worker is the first port of call for any requests the browser makes. But then you add a browser extension. The browser extension is also waiting to intercept certain network requests. Now the extension is the first port of call, and the service worker is relegated to be next in line.

This, apparently, can cause issues (presumably depending on how the browser extension has been coded). In some situations, network requests that should work just fine start to fail, executing the catch clauses of fetch statements in your service worker.

So if you’ve been trying to debug a service worker issue, and you can’t seem to figure out what the problem might be, it’s not necessarily an issue with your code, or even an issue with the browser.

From now on when I’m troubleshooting service worker quirks, I’m going to introduce a step zero, before I even start reproducing or isolating the bug. I’m going to ask myself, “Are there any browser extensions installed?”

I realise that sounds as basic as asking “Are you sure the computer is switched on?” but there’s nothing wrong with having a checklist of basic questions to ask before moving on to the more complicated task of debugging.

I’m going to make a checklist. Then I’m going to use it …every time.

Service workers in Samsung Internet browser

I was getting reports of some odd behaviour with the service worker on thesession.org, the Irish music website I run. Someone emailed me to say that they kept getting the offline page, even when their internet connection was perfectly fine and the site was up and running.

They didn’t mind answering my pestering follow-on questions to isolate the problem. They told me that they were using the Samsung Internet browser on Android. After a little searching, I found this message on a GitHub thread about using waitUntil. It’s from someone who works on the Samsung Internet team:

Sadly, the asynchronos waitUntil() is not implemented yet in our browser. Yes, we will implement it but our release cycle is so far. So, for a long time, we might not resolve the issue.

A-ha! That explains the problem. See, here’s the pattern I was using:

  1. When someone requests a file,
  2. fetch that file from the network,
  3. create a copy of the file and cache it,
  4. return the contents.

Step 1 is the event listener:

// 1. When someone requests a file
addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  fetchEvent.respondWith(

Steps 2, 3, and 4 are inside that respondWith:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  caches.open(cacheName)
  .then( cache => {
    cache.put(request, copy);
  })
  // 4. return the contents.
  return responseFromFetch;
})

Step 4 might well complete while step 3 is still running (remember, everything in a service worker script is asynchronous so even though I’ve written out the steps sequentially, you never know what order the steps will finish in). That’s why I’m wrapping that third step inside fetchEvent.waitUntil:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  fetchEvent.waitUntil(
    caches.open(cacheName)
    .then( cache => {
      cache.put(request, copy);
    })
  );
  // 4. return the contents.
  return responseFromFetch;
})

If a browser (like Samsung Internet) doesn’t understand the bit where I say fetchEvent.waitUntil, then it will throw an error and execute the catch clause. That’s where I have my fifth and final step: “try looking in the cache instead, but if that fails, show the offline page”:

.catch( fetchError => {
  console.log(fetchError);
  return caches.match(request)
  .then( responseFromCache => {
    return responseFromCache || caches.match('/offline');
  });
})

Normally in this kind of situation, I’d use feature detection to check whether a browser understands a particular API method. But it’s a bit tricky to test for support for asynchronous waitUntil. That’s okay. I can use a try/catch statement instead. Here’s what my revised code looks like:

fetch(request)
.then( responseFromFetch => {
  let copy = responseFromFetch.clone();
  try {
    fetchEvent.waitUntil(
      caches.open(cacheName)
      .then( cache => {
        cache.put(request, copy);
      })
    );
  } catch (error) {
    console.log(error);
  }
  return responseFromFetch;
})

Now I’ve managed to localise the error. If a browser doesn’t understand the bit where I say fetchEvent.waitUntil, it will execute the code in the catch clause, and then carry on as usual. (I realise it’s a bit confusing that there are two different kinds of catch clauses going on here: on the outside there’s a .then()/.catch() combination; inside is a try{}/catch{} combination.)

At some point, when support for async waitUntil statements is universal, this precautionary measure won’t be needed, but for now wrapping them inside try doesn’t do any harm.

There are a few places in chapter five of Going Offline—the chapter about service worker strategies—where I show examples using async waitUntil. There’s nothing wrong with the code in those examples, but if you want to play it safe (especially while Samsung Internet doesn’t support async waitUntil), feel free to wrap those examples in try/catch statements. But I’m not going to make those changes part of the errata for the book. In this case, the issue isn’t with the code itself, but with browser support.

A framework for web performance

Here at Clearleft, we’ve recently been doing some front-end consultancy. That prompted me to jot down thoughts on design principles and performance:

We continued with some more performance work this week. Having already covered some of the nitty-gritty performance tactics like font-loading, image optimisation, etc., we wanted to take a step back and formulate an ongoing strategy for performance.

When it comes to web performance, the eternal question is “What should we measure?” The answer to that question will determine where you then concentrate your efforts—whatever it is you’re measuring, that’s what you’ll be looking to improve.

I started by drawing a distinction between measurements of quantities and measurements of time. Quantities are quite easy to measure. You can measure these quantities using nothing more than browser dev tools:

  • overall file size (page weight + assets), and
  • number of requests.

I think it’s good to measure these quantities, and I think it’s good to have a performance budget for them. But I also think they’re table stakes. They don’t actually tell you much about the impact that performance is having on the user experience. For that, we need to enumerate moments in time:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, and
  • time to first meaningful interaction.

There’s one more moment in time, which is the time until DOM content is loaded. But I’m not sure that has a direct effect on how performance is perceived, so it feels like it belongs more in the category of quantities than time.

Next, we listed out all the factors that could affect each of the moments in time. For example, the time to first byte depends on the speed of the network that the user is on. It also depends on how speedily your server (or Content Delivery Network) can return a response. Meanwhile, time to first render is affected by the speed of the user’s network, but it’s also affected by how many blocking elements are on the critical path.

By listing all the factors out, we can draw a distinction between the factors that are outside of our control, and the factors that we can do something about. So while we might not be able to do anything about the speed of the user’s network, we might well be able to optimise the speed at which our server returns a response, or we might be able to defer some assets that are currently blocking the critical path.

Factors
1st byte
  • server speed
  • network speed
1st render
  • network speed
  • critical path assets
1st meaningful paint
  • network speed
  • font-loading strategy
  • image optimisation
1st meaningful interaction
  • network speed
  • device processing power
  • JavaScript size

So far, everything in our list of performance-affecting factors is related to the first visit. It’s worth drawing up a second list to document all the factors for subsequent visits. This will look the same as the list for first visits, but with the crucial difference that caching now becomes a factor.

First visit factors and repeat visit factors:

1st byte
  • First visit: server speed, network speed
  • Repeat visit: server speed, network speed, caching
1st render
  • First visit: network speed, critical path assets
  • Repeat visit: network speed, critical path assets, caching
1st meaningful paint
  • First visit: network speed, font-loading strategy, image optimisation
  • Repeat visit: network speed, font-loading strategy, image optimisation, caching
1st meaningful interaction
  • First visit: network speed, device processing power, JavaScript size
  • Repeat visit: network speed, device processing power, JavaScript size, caching

Alright. Now it’s time to get some numbers for each of the four moments in time. I use Web Page Test for this. Choose a realistic setting, like 3G on an Android from the East coast of the USA. Under advanced settings, be sure to select “First View and Repeat View” so that you can put those numbers in two different columns.

Here are some numbers for adactio.com:

  • 1st byte: 1.476 seconds (first visit), 1.215 seconds (repeat visit)
  • 1st render: 2.633 seconds (first visit), 1.930 seconds (repeat visit)
  • 1st meaningful paint: 2.633 seconds (first visit), 1.930 seconds (repeat visit)
  • 1st meaningful interaction: 2.868 seconds (first visit), 2.083 seconds (repeat visit)

I’m getting the same numbers for first render as first meaningful paint. That tells me that there’s no point in trying to optimise my font-loading, for example …which makes total sense, because adactio.com isn’t using any web fonts. But on a different site, you might see a big gap between those numbers.

I am seeing a gap between time to first byte and time to first render. That tells me that I might be able to get some blocking requests off the critical path. Sure enough, I’m currently referencing an external stylesheet in the head of adactio.com—if I were to inline critical styles and defer the loading of that stylesheet, I should be able to narrow that gap.
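
If you’ve never tried that inlining-and-deferring dance, one common approach looks something like this—a sketch, with an assumed stylesheet path:

<style>
  /* Critical styles inlined here */
</style>
<link rel="preload" href="/css/styles.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="/css/styles.css"></noscript>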

A straightforward site like adactio.com isn’t going to have much to worry about when it comes to the time to first meaningful interaction, but on other sites, this can be a significant bottleneck. If you’re sending UI elements in the initial HTML, but then waiting for JavaScript to “hydrate” those elements into working, the user can end up in an uncanny valley of tapping on page elements that look fine, but aren’t ready yet.

My point is, you’re going to see very different distributions of numbers depending on the kind of site you’re testing. There’s no one-size-fits-all metric to focus on.

Now that you’ve got numbers for how your site is currently performing, you can create two new columns: one of those is a list of first-visit targets, the other is a list of repeat-visit targets for each moment in time. Try to keep them realistic.

For example, if I could reduce the time to first render on adactio.com by 0.5 seconds, my goals would look like this:

  • 1st byte: 1.476 seconds (first visit), 1.215 seconds (repeat visit)
  • 1st render: 2.133 seconds (first visit), 1.430 seconds (repeat visit)
  • 1st meaningful paint: 2.133 seconds (first visit), 1.430 seconds (repeat visit)
  • 1st meaningful interaction: 2.368 seconds (first visit), 1.583 seconds (repeat visit)

See how the 0.5 seconds saving cascades down into the other numbers?

Alright! Now I’ve got something to aim for. It might also be worth having an extra column to record which of the moments in time are high priority, which are medium priority, and which are low priority.

  • 1st byte: medium priority
  • 1st render: high priority
  • 1st meaningful paint: low priority
  • 1st meaningful interaction: low priority

Your goals and priorities may be quite different.

I think this is a fairly useful framework for figuring out where to focus when it comes to web performance. If you’d like to give it a go, I’ve made a web performance chart for you to print out and fill in. Here’s a PDF version if that’s easier for printing. Or you can download the HTML version if you want to edit it.

I have to say, I’m really enjoying the front-end consultancy work we’ve been doing at Clearleft around performance and related technologies, like offline functionality. I’d like to do more of it. If you’d like some help in prioritising performance at your company, please get in touch. Let’s make the web faster together.

Console methods

Whenever I create a fetch event inside a service worker, my code roughly follows the same pattern. There’s a then clause which gets executed if the fetch is successful, and a catch clause in case anything goes wrong:

fetch( request)
.then( fetchResponse => {
    // Yay! It worked.
})
.catch( fetchError => {
    // Boo! It failed.
});

In my book—Going Offline—I’m at pains to point out that those arguments being passed into each clause are yours to name. In this example I’ve called them fetchResponse and fetchError but you can call them anything you want.

I always do something with the fetchResponse inside the then clause—either I want to return the response or put it in a cache.

But I rarely do anything with fetchError. Because of that, I’ve sometimes made the mistake of leaving it out completely:

fetch( request)
.then( fetchResponse => {
    // Yay! It worked.
})
.catch( () => {
    // Boo! It failed.
});

Don’t do that. I think there’s some talk of making the error argument optional, but for now, some browsers will get upset if it’s not there.

So always include that argument, whether you call it fetchError or anything else. And seeing as it’s an error, this might be a legitimate case for outputting it to the browser’s console, even in production code.

And yes, you can output to the console from a service worker. Even though a service worker can’t access anything relating to the document object—and there’s no window object either—the console object is still available in the worker’s global scope.

My muscle memory when it comes to sending something to the console is to use console.log:

fetch( request)
.then( fetchResponse => {
    return fetchResponse;
})
.catch( fetchError => {
    console.log(fetchError);
});

But in this case, the console.error method is more appropriate:

fetch( request)
.then( fetchResponse => {
    return fetchResponse;
})
.catch( fetchError => {
    console.error(fetchError);
});

Now when there’s a connectivity problem, anyone with a console window open will see the error displayed bold and red.

If that seems a bit strident to you, there’s always console.warn which will still make the output stand out, but without being quite so alarmist:

fetch( request)
.then( fetchResponse => {
    return fetchResponse;
})
.catch( fetchError => {
    console.warn(fetchError);
});

That said, in this case, console.error feels like the right choice. After all, it is technically an error.

Altering expectations

Luke has written up the selection process he went through when Clearleft was designing the Virgin Holidays app. When it comes to deploying on mobile, there were three options:

  1. Native apps
  2. A progressive web app
  3. A hybrid app

The Virgin Holidays team went with that third option.

Now, it will come as no surprise that I’m a big fan of the second option: building a progressive web app (or turning an existing site into a progressive web app). I think a progressive web app is a great solution for travel apps, and the use-case that Luke describes sounds perfect:

Easy access to resort staff and holiday details that could be viewed offline to help as many customers as possible travel without stress and enjoy a fantastic holiday

Luke explains why they chose not to go with a progressive web app:

The current level of support and leap in understanding meant we’d risk alienating many of our customers.

The issue of support is one that is largely fixed at this point. When Clearleft was working on the Virgin Holidays app, service workers hadn’t landed in iOS. Hence, the risk of alienating a lot of customers. But now that Mobile Safari has offline capabilities, that’s no longer a problem.

But it’s the second reason that’s trickier:

Simply put, customers already expected to find us in the App Store and are familiar with what apps can historically offer over websites.

I think this is the biggest challenge facing progressive web apps: battling expectations.

For over a decade, people have formed ideas about what to expect from the web and what to expect from native. From a technical perspective, native and web have become closer and closer in capabilities. But people’s expectations move slower than technological changes.

First of all, there’s the whole issue of discovery: will people understand that they can “install” a website and expect it to behave exactly like a native app? This is where install prompts and ambient badging come in. I think ambient badging is the way to go, but it’s still a tricky concept to explain to people.

But there’s another way of looking at the current situation. Instead of seeing people’s expectations as a negative factor, maybe it’s an opportunity. There’s an opportunity right now for companies to be as groundbreaking and trendsetting as Wired.com when it switched to CSS for layout, or The Boston Globe when it launched its responsive site.

It makes for a great story. Just look at the Pinterest progressive web app for an example (skip to the end to get to the numbers):

Weekly active users on mobile web have increased 103 percent year-over-year overall, with a 156 percent increase in Brazil and 312 percent increase in India. On the engagement side, session length increased by 296 percent, the number of Pins seen increased by 401 percent and people were 295 percent more likely to save a Pin to a board. Those are amazing in and of themselves, but the growth front is where things really shined. Logins increased by 370 percent and new signups increased by 843 percent year-over-year. Since we shipped the new experience, mobile web has become the top platform for new signups. And for fun, in less than 6 months since fully shipping, we already have 800 thousand weekly users using our PWA like a native app (from their homescreen).

Now admittedly their previous mobile web experience was a dreadful doorslam, but still, those are some amazing statistics!

Maybe we’re underestimating the malleability of people’s expectations when it comes to the web on mobile. Perhaps the inertia we think we’re battling against isn’t such a problem as long as we give people a fast, reliable, engaging experience.

If you build that, they will come.

CSS grid in Internet Explorer 11

When I was in Boston, speaking on a lunchtime panel with Rachel at An Event Apart, we took some questions from the audience about CSS grid. Inevitably, a question about browser support came up—specifically about support in Internet Explorer 11.

(Technically, you can use CSS grid in IE11—in fact it was the first browser to ship a version of grid—but the prefixed syntax is different to the standard and certain features are missing.)

Rachel gave a great balanced response, saying that you need to look at your site’s stats to determine whether it’s worth the investment of your time trying to make a grid work in IE11.

My response was blunter. I said I just don’t consider IE11 as a browser that supports grid.

Now, that might sound harsh, but what I meant was: you’re already dividing your visitors into browsers that support grid, and browsers that don’t …and you’re giving something to those browsers that don’t support grid. So I’m suggesting that IE11 falls into that category and should receive the layout you’re giving to browsers that don’t support grid …because really, IE11 doesn’t support grid: that’s the whole reason why the syntax is namespaced by -ms.

You could jump through hoops to try to get your grid layout working in IE11, as detailed in a three-part series on CSS Tricks, but at that point, the amount of effort you’re putting in negates the time-saving benefits of using CSS grid in the first place.

Frankly, the whole point of prefixed CSS is that it isn’t used after a reasonable amount of time (originally, the idea was that it wouldn’t be used in production, but that didn’t last long). As we’ve moved away from prefixes to flags in browsers, I’m seeing the number of prefixed properties dropping, and that’s very, very good. I’ve stopped using autoprefixer on new projects, and I’ve been able to remove it from some existing ones—please consider doing the same.

And when it comes to IE11, I’ll continue to categorise it as a browser that doesn’t support CSS grid. That doesn’t mean I’m abandoning users of IE11—far from it. It means I’m giving them the layout that’s appropriate for the browser they’re using.

Remember, websites do not need to look exactly the same in every browser.

Twitter and Instagram progressive web apps

Since support for service workers landed in Mobile Safari on iOS, I’ve been trying a little experiment. Can I replace some of the native apps I use with progressive web apps?

The two major candidates are Twitter and Instagram. I added them to my home screen, and banished the native apps off to a separate screen. I’ve been using both progressive web apps for a few months now, and I have to say, they’re pretty darn great.

There are a few limitations compared to the native apps. On Twitter, if you follow a link from a tweet, it pops open in Safari, which is fine, but when you return to Twitter, it loads anew. This isn’t any fault of Twitter—this is the way that web apps have worked on iOS ever since they introduced their weird web-app-capable meta element. I hope this behaviour will be fixed in a future update.

Also, until we get web notifications on iOS, I need to keep the Twitter native app around if I want to be notified of a direct message (the only notification I allow).

Apart from those two little issues though, Twitter Lite is on par with the native app.

Instagram is also pretty great. It too suffers from some navigation issues. If I click through to someone’s profile, and then return to the main feed, the feed loads anew, losing my place. It would be great if this could be fixed.

For some reason, the Instagram web app doesn’t allow uploading multiple photos …which is weird, because I can upload multiple photos on my own site by adding the multiple attribute to the input type="file" in my posting interface.
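
(That attribute really is all it takes—the markup is something like this, with the accept attribute as an optional nicety:)

<input type="file" accept="image/*" multiple>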

Apart from that, though, it works great. And as I never wanted notifications from Instagram anyway, the lack of web notifications doesn’t bother me at all. In fact, because the progressive web app doesn’t keep nagging me about enabling notifications, it’s a more pleasant experience overall.

Something else that was really annoying with the native app was the preponderance of advertisements. It was really getting out of hand.

Well …(looks around to make sure no one is listening)… don’t tell anyone, but the Instagram progressive web app—i.e. the website—doesn’t have any ads at all!

Here’s hoping it stays that way.

The trimCache function in Going Offline

Paul Yabsley wrote to let me know about an error in Going Offline. It’s rather embarrassing because it’s code that I’m using in the service worker for adactio.com but for some reason I messed it up in the book.

It’s the trimCache function in Chapter 7: Tidying Up. That’s the reusable piece of code that recursively reduces the number of items in a specified cache (cacheName) to a specified amount (maxItems). On pages 95 and 96 I describe the process of creating the function which, in the book, ends up like this:

 function trimCache(cacheName, maxItems) {
   cacheName.open( cache => {
     cache.keys()
     .then( items => {
       if (items.length > maxItems) {
         cache.delete(items[0])
         .then(
           trimCache(cacheName, maxItems)
         ); // end delete then
       } // end if
     }); // end keys then
   }); // end open
 } // end function

See the problem? It’s right there at the start when I try to open the cache like this:

cacheName.open( cache => {

That won’t work. The open method only works on the caches object—I should be passing the name of the cache into the caches.open method. So the code should look like this:

caches.open( cacheName )
.then( cache => {

Everything else remains the same. The corrected trimCache function is here:

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
  .then( cache => {
    cache.keys()
    .then(items => {
      if (items.length > maxItems) {
        cache.delete(items[0])
        .then(
          trimCache(cacheName, maxItems)
        ); // end delete then
      } // end if
    }); // end keys then
  }); // end open then
} // end function

Sorry about that! I must’ve had some kind of brainfart when I was writing (and describing) that one line of code.

You may want to deface your copy of Going Offline by taking a pen to that code example. Normally I consider the practice of writing in books to be barbarism, but in this case …go for it.

New tools for art direction on the web

I’m in Boston right now, getting ready to speak at An Event Apart. This will be my second (and last) Event Apart of the year—the other time was in Seattle back in April. After that event, I wrote about how inspired I was:

It was interesting to see repeating, overlapping themes. From a purely technical perspective, three technologies that were front and centre were:

  • CSS grid,
  • variable fonts, and
  • service workers.

From listening to other attendees, the overwhelming message received was “These technologies are here—they’ve arrived.”

I was itching to combine those technologies on a project. Coincidentally, it was around that time that I started planning to publish The Gęsiówka Story. I figured I could use that as an opportunity to tinker with those front-end technologies that I was so excited about.

But I was cautious. I didn’t want to use the latest exciting technology just for the sake of it. I was very aware of the gravity of the material I was dealing with. Documenting the story of Gęsiówka was what mattered. Any front-end technologies I used had to be in support of that.

First of all, there was the typesetting. I don’t know about you, but I find choosing the right typefaces to be overwhelming. Despite all the great tips and techniques out there for choosing and pairing typefaces, I still find myself agonising over the choice—what if there’s a better choice that I’m missing?

In this case, because I wanted to use a variable font, I had a constraint that helped reduce the possibility space. I started to comb through v-fonts.com to find a suitable typeface—I was fairly sure I wanted a serious serif.

I had one other constraint. The font file had to include English, Polish, and German glyphs. That pretty much sealed the deal for Source Serif. That only has one variable axis—weight—but I decided that this could also be an interesting constraint: how much could I wrangle out of a single typeface just using various weights?

I ended up using font weights of 75, 250, 315, 325, 340, 350, 400, and 525. Most of them were for headings or one-off uses, with a font-weight of 315 for the body copy.

(And can I just say once again how impressed I am that the founding fathers of CSS were far-sighted enough to keep those font weight ranges free for future use?)
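
For illustration, here’s roughly what the CSS looks like—a sketch with an assumed font URL, using the weight-range descriptor from CSS Fonts level 4:

@font-face {
    font-family: 'Source Serif';
    src: url('/fonts/SourceSerifVariable.woff2') format('woff2');
    font-weight: 1 999; /* the range of weights the variable font can render */
}

body {
    font-family: 'Source Serif', serif;
    font-weight: 315;
}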

Getting the typography right posed an interesting challenge. This was a fairly long piece of writing, so it really needed to be readable without getting tiring. But at the same time, I didn’t want it to be exactly pleasant to read—that wouldn’t do the subject matter justice. I wanted the reader to feel the seriousness of the story they were reading, without being fatigued by its weight.

Colour and type went a long way to communicating that feeling. The grid sealed the deal.

The Gęsiówka Story is mostly one single column of text, so on the face of it, there isn’t much opportunity to go crazy with CSS Grid. But I realised I could use a grid to create a winding effect for the text. I had to be careful though: I didn’t want it to become uncomfortable to read. I wanted to create a slightly unsettling effect.

Every section element is turned into a seven-column grid container:

section {
    display: grid;
    grid-column-gap: 2em;
    grid-template-columns: 2em repeat(5, 1fr) 2em;
}

The first and last columns are the same width as the gutters (2em), effectively creating “outer” gutters for the grid. Each paragraph within the section takes up six of the seven columns. I use nth-of-type to alternate which six columns are used (the first six or the last six). That creates the staggered indentation:

section > p {
    grid-column: 1/7;
}
section > p:nth-of-type(even) {
    grid-column: 2/8;
}

Staggered grid.

That might seem like overkill just to indent every second paragraph by 4em, but I then used the same grid dimensions to lay out figure elements with images and captions.

section > figure {
    display: grid;
    grid-column-gap: 2em;
    grid-template-columns: 2em repeat(5, 1fr) 2em;
}

Then I can lay out differently proportioned images across different ranges of the grid:

section > figure.landscape > img {
    grid-column: 1/5;
}
section > figure.landscape > figcaption {
    grid-column: 5/8;
}
section > figure.portrait > img {
    grid-column: 1/4;
}
section > figure.portrait > figcaption {
    grid-column: 4/8;
}

Because they’re positioned on the same grid as the paragraphs, everything lines up nicely (and yes, if subgrid existed, I wouldn’t have to redeclare the grid dimensions for the figures).

Finally, I wanted to make sure that the whole thing could be read offline. After all, once you’ve visited the URL once, there’s really no reason to make any more requests to the server. Static documents—and books—are the perfect candidates for an “offline first” approach: always look in the cache, and only go to the network as a last resort.

In this case I used a variation of my minimal viable service worker script, and the result is a very short set of instructions. There’s a little bit of pre-caching going on: I grab the variable font and the HTML page itself (which includes the CSS inlined).
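
For the curious, a minimal offline-first service worker along those lines looks something like this (the cache name and font path are assumptions, not the actual script):

const cacheName = 'gesiowka-v1';
const preCacheList = [
  '/', // the HTML page, with the CSS inlined
  '/fonts/SourceSerifVariable.woff2' // the variable font
];

// Pre-cache the font and the page itself
addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open(cacheName)
    .then( cache => cache.addAll(preCacheList) )
  );
});

// Offline first: look in the cache, and only go to the network as a last resort
addEventListener('fetch', fetchEvent => {
  fetchEvent.respondWith(
    caches.match(fetchEvent.request)
    .then( responseFromCache => responseFromCache || fetch(fetchEvent.request) )
  );
});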

So there you have it: variable fonts, CSS grid, and service workers: three exciting front-end technologies, all of which can be applied as progressive enhancements on top of the core content.

Once again, I find that it’s personal projects that offer the most opportunities to try out new or interesting techniques. And The Gęsiówka Story is a very personal project indeed.

Praise for Going Offline

I’m very, very happy to see that my new book Going Offline is proving to be accessible and unintimidating to a wide audience—that was very much my goal when writing it.

People have been saying nice things on their blogs, which is very gratifying. It’s even more gratifying to see people use the knowledge gained from reading the book to turn those blogs into progressive web apps!

Sara Soueidan:

It doesn’t matter if you’re a designer, a junior developer or an experienced engineer — this book is perfect for anyone who wants to learn about Service Workers and take their Web application to a whole new level.

I highly recommend it. I read the book over the course of two days, but it can easily be read in half a day. And as someone who rarely ever reads a book cover to cover (I tend to quit halfway through most books), this says a lot about how good it is.

Eric Lawrence:

I was delighted to discover a straightforward, very approachable reference on designing a ServiceWorker-backed application: Going Offline by Jeremy Keith. The book is short (I’m busy), direct (“Here’s a problem, here’s how to solve it“), opinionated in the best way (landmine-avoiding “Do this“), and humorous without being confusing. As anyone who has received unsolicited (or solicited) feedback from me about their book knows, I’m an extremely picky reader, and I have no significant complaints on this one. Highly recommended.

Ben Nadel:

If you’re interested in the “offline first” movement or want to learn more about Service Workers, Going Offline by Jeremy Keith is a really gentle and highly accessible introduction to the topic.

Daniel Koskine:

Jeremy nails it again with this beginner-friendly introduction to Service Workers and Progressive Web Apps.

Donny Truong:

Jeremy’s technical writing is as superb as always. Similar to his first book for A Book Apart, which cleared up all my confusions about HTML5, Going Offline helps me put the pieces of the service workers’ puzzle together.

People have been saying nice things on Twitter too…

Aaron Gustafson:

It’s a fantastic read and a simple primer for getting Service Workers up and running on your site.

Ethan Marcotte:

Of course, if you’re looking to take your website offline, you should read @adactio’s wonderful book

Lívia De Paula Labate:

Ok, I’m done reading @adactio’s Going Offline book and as my wife would say, it’s the bomb dot com.

If that all sounds good to you, get yourself a copy of Going Offline in paperback, or ebook (or both).

Detecting image requests in service workers

In Going Offline, I dive into the many different ways you can use a service worker to handle requests. You can filter by the URL, for example; treating requests for pages under /blog or /articles differently from other requests. Or you can filter by file type. That way, you can treat requests for, say, images very differently to requests for HTML pages.

One of the ways to check what kind of request you’re dealing with is to see what’s in the accept header. Here’s how I show the test for HTML pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Handle your page requests here.
}

So, logically enough, I show the same technique for detecting image requests:

if (request.headers.get('Accept').includes('image')) {
    // Handle your image requests here.
}

That should catch any files that have image in the request’s accept header, like image/png or image/jpeg or image/svg+xml and so on.

But there’s a problem. For image requests, both Safari and Firefox now send a much broader accept header: */*

My if statement evaluates to false in those browsers. Sebastian Eberlein wrote about his workaround for this issue, which involves looking at file extensions instead:

if (request.url.match(/\.(jpe?g|png|gif|svg)$/)) {
    // Handle your image requests here.
}

So consider this post a patch for chapter five of Going Offline (page 68 specifically). Wherever you see:

if (request.headers.get('Accept').includes('image'))

Swap it out for:

if (request.url.match(/\.(jpe?g|png|gif|svg)$/))

And feel free to add any other image file extensions (like webp) in there too.
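
For example, with webp added, the test becomes:

if (request.url.match(/\.(jpe?g|png|gif|svg|webp)$/)) {
    // Handle your image requests here.
}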

Registering service workers

In chapter two of Going Offline, I talk about registering your service worker wrapped up in some feature detection:

<script>
if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js');
}
}
</script>

But I also make reference to a declarative way of doing this that isn’t very widely supported:

<link rel="serviceworker" href="/serviceworker.js">

No need for feature detection there. Thanks to the liberal error-handling model of HTML (and CSS), browsers will just ignore what they don’t understand, which isn’t the case with JavaScript.

Alas, it looks like that nice declarative alternative isn’t going to be making its way into browsers anytime soon. It has been removed from the HTML spec. That’s a shame. I have a preference for declarative solutions where possible—they’re certainly easier to teach. But in this case, the JavaScript alternative isn’t too onerous.

So if you’re reading Going Offline, when you get to the bit about someday using the rel value, you can cast a wistful gaze into the distance, or shed a tiny tear for what might have been …and then put it out of your mind and carry on reading.

Clearleft.com is a progressive web app

What’s that old saying? The cobbler’s children have no shoes that work offline. Or something.

It’s been over a year since the Clearleft site relaunched and I listed some of the next steps I had planned:

Service worker. It’s a no-brainer. Now that the Clearleft site is (finally!) running on HTTPS, having a simple service worker to cache static assets like CSS, JavaScript and some images seems like the obvious next step.

You know how it is. Those no-brainer tasks are exactly the kind of thing that end up on a to-do list without ever quite getting to-done. Meanwhile I’ve been writing and speaking about how any website can be a progressive web app. I think Alanis Morissette used to sing about this sort of situation.

Enough is enough! Clearleft.com is now a progressive web app. It has a manifest file and a service worker script.

The service worker logic is fairly straightforward, and taken almost verbatim from Going Offline. As you navigate around the site, the service worker applies different logic depending on the kind of file you’re requesting:

  • Pages are served fresh from the network, falling back to the cache when there’s a problem.
  • Everything else is served from the cache where possible, resorting to the network only if there’s no match in the cache—quite the performance boost!

In both cases, if a page or a file is retrieved from the network, it gets put into a cache. I’ve got one cache for pages, and another for everything else. And even if a file is retrieved from that cache, I still fire off a fetch request to grab a fresh copy for the cache. So while there’s a chance that a stale file might be served up, it will only ever be slightly stale, and the next time it’s requested, it’ll be fresh.
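
Here’s a sketch of that logic. The cache names are illustrative—the real code follows the patterns in Going Offline more closely:

addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    // Pages: network first, falling back to the cache, then the offline page
    fetchEvent.respondWith(
      fetch(request)
      .then( responseFromFetch => {
        let copy = responseFromFetch.clone();
        fetchEvent.waitUntil(
          caches.open('pages')
          .then( cache => cache.put(request, copy) )
        );
        return responseFromFetch;
      })
      .catch( fetchError => {
        console.error(fetchError);
        return caches.match(request)
        .then( responseFromCache => responseFromCache || caches.match('/offline') );
      })
    );
  } else {
    // Everything else: cache first, while fetching a fresh copy for next time
    fetchEvent.respondWith(
      caches.match(request)
      .then( responseFromCache => {
        let fetchPromise = fetch(request)
        .then( responseFromFetch => {
          let copy = responseFromFetch.clone();
          fetchEvent.waitUntil(
            caches.open('files')
            .then( cache => cache.put(request, copy) )
          );
          return responseFromFetch;
        });
        return responseFromCache || fetchPromise;
      })
    );
  }
});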

In the worst-case scenario, when a page can’t be retrieved from the network or the cache, you end up seeing a custom offline page. There you can see a list of any pages that are cached (meaning you can revisit them even without an internet connection).

A custom offline page showing a list of URLs.

It’s not ideal—page titles would be friendlier than URLs—but it’s a start. I’m sure I’ll revisit it soon. Honest.

Oh, and after a year of procrastinating about doing this, guess how long it took? About half a day. Admittedly, this isn’t my first progressive web app, and the more you build ’em, the easier it gets. Still, it’s a classic example of a small investment of time leading to a big improvement in performance and user experience.

If you think your company’s website could benefit from being a progressive web app (and believe me, it definitely could), you have a couple of options:

  1. Arm yourself with a copy of Going Offline and give it a go yourself. Or
  2. Get in touch with Clearleft. We can help you. (See, I can say that with a straight face now that we’re practicing what we preach.)

Either way, don’t dilly dally …like I did.

Expectations

I noticed something interesting recently about how I browse the web.

It used to be that I would notice if a site were responsive. Or, before responsive web design was a thing, I would notice if a site was built with a fluid layout. It was worthy of remark, because it was exceptional—the default was fixed-width layouts.

But now, that has flipped completely around. Now I notice if a site isn’t responsive. It feels …broken. It’s like coming across an embedded map that isn’t a slippy map. My expectations have reversed.

That’s kind of amazing. If you had told me ten years ago that liquid layouts and media queries would become standard practice on the web, I would’ve found it very hard to believe. I spent the first decade of this century ranting in the wilderness about how the web was a flexible medium, but I felt like the laughable guy on the street corner with an apocalyptic sandwich board. Well, who’s laughing now?

Anyway, I think it’s worth stepping back every now and then and taking stock of how far we’ve come. Mind you, in terms of web performance, the trend has unfortunately been in the wrong direction—big, bloated websites have become the norm. We need to change that.

Now, maybe it’s because I’ve been somewhat obsessed with service workers lately, but I’ve started to notice my expectations around offline behaviour changing recently too. It’s not that I’m surprised when I can’t revisit an article without an internet connection, but I do feel disappointed—like an opportunity has been missed.

I really notice it when I come across little self-contained browser-based games.

Those games are great! I particularly love Battleship Solitaire—it has a zen-like addictive quality to it. If I load it up in a browser tab, I can then safely go offline because the whole game is delivered in the initial download. But if I try to navigate to the game while I’m offline, I’m out of luck. That’s a shame. These snack-sized casual games feel like the perfect use-case for working offline (or, even if there is an internet connection, they could still be speedily served up from a cache).

I know that my expectations about offline behaviour aren’t shared by most people. The idea of visiting a site even when there’s no internet connection doesn’t feel normal …yet.

But perhaps that expectation will change. It’s happened before.

(And if you want to be ready when those expectations change, I’ve written Going Offline for you.)

AMPstinction

I’ve come to believe that the goal of any good framework should be to make itself unnecessary.

Brian said it explicitly of his PhoneGap project:

The ultimate purpose of PhoneGap is to cease to exist.

That makes total sense, especially if your code is a polyfill—those solutions are temporary by design. Autoprefixer is another good example of a piece of code that becomes less and less necessary over time.

But I think it’s equally true of any successful framework or library. If the framework becomes popular enough, it will inevitably end up influencing the standards process, thereby becoming dispensable.

jQuery is the classic example of this. There’s very little reason to use jQuery these days because you can accomplish so much with browser-native JavaScript. But the reason why you can accomplish so much without jQuery is because of jQuery. I don’t think we would have querySelector without jQuery. The library proved the need for the feature. The same is true for a whole load of DOM scripting features.

The same process is almost certain to occur with React—it’s a good bet there will be a standardised equivalent to the virtual DOM at some point.

When Google first unveiled AMP, its intentions weren’t clear to me. I hoped that it existed purely to make itself redundant:

As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.

Alas, as time has passed, that hope shows no signs of being fulfilled. If anything, I’ve noticed publishers using the existence of their AMP pages as a justification for just letting their “regular” pages put on weight.

Worse yet, the messaging from Google around AMP has shifted. Instead of pitching it as a format for creating parallel versions of your web pages, they’re now also extolling the virtues of having your AMP pages be the only version you publish:

In fact, AMP’s evolution has made it a viable solution to build entire websites.

On an episode of the Dev Mode podcast a while back, AMP was a hotly-debated topic. But even those defending AMP were doing so on the understanding that it was more a proof-of-concept than a long-term solution (and also that AMP is just for news stories—something else that Google are keen to change).

But now it’s clear that the Google AMP Project is being marketed more like a framework for the future: a collection of web components that prioritise performance …which is kind of odd, because that’s also what Google’s Polymer project is. The difference being that pages made with Polymer don’t get preferential treatment in Google’s search results. I can’t help but wonder how the Polymer team feels about AMP’s gradual pivot onto their territory.

If the AMP project existed in order to create a web where AMP was no longer needed, I think I could get behind it. But the more it’s positioned as the only viable solution to the web’s performance problems, the more uncomfortable I am with it.

Which, by the way, brings me to one of the most pernicious ideas around Google AMP—positioning anyone opposed to it as not caring about web performance. Nothing could be further from the truth. It’s precisely because performance on the web is so important that it deserves a long-term solution, co-created by all of us: not some commandments delivered to us from on high by one organisation, enforced through preferential treatment by that organisation’s monopoly in search.

It’s the classic logical fallacy:

  1. Performance! Something must be done!
  2. AMP is something.
  3. Now something has been done.

By marketing itself as the only viable solution to the web performance problem, I think the AMP project is doing itself a great disservice. If it positioned itself as an example to be emulated, I would welcome it.

I wish that AMP were being marketed more like a temporary polyfill. And as with any polyfill, I look forward to the day when AMP is no longer necessary.

I want AMP to become extinct. I genuinely think that the Google AMP team should share that wish.

HTTPS + service worker + web app manifest = progressive web app

I gave a quick talk at the Delta V conference in London last week called Any Site can be a Progressive Web App. I had ten minutes, but frankly I only needed enough time to say the title of the talk because, well, that was also the message.

There’s a common misconception that making a Progressive Web App means creating a Single Page App with an app-shell architecture. But the truth is that literally any website can benefit from the performance boost that results from the combination of HTTPS + Service Worker + Web App Manifest.

See how I define a progressive web app as being HTTPS + service worker + web app manifest? I’ve been doing that for a while. Here’s a post from last year called Progressing the web:

Literally any website can be a progressive web app:

  • switch over to HTTPS,
  • add a web app manifest, and
  • add a service worker.

That last step can be tricky if you’re new to service workers, but it’s not unsurmountable. It’s certainly a lot easier than completely rearchitecting your existing website to be a JavaScript-driven single page app.

Later I wrote a post called What is a Progressive Web App? where I compared the definition to responsive web design.

Regardless of the specifics of the name, what I like about Progressive Web Apps is that they have a clear definition. It reminds me of Responsive Web Design. Whatever you think of that name, it comes with a clear list of requirements:

  1. A fluid layout,
  2. Fluid images, and
  3. Media queries.

Likewise, Progressive Web Apps consist of:

  1. HTTPS,
  2. A service worker, and
  3. A Web App Manifest.

There’s more you can do in addition to that (just as there’s plenty more you can do on a responsive site), but the core definition is nice and clear.

But here’s the thing. Outside of the confines of my own website, it’s hard to find that definition anywhere.

On Google’s developer site, their definition uses adjectives like “reliable”, “fast”, and “engaging”. Those are all great adjectives, but more useful to a salesperson than a developer.

Over on the Mozilla Developer Network, their section on progressive web apps states:

Progressive web apps use modern web APIs along with traditional progressive enhancement strategy to create cross-platform web applications. These apps work everywhere and provide several features that give them the same user experience advantages as native apps. 

Hmm …I’m not so sure about that comparison to native apps (and I’m a little disturbed that the URL structure is /Apps/Progressive). So let’s click through to the introduction:

PWAs are web apps developed using a number of specific technologies and standard patterns to allow them to take advantage of both web and native app features.

Okay. Specific technologies. That’s good to hear. But instead of then listing those specific technologies, we’re given another list of adjectives (discoverable, installable, linkable, etc.). Again, like Google’s chosen adjectives, they’re very nice and desirable, but not exactly useful to someone who wants to get started making a progressive web app. That’s why I like to cut to the chase and say:

  • You need to be running on HTTPS,
  • Then you can add a service worker,
  • And don’t forget to add a web app manifest file.

If you do that, you’ve got a progressive web app. Now, to be fair, there’s a lot that I’m leaving out. Your site should be fast. Your site should be responsive (it is, after all, on the web). There’s not much point mucking about with service workers if you haven’t sorted out the basics first. But those three things—HTTPS + service worker + web app manifest—are specifically what distinguishes a progressive web app. You can—and should—have a reliable, fast, engaging website before turning it into a progressive web app.

Jason has been thinking about progressive web apps a lot lately (he should write a book or something), and he said to me:

I agree with you on the three things that comprise a PWA, but as far as I can tell, you’re the first to declare it as such.

I was quite surprised by that. I always assumed that I was repeating the three ingredients of a progressive web app, not defining them. But looking through all the docs out there, Jason might be right. It’s surprising because I assumed it was obvious why those three things comprise a progressive web app—it’s because they’re testable.

Lighthouse, PWA Builder, Sonarwhal, and other tools that evaluate your site will measure its progressive web app score based on the three defining factors (HTTPS, service worker, web app manifest). Then there’s Android’s Add to Home Screen prompt. Here, finally, we get a concrete description of what your site needs to do to pass muster:

  • Includes a web app manifest…
  • Served over HTTPS (required for service workers)
  • Has registered a service worker with a fetch event handler

(Although, as of this month, Chrome will no longer show the prompt automatically—you also have to write some JavaScript to handle the beforeinstallprompt event.)
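
For what it’s worth, the JavaScript involved needn’t be much. Here’s a rough sketch, assuming a hypothetical “install” button in the page (the button is my invention; the beforeinstallprompt event and its prompt() method are what Chrome provides):

  // Stash the event so the prompt can be shown later,
  // in response to a user action rather than automatically.
  let deferredPrompt;

  window.addEventListener('beforeinstallprompt', (event) => {
    event.preventDefault(); // suppress the automatic prompt
    deferredPrompt = event;
  });

  // Hypothetical install button somewhere in the page.
  document.querySelector('#installButton').addEventListener('click', () => {
    if (deferredPrompt) {
      deferredPrompt.prompt(); // now show Chrome's install prompt
      deferredPrompt = null;
    }
  });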

Anyway, if you’re looking to turn your website into a progressive web app, here’s what you need to do (assuming it’s already performant and responsive):

  1. Switch over to HTTPS. Certbot can help you here.
  2. Add a web app manifest.
  3. Add a service worker to your site so that it responds even when there’s no network connection.

That last step might sound like an intimidating prospect, but help is at hand: I wrote Going Offline for exactly this situation.
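
To give a flavour of what that third step involves, here’s a minimal sketch of a service worker that falls back to a pre-cached offline page when the network is unavailable (the cache name and the /offline.html URL are placeholders of mine, not prescriptions from the book):

  const cacheName = 'offline-fallback'; // placeholder name

  // During installation, pre-cache a simple offline page.
  addEventListener('install', (event) => {
    event.waitUntil(
      caches.open(cacheName)
        .then((cache) => cache.add('/offline.html'))
    );
  });

  // For page navigations, try the network first;
  // if that fails, respond with the cached offline page.
  addEventListener('fetch', (event) => {
    if (event.request.mode === 'navigate') {
      event.respondWith(
        fetch(event.request)
          .catch(() => caches.match('/offline.html'))
      );
    }
  });

A service worker along these lines also satisfies the “fetch event handler” requirement mentioned above.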

Design systems

Talking about scaling design can get very confusing very quickly. There are a bunch of terms that get thrown around: design systems, pattern libraries, style guides, and components.

The generally-accepted definition of a design system is that it’s the outer circle—it encompasses pattern libraries, style guides, and any other artefacts. But there’s something more. Just because you have a collection of design patterns doesn’t mean you have a design system. A system is a framework. It’s a rulebook. It’s what tells you how those patterns work together.

This is something that Cennydd mentioned recently:

Here’s my thing with the modularisation trend in design: where’s the gestalt?

In my mind, the design system is the gestalt. But Cennydd is absolutely right to express concern—I think a lot of people are collecting patterns and calling the resulting collection a design system. No. That’s a pattern library. You still need to have a framework for how to use those patterns.

I understand the urge to fixate on patterns. They’re small enough to be manageable, and they’re tangible—here’s a carousel; here’s a date-picker. But a design system is big and intangible.

Games are great examples of design systems. They’re frameworks. A game is a collection of rules and constraints. You can document those rules and constraints, but you can’t point to something and say, “That is football” or “That is chess” or “That is poker.”

Even though they consist entirely of rules and constraints, football, chess, and poker still produce an almost infinite possibility space. That’s quite overwhelming. So it’s easier for us to grasp instances of football, chess, and poker. We can point to a particular occurrence and say, “That is a game of football”, or “That is a chess match.”

But if you tried to figure out the rules of football, chess, or poker just from watching one particular instance of the game, you’d have your work cut out for you. It’s not impossible, but it is challenging.

Likewise, it’s not very useful to create a library of patterns without providing any framework for using those patterns.

I would go so far as to say that the actual code for the patterns is the least important part of a design system (or, certainly, it’s the part that should be most malleable and open to change). It’s more important that the patterns have been identified, named, described, and crucially, accompanied by some kind of guidance on usage.

I could easily imagine using a tool like Fractal to create a library of text snippets with no actual code. Those pieces of text—which provide information on where and when to use a pattern—could be more valuable than providing a snippet of code without any context.

One of the very first large-scale pattern libraries I can remember seeing on the web was Yahoo’s Design Pattern Library. Each pattern outlined:

  1. the problem being solved;
  2. when to use this pattern;
  3. when not to use this pattern.

Only then, almost incidentally, did they link off to the code for that pattern. But it was entirely possible to use the system of patterns without ever using that code. The code was just one instance of the pattern. The important part was the framework that helped you understand when and where it was appropriate to use that pattern.

I think we lose sight of the real value of a design system when we focus too much on the components. The components are the trees. The design system is the forest. As Paul asked:

What methodologies might we uncover if we were to focus more on the relationships between components, rather than the components themselves?

Service worker resources

At the end of my new book, Going Offline, I have a little collection of resources relating to service workers. Here’s how I introduce them:

If this book were a podcast, then this would be the point at which I would be imploring you to rate me on iTunes (or I’d be telling you about a really good mattress). Instead, I’d like to give you some hyperlinks so that you can explore some of the topics in this brief book in more detail.

It always feels a little strange to publish a list of hyperlinks in a physical book, so I figured I’d republish them here for easy access…

Explanations

Guides

Examples

Progressive web apps

Tools

Documentation