Tags: caching

Tuesday, July 2nd, 2019

The trimCache function in Going Offline …again

It seems that some code that I wrote in Going Offline is haunted. It’s the trimCache function.

First, there was the issue of a typo. Or maybe it’s more of a brainfart than a typo, but either way, there’s a mistake in the syntax that was published in the book.

Now it turns out that there’s also a problem with my logic.

To recap, this is a function that takes two arguments: the name of a cache, and the maximum number of items that cache should hold.

function trimCache(cacheName, maxItems) {

First, we open up the cache:

caches.open(cacheName)
.then( cache => {

Then, we get the items (keys) in that cache:

cache.keys()
.then(keys => {

Now we compare the number of items (keys.length) to the maximum number of items allowed:

if (keys.length > maxItems) {

If there are too many items, delete the first item in the cache—that should be the oldest item:

cache.delete(keys[0])

And then run the function again:

.then(
    trimCache(cacheName, maxItems)
);

A-ha! See the problem?

Neither did I.

It turns out that, even though I’m using then, the function will be invoked immediately, instead of waiting until the first item has been deleted.

Trys helped me understand what was going on by making a useful analogy. You know when you use setTimeout, you can’t put a function—complete with parentheses—as the first argument?

window.setTimeout(doSomething(someValue), 1000);

In that example, doSomething(someValue) will be invoked immediately—not after 1000 milliseconds. Instead, you need to create an anonymous function like this:

window.setTimeout( function() {
    doSomething(someValue)
}, 1000);

Well, it’s the same in my trimCache function. Instead of this:

cache.delete(keys[0])
.then(
    trimCache(cacheName, maxItems)
);

I need to do this:

cache.delete(keys[0])
.then( function() {
    trimCache(cacheName, maxItems)
});

Or, if you prefer the more modern arrow function syntax:

cache.delete(keys[0])
.then( () => {
    trimCache(cacheName, maxItems)
});

Either way, I have to wrap the recursive function call in an anonymous function.

Here’s a gist with the updated trimCache function.
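
For reference, here's roughly what the function ends up looking like with both corrections applied: the caches.open fix from the original erratum (further down this page) plus the anonymous function described above.

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
  .then( cache => {
    cache.keys()
    .then( keys => {
      if (keys.length > maxItems) {
        cache.delete(keys[0])
        .then( () => {
          trimCache(cacheName, maxItems);
        }); // end delete then
      } // end if
    }); // end keys then
  }); // end open then
} // end function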

What’s annoying is that this mistake wasn’t throwing an error. Instead, it was causing a performance problem. I’m using this pattern right here on my own site, and whenever my cache of pages or images gets too big, the trimCaches function would get called …and then wouldn’t stop running.

I’m very glad that—with the help of Trys at last week’s Homebrew Website Club Brighton—I was finally able to get to the bottom of this. If you’re using the trimCache function in your service worker, please update the code accordingly.

Management regrets the error.

Monday, June 24th, 2019

Am I cached or not?

When I was writing about the lie-fi strategy I’ve added to adactio.com, I finished with this thought:

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.”

Trys heard my plea, and came up with a very clever technique to alter the HTML of a page when it’s put into a cache.

It’s a function that reads the response body stream in, returning a new stream. Whilst reading the stream, it searches for the character codes that make up: <html. If it finds them, it tacks on a data-cached attribute.

Nice!
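
As a very rough illustration of the idea (Trys's actual version works on the byte stream as it's read, rather than buffering the whole body like this simplified sketch does):

// A simplified, non-streaming version of the idea: before a copy of a
// navigation response goes into the cache, mark the HTML so the client
// can tell it came from the cache
function markAsCached(response) {
  return response.text().then( body => {
    return new Response(body.replace('<html', '<html data-cached'), {
      status: response.status,
      statusText: response.statusText,
      headers: response.headers
    });
  });
}

// Usage inside a fetch handler, before putting the copy in the cache:
// markAsCached(networkResponse.clone()).then( copy => cache.put(request, copy) );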

But then I was discussing this issue with Tantek and Aaron late one night after Indie Web Camp Düsseldorf. I realised that I might have another potential solution that doesn’t involve the service worker at all.

Caveat: this will only work for pages that have some kind of server-side generation. This won’t work for static sites.

In my case, pages are generated by PHP. I’m not doing a database lookup every time you request a page—I’ve got a server-side cache of posts, for example—but there is a little bit of assembly done for every request: get the header from here; get the main content from over there; get the footer; put them all together into a single page and serve that up.

This means I can add a timestamp to the page (using PHP). I can mark the moment that it was served up. Then I can use JavaScript on the client side to compare that timestamp to the current time.

I’ve published the code as a gist.

In a script element on each page, I have this bit of coducken:

var serverTimestamp = <?php echo time(); ?>;

Now the JavaScript variable serverTimestamp holds the timestamp that the page was generated. When the page is put in the cache, this won’t change. This number should be the number of seconds since January 1st, 1970 in the UTC timezone (that’s what my server’s timezone is set to).

Starting with JavaScript’s Date object, I use a caravan of methods like toUTCString() and getTime() to end up with a variable called clientTimestamp. This will give the current number of seconds since January 1st, 1970, regardless of whether the page is coming from the server or from the cache.

var localDate = new Date();
var localUTCString = localDate.toUTCString();
var UTCDate = new Date(localUTCString);
var clientTimestamp = UTCDate.getTime() / 1000;

Then I compare the two and see if there’s a discrepancy greater than five minutes:

if (clientTimestamp - serverTimestamp > (60 * 5))

If there is, then I inject some markup into the page, telling the reader that this page might be stale:

document.querySelector('main').insertAdjacentHTML('afterbegin',`
  <p class="feedback">
    <button onclick="this.parentNode.remove()">dismiss</button>
    This page might be out of date. You can try <a href="javascript:window.location=window.location.href">refreshing</a>.
  </p>
`);

The reader has the option to refresh the page or dismiss the message.
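
Putting those pieces together, the whole client-side check amounts to something like this (a condensed sketch; the actual code is in the gist linked above):

var serverTimestamp = <?php echo time(); ?>; // seconds, written out by the server
var localDate = new Date();
var UTCDate = new Date(localDate.toUTCString());
var clientTimestamp = UTCDate.getTime() / 1000; // also seconds
if (clientTimestamp - serverTimestamp > (60 * 5)) {
  document.querySelector('main').insertAdjacentHTML('afterbegin',`
    <p class="feedback">
      <button onclick="this.parentNode.remove()">dismiss</button>
      This page might be out of date. You can try <a href="javascript:window.location=window.location.href">refreshing</a>.
    </p>
  `);
}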

It’s not foolproof by any means. If the visitor’s computer has their clock set weirdly, then the comparison might return a false positive every time. Still, I thought that using UTC might be a safer bet.

All in all, I think this is a pretty good method for detecting if a page is being served from a cache. Remember, the goal here is not to determine if the user is offline—for that, there’s navigator.onLine.

The upshot is this: if you visit my site with a crappy internet connection (lie-fi), then after three seconds you may be served with a cached version of the page you’re requesting (if you visited that page previously). If that happens, you’ll now also be presented with a little message telling you that the page isn’t fresh. Then it’s up to you whether you want to have another go.

I like the way that this puts control back into the hands of the user.

Monday, June 3rd, 2019

Self-Host Your Static Assets – CSS Wizardry

Trust no one! Harry enumerates the reasons why you should be self-hosting your assets (and busts some myths along the way).

There really is very little reason to leave your static assets on anyone else’s infrastructure. The perceived benefits are often a myth, and even if they weren’t, the trade-offs simply aren’t worth it. Loading assets from multiple origins is demonstrably slower.

Thursday, May 9th, 2019

Distinguishing cached vs. network HTML requests in a Service Worker | Trys Mudford

Less than 24 hours after I put the call out for a solution to this gnarly service worker challenge, Trys has come up with a solution.

Friday, March 8th, 2019

Tuning Performance for New and “Old” Friends | Filament Group, Inc., Boston, MA

This is a really clever technique from Scott that he unveiled at An Event Apart in Seattle. It uses a header sent by a service worker to distinguish between returning and new visitors—much neater than relying on a cookie. I’ve updated my service worker on The Session to use this technique now.

Tuesday, March 5th, 2019

Move Fast and Don’t Break Things by Scott Jehl

Scott Jehl is speaking at An Event Apart in Seattle—yay! His talk is called Move Fast and Don’t Break Things:

Performance is a high priority for any site of scale today, but it can be easier to make a site fast than to keep it that way. As a site’s features and design evolves, its performance is often threatened for a number of reasons, making it hard to ensure fast, resilient access to services. In this session, Scott will draw from real-world examples where business goals and other priorities have conflicted with page performance, and share some strategies and practices that have helped major sites overcome those challenges to defend their speed without compromises.

The title is a riff on the “move fast and break things” motto, which comes from a more naive time on the web. But Scott finds part of it relatable. Things break. We want to move fast without breaking things.

This is a performance talk, which is another kind of moving fast. Scott starts with a brief history of not breaking websites. He’s been chipping away at websites for 20 years now. Remember Positioning Is Everything? How about Quirksmode? That one's still around.

In the early days, building a website that was "not broken" was difficult, but it was difficult for different reasons. We were focused on consistency. We had to deal with differences between browsers. There were two ways of dealing with browsers: browser detection and feature detection.

The feature-based approach was more sustainable but harder. It fits nicely with the practice of progressive enhancement. It's a good mindset for dealing with the explosion of devices that kicked off later. Touch screens made us rethink our mouse- and hover-centric assumptions. That made us realise how much keyboard-driven access mattered all along.

Browsers exploded too. And our data networks changed. With this explosion of considerations, it was clear that our early ideas of “not broken” didn’t work. Our notion of what constituted “not broken” was itself broken. Consistency just doesn’t cut it.

But there was a comforting part to this too. It turned out that progressive enhancement was there to help …even though we didn’t know what new devices were going to appear. This is a recurring theme throughout Scott’s career. So given all these benefits of progressive enhancement, it shouldn’t be surprising that it turns out to be really good for performance too. If you practice progressive enhancement, you’re kind of a performance expert already.

People started talking about new performance metrics that we should care about. We’ve got new tools, like Page Speed Insights. It gives tangible advice on how to test things. Web Page Test is another great tool. Once you prove you’re a human, Web Page Test will give you loads of details on how a page loaded. And you get this great visual timeline.

This is where we can start to discuss the metrics we want to focus on. Traditionally, we focused on file size, which still matters. But for goal-setting, we want to focus on user-perceived metrics.

First Meaningful Content. It's about how soon the page appears to be useful to a user. Progressive enhancement is a perfect match for this! When you first make a request to a website, it's usually for a web page. But to render that page, it might need to request more files like CSS or JavaScript. All of this adds up. From a user perspective, if the HTML is downloaded, but the browser can't render it, that's broken.

The average time for this on the web right now is around six seconds. That’s broken. The render blockers are the problem here.

Consider assets like scripts. Can you get the browser to load them without holding up the rendering of the page? If you can add async or defer to a script element in the head, you should do that. Sometimes that’s not an option though.
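
For example (generic markup, not from Scott's talk):

<script src="/js/analytics.js" async></script>
<script src="/js/enhancements.js" defer></script>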

For CSS, it’s tricky. We’ve delivered the HTML that we need but we’ve got to wait for the CSS before rendering it. So what can you bundle into that initial payload?

You can use server push. This is a new technology that comes with HTTP2. H2, as it's called, is very performance-focused. Just turning on H2 will probably make your site faster. Server push allows the server to send files to the browser before the browser has even asked for them. You can do this with directives in Apache, for example. You could push CSS whenever an HTML file is requested. But we need to be careful not to go too far. You don't want to send too much.

Server push is great in moderation. But it is new, and it may not even be supported by your server.

Another option is to inline CSS (well, actually Scott, this is technically embedding CSS). It’s great for first render, but isn’t it wasteful for caching? Scott has a clever pattern that uses the Cache API to grab the contents of the inlined CSS and put a copy of its contents into the cache. Then it’s ready to be served up by a service worker.
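
A rough sketch of that CSS pattern might look like this (the element id, cache name, and URL are invented here for illustration; the real write-up is the Filament Group article linked later on this page):

// In the page: an inline script that copies the embedded CSS into a cache,
// keyed to the URL that later page views will request
if ('caches' in window) {
  var inlinedCSS = document.querySelector('style#critical-css').textContent;
  caches.open('static-assets').then( cache => {
    cache.put(
      new Request('/css/site.css'),
      new Response(inlinedCSS, { headers: { 'Content-Type': 'text/css' } })
    );
  });
}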

By the way, this isn’t just for CSS. You could grab the contents of inlined SVGs and create cached versions for later use.

So inlining CSS is good, but again, in moderation. You don't want to embed anything bigger than 15 or 20 kilobytes. You might want to separate out the critical CSS and only embed that on first render. You don't need to go through your CSS by hand to figure out what's critical—there are tools to do this that integrate with your build process. Embed that critical CSS into the head of your document, and also start preloading the full CSS. Here's a clever technique that turns a preload link into a stylesheet link:

<link rel="preload" href="site.css" as="style" onload="this.rel='stylesheet'">

Also include this:

<noscript><link rel="stylesheet" href="site.css"></noscript>

You can also optimise for return visits. It’s all about the cache.

In the past, we might’ve used a cookie to distinguish a returning visitor from a first-time visitor. But cookies kind of suck. Here’s something that Scott has been thinking about: service workers can intercept outgoing requests. A service worker could send a header that matches the current build of CSS. On the server, we can check for this header. If it’s not the latest CSS, we can server push the latest version, or inline it.

The neat thing about service workers is that they have to install before they take over. Scott makes use of this install event to put your important assets into a cache. Only once that is done do we start adding that extra header to requests.
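
A rough sketch of how that could look in the service worker (the cache name, header name, and version value here are all invented for illustration):

// During install, put the important assets into a cache
self.addEventListener('install', event => {
  event.waitUntil(
    caches.open('static-v1').then( cache => cache.addAll(['/css/site.css']) )
  );
});

// Once the service worker is in charge, decorate outgoing page requests
// with a header that the server can check
self.addEventListener('fetch', event => {
  const request = event.request;
  if (request.mode === 'navigate') {
    const headers = new Headers(request.headers);
    headers.set('X-Assets-Cached', 'v1');
    event.respondWith(
      fetch(request.url, { headers: headers, credentials: 'include' })
      .catch( () => fetch(request) )
    );
  }
});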

Watch out for an article on the Filament Group blog on this technique!

With performance, more weight doesn’t have to mean more wait. You can have a heavy page that still appears to load quickly by altering the prioritisation of what loads first.

Web pages are very heavy now. There’s a real cost to every byte. Tim’s WhatDoesMySiteCost.com shows that the CNN home page costs almost fifty cents to load for someone in America!

Time to interactive. This is the time before a user can use what's on the screen. The issue is almost always with JavaScript. The page looks usable, but you can't use it yet.

Addy Osmani suggests we should get to interactive in under five seconds on a 3G network on a median mobile device. Your iPhone is not a median mobile device. A typical phone takes six seconds to process a megabyte of JavaScript after it has downloaded. So even if the network is fast, the time to interactive can still be very long.

This all comes down to our industry's increasing reliance on JavaScript just to render content. There seem to be pendulum shifts between client-side and server-side rendering. It's been great to see libraries like Vue and Ember embrace server-side rendering.

But even with server-side rendering, there’s still usually a rehydration step where all the JavaScript gets parsed and that really affects time to interaction.

Code splitting can help. Webpack can do this. That helps with first-party JavaScript, but what about third-party JavaScript?
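
With webpack, for instance, a dynamic import() is enough to split a chunk out and load it on demand (a generic sketch; the file and function names are made up):

// The charting code is only requested when it's actually needed;
// webpack turns this dynamic import into a separate chunk
document.querySelector('#show-chart').addEventListener('click', async () => {
  const { renderChart } = await import('./chart.js');
  renderChart(document.querySelector('#chart'));
});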

Scott believes it's easier to make a fast website than to keep a website fast. And that's down to all the third-party scripts that people throw in: analytics, ads, tracking. They can wreak havoc on all your hard work.

These scripts apparently contribute to the business model, so it can be hard for us to make the case for removing them. Tools like SpeedCurve can help people stay informed on the impact of these scripts. It allows you to set up performance budgets and it shows you when pages go over budget. When that happens, we have leverage to step in and push back.

Assuming you lose that battle, what else can we do?

These days, lots of A/B testing and personalisation happens on the client side. The tooling is easy to use. But they are costly!

A typical problematic pattern is this: the server sends one version of the page, and once the page is loaded, the whole page gets replaced with a different layout targeted at the user. This leads to a terrifying new metric that Scott calls Second Meaningful Content.

Assuming we can't remove the madness, what can we do? We could at least not do this for first-time visits. We could load the scripts asynchronously. We can preload the scripts at the top of the page. But ideally we want to move these things to the server. Server-side A/B testing and personalisation have existed for a while now.

Scott has been experimenting with a middleware solution. There’s this idea of server workers that Cloudflare is offering. You can manipulate the page that gets sent from the server to the browser—all the things you would do for an A/B test. Scott is doing this by using comments in the HTML to demarcate which portions of the page should be filtered for testing. The server worker then deletes a block for some users, and deletes a different block for other users. Scott has written about this approach.

The point here isn’t about using Cloudflare. The broader point is that it’s much faster to do these things on the server. We need to defend our user’s time.

Another issue, other than third-party scripts, is the page weight on home pages and landing pages. Marketing teams love to fill these things with enticing rich imagery and carousels. They’re really difficult to keep performant because they change all the time. Sometimes we’re not even in control of the source code of these pages.

We can advocate for new best practices like responsive images. The srcset attribute on the img element; the picture element for when you need more control. These are great tools. What’s not so great is writing the markup. It’s confusing! Ideally we’d have a CMS drive this, but a lot of the time, landing pages fall outside of the purview of the CMS.

Scott has been using Vue.js to make a responsive image builder—a form that people can paste their URLs into, which spits out the markup to use. Anything we can do by creating tools like these really helps to defend the performance of a site.

Another thing we can do is lazy loading. Focus on the assets. The BBC homepage uses some lazy loading for images—they blink into view as you scroll down the page. They use LazySizes, which you can find on GitHub. You use data- attributes to list your image sources. Scott realises that LazySizes is not progressive enhancement. He wouldn't recommend using it on all images, just some images further down the page.
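
The markup for that looks something like this (a generic example of the LazySizes pattern, not the BBC's actual code):

<script src="lazysizes.min.js" async></script>
<img data-src="/images/photo.jpg" class="lazyload" alt="A description of the photo">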

But thankfully, we won’t need these workarounds soon. Soon we’ll have lazy loading in browsers. There’s a lazyload attribute that we’ll be able to set on img and iframe elements:

<img src=".." alt="..." lazyload="on">

It’s not implemented yet, but it’s coming in Chrome. It might be that this behaviour even becomes the default way of loading images in browsers.

If you dig under the hood of the implementation coming in Chrome, it actually loads all the images, but the ones being lazyloaded are only sent partially, with a 206 (Partial Content) response. That gives enough information for the browser to lay out the page without loading the whole image initially.

To wrap up, Scott takes comfort from the fact that there are resilient patterns out there to help us. And remember, it is our job to defend the user’s experience.

Monday, March 4th, 2019

Cache-Control for Civilians – CSS Wizardry

Harry breaks down cache-control headers into steps that even I can understand. I'll be using this as a reference for sure.

Thursday, December 6th, 2018

Introducing Background Fetch  |  Web  |  Google Developers

I’m going to have to read through this article by Jake a few times before I begin to wrap my head around this background fetch thing, but it looks like it would be perfect for something like the dConstruct Audio Archive, where fairly large files can be saved for offline listening.

Saturday, November 24th, 2018

PushAPI without Notifications | Seblog

Remember when I wrote about using push without notifications? Sebastiaan has written up the details of the experiment he conducted at Indie Web Camp Berlin.

Tuesday, November 20th, 2018

Push and ye shall receive | CSS-Tricks

Imagine a PWA podcast app that works offline and silently receives and caches new podcasts. Sweet. Now we need a permissions model that allows for silent notifications.

Tuesday, November 13th, 2018

Inlining or Caching? Both Please! | Filament Group, Inc., Boston, MA

This just blew my mind! A fiendishly clever pattern that allows you to inline resources (like critical CSS) and cache that same content for later retrieval by a service worker.

Crazy clever!

Sunday, November 11th, 2018

Push without notifications

On the first day of Indie Web Camp Berlin, I led a session on going offline with service workers. This covered all the usual use-cases: pre-caching; custom offline pages; saving pages for offline reading.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

The Push API is what makes push notifications possible on the web. There are a lot of moving parts—browser, server, service worker—and, frankly, it’s way over my head. But I’m familiar with the general gist of how it works. Here’s a typical flow:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker creates a notification linking to the URL, interrupting the user, and generally adding to the weight of information overload.

Here’s what Sebastiaan wanted to investigate: what if that last step weren’t so intrusive? Here’s the alternate flow he wanted to test:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker fetches the contents of the URL provided in the push message and caches the page. Silently.

It worked.
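
In service worker terms, the final step of that flow might look something like this (a minimal sketch; the shape of the push payload and the cache name are assumptions):

// Instead of showing a notification, quietly fetch and cache the pushed URL
self.addEventListener('push', event => {
  const data = event.data ? event.data.json() : {};
  if (!data.url) return;
  event.waitUntil(
    caches.open('pages').then( cache => cache.add(data.url) )
  );
});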

I think this could be a real game-changer. I don’t know about you, but I’m very, very wary of granting websites the ability to send me push notifications. In fact, I don’t think I’ve ever given a website permission to interrupt me with push notifications.

You’ve seen the annoying permission dialogues, right?

In Firefox, it looks like this:

Will you allow name-of-website to send notifications?

[Not Now] [Allow Notifications]

In Chrome, it’s:

name-of-website wants to

Show notifications

[Block] [Allow]

But in actual fact, these dialogues are asking for permission to do two things:

  1. Receive messages pushed from the server.
  2. Display notifications based on those messages.

There’s no way to ask for permission just to do the first part. That’s a shame. While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.

Think of the use cases:

  • I grant push permission to a magazine. When the magazine publishes a new article, it’s cached on my device.
  • I grant push permission to a podcast. Whenever a new episode is published, it’s cached on my device.
  • I grant push permission to a blog. When there’s a new blog post, it’s cached on my device.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

There’s plenty of opportunity for abuse—the cache could get filled with content. But websites can already do that, and they don’t need to be granted any permissions to do so; just by visiting a website, it can add multiple files to a cache.

So it seems that the reason for the permissions dialogue is all about displaying notifications …not so much about receiving push messages from the server.

I wish there were a way to implement this background-caching pattern without requiring the user to grant permission to a dialogue that contains the word “notification.”

I wonder if the act of adding a site to the home screen could implicitly grant permission to allow use of the Push API without notifications?

In the meantime, the proposal for periodic synchronisation (using background sync) could achieve similar results, but in a less elegant way; periodically polling for new content instead of receiving a push message when new content is published. Also, it requires permission. But at least in this case, the permission dialogue should be more specific, and wouldn’t include the word “notification” anywhere.

Friday, September 28th, 2018

Thoughts on Offline-first | Trys Mudford

Service Workers have such huge potential power, and I feel like we (developers on the web) have barely scratched the surface with what’s possible.

Needless to say, I couldn’t agree more!

Trys is thinking through some of the implications of service workers, like how we refresh stale content, and how we deal with slow networks—something that's actually more of a challenge than dealing with no network connection at all.

There’s some good food for thought here.

I’m so excited to see how we can use Service Workers to improve the web.

Saturday, September 1st, 2018

Offline Content with Service Worker — Chris Ruppel

A step-by-step walkthrough of a really useful service worker pattern: allowing users to save articles for offline reading at the click of a button (kind of like adding the functionality of Instapaper or Pocket to your own site).

Thursday, August 16th, 2018

On HTTPS and Hard Questions - TimKadlec.com

A great post by Tim following on from the post by Eric I linked to last week.

Is a secure site you can’t access better than an insecure one you can?

He rightly points out that security without performance is exclusionary.

…we’ve made a move to increase the security of the web by doing everything we can to get everything running over HTTPS. It’s undeniably a vital move to make. However this combination—poor performance but good security—now ends up making the web inaccessible to many.

Security. Performance. Accessibility. All three matter.

Tuesday, August 7th, 2018

Securing Web Sites Made Them Less Accessible – Eric’s Archived Thoughts

This is a heartbreaking observation by Eric. He’s not anti-HTTPS by any stretch, but he is pointing out that caching servers become a thing of the past on a more secure web.

Can we do anything? For users of up-to-date browsers, yes: service workers create a “good” man in the middle that sidesteps the HTTPS problem, so far as I understand. So if you’re serving content over HTTPS, creating a service worker should be one of your top priorities right now, even if it’s just to do straightforward local caching and nothing fancier.

Tuesday, July 10th, 2018

When 7 KB Equals 7 MB - Cloud Four

I remember Jason telling me about this weird service worker caching behaviour a little while back. This piece is a great bit of sleuthing in tracking down the root causes of this strange issue, followed up with a sensible solution.

Thursday, July 5th, 2018

The trimCache function in Going Offline

Paul Yabsley wrote to let me know about an error in Going Offline. It’s rather embarrassing because it’s code that I’m using in the service worker for adactio.com but for some reason I messed it up in the book.

It’s the trimCache function in Chapter 7: Tidying Up. That’s the reusable piece of code that recursively reduces the number of items in a specified cache (cacheName) to a specified amount (maxItems). On page 95 and 96 I describe the process of creating the function which, in the book, ends up like this:

 function trimCache(cacheName, maxItems) {
   cacheName.open( cache => {
     cache.keys()
     .then( items => {
       if (items.length > maxItems) {
         cache.delete(items[0])
         .then(
           trimCache(cacheName, maxItems)
         ); // end delete then
       } // end if
     }); // end keys then
   }); // end open
 } // end function

See the problem? It’s right there at the start when I try to open the cache like this:

cacheName.open( cache => {

That won’t work. The open method only works on the caches object—I should be passing the name of the cache into the caches.open method. So the code should look like this:

caches.open( cacheName )
.then( cache => {

Everything else remains the same. The corrected trimCache function is here:

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
  .then( cache => {
    cache.keys()
    .then(items => {
      if (items.length > maxItems) {
        cache.delete(items[0])
        .then(
          trimCache(cacheName, maxItems)
        ); // end delete then
      } // end if
    }); // end keys then
  }); // end open then
} // end function

Sorry about that! I must’ve had some kind of brainfart when I was writing (and describing) that one line of code.

You may want to deface your copy of Going Offline by taking a pen to that code example. Normally I consider the practice of writing in books to be barbarism, but in this case …go for it.

Update: There was another error in the code for trimCache! Here’s the fix.

Tuesday, June 5th, 2018

Clearleft.com is a progressive web app

What’s that old saying? The cobbler’s children have no shoes that work offline. Or something.

It’s been over a year since the Clearleft site relaunched and I listed some of the next steps I had planned:

Service worker. It’s a no-brainer. Now that the Clearleft site is (finally!) running on HTTPS, having a simple service worker to cache static assets like CSS, JavaScript and some images seems like the obvious next step.

You know how it is. Those no-brainer tasks are exactly the kind of thing that end up on a to-do list without ever quite getting to-done. Meanwhile I’ve been writing and speaking about how any website can be a progressive web app. I think Alanis Morissette used to sing about this sort of situation.

Enough is enough! Clearleft.com is now a progressive web app. It has a manifest file and a service worker script.

The service worker logic is fairly straightforward, and taken almost verbatim from Going Offline. As you navigate around the site, the service worker applies different logic depending on the kind of file you’re requesting:

  • Pages are served fresh from the network, falling back to the cache when there’s a problem.
  • Everything else is served from the cache where possible, resorting to the network only if there’s no match in the cache—quite the performance boost!

In both cases, if a page or a file is retrieved from the network, it gets put into a cache. I’ve got one cache for pages, and another for everything else. And even if a file is retrieved from that cache, I still fire off a fetch request to grab a fresh copy for the cache. So while there’s a chance that a stale file might be served up, it will only ever be slightly stale, and the next time it’s requested, it’ll be fresh.
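
In outline, the fetch logic is something like this (a simplified sketch of the strategies just described, not the actual Clearleft service worker; the cache names are made up):

const pagesCacheName = 'pages';
const assetsCacheName = 'assets';

self.addEventListener('fetch', event => {
  const request = event.request;
  if (request.mode === 'navigate') {
    // Pages: network first, falling back to the cache, stashing a fresh copy
    event.respondWith(
      fetch(request)
      .then( response => {
        const copy = response.clone();
        caches.open(pagesCacheName).then( cache => cache.put(request, copy) );
        return response;
      })
      .catch( () => caches.match(request) )
    );
  } else {
    // Everything else: cache first, but still fetch a fresh copy for next time
    event.respondWith(
      caches.match(request).then( cached => {
        const fetched = fetch(request).then( response => {
          const copy = response.clone();
          caches.open(assetsCacheName).then( cache => cache.put(request, copy) );
          return response;
        });
        return cached || fetched;
      })
    );
  }
});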

In the worst-case scenario, when a page can’t be retrieved from the network or the cache, you end up seeing a custom offline page. There you can see a list of any pages that are cached (meaning you can revisit them even without an internet connection).

A custom offline page showing a list of URLs.

It’s not ideal—page titles would be friendlier than URLs—but it’s a start. I’m sure I’ll revisit it soon. Honest.
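
The listing itself only needs a few lines on the offline page (a sketch, assuming a cache named pages and a list element to fill; not the actual code):

// On the custom offline page: list the URLs of any cached pages
caches.open('pages')
.then( cache => cache.keys() )
.then( keys => {
  const list = document.querySelector('ul.cached-pages'); // hypothetical element
  keys.forEach( request => {
    list.insertAdjacentHTML('beforeend', `<li><a href="${request.url}">${request.url}</a></li>`);
  });
});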

Oh, and after a year of procrastinating about doing this, guess how long it took? About half a day. Admittedly, this isn’t my first progressive web app, and the more you build ‘em, the easier it gets. Still, it’s a classic example of a small investment of time leading to a big improvement in performance and user experience.

If you think your company’s website could benefit from being a progressive web app (and believe me, it definitely could), you have a couple of options:

  1. Arm yourself with a copy of Going Offline and give it a go yourself. Or
  2. Get in touch with Clearleft. We can help you. (See, I can say that with a straight face now that we’re practicing what we preach.)

Either way, don’t dilly dally …like I did.

Fresher service workers, by default

“Ah, this is good news!”, I thought, reading this update about how service worker scripts won’t be cached.

And that was the moment when I realised what an utter nerd I had become.