Tags: async

Making Resilient Web Design work offline

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing so it’s safe to take a fairly aggressive offline-first approach. In fact, a static unchanging book is one of the few situations that AppCache works for. Of course a service worker is better, but until AppCache is removed from browsers (and until service workers are supported across the board), I’m using both. I wouldn’t recommend that for most sites though—for most sites, use a service worker as an enhancement, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers but I knew that there was a good chance that some of the content would be getting tweaked at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything; all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.
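
The install step itself looks something like this. This is a minimal sketch rather than the book’s actual code—the cache name and the asset list are illustrative placeholders:

// A sketch of the install step; the cache name and
// the asset list are placeholders, not the real file list
const staticCacheName = 'static-v1';
const staticAssets = [
    '/',
    '/css/styles.css',
    '/js/scripts.js'
    // ...plus every page and image on the site
];

addEventListener('install', installEvent => {
    installEvent.waitUntil(
        // Open the cache and stash every static asset in it
        caches.open(staticCacheName)
        .then( cache => cache.addAll(staticAssets) )
    );
});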

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
      // If so, fetch a fresh copy from the network in the background
      // NETWORK
          // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
      // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fall back to a placeholder as a last resort.

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
.then( responseFromCache => {
    // Did we find the file in the cache?
    if (responseFromCache) {
        return responseFromCache;
    }

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
.then( responseFromCache => {
    // Did we find the file in the cache?
    if (responseFromCache) {
        // If so, fetch a fresh copy from the network in the background
        event.waitUntil(
            // NETWORK
            fetch(request)
            .then( responseFromFetch => {
                // Stash the fresh copy in the cache
                caches.open(staticCacheName)
                .then( cache => {
                    cache.put(request, responseFromFetch);
                });
            })
        );
        return responseFromCache;
    }

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
.then( responseFromCache => {
    if (responseFromCache) {
        event.waitUntil(
            fetch(request)
            .then( responseFromFetch => {
                console.log('Got a response from the network.');
                caches.open(staticCacheName)
                .then( cache => {
                    cache.put(request, responseFromFetch);
                });
            })
        );
        console.log('Got a response from the cache.');
        return responseFromCache;
    }

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all the browsers yet. In browsers that haven’t implemented it, the code I’ve written will fail.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using it when a file isn’t found in the cache and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
.then( responseFromFetch => {
    return responseFromFetch;
})

I want to also store that response in the cache, but I want to do it asynchronously—I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it).

fetch(request)
.then( responseFromFetch => {
    // Stash a fresh copy in the cache in the background
    let responseCopy = responseFromFetch.clone();
    event.waitUntil(
        caches.open(staticCacheName)
        .then( cache => {
            cache.put(request, responseCopy);
        })
    );
    return responseFromFetch;
})
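
The one part of the comment outline not shown above is the OFFLINE fallback. Stripped of the cache-first and top-up logic, a sketch of just that branch might look like this—the placeholder image and offline page filenames are assumptions:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    fetchEvent.respondWith(
        fetch(request)
        .catch( error => {
            // OFFLINE
            // If the request is for an image, show an offline placeholder
            // (assuming the request carries an Accept header)
            if (request.headers.get('Accept').includes('image')) {
                return caches.match('/img/offline.png');
            }
            // If the request is for a page, show an offline message
            return caches.match('/offline.html');
        })
    );
});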

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}
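
That appcache.html page has just one job: referencing the manifest. It needn’t be anything more than this (a sketch—the manifest filename is an assumption):

<!DOCTYPE html>
<html manifest="/manifest.appcache">
<head>
<meta charset="utf-8">
<title>Offline</title>
</head>
<body>
<!-- This page exists solely to trigger AppCache in legacy browsers -->
</body>
</html>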

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.

Brighton in September

I know I say this every year, but this month—and this week in particular—is a truly wonderful time to be in Brighton. I am, of course, talking about The Brighton Digital Festival.

It’s already underway. Reasons To Be Creative just wrapped up. I managed to make it over to a few talks—Stacey Mulcahey, Jon, Evan Roth. The activities for the Codebar Code and Chips scavenger hunt are also underway. Tuesday evening’s event was a lot of fun; at the end of the night, everyone wanted to keep on coding.

I popped along to the opening of Georgina’s Familiars exhibition. It’s really good. There’s an accompanying event on Saturday evening called Unfamiliar Matter which looks like it’ll be great. That’s the same night as the Miniclick party though.

I guess clashing events are unavoidable. Like tonight. As well as the Guardians Of The Galaxy screening hosted by Chris (that I’ll be going to), there’s an Async special dedicated to building a 3D Lunar Lander.

But of course the big event is dConstruct tomorrow. I’m really excited about it. Partly that’s because I’m not the one organising it—it’s all down to Andy and Kate—but also because the theme and the line-up are right up my alley.

Andy has asked me to compere the event. I feel a little weird about that seeing as it’s his baby, but I’m also honoured. And, you know, after talking to most of the speakers for the podcast—which I enjoyed immensely—I feel like I can give an informed introduction for each talk.

I’m looking forward to this near-future event.

See you there.

Async, Ajax, and animation

I hadn’t been to one of Brighton’s Async JavaScript meetups for quite a while, but I made it along last week. Now that it’s taking place at 68 Middle Street, it’s a lot easier to get to …seeing as the Clearleft office is right upstairs.

James Da Costa gave a terrific presentation on something called Pjax. In related news, it turns out that the way I’ve been doing Ajax all along is apparently called Pjax.

Back when I wrote Bulletproof Ajax, I talked about using Hijax. The basic idea is:

  1. First, build an old-fashioned website that uses hyperlinks and forms to pass information to the server. The server returns whole new pages with each request.
  2. Now, use JavaScript to intercept those links and form submissions and pass the information via XMLHttpRequest instead. You can then select which parts of the page need to be updated instead of updating the whole page.

So basically your JavaScript is acting like a dumb waiter shuttling requests for page fragments back and forth between the browser and the server. But all the clever stuff is happening on the server, not the browser. To the end user, there’s no difference between that and a site that’s putting all the complexity in the browser.
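
A bare-bones sketch of that dumb-waiter pattern might look like this—the selectors are assumptions, and a real implementation would need to handle form submissions and edge cases too:

// Intercept clicks on links and fetch just a fragment instead
document.addEventListener('click', function (event) {
    var link = event.target.closest('a');
    if (!link) return;
    event.preventDefault();
    var request = new XMLHttpRequest();
    request.open('GET', link.href);
    request.onload = function () {
        // Update just the part of the page that changed
        // (assuming the server returns only the fragment)
        document.querySelector('main').innerHTML = request.responseText;
    };
    request.onerror = function () {
        // If anything goes wrong, fall back to a full-page request
        window.location = link.href;
    };
    request.send();
});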

In fact, the only time you’d really notice a difference is when something goes wrong: in the Hijax model, everything just falls back to full-page requests but keeps on working. That’s the big difference between this approach and the current vogue for “single page apps” that do everything in the browser—when something goes wrong there, the user gets bupkis.

Pjax introduces an extra piece of the puzzle—which didn’t exist when I wrote Bulletproof Ajax—and that’s pushState, part of HTML5’s History API, to keep the browser’s URL updated. Hence, pushState + Ajax = Pjax.
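
Bolting pushState onto the sketch above takes one extra line: once the fragment has been swapped in, update the address bar to match (again, a sketch, not a full implementation):

request.onload = function () {
    document.querySelector('main').innerHTML = request.responseText;
    // Keep the URL in the address bar in sync with the content:
    // pushState + Ajax = Pjax
    history.pushState(null, document.title, link.href);
};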

As you can imagine, I was nodding in vigorous agreement with everything James was demoing. It was refreshing to find that not everyone is going down the Ember/Angular route of relying entirely on JavaScript for core functionality. I was beginning to think that nobody cared about progressive enhancement any more, or that maybe I was missing something fundamental, but it turns out I’m not crazy after all: James’s demo showed how to write front-end code responsibly.

What was fascinating though, was hearing why people were choosing to develop using Pjax. It isn’t necessarily that they care about progressive enhancement, robustness, and universal access. Rather, it’s often driven by the desire to stay within the server-side development environment that they’re comfortable with. See, for example, DHH’s explanation of why 37 Signals is using this approach:

So you get all the advantages of speed and snappiness without the degraded development experience of doing everything on the client.

It sounds like they’re doing the right thing for the wrong reasons (a wrong reason being “JavaScript is icky!”).

A lot of James’s talk was focused on the user experience of the interfaces built with Hijax/Pjax/whatever. He had some terrific examples of how animation can make an enormous difference. That inspired me to do a little bit of tweaking to the Ajaxified/Hijaxified/Pjaxified portions of The Session.

Whenever you use Hijax to intercept a link, it’s now up to you to provide some sort of immediate feedback to the user that something is happening—normally the browser would take care of this (remember Netscape’s spinning lighthouse?)—but when you hijack that click, you’re basically saying “I’ll take care of this.” So you could, for example, display a spinning icon.

One little trick I’ve used is to insert an empty progress element.

Normally the progress element takes max and value attributes to show how far along something has progressed:

<progress max="100" value="75">75%</progress>

But if you leave those out, then it’s an indeterminate progress bar:

<progress>loading...</progress>

The rendering of the progress bar will vary from browser to browser, and that’s just fine. Older browsers that don’t understand the progress element will display whatever’s between the opening and closing tags.

Voila! You’ve got a nice lightweight animation to show that an Ajax request is underway.
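
In practice, that means appending the element when the Ajax request starts and removing it once the response has been dealt with—something like this, where the container is an assumption:

// Show an indeterminate progress bar while the request is in flight
var progress = document.createElement('progress');
progress.textContent = 'loading...';
document.querySelector('main').appendChild(progress);

// ...and once the response has been handled:
progress.parentNode.removeChild(progress);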

Layered

It’s been a busy week in Brighton. Tantek was in town for a few days, which is always a recipe for enjoyable shenanigans.

The latter half of the week has been a whirlwind of different events. There was a Skillswap on Wednesday, and on Thursday I gave a talk at the Async meet-up, which was quite productive. It gave me a chance to marshal some of my thoughts on responsive enhancement.

The week finished with Layer Tennis. I was honoured—and somewhat intimidated—to be asked to provide the commentary for the Moss vs. Whalen match. Holy crap! Those guys are talented. I mean, I knew that anyway but to see them produce the goods under such a tight deadline was quite something.

Meanwhile, I just blathered some words into a textarea. When it was all done, I read back what I had written and it’s actually not that bad:

  1. There Will Be Blood
  2. Pukeworthy
  3. Plastered
  4. Bacon Nation
  5. Zoom In. Now Enhance.
  6. It Ain’t Meat, Babe
  7. Longpork Is For Closers
  8. Bass. How Low Can You Go?
  9. Dead Rising
  10. Troll Man
  11. Craven Applause

It was a somewhat stressful exercise in writing on demand, but it was a fun way to finish up the week.

Now, however, I must pack a bag and fly to San Diego. No rest for the wicked.

JavaScript jamboree

It’s been a fun post-dConstruct week. Tantek has been staying in Brighton being, as always, the perfect guest. On his last night in the country, we went along to Async, the local twice-monthly JavaScript meet-up, host to a show’n’tell this time ’round.

Tantek demoed his Cassis project. It’s nuts. He’s creating polyglot scripts that are simultaneously JavaScript and PHP, as well as having the ability to report which context they are running in. I feel partly responsible for this madness: he got the idea the last time he was in Brighton after reading Bulletproof Ajax and deciding that he didn’t want to write the same logic twice in two different programming languages. The really crazy thing is that he’s got it working.

Prem, who organised the event, showed his Sandie code: a security mechanism that allows external scripts to be loaded and arbitrary JavaScript to be executed without affecting the global scope. It’s smart stuff that could be incredibly useful for his sqwidget work.

Mark demoed one of the coolest bookmarklets I’ve seen in a while: Snoopy:

It’s intended for use on mobile browsers where you can’t view-source to poke around under the hood of sites to see how they’re built.

If the lack of “view source” on the iPad and iPhone has been bothering you, Snoopy is your friend.

Alas, we had to leave the Async awesomeness early to rendez-vous with the Mozilla HTML5 meet-up in The Eagle so I didn’t even get to see Jim demo the disco snake that he made at Music Hack Day last weekend.