Exploring web technologies

Last week, I had two really enjoyable experiences discussing completely opposite ends of the web technology stack.

Tuesday is Codebar day here in Brighton. Clearleft hosted it at 68 Middle Street last week. I really, really enjoy coaching at Codebar. I particularly like teaching the absolute basics of HTML and CSS. There’s something so rewarding about seeing the “a-ha!” moments when concepts click with people. I also love answering the inevitable questions that arise, like “why is it like that?”, or “how do I do this?”

Fantastic coding tonight! Great to see you all. Thanks for coming and thanks @68MiddleSt & @clearleft for having us.

Thursday was devoted to the opposite end of the spectrum. I ran a workshop at Clearleft with some developers from one of our clients. The whole day was dedicated to exploring and evaluating up-and-coming web technologies. Basically, it was a chance to geek out about all the stuff I’ve been linking to or writing about. During the workshop I ended up making a lot of use of my tagging system here on adactio.com:

Prioritising topics for discussion.

Web components and service workers ended up at the top of the list of technologies to tackle, which was fortuitous, given my recent thoughts on comparing the two:

First of all, ask the question “who benefits from this technology?” In the case of service workers, it’s the end users. They get faster websites that handle network failure better. In the case of web components, there are no direct end-user benefits. Web components exist to make developers’ lives easier. That’s absolutely fine, but any developer convenience gained by the use of web components can’t come at the expense of the user—that price is too high.

The next question we usually ask when we’re evaluating a technology is “how well does it work?” Personally, I think it’s just as important to ask “how well does it fail?”

Those two questions turned out to be a good framework for the whole workshop. The question of how to evaluate technologies is something I’ve been thinking about a lot lately. I’m pretty sure it will be what my next conference talk is going to be all about.

You can read more about the structure of the workshop over on the Clearleft site. I’m looking forward to running it again sometime. But I’m equally looking forward to getting back to the basics at the next Codebar.

Class teacher

ES6 introduced a whole bunch of new features to JavaScript. One of those features is the class keyword. This introduction has been accompanied by a fair amount of concern and criticism.

Here’s the issue: classes in JavaScript aren’t quite the same as classes in other programming languages. In fact, technically, JavaScript doesn’t really have classes at all. But some say that technically isn’t important. If it looks like a duck, and quacks like a duck, shouldn’t we call it a duck even if technically it’s a somewhat similar—but not quite the same—species of waterfowl?

The argument for doing this is that classes are so familiar from other programming languages, that having some way of using classes in JavaScript—even if it isn’t technically the same as in other languages—brings a lot of benefit for people moving over to JavaScript from other programming languages.

But that comes with a side-effect. Anyone learning about classes in JavaScript will basically be told “here’s how classes work …but don’t look too closely.”

Now if you believe that outcomes matter more than understanding, then this is a perfectly acceptable trade-off. After all, we use computers every day without needing to understand the inner workings of every single piece of code under the hood.

It doesn’t sit well with me, though. I think that understanding how something works is important (in most cases). That’s why I favour learning underlying technologies first—HTML, CSS, JavaScript—before reaching for abstractions like frameworks and libraries. If you understand the way things work first, then your choice of framework, library, or any other abstraction is an informed choice.

The most common way that people refer to the new class syntax in JavaScript is to describe it as syntactical sugar. In other words, it doesn’t fundamentally introduce anything new under the hood, but it gives you a shorter, cleaner, nicer way of dealing with objects. It’s an abstraction. But because it’s an abstraction taken from other programming languages that work differently to JavaScript, it’s a bit of a fudge. It’s a little white lie. The class keyword in JavaScript will work just fine as long as you don’t try to understand it.
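
To illustrate what that sugar papers over, here’s a rough sketch (it glosses over details like strict mode and method enumerability, and the Musician name is just made up for the example):

// The class syntax...
class Musician {
  constructor(name) {
    this.name = name;
  }
  play() {
    return this.name + ' plays a tune';
  }
}

// ...does roughly the same job as the prototypal version underneath
// (renamed here so both can live in the same example):
function OldSchoolMusician(name) {
  this.name = name;
}
OldSchoolMusician.prototype.play = function () {
  return this.name + ' plays a tune';
};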

My personal opinion is that this isn’t healthy.

I’ve come across two fantastic orators who cemented this view in my mind. At Render Conf in Oxford earlier this year, I had the great pleasure of hearing Ashley Williams talk about the challenges of teaching JavaScript. Skip to the 15 minute mark to hear her introduce the issues thrown up by classes in JavaScript.

More recently, the mighty Kyle Simpson was on an episode of the JavaScript Jabber podcast. Skip to the 17 minute mark to hear him talk about classes in JavaScript.

(Full disclosure: Kyle also said some very kind things about some of my blog posts at the end of that episode, but you can switch it off before it gets to that bit.)

Both Ashley and Kyle bring a much-needed perspective to the discussion of language design. That perspective is the perspective of a teacher.

In his essay on W3C’s design principles, Bert Bos lists learnability among the fundamental driving forces (closely tied to readability). Learnability and teachability are two sides of the same coin, and I find it valuable to examine any language decisions through that lens. With that in mind, introducing a new feature into a language that comes with such low teachability value as to warrant a teacher actively telling a student not to learn how things really work …well, that just doesn’t seem right.

A little progress

I’ve got a fairly simple posting interface for my notes. A small textarea, an optional file upload, some checkboxes for syndicating to Twitter and Flickr, and a submit button.

Notes posting interface

It works fine although sometimes the experience of uploading a file isn’t great, especially if I’m on a slow connection out and about. I’ve been meaning to add some kind of Ajax-y progress type thingy for the file upload, but never quite got around to it. To be honest, I thought it would be a pain.

But then, in his excellent State Of The Gap hit parade of web technologies, Remy included a simple file upload demo. Turns out that all the goodies that have been added to XMLHttpRequest have made this kind of thing pretty easy (and I’m guessing it’ll be easier still once we have fetch).

I’ve made a little script that adds a progress bar to any forms that are POSTing data.

Feel free to use it, adapt it, and improve it. It isn’t using any ES6iness so there are some obvious candidates for improvement there.

It’s working a treat on my little posting interface. Now I can stare at a slowly-growing progress bar when I’m out and about on a slow connection.
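
If you’re curious about the general approach, here’s a minimal sketch of the idea—not my actual script—using the progress event on XMLHttpRequest’s upload object (the selector and the behaviour on completion are placeholders):

document.querySelector('form[method="post"]').addEventListener('submit', function (event) {
    event.preventDefault();
    var form = this;
    var progress = document.createElement('progress');
    form.appendChild(progress);
    var xhr = new XMLHttpRequest();
    // Update the progress element as the upload proceeds.
    xhr.upload.addEventListener('progress', function (e) {
        if (e.lengthComputable) {
            progress.max = e.total;
            progress.value = e.loaded;
        }
    });
    // Once the POST has completed, carry on to wherever the form was heading.
    xhr.addEventListener('load', function () {
        window.location = form.action;
    });
    xhr.open(form.method, form.action);
    xhr.send(new FormData(form));
});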

Accessible progressive disclosure revisited

I wrote a little while back about making an accessible progressive disclosure pattern. It’s very basic—just a few ARIA properties and a bit of JavaScript sprinkled onto some basic HTML. The HTML contains a button element that toggles the aria-hidden property on a chunk of markup.

Earlier this week I had a chance to hang out with accessibility experts Derek Featherstone and Devon Persing so I took the opportunity to pepper them with questions about this pattern. My main question was “Should I automatically focus the toggled content?”

Derek’s response was very perceptive. He wanted to know why I was using a button. Good question. When you think about it, what I’m doing is pointing from one element to another. On the web, we point with links.

There are no hard’n’fast rules about this kind of thing, but as Derek put it, it helps to think about whether the action involves controlling something (use a button) or taking the user somewhere (use a link). At first glance, the progressive disclosure pattern seems to be about controlling something—toggling the appearance of another element. But if I’m questioning whether to automatically focus that element, then really I’m asking whether I want to take the user to that place in the document—in other words, linking to it.

I decided to update the markup. Here’s what I had before:

<button aria-controls="content">Reveal</button>
<div id="content"></div>

Here’s what I have now:

<a href="#content" aria-controls="content">Reveal</a>
<div id="content"></div>

The logic in the JavaScript remains exactly the same:

  1. Find any elements that have an aria-controls attribute (these were buttons, now they’re links).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated link (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those links.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.
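
Putting those steps together, a minimal sketch of that logic might look something like this (it’s a sketch of the approach, not the exact code in the CodePen):

var toggles = document.querySelectorAll('a[aria-controls]');
Array.prototype.forEach.call(toggles, function (toggle) {
    var content = document.getElementById(toggle.getAttribute('aria-controls'));
    // Hide the content and make it focusable.
    content.setAttribute('aria-hidden', 'true');
    content.setAttribute('tabindex', '-1');
    toggle.setAttribute('aria-expanded', 'false');
    toggle.addEventListener('click', function (event) {
        event.preventDefault();
        var hidden = content.getAttribute('aria-hidden') === 'true';
        // Toggle the two ARIA attributes in tandem.
        content.setAttribute('aria-hidden', hidden ? 'false' : 'true');
        toggle.setAttribute('aria-expanded', hidden ? 'true' : 'false');
        // When revealing the content, move focus to it.
        if (hidden) {
            content.focus();
        }
    });
});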

You can see it in action on CodePen.

Accessible progressive disclosure

Over on The Session I have a few instances of a progressive disclosure pattern. It’s just your basic show/hide toggle: click on a button; see some more content. For example, there’s a “download” button for every tune that displays options to download the tune in different formats (ABC and midi).

To begin with, I was using the :checked pseudo-class pattern that Charlotte has documented so well. I really like that pattern. It feels nice and straightforward. But then I got some feedback from someone using the site:

the link for midi files is no longer coming up on the tune pages. I am blind so I rely on the midi’s when finding tunes for my students.

I wrote back saying the link to download midi files was revealed by the “download” option. The response:

Excellent. I have it now, I was just looking for the midi button which wasn’t there. the actual download button doesn’t read as a button under each version of the tune but now I know it’s there I know what I am doing. I am using the JAWS screen reader.

This was just one person …one person who took the time to write to me. What about other screen reader users?

I dabbled around with adding role="button" to the checkbox or the label, but that felt really icky (contradicting the inherent role of those elements) and it didn’t seem to make much difference anyway.

I decided to refactor the progressive disclosure to use JavaScript instead of just CSS. I wanted to make sure that accessibility was built into the functionality, rather than just bolted on. That’s why the code I’ve written doesn’t rely on the buttons having a particular class value; instead the buttons must have an aria-controls attribute that associates the button with the element it toggles (in much the same way that a for attribute associates a label with a form field).

Here’s the logic:

  1. Find any elements that have an aria-controls attribute (these should be buttons).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated button (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those buttons.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.

You can see it in action on CodePen.

I’m still playing around with this. I think the :focus styles are probably far too subtle right now—see this excellent presentation from Laura Palmaro for more on that. I’m also not sure if the revealed content should automatically take focus. I’ll see if I can get some feedback from people on The Session using screen readers—there’s quite a few of them.

Feel free to use my code but you might want to check out Jason’s code to do the same thing—his is bound to be nicer to work with.

Update: In response to this discussion, I’ve decided not to automatically focus the expanded content.

Where to start?

A lot of the talks at this year’s Chrome Dev Summit were about progressive web apps. This makes me happy. But I think the focus is perhaps a bit too much on the “app” part and not enough on the “progressive” part.

What I mean is that there’s an inevitable tendency to focus on technologies—Service Workers, HTTPS, manifest files—and not so much on the approach. That’s understandable. The technologies are concrete, demonstrable things, whereas approaches, mindsets, and processes are far more nebulous in comparison.

Still, I think that the most important facet of building a robust, resilient website is how you approach building it rather than what you build it with.

Many of the progressive app demos use server-side and client-side rendering, which is great …but that aspect tends to get glossed over:

Browsers without service worker support should always be served a fall-back experience. In our demo, we fall back to basic static server-side rendering, but this is only one of many options.

I think it’s vital to not think in terms of older browsers “falling back” but to think in terms of newer browsers getting a turbo-boost. That may sound like a nit-picky semantic subtlety, but it’s actually a radical difference in mindset.

Many of the arguments I’ve heard against progressive enhancement—like Tom’s presentation at Responsive Field Day—talk about the burdensome overhead of having to bolt on functionality for older or less-capable browsers (even Jake has done this). But the whole point of progressive enhancement is that you start with the simplest possible functionality for the greatest number of users. If anything gets bolted on, it’s the more advanced functionality for the newer or more capable browsers.

So if your conception of progressive enhancement is that it’s an added extra, I think you really need to turn that thinking around. And that’s hard. It’s hard because you need to rewire some well-engrained pathways.

There is some precedent for this though. It was really, really hard to convince people to stop using tables for layout and start using CSS instead. That was a tall order—completely change the way you approach building on the web. But eventually we got there.

When Ethan came out with Responsive Web Design, it was an equally difficult pill to swallow, not because of the technologies involved—media queries, percentages, etc.—but because of the change in thinking that was required. But eventually we got there.

These kinds of fundamental changes are inevitably painful …at first. After years of building websites using tables for layout, creating your first CSS-based layout was demoralisingly difficult. But the second time was a bit easier. And the third time, easier still. Until eventually it just became normal.

Likewise with responsive design. After years of building fixed-width websites, trying to build in a fluid, flexible way was frustratingly hard. But the second time wasn’t quite as hard. And the third time …well, eventually it just became normal.

So if you’re used to thinking of the all-singing, all-dancing version of your site as the starting point, it’s going to be really, really hard to instead start by building the most basic, accessible version first and then work up to the all-singing, all-dancing version …at first. But eventually it will just become normal.

For now, though, it’s going to take work.

The recent redesign of Google+ is a true case study in building a performant, responsive, progressive site:

With server-side rendering we make sure that the user can begin reading as soon as the HTML is loaded, and no JavaScript needs to run in order to update the contents of the page. Once the page is loaded and the user clicks on a link, we do not want to perform a full round-trip to render everything again. This is where client-side rendering becomes important — we just need to fetch the data and the templates, and render the new page on the client. This involves lots of tradeoffs; so we used a framework that makes server-side and client-side rendering easy without the downside of having to implement everything twice — on the server and on the client.

This took work. Had they chosen to rely on client-side rendering alone, they could have built something quicker. But I think it was worth laying that solid foundation. And the next time they need to build something this way, it’s going to be less work. Eventually it just becomes normal.

But it all starts with thinking of the server-side rendering as the default. Server-side rendering is not a fallback; client-side rendering is an enhancement.

That’s exactly the kind of mindset that enables Jack Franklin to build robust, resilient websites:

Now we’ll build the React application entirely on the server, before adding the client-side JavaScript right at the end.

I had a chance to chat briefly with Jack at the Edge conference in London and I congratulated him on the launch of a Go Cardless site that used exactly this technique. He told me that the decision to flip the switch and make it act as a single page app came right at the end of the project. Server-side rendering was the default; client-side rendering was added later.

The key to building modern, resilient, progressive sites doesn’t lie in browser technologies or frameworks; it lies in how we think about the task at hand; how we approach building from the ground up rather than the top down. Changing the way we fundamentally think about building for the web is inevitably going to be challenging …at first. But it will also be immensely rewarding.

Cache-limiting in Service Workers …again

Okay, so remember when I was talking about cache-limiting in Service Workers?

It wasn’t quite working:

The cache-limiting seems to be working for pages. But for some reason the images cache has blown past its allotted maximum of 20 (you can see the items in the caches under the “Resources” tab in Chrome under “Cache Storage”).

This is almost certainly because I’m doing something wrong or have completely misunderstood how the caching works.

Sure enough, I was doing something wrong. Thanks to Brandon Rozek and Jonathon Lopes for talking me through the problem.

In a nutshell, I’m mixing up synchronous instructions (like “delete the first item from a cache”) with asynchronous events (pretty much anything to do with fetching and caching with Service Workers).

Instead of trying to clean up a cache at the same time as I’m adding a new item to it, it’s better for me to have a clean-up function to run at a different time. So I’ve written that function:

var trimCache = function (cacheName, maxItems) {
    caches.open(cacheName)
        .then(function (cache) {
            cache.keys()
                .then(function (keys) {
                    if (keys.length > maxItems) {
                        cache.delete(keys[0])
                            .then(function () {
                                // Keep trimming until the cache is within its limit.
                                trimCache(cacheName, maxItems);
                            });
                    }
                });
        });
};

But now the question is …when should I run this function? What’s a good event to trigger a clean-up? I don’t think the activate event is going to work. I probably want something like background sync but I don’t think that’s quite ready for primetime yet.

In the meantime, if you can think of a good way of doing a periodic clean-up like this, please let me know.

Anyone? Anyone? Bueller?

In other Service Worker news, I’ve added a basic Service Worker to The Session. It caches static assets—CSS and JavaScript—and keeps another cache of site section index pages topped up. If the network connection drops (or the server goes down), there’s an offline page that gives a few basic options. Nothing too advanced, but better than nothing.

Update: Brandon has been tackling this problem and it looks like he’s found the solution: use the page load event to fire a postMessage payload to the active Service Worker:

window.addEventListener('load', function() {
    if (navigator.serviceWorker.controller) {
        navigator.serviceWorker.controller.postMessage({'command': 'trimCaches'});
    }
});

Then inside the Service Worker, I can listen for that message event and run my cache-trimming function:

self.addEventListener('message', function(event) {
    if (event.data.command == 'trimCaches') {
        trimCache(pagesCacheName, 35);
        trimCache(imagesCacheName, 20);
    }
});

So what happens is you visit a page, and the caching happens as usual. But then, once the page and all its assets are loaded, a message is fired off and the caches get trimmed.

I’ve updated my Service Worker and it looks like it’s working a treat.

Cache-limiting in Service Workers

When I was documenting my first Service Worker I mentioned that every time a user requests a page, I store that page in a cache for later (offline) use:

Right now I’m stashing any HTML pages the user visits into the cache. I don’t think that will get out of control—I imagine most people only ever visit just a handful of pages on my site. But there’s the chance that the cache could get quite bloated. Ideally I’d have some way of keeping the cache nice and lean.

I was thinking: maybe I should have a separate cache for HTML pages, and limit the number in that cache to, say, 20 or 30 items. Every time I push something new into that cache, I could pop the oldest item out.

I could imagine doing something similar for images: keeping a cache of just the most recent 10 or 20.

Well I’ve done that now. Here’s the updated Service Worker code.

I’ve got a function in there called stashInCache that takes a few arguments: which cache to use, the maximum number of items that should be in there, the request (URL), and the response:

var stashInCache = function(cacheName, maxItems, request, response) {
    caches.open(cacheName)
        .then(function (cache) {
            cache.keys()
                .then(function (keys) {
                    if (keys.length < maxItems) {
                        cache.put(request, response);
                    } else {
                        cache.delete(keys[0])
                            .then(function() {
                                cache.put(request, response);
                            });
                    }
                })
        });
};

It looks to see if the current number of items in the cache is less than the specified maximum:

if (keys.length < maxItems)

If so, go ahead and cache the item:

cache.put(request, response);

Otherwise, delete the first item from the cache and then put the item in the cache:

cache.delete(keys[0])
  .then(function() {
    cache.put(request, response);
  });

For HTML requests, I limit the cache to 35 items:

var copy = response.clone();
var cacheName = version + pagesCacheName;
var maxItems = 35;
stashInCache(cacheName, maxItems, request, copy);
return response;

For images, I’m limiting the cache to 20 items:

var copy = response.clone();
var cacheName = version + imagesCacheName;
var maxItems = 20;
stashInCache(cacheName, maxItems, request, copy);
return response;

Here’s my updated Service Worker.

The cache-limiting seems to be working for pages. But for some reason the images cache has blown past its allotted maximum of 20 (you can see the items in the caches under the “Resources” tab in Chrome under “Cache Storage”).

This is almost certainly because I’m doing something wrong or have completely misunderstood how the caching works. If you can spot what I’m doing wrong, please let me know.

My first Service Worker

I’ve made no secret of the fact that I’m really excited about Service Workers. I’m not alone. At the Coldfront conference in Copenhagen, pretty much every talk mentioned Service Workers.

Obviously I’m excited about what Service Workers enable: offline caching, background processes, push notifications, and all sorts of other goodies that allow the web to compete with native. But more than that, I’m really excited about the way that the Service Worker spec has been designed. Instead of being an all-or-nothing technology that you have to bet the farm on, it has been deliberately crafted to be used as an enhancement on top of existing sites (oh, how I wish that web components would follow a similar path).

I’ve got plenty of ideas on how Service Workers could be used to enhance a community site like The Session or the kind of events sites that we produce at Clearleft, but to begin with, I figured it would make sense to use my own personal site as a playground.

To start with, I’ve already conquered the first hurdle: serving my site over HTTPS. Service Workers require a secure connection. But you can play around with running a Service Worker locally if you run a copy of your site on localhost.

That’s how I started experimenting with Service Workers: serving on localhost, and stopping and starting my local Apache server with apachectl stop and apachectl start on the command line.

That reminds me of another interesting use case for Service Workers: it’s not just about the user’s network connection failing (say, going into a train tunnel); it’s also about your web server not always being available. Both scenarios are covered equally.

I would never have even attempted to start if it weren’t for the existing examples from people who have been generous enough to share their work:

Also, I knew that Jake was coming to FF Conf so if I got stumped, I could pester him. That’s exactly what ended up happening (thanks, Jake!).

So if you decide to play around with Service Workers, please, please share your experience.

It’s entirely up to you how you use Service Workers. I figured for a personal site like this, it would be nice to:

  1. Explicitly cache resources like CSS, JavaScript, and some images.
  2. Cache the homepage so it can be displayed even when the network connection fails.
  3. For other pages, have a fallback “offline” page to display when the network connection fails.

So now I’ve got a Service Worker up and running on adactio.com. It will only work in Chrome, Android, Opera, and the forthcoming version of Firefox …and that’s just fine. It’s an enhancement. As more and more browsers start supporting it, this Service Worker will become more and more useful.

How very future friendly!

The code

If you’re interested in the nitty-gritty of what my Service Worker is doing, read on. If, on the other hand, code is not your bag, now would be a good time to bow out.

If you want to jump straight to the finished code, here’s a gist. Feel free to take it, break it, copy it, improve it, or do anything else you want with it.

To start with, let’s establish exactly what a Service Worker is. I like this definition by Matt Gaunt:

A service worker is a script that is run by your browser in the background, separate from a web page, opening the door to features which don’t need a web page or user interaction.

register

From inside my site’s global JavaScript file—or I could do this from a script element inside my pages—I’m going to do a quick bit of feature detection for Service Workers. If the browser supports it, then I’m going to register my Service Worker by pointing to another JavaScript file, which sits at the root of my site:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js', {
    scope: '/'
  });
}

The serviceworker.js file sits in the root of my site so that it can act on any requests to my domain. If I put it somewhere like /js/serviceworker.js, then it would only be able to act on requests to the /js directory.

Once that file has been loaded, the installation of the Service Worker can begin. That means the script will be installed in the user’s browser …and it will live there even after the user has left my website.

install

I’m making the installation of the Service Worker dependent on a function called updateStaticCache that will populate a cache with the files I want to store:

self.addEventListener('install', function (event) {
  event.waitUntil(updateStaticCache());
});

That updateStaticCache function will be used for storing items in a cache. I’m going to make sure that the cache has a version number in its name, exactly as described in the Guardian’s use case. That way, when I want to update the cache, I only need to update the version number.

var staticCacheName = 'static';
var version = 'v1::';

Here’s the updateStaticCache function that puts the items I want into the cache. I’m storing my JavaScript, my CSS, some images referenced in the CSS, the home page of my site, and a page for displaying when offline.

function updateStaticCache() {
  return caches.open(version + staticCacheName)
    .then(function (cache) {
      return cache.addAll([
        '/path/to/javascript.js',
        '/path/to/stylesheet.css',
        '/path/to/someimage.png',
        '/path/to/someotherimage.png',
        '/',
        '/offline'
      ]);
    });
};

Because those items are part of the return statement for the Promise created by caches.open, the Service Worker won’t install until all of those items are in the cache. So you might want to keep them to a minimum.

You can still put other items in the cache, and not make them part of the return statement. That way, they’ll get added to the cache in their own good time, and the installation of the Service Worker won’t be delayed:

function updateStaticCache() {
  return caches.open(version + staticCacheName)
    .then(function (cache) {
      cache.addAll([
        '/path/to/somefile',
        '/path/to/someotherfile'
      ]);
      return cache.addAll([
        '/path/to/javascript.js',
        '/path/to/stylesheet.css',
        '/path/to/someimage.png',
        '/path/to/someotherimage.png',
        '/',
        '/offline'
      ]);
    });
}

Another option is to use completely different caches, but I’ve decided to just use one cache for now.

activate

When the activate event fires, it’s a good opportunity to clean up any caches that are out of date (by looking for anything that doesn’t match the current version number). I copied this straight from Nicolas’s code:

self.addEventListener('activate', function (event) {
  event.waitUntil(
    caches.keys()
      .then(function (keys) {
        return Promise.all(keys
          .filter(function (key) {
            return key.indexOf(version) !== 0;
          })
          .map(function (key) {
            return caches.delete(key);
          })
        );
      })
  );
});

fetch

The fetch event is fired every time the browser is going to request a file from my site. The magic of Service Worker is that I can intercept that request before it happens and decide what to do with it:

self.addEventListener('fetch', function (event) {
  var request = event.request;
  ...
});

POST requests

For a start, I’m going to just back off from any requests that aren’t GET requests:

if (request.method !== 'GET') {
  event.respondWith(
      fetch(request)
  );
  return;
}

That’s basically just replicating what the browser would do anyway. But even here I could decide to fall back to my offline page if the request doesn’t succeed. I do that using a catch clause appended to the fetch statement:

if (request.method !== 'GET') {
  event.respondWith(
      fetch(request)
          .catch(function () {
              return caches.match('/offline');
          })
  );
  return;
}

HTML requests

I’m going to treat requests for pages differently to requests for files. If the browser is requesting a page, then here’s the order I want:

  1. Try fetching the page from the network first.
  2. If that doesn’t work, try looking for the page in the cache.
  3. If all else fails, show the offline page.

First of all, I need to test to see if the request is for an HTML document. I’m doing this by sniffing the Accept headers, which probably isn’t the safest method:

if (request.headers.get('Accept').indexOf('text/html') !== -1) {

Now I try to fetch the page from the network:

event.respondWith(
  fetch(request)
);

If the network is working fine, this will return the response from the site and I’ll pass that along.

But if that doesn’t work, I’m going to look for a match in the cache. Time for a catch clause:

.catch(function () {
  return caches.match(request);
})

So now the whole event.respondWith statement looks like this:

event.respondWith(
  fetch(request)
    .catch(function () {
      return caches.match(request)
    })
);

Finally, I need to take care of the situation when the page can’t be fetched from the network and it can’t be found in the cache.

Now, I first tried to do this by adding a catch clause to the caches.match statement, like this:

return caches.match(request)
  .catch(function () {
    return caches.match('/offline');
  })

That didn’t work and for the life of me, I couldn’t figure out why. Then Jake set me straight. It turns out that caches.match will always return a response …even if that response is undefined. So a catch clause will never be triggered. Instead I need to return the offline page if the response from the cache is falsey:

return caches.match(request)
  .then(function (response) {
    return response || caches.match('/offline');
  })

With that cleared up, my code for handling HTML requests looks like this:

event.respondWith(
  fetch(request, { credentials: 'include' })
    .catch(function () {
      return caches.match(request)
        .then(function (response) {
          return response || caches.match('/offline');
        })
    })
);

Actually, there’s one more thing I’m doing with HTML requests. If the network request succeeds, I stash the response in the cache.

Well, that’s not exactly true. I stash a copy of the response in the cache. That’s because you’re only allowed to read the value of a response once. So if I want to do anything with it, I have to clone it:

var copy = response.clone();
caches.open(version + staticCacheName)
  .then(function (cache) {
    cache.put(request, copy);
  });

I do that right before returning the actual response. Here’s how it fits together:

if (request.headers.get('Accept').indexOf('text/html') !== -1) {
  event.respondWith(
    fetch(request, { credentials: 'include' })
      .then(function (response) {
        var copy = response.clone();
        caches.open(version + staticCacheName)
          .then(function (cache) {
            cache.put(request, copy);
          });
        return response;
      })
      .catch(function () {
        return caches.match(request)
          .then(function (response) {
            return response || caches.match('/offline');
          })
      })
  );
  return;
}

Okay. So that’s requests for pages taken care of.

File requests

I want to handle requests for files differently to requests for pages. Here’s my list of priorities:

  1. Look for the file in the cache first.
  2. If that doesn’t work, make a network request.
  3. If all else fails, and it’s a request for an image, show a placeholder.

Step one: try getting the file from the cache:

event.respondWith(
  caches.match(request)
);

Step two: if that didn’t work, go out to the network. Now remember, I can’t use a catch clause here, because caches.match will always return something: either a response or undefined. So here’s what I do:

event.respondWith(
  caches.match(request)
    .then(function (response) {
      return response || fetch(request);
    })
);

Now that I’m back to dealing with a fetch statement, I can use a catch clause to take care of the third and final step: if the network request doesn’t succeed, check to see if the request was for an image, and if so, display a placeholder:

.catch(function () {
  if (request.headers.get('Accept').indexOf('image') !== -1) {
    return new Response('<svg>...</svg>',  { headers: { 'Content-Type': 'image/svg+xml' }});
  }
})

I could point to a placeholder image in the cache, but I’ve decided to send an SVG on the fly using a new Response object.

Here’s how the whole thing looks:

event.respondWith(
  caches.match(request)
    .then(function (response) {
      return response || fetch(request)
        .catch(function () {
          if (request.headers.get('Accept').indexOf('image') !== -1) {
            return new Response('<svg>...</svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
          }
        })
    })
);

The overall shape of my code to handle fetch events now looks like this:

self.addEventListener('fetch', function (event) {
  var request = event.request;
  // Non-GET requests
  if (request.method !== 'GET') {
    event.respondWith(
      ... 
    );
    return;
  }
  // HTML requests
  if (request.headers.get('Accept').indexOf('text/html') !== -1) {
    event.respondWith(
      ...
    );
    return;
  }
  // Non-HTML requests
  event.respondWith(
    ...
  );
});

Feel free to peruse the code.

Next steps

The code I’m running now is fine for a first stab, but there’s room for improvement.

Right now I’m stashing any HTML pages the user visits into the cache. I don’t think that will get out of control—I imagine most people only ever visit just a handful of pages on my site. But there’s the chance that the cache could get quite bloated. Ideally I’d have some way of keeping the cache nice and lean.

I was thinking: maybe I should have a separate cache for HTML pages, and limit the number in that cache to, say, 20 or 30 items. Every time I push something new into that cache, I could pop the oldest item out.

I could imagine doing something similar for images: keeping a cache of just the most recent 10 or 20.

If you fancy having a go at coding that up, let me know.

Lessons learned

There were a few gotchas along the way. I already mentioned the fact that caches.match will always return something so you can’t use catch clauses to handle situations where a file isn’t found in the cache.

Something else worth noting is that this:

fetch(request);

…is functionally equivalent to this:

fetch(request)
  .then(function (response) {
    return response;
  });

That’s probably obvious but it took me a while to realise. Likewise:

caches.match(request);

…is the same as:

caches.match(request)
  .then(function (response) {
    return response;
  });

Here’s another thing… you’ll notice that sometimes I’ve used:

fetch(request);

…but sometimes I’ve used:

fetch(request, { credentials: 'include' } );

That’s because, by default, a fetch request doesn’t include cookies. That’s fine if the request is for a static file, but if it’s for a potentially-dynamic HTML page, you probably want to make sure that the Service Worker request is no different from a regular browser request. You can do that by passing through that second (optional) argument.

But probably the trickiest thing is getting your head around the idea of Promises. Writing JavaScript is generally a fairly procedural affair, but once you start dealing with then clauses, you have to come to grips with the fact that the contents of those clauses will return asynchronously. So statements written after the then clause will probably execute before the code inside the clause. It’s kind of hard to explain, but if you find problems with your Service Worker code, check to see if that’s the cause.
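
A contrived little example of what I mean:

Promise.resolve('a response')
    .then(function (response) {
        // This runs later, once the promise resolves.
        console.log('second: ' + response);
    });
// This runs straight away, before the code inside the then clause.
console.log('first');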

And remember, please share your code and your gotchas: it’s early days for Service Workers so every implementation counts.

Updates

I got some very useful feedback from Jake after I published this…

Expires headers

By default, JavaScript files on my server are cached for a month. But a Service Worker script probably shouldn’t be cached at all (or cached for a very, very short time). I’ve updated my .htaccess rules accordingly:

<FilesMatch "serviceworker.js">
  ExpiresDefault "now"
</FilesMatch>

Credentials

If a request is initiated by the browser, I don’t need to say:

fetch(request, { credentials: 'include' } );

It’s enough to just say:

fetch(request);

Scope

I set the scope parameter of my Service Worker to be “/” …but because the Service Worker is sitting in the root directory anyway, I don’t really need to do that. I could just register it with:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js');
}

If, on the other hand, the Service Worker file were sitting in a folder, but I wanted it to act on the whole site, then I would need to specify the scope:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/path/to/serviceworker.js', {
    scope: '/'
  });
}

…and I’d also need to send a special header. So it’s probably easiest to just put Service Worker scripts in the root directory.

Syndicating to Medium

When I brainpuked my thoughts on Google’s AMP project, I finished up by saying it was one more option for the Indie Web approach to syndication:

When I publish something on adactio.com in HTML, it already gets syndicated to different places. This is the Indie Web idea of POSSE: Publish (on your) Own Site, Syndicate Elsewhere. As well as providing RSS feeds, I’ve also got Twitter bots that syndicate to Twitter. An If This, Then That script pushes posts to Facebook. And if I publish a photo, it goes to Flickr. Now that Medium is finally providing a publishing API, I’ll probably start syndicating articles there as well. The more, the merrier.

Until Medium provided an API, I didn’t see much point in Medium. Let me clarify: I didn’t see much point in it for me. I’ve already got a website where I can publish whatever I like. For someone who doesn’t have their own website, I guess Medium—like Facebook, Twitter, Tumblr, etc.—provides a place to publish. I think this is what people mean when they use the word “platform” in a digital—rather than a North Sea oil drilling—sense.

Publishing exclusively on somebody else’s site works pretty well right up until the day the platform turns out to be a trap door and disappears from under you.

But I’m really puzzled by people who already have their own website choosing to publish on Medium instead. A shiny content farm is still a content farm.

“It’s the reach!” I’m told. That makes me sad. The whole point of the World Wide Web is that everybody has an equal opportunity to share their thoughts. You don’t need to ask anyone for permission. The gatekeepers of the previous century—record labels, book publishers, film producers—can’t stop you from making whatever you want and putting it out there for the world to see. And thanks to the principle of net neutrality baked into the design of TCP/IP, no one gets preferential treatment.

Notice that I said “people who already have their own website choosing to publish on Medium instead.” That last bit is important. Using Medium to publish copies of what you’ve already published on your own site gives you the best of both worlds: ownership and reach. That’s what Kevin does, for example. And Jeffrey. Until recently that was quite a pain in the ass, requiring a manual copy’n’paste process.

Back when Medium first launched, Dave Winer said:

Let me enter the URL of something I write in my own space, and have it appear here as a first class citizen. Indistinguishable to readers from something written here.

It still isn’t quite that simple, but now that Medium has a publishing API, it’s relatively straightforward to syndicate copies of your posts to Medium at the moment you publish on your own site.

Here’s what I did…

First of all, I signed up for a Medium account. For the longest time, even this simple step was off-limits for me because Medium used to require authentication using Twitter. By itself, that’s not a problem. The problem was that Medium demanded write permissions for my Twitter account. Just say no.

Now it’s possible to sign up for Medium using email so that rudeness is less of an issue (although I’d really like to see Medium stop being so demanding when it comes to Twitter permissions, especially as the interface copy bends over backwards to promise that Medium would never post to Twitter on my behalf …so why ask for permission to do just that?).

Once I had a Medium account, I needed two pieces of secret information in order to use the API.

The first piece is an access token.

I went to my settings on Medium and scrolled all the way to the bottom to the heading “Integration tokens”. I entered a description (“Syndication from adactio.com”) and pressed the “Get integration token” button.

Now I could use that token to get the second piece of information: my user ID.

I opened up a browser tab and went to this URL: https://api.medium.com/v1/me?accessToken= …adding my new secret integration token to the end.

That returns a JSON response. One of the fields in the JSON object has the name “id”. The value of that field is my user ID on Medium.

With those two pieces of information, I could make an authenticated POST request using cURL. Here’s the PHP code I’m using. It’s probably terrible but please feel free to use it, copy it, fork it, or do anything else you want with it.

When I run that code, I get a JSON response back from Medium’s API. Assuming I get a successful response, I can store the URL of the Medium copy and link out to it from here. That copy on Medium has a corresponding link rel="canonical" in the head of the document pointing back here to adactio.com.
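
So the head of the document over on Medium contains something along these lines (the URL here is just a placeholder):

<link rel="canonical" href="https://adactio.com/path/to/original-post">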

That’s pretty much it. I added a checkbox to my posting interface so that sending a copy of a post to Medium is just a toggle away. I’ll tick that checkbox when I post this. You could be reading this on my site or you could be reading the copy on Medium.

The code I wrote is pretty similar to how I post notes to Twitter and photos to Flickr. In fact, posting to Medium is more straightforward: Flickr requires three bits of secret information; Twitter requires four.

What would make this cross-posting with Medium really interesting would be if it could work in both directions. Then I’d be able to use the (very nice) writing interface on Medium to publish on adactio.com.

That’s not so far-fetched. I’ve already got a micropub endpoint here on my site (here’s the code). That’s how I’m able to use Instagram to post photos to my own site (using OwnYourGram). I let Instagram keep a copy of my photo. I’d be happy to let Medium keep a copy of my post.

We could make history:

We need to break out of the model where all these systems are monolithic and standalone. There’s art in each individual system, but there’s a much greater art in the union of all the systems we create.

Rosa and Dot

Today is October 13th. It is Ada Lovelace Day:

Ada Lovelace Day is an international celebration of the achievements of women in science, technology, engineering and maths (STEM).

Today is also a Tuesday. That means that Codebar is happening this evening in Brighton:

Codebar is a non-profit initiative that facilitates the growth of a diverse tech community by running regular programming workshops.

The Brighton branch of Codebar is run by Rosa, Dot, and Ryan.

Rosa and Dot are Ruby programmers. They’ve poured an incredible amount of energy into making the Brighton chapter of Codebar such a successful project. They’ve built up a wonderful, welcoming event where everyone is welcome. Whenever I’ve participated as a coach, I’ve always found it to be an immensely rewarding experience. For that, and for everything else they’ve accomplished, I thank them.

Brighton is lucky to have them.

Brighton in September

I know I say this every year, but this month—and this week in particular—is a truly wonderful time to be in Brighton. I am, of course, talking about The Brighton Digital Festival.

It’s already underway. Reasons To Be Creative just wrapped up. I managed to make it over to a few talks—Stacey Mulcahey, Jon, Evan Roth. The activities for the Codebar Code and Chips scavenger hunt are also underway. Tuesday evening’s event was a lot of fun; at the end of the night, everyone wanted to keep on coding.

I popped along to the opening of Georgina’s Familiars exhibition. It’s really good. There’s an accompanying event on Saturday evening called Unfamiliar Matter which looks like it’ll be great. That’s the same night as the Miniclick party though.

I guess clashing events are unavoidable. Like tonight. As well as the Guardians Of The Galaxy screening hosted by Chris (that I’ll be going to), there’s an Async special dedicated to building a 3D Lunar Lander.

But of course the big event is dConstruct tomorrow. I’m really excited about it. Partly that’s because I’m not the one organising it—it’s all down to Andy and Kate—but also because the theme and the line-up is right up my alley.

Andy has asked me to compere the event. I feel a little weird about that seeing as it’s his baby, but I’m also honoured. And, you know, after talking to most of the speakers for the podcast—which I enjoyed immensely—I feel like I can give an informed introduction for each talk.

I’m looking forward to this near future event.

See you there.

100 words 093

There are many web-related community events in Brighton. There’s something going on pretty much every evening. One of my favourite events is Codebar. It happens every Tuesday, rotating venues between various agency offices. Tonight it was Clearleft’s turn again.

At the start of the evening, students and teachers get paired up. I was helping some people with HTML and CSS as they worked their way through the tutorials.

It’s a great feeling to watch things “click”; seeing someone make their first web page, style their first element, and write their first hyperlink—the very essence of the World Wide Web.

100 words 074

We had an epic front-end pow-wow today. With plenty of Beerleft Goldenrods on hand we ploughed through discussing current client work and then turned to our guests. Today we were joined by Tracy Osborn, who told us all about her lovely new self-published book, Hello Web App. Then we got a demo from our friends at the confusingly named Ind.ie—no relation to the indie web—who gave us a demo of what they’ve been working on. We gave our feedback, including a heartfelt plea to dial down the rhetoric in their public pronouncements.

Then we went to the beach.

Small independent pieces, loosely joined

It was fascinating at Indie Web Camp Germany to see how much could be accomplished by taking some pre-existing small things and loosely joining them.

For example, there are already webmention and micropub plug-ins for quite a few CMSs. If you’re using WordPress or Jekyll, you can get pretty far pretty quickly by making use of what people have already provided. And after that Indie Web Camp, you can add Drupal and Kirby to the list of CMSs with readily-available components.

I was somewhat surprised—and very pleased—that people made use of some little PHP snippets that I had posted as gists. I deliberately posted them as gists to show how minimal and barebones the code could be—no need for a whole project, or installers, or dockering the node to yeoman the gulp, or whatever it is the cool kids do these days.

This modular approach also worked well for interface elements. Glenn and Aaron worked on separate projects to create small JavaScript enhancements for posting interfaces. Assemble enough of these enhancements together and before you know it, you’ve got something approaching Medium.

By the end of the second day, I was amazed to see how much progress people had made. Like Johannes says:

I was pretty impressed by how much people got done. At the final demo session, everyone had something he or she had done to update their website – although I’m pretty sure that the end of this event will not be the end of their efforts to try and own their stuff online.

It was quite inspiring. In fact, I think I’ve been inspired to have an Indie Web Camp in Brighton. I’m thinking we could have it at the same time as Indie Web Camp Portland, which is on July 11th and 12th.

Save the dates.

100 words 051

I’ve been thinking about what Harry said recently about logic in CSS. I think he makes an astute observation.

You can think of each part of a selector as a condition:

condition { }

That translates to code like:

if (condition) { }

So if you have a CSS selector like this:

condition1 condition2 condition3 { }

…it translates to code like this:

if (condition1) {
    if (condition2) {
        if (condition3) {
        }
    }
}

That doesn’t feel very elegant, even in its simpler form:

if (condition1 && condition2 && condition3) { }

I like Harry’s rule of thumb:

Think of your selectors as mini programs: Every time you nest or qualify, you are adding an if statement; read these ifs out loud to yourself to try and keep your selectors sane.

100 words 044

It was Clearleft’s turn to host Codebar again this evening. As always, it was great. I did my best to introduce some people to HTML and CSS, which was challenging, rewarding, and fun.

In the run-up to the event, I did a little spring cleaning of Clearleft’s bookshelves. I took some books on HTML, CSS, and JavaScript that weren’t being used any more and offered them to Codebar students for the taking.

I was also able to offer some more contemporary books thanks to the generosity of A Book Apart who kindly donated some of their fine volumes to Codebar.

BEMphasis

I’m working on a project with a team of developers who are trying out the BEM syntax for their class names. I’ve tried BEM before, but I’m not a huge fan of underscores (for no particularly good reason) so I tend to use a modified version that avoids those characters. Still, when it comes to coding style—tabs vs. spaces, camelCasing, underscores, hyphens, or whatever—my personal opinion takes a back seat to the group consensus. And on this project, the group has opted for proper BEM all the way, and I’m more than happy to go along with that.

When it comes to naming a modified version of a component in BEM, the syntax looks like this:

component--modifier

That raises a question about how you then deploy that class name in your HTML. You could just use the modified name:

<element class="component--modifier">

But then in your CSS you’d have to repeat all the style rules for the .component selector inside your rule block for the .component--modifier selector. Sass could help you out here, especially with its @extend functionality, but the final CSS is still going to contain duplicated rules.

The alternative is to keep your CSS lean and modular, and write your HTML like this:

<element class="component component--modifier">

Now you’ve taken the duplication out of the CSS and put it into your markup. It looks a little weird. But, on balance, it’s probably the lesser of two evils.

It strikes me that this pattern of always having the base component class name appear anywhere you have a component--modifier class name is something that you could programmatically check for. It should be relatively straightforward to write a lint tool that looks in the value of every class attribute and, if it finds any instances of foo--bar, checks to make sure that foo is also in there.

Sounds like it could be a nice little task for Grunt or Gulp. Maybe somebody has already made it.
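
Here’s a rough sketch of the kind of check I mean—written as a plain function over a string of HTML rather than a proper Grunt or Gulp task, and with a made-up function name:

function checkModifierClasses(html) {
    var problems = [];
    // Pull out every class attribute in the markup.
    var classAttributes = html.match(/class="([^"]*)"/g) || [];
    classAttributes.forEach(function (attribute) {
        var classes = attribute.replace(/class="|"/g, '').split(/\s+/);
        classes.forEach(function (className) {
            var base = className.split('--')[0];
            // A modifier class should always be accompanied by its base class.
            if (className.indexOf('--') !== -1 && classes.indexOf(base) === -1) {
                problems.push(className + ' is missing its base class ' + base);
            }
        });
    });
    return problems;
}

// For example:
checkModifierClasses('<div class="media--reversed">...</div>');
// returns ['media--reversed is missing its base class media']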

Mind you, it seems that most lint tools out there are focused very much on enforcing a coding style for CSS and JavaScript—not so much for HTML. I worry that this reflects the mindset of many front-end developers who view CSS and JavaScript as more important than markup …which is a bit odd considering that CSS and JavaScript are subservient to the HTML document that they’re styling and scripting.

100 words 032

We have a regular gathering at Clearleft every Thursday at 4pm. It’s our front-end pow-wow (there’s a corresponding “UX Laundromat” on Thursdays at 3pm, and every Friday at 4pm there’s a “Design …Thing”).

It’s basically like a design crit, but for code. People show what they’ve been working on, whether it’s client work or personal projects. It leads to some great cross-pollination of ideas and solutions.

I wrap it up by going through links I’ve tagged with “frontend”.

Everyone’s welcome to come along, whether they’re a front-end developer or not. If any clients are in the office, they’re invited along too.

Codebar Brighton

There’s been a whole series of events going on in Brighton this month under the banner of Spring Forward:

Spring Forward is a month-long celebration of the role of women in digital culture and runs throughout March in parallel with Women’s History Month.

Luckily for me, a lot of the events have been happening at 68 Middle Street—home of Clearleft—so I’ve been taking full advantage of as many as I can (also, if I go to an event that means that Tessa doesn’t have to stick around every night of the week to lock up afterwards). Charlotte has been going to even more.

I managed to get to Tech In Ten—run by She Codes Brighton—which was great, but I missed out on Pixels and Prosecco by Press Fire To Win which sounded like it was a lot of fun. And there are more events still to come, like She Says and Ladies That UX.

What’s great about Spring Forward events like She Codes, 300 Seconds, She Says, and Ladies That UX is that they aren’t one-offs; they’re happening all-year round, along with other great regular Brighton events like Async and UX Brighton.

And then there’s Codebar. I had heard about Codebar before, but Spring Forward was the first chance I had to get stuck in—it was being hosted at 68 Middle Street, so I said I’d stick around to lock up afterwards. I’m so glad I did. It was great!

In a nutshell, Codebar offers a chance for people who are under-represented in the world of programming and technology to get some free training by pairing them with tutors who volunteer their time. I offered to help out anyone who was learning HTML and CSS (after tamping down the inevitable inner voice of imposter syndrome that was asking “who are you to be teaching anyone anything?”).

I really, really enjoyed it. It was so nice to meet people from outside the world of web design and development. It was also a terrific reminder that the act of making websites is something that everybody should be able to participate in. This is for everyone.

Codebar Brighton takes place once a week, changing up the venue on rotation. As you can imagine, it takes a lot of work to maintain that momentum. It’s thanks to the tireless efforts of the seemingly indefatigable Ruby programmers Rosa and Dot that it’s such a great success. I am in their debt.