
Less JavaScript

Every front-end developer at Clearleft went to FFConf last Friday: me, Mark, Graham, Charlotte, and Danielle. We weren’t about to pass up the opportunity to attend a world-class dev conference right here in our home base of Brighton.

The day was unsurprisingly excellent. All the speakers brought their A-game on a wide range of topics. Of course JavaScript was covered, but there was also plenty of mindfood on CSS, accessibility, progressive enhancement, dev tools, creative coding, and even emoji.

Normally FFConf would be a good opportunity to catch up with some Pauls from the Google devrel team, but because of an unfortunate scheduling clash this year, all the Pauls were at Chrome Dev Summit 2016 on the other side of the Atlantic.

I’ve been catching up on the videos from the event. There’s plenty of tech-related stuff: dev tools, web components, and plenty of talk about progressive web apps. But there was also a very, very heavy focus on performance. I don’t just mean performance at the shallow scale of file size and optimisation, but a genuine questioning of the impact of our developer workflows and tools.

In his talk on service workers (what else?), Jake makes the point that not everything needs to be a single page app, echoing Ada’s talk at FFConf.

He points out that if you really want fast rendering, nothing on the client side quite beats a server render.

They’ve written a lot of JavaScript to make this quite slow.

Unfortunately, all too often, I hear people say that a progressive web app must be a single page app. And I am not so sure. You might not need a single page app. A single page app can end up being a lot of work and slower. There’s a lot of cargo-culting around single page apps.

Alex followed up his barnstorming talk from the Polymer Summit with some more uncomfortable truths about how mobile phones work.

Cell networks are basically kryptonite to the protocols and assumptions that the web was built on.

And JavaScript frameworks aren’t helping. Quite the opposite.

But make no mistake: if you’re using one of today’s more popular JavaScript frameworks in the most naive way, you are failing by default. There is no sugarcoating this.

Today’s frameworks are mostly a sign of ignorance, or privilege, or both. The good news is that we can fix the ignorance.

Adoption

Tom wrote a post on Ev’s blog a while back called JavaScript Frameworks: Distribution Channels for Good Ideas (I’ve been hoping he’d publish it on his own site so I’d have a more permanent URL to point to, but so far, no joy). It’s well worth a read.

I don’t really have much of an opinion on his central point that browser makers should work more closely with framework makers. I’m not so sure I agree with the central premise that frameworks are going to be around for the long haul. I think good frameworks—like jQuery—should aim to make themselves redundant.

But anyway, along the way, Tom makes this observation:

Google has an institutional tendency to go it alone.

JavaScript not good enough? Let’s create Dart to replace it. HTML not good enough? Let’s create AMP to replace it. I’m just waiting for them to announce Google Style Sheets.

I don’t really mind these inventions. We’re not forced to adopt them, and generally, we don’t. Tom again:

They poured enormous time and money into Dart, even building an entire IDE, without much to show for it. Contrast Dart’s adoption with the adoption of TypeScript and Flow, which layer improvements on top of JavaScript instead of trying to replace it.

See, that’s a really, really good point. It’s so much easier to get people to adjust their behaviour than to change it completely.

Sass is a really good example of this. You can take any .css file, save it as a .scss file, and now you’re using Sass. Then you can start using features (or not) as needed. Very smart.

Incidentally, I’m very curious to know how many people use the scss syntax (which is the same as CSS) compared to how many people use the sass indented syntax (the one with significant whitespace). I don’t think Dan even mentioned the indented syntax in his brilliant book Sass for Web Designers.

Or compare the adoption of Sass to the adoption of HAML. Now, admittedly, the disparity there might be because Sass adds new features, whereas HAML is a purely stylistic choice. But I think the more fundamental difference is that Sass—with its scss syntax—only requires you to slightly adjust your behaviour, whereas something like HAML requires you to go all in right from the start.

This is something that has been on my mind a lot lately while I’ve been preparing my new talk on evaluating technology (the talk went down very well at An Event Apart San Francisco, by the way—that’s a relief). In the talk, I made a reference to one of Grace Hopper’s famous quotes:

Humans are allergic to change.

Now, Grace Hopper subsequently says:

I try to fight that.

I contrast that with the approach that Tim Berners-Lee and Robert Cailliau took with their World Wide Web project. The individual pieces were built on what people were already familiar with. URLs use slashes so they’d feel similar to UNIX file paths. And the first fledgling version of HTML took its vocabulary almost wholesale from a version of SGML already in use at CERN. In fact, you could pretty much take an existing CERN SGML file and open it as an HTML file in a web browser.

Oh, and that browser would ignore any tags it didn’t understand—behaviour that, in my opinion, would prove crucial to the growth and success of HTML. Because of its familiarity, its simplicity, and its forgiving error handling, HTML turned out to be more successful than Tim Berners-Lee expected, as he wrote in his book Weaving The Web:

I expected HTML to be the basic waft and weft of the Web but documents of all types: video, computer aided design, sound, animation and executable programs to be the colored threads that would contain much of the content. It would turn out that HTML would become amazingly popular for the content as well.

HTML and SGML; Sass and CSS; TypeScript and JavaScript. The new technology builds on top of the existing technology instead of wiping the slate clean and starting from scratch.

Humans are allergic to change. And that’s okay.

Assumptions

Last year Benedict Evans wrote about the worldwide proliferation and growth of smartphones. Nolan referenced that post when he extrapolated the kind of experience people will be having:

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

This is the same argument that Tom made in his presentation at Responsive Field Day. The main point is that network conditions are unreliable, and I absolutely agree that we need to be very, very mindful of that. But I’m not so sure about the other conditions either. They smell like assumptions:

Assumptions are the problem. Whether it’s assumptions about screen size, assumptions about being able-bodied, assumptions about network connectivity, or assumptions about browser capabilities, I don’t think any assumptions are a safe bet. Now you might quite reasonably say that we have to make some assumptions when we’re building on the web, and you’d be right. But I think we should still aim to keep them to a minimum.

It’s not necessarily true that all those new web users will be running a WebView browser like Chrome—there are millions of Opera Mini users, and I would expect that number to rise, given all the speed and cost benefits that proxy browsing brings.

I also don’t think that just because a device is a smartphone it necessarily means that it’s a pocket supercomputer. It might seem like a reasonable assumption to make, given the specs of even a low-end smartphone, but the specs don’t tell the whole story.

Alex gave a great presentation at the recent Polymer Summit. He dives deep into exactly how smartphones at the lower end of the market deal with websites.

I don’t normally enjoy listening to talk of hardware and specs, but Alex makes the topic very compelling by tying it directly to how we build websites. In short, we’re using waaaaay too much JavaScript. The message here is not “don’t use JavaScript” but rather “use JavaScript wisely.” Alas, many of the current crop of monolithic frameworks aren’t well suited to this.

Alex’s talk prompted Michael Scharnagl to take a look back at past assumptions and lessons learned on the web, from responsive design to progressive web apps.

We are consistently improving and we often have to realize that our assumptions are wrong.

This is particularly true when we’re making assumptions about how people will access the web.

It’s not enough to talk about the “next billion” in abstract, like an opportunity to reach teeming masses of people ripe for monetization. We need to understand their lives and their priorities with the sort of detail that can build empathy for other people living under vastly different circumstances.

That’s from an article Ethan linked to.

The imitation game

Jason shared some thoughts on designing progressive web apps. One of the things he’s pondering is how much you should try to make your web-based offering look and feel like a native app.

This was prompted by an article by Owen Campbell-Moore over on Ev’s blog called Designing Great UIs for Progressive Web Apps. He begins with this advice:

Start by forgetting everything you know about conventional web design, and instead imagine you’re actually designing a native app.

This makes me squirm. I mean, I’m all for borrowing good ideas from other media—native apps, TV, print—but I don’t think that inspiration should mean imitation. For me, that always results in an interface that sits in a kind of uncanny valley of being almost—but not quite—like the thing it’s imitating.

With that out of the way, most of the recommendations in Owen’s article are sensible ideas about animation, input, and feedback. But then there’s recommendation number eight: Provide an easy way to share content:

PWAs are often shown in a context where the current URL isn’t easily accessible, so it is important to ensure the user can easily share what they’re currently looking at. Implement a share button that allows users to copy the URL to the clipboard, or share it with popular social networks.

See, when a developer has to implement a feature that the browser should be providing, that seems like a bad code smell to me. This is a problem that Opera is solving (and Google says it is solving, while simultaneously penalising developers who expose the URL to end users).
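To be fair, the workaround Owen describes is simple enough. A minimal sketch of that copy-the-URL share button might look like this (the button selector is illustrative, and document.execCommand was the clipboard API of the day):

document.querySelector('.share-button').addEventListener('click', function () {
    // Put the current URL into a temporary input…
    var input = document.createElement('input');
    input.value = location.href;
    document.body.appendChild(input);
    // …select it and copy it to the clipboard…
    input.select();
    document.execCommand('copy');
    // …then tidy up after ourselves.
    document.body.removeChild(input);
});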

Anyway, I think my squeamishness about all the advice to imitate native apps is because it feels like a cargo cult. There seems to be an inherent assumption that native is intrinsically “better” than the web, and that the only way that the web can “win” is to match native apps note for note. But that misses out on all the things that only the web can do—instant distribution, low-friction sharing, and the ability to link to any other resource on the web (and be linked to in turn). Turning our beautifully-networked nodes into standalone silos just because that’s the way that native apps have to work feels like the cure that kills the patient.

If anything, my advice for building a progressive web app would be the exact opposite of Owen’s: don’t forget everything you’ve learned about web design. In my opinion, the term “progressive web app” can be read in order of priority:

  1. Progressive—build in a layered way so that anyone can access your content, regardless of what device or browser they’re using, rewarding the more capable browsers with more features.
  2. Web—you’re building for the web. Don’t lose sight of that. URLs matter. Accessibility matters. Performance matters.
  3. App—sure, borrow what works from native apps if it makes sense for your situation.

Jason asks questions about how your progressive web app will behave when it’s added to the home screen. How much do you match the platform? How do you manage going chromeless? And the big one: what do users expect?

Will people expect an experience that maps to native conventions? Or will they be more accepting of deviation because they came to the app via the web and have already seen it before installing it?

These are good questions and I share Jason’s hunch:

My gut says that we can build great experiences without having to make it feel exactly like an iOS or Android app because people will have already experienced the Progressive Web App multiple times in the browser before they are asked to install it.

In all the messaging from Google about progressive web apps, there’s a real feeling that the ability to install to—and launch from—the home screen is a real game changer. I’m not so sure that we should be betting the farm on that feature (the offline possibilities opened up by service workers feel like more of a game-changer to me).

People have been gleefully passing around the statistic that the average number of native apps installed per month is zero. So how exactly will we measure the success of progressive web apps against native apps …when the average number of progressive web apps installed per month is zero?

I like Android’s add-to-home-screen algorithm (although it needs tweaking). It’s a really nice carrot to reward the best websites with. But let’s not get carried away. I think that most people are not going to click that “add to home screen” prompt. Let’s face it, we’ve trained people to ignore prompts like that. When someone is trying to find some information or complete a task, a prompt that pops up saying “sign up to our newsletter” or “download our native app” or “add to home screen” is a distraction to be dismissed. The fact that only the third example is initiated by the operating system, rather than the website, is irrelevant to the person using the website.

Getting the “add to home screen” prompt for https://huffduffer.com/ on Android Chrome.

My hunch is that the majority of people will still interact with your progressive web app via a regular web browser view. If, then, only a minority of people are going to experience your site launched from the home screen in a native-like way, I don’t think it makes sense to prioritise that use case.

The great thing about progressive web apps is that they are first and foremost websites. Literally everyone who interacts with your progressive web app is first going to do so the old-fashioned way, by following a link or typing in a URL. They may later add it to their home screen, but that’s just a bonus. I think it’s important to build progressive web apps accordingly—don’t pretend that it’s just like building a native app just because some people will be visiting via the home screen.

I’m worried that developers are going to think that progressive web apps are something that needs to be built from scratch; that you have to start with a blank slate and build something new in a completely new way. Now, there are some good examples of these kind of one-off progressive web apps—The Guardian’s RioRun is nicely done. But I don’t think that the majority of progressive web apps should fall into that category. There’s nothing to stop you taking an existing website and transforming it step-by-step into a progressive web app:

  1. Switch over to HTTPS if you aren’t already.
  2. Use a service worker, even if it’s just to provide a custom offline page and cache some static assets.
  3. Make a manifest file to point to an icon and specify some colours.

See? Not exactly a paradigm shift in how you approach building for the web …but those deceptively straightforward steps will really turbo-boost your site.
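If it helps, here’s a minimal sketch of what steps two and three might look like—the cache name and file list are illustrative, and it assumes an /offline page exists:

self.addEventListener('install', function (event) {
    event.waitUntil(
        caches.open('static-v1')
            .then(function (cache) {
                return cache.addAll(['/offline', '/css/styles.css']);
            })
    );
});

self.addEventListener('fetch', function (event) {
    event.respondWith(
        fetch(event.request)
            .catch(function () {
                // The network failed: try the cache, then the offline page.
                return caches.match(event.request)
                    .then(function (response) {
                        return response || caches.match('/offline');
                    });
            })
    );
});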

I’m really excited about progressive web apps …but mostly for the “progressive” and “web” parts. Maybe I’ll start calling them progressive web sites. Or progressive web thangs.

Backdoor Service Workers

When I was moderating that panel at the Progressive Web App Dev Summit, I brought up this point about twenty minutes in:

Alex, in your talk yesterday you were showing the AMP demo there with the Washington Post. You click through and there’s the Washington Post AMP thing, and it was able to install the Service Worker with that custom element. But I was looking at the URL bar …and that wasn’t the Washington Post. It was on the CDN from AMP. So I talked to Paul Backaus from the AMP team, and he explained that it’s an iframe, and using an iframe you can install a Service Worker from somewhere else.

Alex and Emily explained that, duh, that’s the way iframes work. It makes sense when you think about it—an iframe is pretty much the same as any other browser window. Still, it feels like it might violate the principle of least surprise.

Let’s say you followed my tongue-in-cheek advice to build a progressive web app store. Your homepage might have the latest 10 or 20 progressive web apps. You could also include 10 or 20 iframes so that those sites are “pre-installed” for the person viewing your page.

Enough theory. Here’s a practical example…

Suppose you’ve never visited the website for my book, html5forwebdesigners.com (if you have visited it, and you want to play along with this experiment, go to your browser settings and delete anything stored by that domain).

You happen to visit my website adactio.com. There’s a little blurb buried down on the home page that says “Read my book” with a link through to html5forwebdesigners.com. I’ve added this markup after the link:

<iframe src="https://html5forwebdesigners.com/iframe.html" style="width: 0; height: 0; border: 0">
</iframe>

That hidden iframe pulls in an empty page with a script element:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>HTML5 For Web Designers</title>
<script>
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}
</script>
</head>
</html>

That registers the Service Worker on my book’s site which then proceeds to install all the assets it needs to render the entire site offline.

There you have it. Without ever visiting the domain html5forwebdesigners.com, the site has been pre-loaded onto your device because you visited the domain adactio.com.
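The serviceworker.js file on the book’s site isn’t shown here, but a minimal version that pre-caches a whole (small) site during the install event might look something like this—the cache name and file list are illustrative:

self.addEventListener('install', function (event) {
    event.waitUntil(
        caches.open('html5fwd-static')
            .then(function (cache) {
                // Cache every page and asset needed to render the site offline.
                return cache.addAll([
                    '/',
                    '/css/global.css'
                    // …and the rest of the site’s pages and assets.
                ]);
            })
    );
});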

A few caveats:

  1. I had to adjust the cross-origin headers for html5forwebdesigners.com to allow the iframe to be embedded on adactio.com:

    Header always set Access-Control-Allow-Origin: "https://adactio.com"
    
  2. If your browser’s settings have “Block third-party cookies and site data” selected, the iframe-invoked Service Worker won’t install:

    Uncaught (in promise) DOMException: Failed to register a ServiceWorker: The user denied permission to use Service Worker.
    

The example I’ve put together here is relatively harmless. But it’s possible to imagine more extreme scenarios. Imagine there’s a publishing company that has 50 websites for 50 different publications. Each one of them could have an empty page waiting to be embedded via iframe from the other 49 sites. You only need to visit one page on one of those 50 sites to have 50 Service Workers spun up and caching assets in the background.
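Sketching that scenario out with hypothetical origins, each of those sites would only need a snippet like this:

// Spin up Service Workers for the sibling publications by injecting
// hidden iframes pointing at their registration pages (the origins here
// are hypothetical).
var siblings = [
    'https://publication-two.example.com',
    'https://publication-three.example.com'
    // …and so on, for all 49 sibling sites.
];
siblings.forEach(function (origin) {
    var iframe = document.createElement('iframe');
    iframe.src = origin + '/iframe.html';
    iframe.style.cssText = 'width: 0; height: 0; border: 0';
    document.body.appendChild(iframe);
});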

There’s the potential here for a tragedy of the commons. I hope we’ll be sensible about how we use this power.

Just don’t tell the advertising industry about this.

The Progressive Web App Dev Summit

I was in Amsterdam again at the start of last week for the Progressive Web App Dev Summit, organised by Google. Most of the talks were given by Google employees, but not all—this wasn’t just a European version of Google I/O. Representatives from Opera, Mozilla, Samsung, and Microsoft were also there, and there were quite a few case studies from independent companies. That was very gratifying to see.

Almost all the talks were related to progressive web apps. I say, “almost all” because there were occasional outliers. There was a talk on web components, which don’t have anything directly to do with progressive web apps (and I hope there won’t be any attempts to suggest otherwise), and another on rendering performance that had good advice for anyone building any kind of website. Most of the talks were about the building blocks of progressive web apps: HTTPS, Service Workers, push notifications, and all that jazz.

I was very pleased to see a move away from suggesting that single-page apps with the app-shell architecture were the only way of building progressive web apps.

There were lots of great examples of progressively enhancing existing sites into progressive web apps. Jeff Posnick’s talk was a step-by-step walkthrough of doing exactly that. Reading through the agenda, I was really happy to see this message repeated again and again:

In this session we’ll take an online-only site and turn it into a fully network-resilient, offline-first installable progressive web app. We’ll also break out of the app shell and look at approaches that better-suit traditional server-driven sites.

Progressive Web Apps should work everywhere for every user. But what happens when the technology and APIs are not available in your user’s browser? In this talk we will show you how you can think about and build sites that work everywhere.

Progressive Web Apps should load fast, work great offline, and progressively enhance to a better experience in modern browsers.

How do you put the “progressive” into your current web app?

You can (and should!) build for the latest and greatest browsers, but through a collection of fallbacks and progressive enhancements you can bring a lot of tomorrow’s web to yesterday’s browsers.

I think this is a really smart move. It’s a lot easier to sell people on incremental changes than it is to convince them to rip everything out and start from scratch (another reason why I’m dubious about any association between web components and progressive web apps—but I’ll save that for another post).

The other angle that I really liked was the emphasis on emerging markets, not just wealthy westerners. Tal Oppenheimer’s talk Building for Billions was superb, and Alex kicked the whole thing off with some great facts and figures on mobile usage.

In my mind, these two threads are very much related. Progressive enhancement allows us to have our progressive web app cake and eat it too: we can make websites that can be accessed on devices with limited storage and slow networks, while at the same time ensuring those same sites take advantage of all the newest features in the latest and greatest browsers. I talked to a lot of Google devs about ways to measure the quality of a progressive web app, and I’m coming to the conclusion that a truly high-quality site is one that can still be accessed by a proxy browser like Opera Mini, while providing a turbo-charged experience in the latest version of Chrome. If you think that sounds naive or unrealistic, then I think you might want to dive deeper into all the technologies that make progressive web apps so powerful—responsive design, Service Workers, a manifest file, HTTPS, push notifications; all of those features can and should be used in a layered fashion.

Speaking of Opera, Andreas kind of stole the show, demoing the latest interface experiments in Opera Mobile.

That ambient badging that Alex was talking about? Opera is doing it. The importance of being able to access URLs that I’ve been ranting about? Opera is doing it.

Then we had the idea to somehow connect it to the “pull-to-refresh” spinner, as a secondary gesture to the left or right.

Nice! I’m looking forward to seeing what other browsers come up with. It’s genuinely exciting to see all these different browser makers in complete agreement on which standards they want to support, while at the same time differentiating their products by competing on user experience. Microsoft recently announced that progressive web apps will be indexed in their app store just like native apps—a really interesting move.

The Progressive Web App Dev Summit wrapped up with a closing panel that I had the honour of hosting. I thought it was very brave of Paul to ask me to host this, considering my strident criticism of Google’s missteps.

Initially there were going to be six people on the panel. Then it became eight. Then I blinked and it suddenly became twelve. Less of a panel, more of a jury. Half the panelists were from Google and the other half were from Opera, Microsoft, Mozilla, and Samsung. Some of those representatives were a bit too media-trained for my liking: Ali from Microsoft tried to just give a spiel, and Alex Komoroske from Google wouldn’t give me a straight answer about whether he wants Android Instant apps to succeed—Jake was a bit more honest. I should have channelled my inner Paxman a bit more.

Needless to say, nobody from Apple was at the event. No surprise there. They’ve already promised to come to the next event. There won’t be an Apple representative on stage, obviously—that would be asking too much, wouldn’t it? But at least it looks like they’re finally making an effort to engage with the wider developer community.

All in all, the Progressive Web App Dev Summit was good fun. I found the event quite inspiring, although the sausage festiness of the attendees was depressing. It would be good if the marketing for these events reached a wider audience—I met a lot of developers who only found out about it a week or two before the event.

I really hope that people will come away with the message that they can get started with progressive web apps right now without having to re-architect their whole site. Right now the barrier to entry is having your site running on HTTPS. Once you’ve got that up and running, it’s pretty much a no-brainer to add a manifest file and a basic Service Worker—to boost performance if nothing else. From there, you’re in a great position to incrementally add more and more features—an offline-first approach with your Service Worker, perhaps? Or maybe start dabbling in push notifications. The great thing about all of these technologies (with the glaring exception of web components in their current state) is that you don’t need to bet the farm on any of them. Try them out. Use them as enhancements. You’ve literally got nothing to lose …and your users have everything to gain.

The web on my phone

It’s funny how times have changed. Remember back in the 90s when Microsoft—quite rightly—lost an anti-trust case? They were accused of abusing their monopolistic position in the OS world to get an unfair advantage in the browser world. By bundling a copy of Internet Explorer with every copy of Windows, they were able to crush the competition from Netscape.

Mind you, it was still possible to install a Netscape browser on a Windows machine. Could you imagine if Microsoft had tried to make that impossible? There would’ve been hell to pay! They wouldn’t have had a legal leg to stand on.

Yet here we are two decades later and that’s exactly what an Operating System vendor is doing. The Operating System is iOS. It’s impossible to install a non-Apple browser onto an Apple mobile computer. For some reason, the fact that it’s a mobile device (iPhone, iPad) makes it different from a desktop-bound device running OS X. Very odd considering they’re all computers.

“But”, I hear you say, “What about Chrome for iOS? Firefox for iOS? Opera for iOS?”

Chrome for iOS is not Chrome. Firefox for iOS is not Firefox. Opera for iOS is not Opera. They are all using WebKit. They’re effectively the same as Mobile Safari, just with different skins.

But there won’t be any anti-trust case here.

I think it’s a real shame. Partly, I think it’s a shame because as a developer, I see an Operating System being let down by its browser. But mostly, I think it’s a shame because I use an iPhone and I’m being let down by its browser.

It’s kind of ironic, because when the iPhone first launched, it was all about the web apps. Remember, there was no App Store for the first year of the iPhone’s life. If you wanted to build an app, you had to use web technologies. Apple were ahead of their time. Alas, the web technologies weren’t quite up to the task back in 2007. These days, though, there are web technologies landing in browsers that are truly game-changing.

In case you hadn’t noticed, I’m very excited about Service Workers. It’s doubly exciting to see the efforts the Chrome on Android team are making to make the web a first-class citizen. As Remy put it:

If I add this app to my home screen, it will work when I open it.

I’d like to be able to use Chrome, Firefox, or Opera on my iPhone—real Chrome, real Firefox, or real Opera; not a skinned version of Safari. Right now the only way for me to switch browsers is to switch phones. Switching phones is a pain in the ass, but I’m genuinely considering it.

Whereas I’m all talk, Henrik has taken action. Like me, he doesn’t actually care about the Operating System. He cares about the browser:

Android itself bores me, honestly. There’s nothing all that terribly new or exciting here.

save one very important detail…

IT’S CURRENTLY THE BEST MOBILE WEB APP PLATFORM

That’s true for now. The pole position for which browser is “best” is bound to change over time. The point is that locking me into one particular browser on my phone doesn’t sit right with me. It’s not very …webby.

I’m sure that Apple are not quaking in their boots at the thought of myself or Henrik switching phones. We are minuscule canaries in a very niche mine.

But what should give Apple pause for thought is the user experience they can offer for using the web. If they gain a reputation for providing a sub-par web experience compared to the competition, then maybe they’ll have to make the web a first-class citizen.

If I want to work towards that, switching phones probably won’t help. But what might help is following Alex’s advice in his answer to the question “What do we do about Safari?”:

What we do about Safari is we make websites amazing …and then they can’t not implement.

I’ll be doing that here on adactio, over on The Session (and Huffduffer when I get around to overhauling it), making progressively enhanced, accessible, offline-first, performant websites.

I’ll also be doing it at Clearleft. If you work at an organisation that wants a progressively-enhanced, accessible, offline-first, performant website, we should talk.

Design sprinting

James and I went to Ipswich last week for work. But this wasn’t part of an ongoing project—this was a short intense one-week feasibility study.

Leon from Suffolk Libraries got in touch with us about a project they’re planning to carry out soon: replacing their self-service machines with something more up-to-date. But rather than dive into commissioning the project straight away, he wisely decided to start with a one-week sprint to figure out exactly what the project would need to go ahead.

So that’s what James and I did. It was somewhat similar to the design sprint popularised by GV. We ensconced ourselves in the Ipswich library and packed a whole lot of work into five days. There was lots of collaboration, lots of sketching, lots of iterative design, and some rough’n’ready code. It was challenging, but a lot of fun. Also: we stayed in a pretty sweet AirBnB.

Our home for the week. This is a nice AirBnB.

You can read all about it in our case study. You can also read all about it from Leon’s point of view on his blog:

I can’t recommend this kind of research sprint enough. We got a report, detailed technical validation of an idea, mock ups and a plan for how to proceed, while getting staff and stakeholders involved in the project – all in the space of 5 days.

I think this approach makes a lot of sense. By the end of the week, James and I felt pretty confident about estimating times and costs for the full project. Normally trying to estimate that kind of thing can be a real guessing game. But with the small investment of one week’s worth of effort, you get a whole lot more certainty and confidence.

Have a look for yourself.

Handling redirects with a Service Worker

When I wrote about implementing my first Service Worker, I finished with this plea:

And remember, please share your code and your gotchas: it’s early days for Service Workers so every implementation counts.

Well, I ran into a gotcha that was really frustrating but thanks to the generosity of others, I was able to sort it out.

It was all because of an issue in Chrome. Here’s the problem…

Let’s say you’ve got a Service Worker running that takes care of any requests to your site. Now on that site, you’ve got a URL that receives POST data, does something with it, and then redirects to another URL. That’s a fairly common situation—it’s how I handle webmentions here on adactio.com, and it’s how I handle most add/edit/delete actions over on The Session to help prevent duplicate form submissions.
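For illustration, the shape of that Post/Redirect/Get pattern looks something like this in Express-style Node code (my sites don’t actually run on Node, and the routes here are hypothetical):

var express = require('express');
var app = express();

app.post('/webmention', function (request, response) {
    // …validate and store the POSTed data…
    // Then redirect, so that refreshing the page can’t re-submit the form.
    response.redirect('/webmention/thanks');
});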

Anyway, it turns out that Chrome’s Service Worker implementation would get confused by that. Instead of redirecting, it showed the offline page. The fetch wasn’t resolving.

I described the situation to Jake, but rather than just try and explain it in 140 characters, I built a test case.

There’s a Chromium issue filed on this, and it will get fixed, but in the meantime it was really bugging me while I was rolling out a new feature on The Session. Matthew pointed out that the Chromium bug report also contained a workaround that he’s been using on traintimes.org.uk. Adrian also posted his expanded workaround in there too. That turned out to be exactly what I needed.

I think the problem is that the redirect means that a body is included in the GET request, which is what’s throwing the Service Worker. So I need to create a duplicate request without the body:

// url is the URL of the intercepted request (i.e. request.url)
request = new Request(url, {
    method: 'GET',
    headers: request.headers,
    // a request can’t be re-created with mode 'navigate', so fall back to 'cors'
    mode: request.mode == 'navigate' ? 'cors' : request.mode,
    credentials: request.credentials,
    redirect: request.redirect
});

So here’s what I had in my Service Worker before:

// For HTML requests, try the network first, fall back to the cache, finally the offline page
if (request.headers.get('Accept').indexOf('text/html') !== -1) {
    event.respondWith(
        fetch(request)
            .then( response => {
                // NETWORK
                // Stash a copy of this page in the pages cache
                let copy = response.clone();
                stashInCache(pagesCacheName, request, copy);
                return response;
            })
            .catch( () => {
                // CACHE or FALLBACK
                return caches.match(request)
                    .then( response => response || caches.match('/offline') );
                })
        );
    return;
}

And here’s what I have now:

// For HTML requests, try the network first, fall back to the cache, finally the offline page
if (request.headers.get('Accept').indexOf('text/html') !== -1) {
    request = new Request(url, {
        method: 'GET',
        headers: request.headers,
        mode: request.mode == 'navigate' ? 'cors' : request.mode,
        credentials: request.credentials,
        redirect: request.redirect
    });
    event.respondWith(
        fetch(request)
            .then( response => {
                // NETWORK
                // Stash a copy of this page in the pages cache
                let copy = response.clone();
                stashInCache(pagesCacheName, request, copy);
                return response;
            })
            .catch( () => {
                // CACHE or FALLBACK
                return caches.match(request)
                    .then( response => response || caches.match('/offline') );
                })
        );
    return;
}

Now the test case is working just fine in Chrome.

On the off-chance that someone out there is struggling with the same issue, I hope that this is useful.

Share what you learn.

Service Worker notes

Here’s a disconnected hodge-podge of things related to Service Workers I’ve noticed recently…

Service Workers have landed in Firefox. When support first appeared in a nightly build, a few people spotted some weird character issues on sites like mine that are using Service Workers, but that should all be fixed in the next release.

A while back I voted up Service Workers on Microsoft’s roadmap for Edge. I got an email today to say that the roadmap priority is high:

We intend to begin development soon.

We’re getting there.

Here’s a little gotcha that had me tearing my hair out until I tracked down the culprit: don’t use Header append Vary User-Agent in your site’s Apache config file. I don’t know why it snuck in there in the first place, but once I removed it, it fixed a weird issue that Aaron T. Grogg pointed out to me whereby my offline page would get cached, but not my CSS.

I really like this proposal:

<link rel="serviceworker" href="/serviceworker.js">

It makes sense to me that I should be able to point to the Service Worker of a page in the same way that I point to a style sheet. It makes sense as a rel value too: “the linked resource has the relationship of ‘serviceworker’ to the current document.”

Also, I’m just generally a fan of declarative solutions. This feels like another good example of functionality that starts life in an imperative language (JavaScript) and then becomes declarative over time (see also: :hover, the required attribute, etc.).

Lyza wrote a fantastic article on Smashing Magazine that goes into all the details of her Service Worker. I must admit to feeling extremely gratified when she wrote:

First, I’m hugely indebted to Jeremy Keith for the implementation of service workers on his own website, which served as the starting point for my own code.

Going through her code, she made this remark:

Note: I use certain ECMAScript6 (or ES2015) features in the sample code for service workers because browsers that support service workers also support these features.

That’s a really good point. I haven’t messed around much with ES6 HipsterScript stuff partly because I haven’t yet set up a transpiler, so it was nice to know that my Service Worker is a “safe space” to try some stuff out in the browser. I refactored my JavaScript to use const, let, and =>.
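Here’s the kind of change I mean—a small before-and-after sketch, not my actual diff, borrowing names from the fetch handler elsewhere in my Service Worker (the two functions are named differently only so the snippet stays runnable as one file):

// Before: ES5-style function expression.
var stashPage = function (response) {
    var copy = response.clone();
    stashInCache(pagesCacheName, request, copy);
    return response;
};

// After: const and an arrow function—safe here, because any browser
// that supports Service Workers also supports these ES6 features.
const stashPageRefactored = (response) => {
    const copy = response.clone();
    stashInCache(pagesCacheName, request, copy);
    return response;
};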

Jake is looking for feedback on a specific part of Service Worker functionality around URLs. If I can wrap my head around what’s being described, I’ll chime in.

Finally, I had a nice little Service Worker moment earlier today. I was doing some updates on my web server that required a reboot. When I checked in Chrome to see how long adactio.com was down, I was surprised to see that the downtime appeared to be …zero. “That’s odd” I thought, “How can my site still be reachable if the server is …oh!” That’s when I realised I was seeing a cached version of my homepage. My Service Worker was doing its thing.

I had been thinking of Service Workers as a tool to help in situations where the user agent goes offline. But of course it’s an equally useful tool for when the server goes offline. This was a nice reminder of that.

Shadows and smoke

When I wrote about a year of learning with Charlotte, I made an off-hand remark in parentheses:

Hiring Charlotte was an experiment for Clearleft—could we hire someone in a “junior” position, and then devote enough time and resources to bring them up to a “senior” level? (those quotes are air quotes—I find the practice of labelling people or positions “junior” or “senior” to be laughably reductionist; you might as well try to divide the entire web into “apps” and “sites”).

It breaks my heart to see so many of my colleagues prefix their job titles with “senior” (not least because it becomes completely meaningless when every single Visual Designer is also a “Senior Visual Designer”).

I remember being at a conference after-party a few years ago chatting to a very talented front-end developer. She wasn’t happy with where she was working. I advised her to get a job somewhere else. After all, she lived and worked in San Francisco, where her talents are in high demand. But she was hesitant.

“They’ve promised me that in a few more months, my job title would become ‘Senior Developer’”, she said. “Ah, right,” I said, “and what happens then?” “Well”, she said, “I get to have the word ‘senior’ on my resumé.” That was it. No pay rise. No change in responsibilities. Just a word on a piece of paper.

I had always been suspicious of job titles, but that exchange put me over the edge. Job titles can be downright harmful.

Dan recently wrote about the importance of job titles. I love Dan, but I couldn’t disagree with him more in this instance.

He cites two situations where he believes job titles have value:

Your title tells your colleagues how to interact with you.

No. Talking to your colleagues tells your colleagues how to interact with you. Job titles attempt to short-cut that. They do a terrible job of it.

What you need to know are the verbs that your colleagues are adept in: designing, developing, thinking, communicating, facilitating …all of that gets squashed down into one reductionist noun like “Copywriter” or “Designer”.

At Clearleft, we’ve recently started kicking off projects with an exercise called “Fuzzy Edges” that Boxman has been refining. In it, we look ahead to all the upcoming project roles (e.g. “Who will lead playbacks and demos?”, “Who will run stakeholder interviews?”, “Who will lead design direction?”). Together, everyone on the project comes to a consensus on who has which roles.

It’s really, really important to clarify these roles at the start of each project, and it’s exactly the kind of thing that can’t be summed up in a job title. In fact, the existence of job titles can lead to harmful assumptions like “Oh, I figured you were leading playbacks and demos!” or “Oh, I assumed they were running stakeholder interviews!”, or worse: “Hey, you can’t lead design direction because that’s not in your job title!”

The role assignments can vary hugely from project to project, which is great. People are varied and multi-faceted. Trying to force the same people into the same roles over and over again would be demoralising and counter-productive. I fear that’s exactly what job titles do—they reinforce barriers.

Here’s the second reason Dan gives for the value of job titles:

Your title tells your clients how to interact with you.

Again, no. Talking to your clients tells your clients how to interact with you.

Dan illustrates his point by recounting a tale of deception, demonstrating that a well-placed lie about someone’s job title can mollify the kind of people who place great stock in job titles. That’s not solving the real problem. Again, while job titles might appear to be shortcuts to a shared understanding, they’re actually more like façades covering up trapdoors.

In recounting the perceived value of job titles, there’s an assumption that the titles were arrived at fairly. If someone’s job title is “Senior Designer” and someone’s job title is “Junior Designer”, then the senior person must be the better, more experienced designer, right?

But that isn’t always the case. And that’s when job titles go from being silly pointless phrases to being downright damaging, causing real harm.

Over on Rands in Repose, there’s a great post called Titles are Toxic. His experience mirrors mine:

Never in my life have I ever stared at a fancy title and immediately understood the person’s value. It took time. I spent time with those people — we debated, we discussed, we disagreed — and only then did I decide: “This guy… he really knows his stuff. I have much to learn.” In Toxic Title Douchebag World, titles are designed to document the value of an individual sans proof. They are designed to create an unnecessary social hierarchy based on ego.

See? There’s no shortcut for talking to people. Job titles are an attempt to cut out one of the most important aspects of humans working together.

The unspoken agreement was that these titles were necessary to map to a dimwitted external reality where someone would look at a business card and apply an immediate judgement on ability based on title. It’s absurd when you think about it – the fact that I’d hand you a business card that read “VP” and you’d leap to the immediate assumption: “Since his title is VP, he must be important. I should be talking to him”. I understand this is how a lot of the world works, but it’s precisely this type of reasoning that makes titles toxic.

So it’s not even that I think that job titles are bad at what they’re trying to do …I think that what they’re trying to do is bad.

Cache-limiting in Service Workers …again

Okay, so remember when I was talking about cache-limiting in Service Workers?

It wasn’t quite working:

The cache-limiting seems to be working for pages. But for some reason the images cache has blown past its allotted maximum of 20 (you can see the items in the caches under the “Resources” tab in Chrome under “Cache Storage”).

This is almost certainly because I’m doing something wrong or have completely misunderstood how the caching works.

Sure enough, I was doing something wrong. Thanks to Brandon Rozek and Jonathon Lopes for talking me through the problem.

In a nutshell, I’m mixing up synchronous instructions (like “delete the first item from a cache”) with asynchronous events (pretty much anything to do with fetching and caching with Service Workers).

Instead of trying to clean up a cache at the same time as I’m adding a new item to it, it’s better for me to have a clean-up function that runs at a different time. So I’ve written that function:

var trimCache = function (cacheName, maxItems) {
    caches.open(cacheName)
        .then(function (cache) {
            cache.keys()
                .then(function (keys) {
                    if (keys.length > maxItems) {
                        // Wait for the delete to resolve before recursing,
                        // rather than invoking trimCache immediately.
                        cache.delete(keys[0])
                            .then(function () {
                                trimCache(cacheName, maxItems);
                            });
                    }
                });
        });
};

But now the question is …when should I run this function? What’s a good event to trigger a clean-up? I don’t think the activate event is going to work. I probably want something like background sync but I don’t think that’s quite ready for primetime yet.

In the meantime, if you can think of a good way of doing a periodic clean-up like this, please let me know.

Anyone? Anyone? Bueller?

In other Service Worker news, I’ve added a basic Service Worker to The Session. It caches static assets—CSS and JavaScript—and keeps another cache of site section index pages topped up. If the network connection drops (or the server goes down), there’s an offline page that gives a few basic options. Nothing too advanced, but better than nothing.

Update: Brandon has been tackling this problem and it looks like he’s found the solution: use the page load event to fire a postMessage payload to the active Service Worker:

window.addEventListener('load', function() {
    if (navigator.serviceWorker.controller) {
        navigator.serviceWorker.controller.postMessage({'command': 'trimCaches'});
    }
});

Then inside the Service Worker, I can listen for that message event and run my cache-trimming function:

self.addEventListener('message', function(event) {
    if (event.data.command == 'trimCaches') {
        trimCache(pagesCacheName, 35);
        trimCache(imagesCacheName, 20);
    }
});

So what happens is you visit a page, and the caching happens as usual. But then, once the page and all its assets are loaded, a message is fired off and the caches get trimmed.

I’ve updated my Service Worker and it looks like it’s working a treat.

Cache-limiting in Service Workers

When I was documenting my first Service Worker I mentioned that every time a user requests a page, I store that page in a cache for later (offline) use:

Right now I’m stashing any HTML pages the user visits into the cache. I don’t think that will get out of control—I imagine most people only ever visit just a handful of pages on my site. But there’s the chance that the cache could get quite bloated. Ideally I’d have some way of keeping the cache nice and lean.

I was thinking: maybe I should have a separate cache for HTML pages, and limit the number in that cache to, say, 20 or 30 items. Every time I push something new into that cache, I could pop the oldest item out.

I could imagine doing something similar for images: keeping a cache of just the most recent 10 or 20.

Well I’ve done that now. Here’s the updated Service Worker code.

I’ve got a function in there called stashInCache that takes a few arguments: which cache to use, the maximum number of items that should be in there, the request (URL), and the response:

var stashInCache = function(cacheName, maxItems, request, response) {
    caches.open(cacheName)
        .then(function (cache) {
            cache.keys()
                .then(function (keys) {
                    if (keys.length < maxItems) {
                        cache.put(request, response);
                    } else {
                        cache.delete(keys[0])
                            .then(function() {
                                cache.put(request, response);
                            });
                    }
                })
        });
};

It looks to see if the current number of items in the cache is less than the specified maximum:

if (keys.length < maxItems)

If so, go ahead and cache the item:

cache.put(request, response);

Otherwise, delete the first item from the cache and then put the item in the cache:

cache.delete(keys[0])
  .then(function() {
    cache.put(request, response);
  });

For HTML requests, I limit the cache to 35 items:

var copy = response.clone();
var cacheName = version + pagesCacheName;
var maxItems = 35;
stashInCache(cacheName, maxItems, request, copy);
return response;

For images, I’m limiting the cache to 20 items:

var copy = response.clone();
var cacheName = version + imagesCacheName;
var maxItems = 20;
stashInCache(cacheName, maxItems, request, copy);
return response;

Here’s my updated Service Worker.

The cache-limiting seems to be working for pages. But for some reason the images cache has blown past its allotted maximum of 20 (you can see the items in the caches under the “Resources” tab in Chrome under “Cache Storage”).

This is almost certainly because I’m doing something wrong or have completely misunderstood how the caching works. If you can spot what I’m doing wrong, please let me know.

Home screen

Remy posted a screenshot to Twitter last week.

A screenshot of adactio.com on an Android device showing an Add To Home Screen prompt.

That “Add To Home Screen” dialogue is not something that Remy explicitly requested (though, of course, you can—and should—choose to add adactio.com to your home screen). That prompt appears in Chrome on Android as the result of a fairly simple algorithm based on a few factors:

  1. The website is served over HTTPS. My site is.
  2. The website has a manifest file. Here’s my JSON manifest file.
  3. The website has a Service Worker. Here’s my site’s Service Worker script (although a little birdie told me that the Service Worker script can be as basic as a blank file).
  4. The user visits the website a few times over the course of a few days.

I think that’s a reasonable set of circumstances. I particularly like that there is no way of forcing the prompt to appear.

There are some carrots in there: Want to have the user prompted to add your site to their home screen? Well, then you need to be serving on a secure connection, and you’d better get on board that Service Worker train.

Speaking of which, after I published a walkthrough of my first Service Worker, I got an email bemoaning the lack of browser support:

I was very much interested myself in this topic, until I checked on the “Can I use…” site the availability of this technology. In one word “limited”. Neither Safari nor IOS Safari support it, at least now, so I cannot use it for implementing mobile applications.

I don’t think this is the right way to think about Service Workers. You don’t build your site on top of a Service Worker—you add a Service Worker on top of your existing site. It has been explicitly designed that way: you can’t make it the bedrock of your site’s functionality; you can only add it as an enhancement.

I think that’s really, really smart. It means that you can start implementing Service Workers today and as more and more browsers add support, your site will appear to get better and better. My site worked fine for fifteen years before I added a Service Worker, and on the day I added that Service Worker, it had no ill effect on non-supporting browsers.

Oh, and according to the WebKit five-year plan, Service Worker support is on its way. This doesn’t surprise me. I can’t imagine that Apple would let Google upstage them for too long with that nice “add to home screen” flow.

Alas, Mobile Safari’s glacial update cycle means that the earliest we’ll see improvements like Service Workers will probably be September or October of next year. In the age of evergreen browsers, Apple’s feast-or-famine approach to releasing updates is practically indistinguishable from stagnation.

Still, slowly but surely, game-changing technologies are landing in browsers. At the same time, the long-term problems with betting on native apps are starting to become clearer. Native apps are still ahead of what can be accomplished on the web, but it was ever thus:

The web will always be lagging behind some other technology. I’m okay with that. If anything, I see these other technologies as the research and development arm of the web. CD-ROMs, Flash, and now native apps show us what authors want to be able to do on the web. Slowly but surely, those abilities start becoming available in web browsers.

The pace of this standardisation can seem infuriatingly slow. Sometimes it is too slow. But it’s important that we get it right—the web should hold itself to a higher standard. And so the web plays the tortoise while other technologies race ahead as the hare.

It’s interesting to see how the web could take the desirable features of native—offline support, smooth animations, an icon on the home screen—without sacrificing the strengths of the web—linking, responsiveness, the lack of App Store gatekeepers. That kind of future is what Alex is calling progressive apps:

Critically, these apps can deliver an even better user experience than traditional web apps. Because it’s also possible to build this performance in as progressive enhancement, the tangible improvements make it worth building this way regardless of “appy” intent.

Flipkart recently launched something along those lines, although it’s somewhat lacking in the “enhancement” department; the core content is delivered via JavaScript—a fragile approach.

What excites me is the prospect of building services that work just fine on low-powered devices with basic browsers, but that also take advantage of all the great possibilities offered by the latest browsers running on the newest devices. Backwards compatible and future friendly.

And if that sounds like a naïve hope, then I humbly suggest that Service Workers are a textbook example of exactly that approach.

My first Service Worker

I’ve made no secret of the fact that I’m really excited about Service Workers. I’m not alone. At the Coldfront conference in Copenhagen, pretty much every talk mentioned Service Workers.

Obviously I’m excited about what Service Workers enable: offline caching, background processes, push notifications, and all sorts of other goodies that allow the web to compete with native. But more than that, I’m really excited about the way that the Service Worker spec has been designed. Instead of being an all-or-nothing technology that you have to bet the farm on, it has been deliberately crafted to be used as an enhancement on top of existing sites (oh, how I wish that web components would follow a similar path).

I’ve got plenty of ideas on how Service Workers could be used to enhance a community site like The Session or the kind of events sites that we produce at Clearleft, but to begin with, I figured it would make sense to use my own personal site as a playground.

To start with, I’ve already conquered the first hurdle: serving my site over HTTPS. Service Workers require a secure connection. But you can play around with running a Service Worker locally if you run a copy of your site on localhost.

That’s how I started experimenting with Service Workers: serving on localhost, and stopping and starting my local Apache server with apachectl stop and apachectl start on the command line.

That reminds me of another interesting use case for Service Workers: it’s not just about the user’s network connection failing (say, going into a train tunnel); it’s also about your web server not always being available. Both scenarios are covered equally.

I would never have even attempted to start if it weren’t for the existing examples from people who have been generous enough to share their work.

Also, I knew that Jake was coming to FFConf so if I got stumped, I could pester him. That’s exactly what ended up happening (thanks, Jake!).

So if you decide to play around with Service Workers, please, please share your experience.

It’s entirely up to you how you use Service Workers. I figured for a personal site like this, it would be nice to:

  1. Explicitly cache resources like CSS, JavaScript, and some images.
  2. Cache the homepage so it can be displayed even when the network connection fails.
  3. For other pages, have a fallback “offline” page to display when the network connection fails.

So now I’ve got a Service Worker up and running on adactio.com. It will only work in Chrome, Android, Opera, and the forthcoming version of Firefox …and that’s just fine. It’s an enhancement. As more and more browsers start supporting it, this Service Worker will become more and more useful.

How very future friendly!

The code

If you’re interested in the nitty-gritty of what my Service Worker is doing, read on. If, on the other hand, code is not your bag, now would be a good time to bow out.

If you want to jump straight to the finished code, here’s a gist. Feel free to take it, break it, copy it, improve it, or do anything else you want with it.

To start with, let’s establish exactly what a Service Worker is. I like this definition by Matt Gaunt:

A service worker is a script that is run by your browser in the background, separate from a web page, opening the door to features which don’t need a web page or user interaction.

register

From inside my site’s global JavaScript file—or I could do this from a script element inside my pages—I’m going to do a quick bit of feature detection for Service Workers. If the browser supports it, then I’m going to register my Service Worker by pointing to another JavaScript file, which sits at the root of my site:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js', {
    scope: '/'
  });
}

The serviceworker.js file sits in the root of my site so that it can act on any requests to my domain. If I put it somewhere like /js/serviceworker.js, then it would only be able to act on requests to the /js directory.

Once that file has been loaded, the installation of the Service Worker can begin. That means the script will be installed in the user’s browser …and it will live there even after the user has left my website.

install

I’m making the installation of the Service Worker dependent on a function called updateStaticCache that will populate a cache with the files I want to store:

self.addEventListener('install', function (event) {
  event.waitUntil(updateStaticCache());
});

That updateStaticCache function will be used for storing items in a cache. I’m going to make sure that the cache has a version number in its name, exactly as described in the Guardian’s use case. That way, when I want to update the cache, I only need to update the version number.

var staticCacheName = 'static';
var version = 'v1::';

Here’s the updateStaticCache function that puts the items I want into the cache. I’m storing my JavaScript, my CSS, some images referenced in the CSS, the home page of my site, and a page for displaying when offline.

function updateStaticCache() {
  return caches.open(version + staticCacheName)
    .then(function (cache) {
      return cache.addAll([
        '/path/to/javascript.js',
        '/path/to/stylesheet.css',
        '/path/to/someimage.png',
        '/path/to/someotherimage.png',
        '/',
        '/offline'
      ]);
    });
}

Because those items are part of the return statement for the Promise created by caches.open, the Service Worker won’t install until all of those items are in the cache. So you might want to keep them to a minimum.

You can still put other items in the cache, and not make them part of the return statement. That way, they’ll get added to the cache in their own good time, and the installation of the Service Worker won’t be delayed:

function updateStaticCache() {
  return caches.open(version + staticCacheName)
    .then(function (cache) {
      cache.addAll([
        '/path/to/somefile',
        '/path/to/someotherfile'
      ]);
      return cache.addAll([
        '/path/to/javascript.js',
        '/path/to/stylesheet.css',
        '/path/to/someimage.png',
        '/path/to/someotherimage.png',
        '/',
        '/offline'
      ]);
    });
}

Another option is to use completely different caches, but I’ve decided to just use one cache for now.
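If I did go down that route, it might look something like this rough sketch: one versioned cache for the must-haves that the installation waits for, and another for the nice-to-haves (the “optional” cache name and the updateOptionalCache function are my own inventions for illustration):

function updateStaticCache() {
  // The installation waits for this cache to be populated.
  return caches.open(version + 'static')
    .then(function (cache) {
      return cache.addAll([
        '/path/to/javascript.js',
        '/path/to/stylesheet.css',
        '/',
        '/offline'
      ]);
    });
}

function updateOptionalCache() {
  // These files get cached in their own good time.
  return caches.open(version + 'optional')
    .then(function (cache) {
      return cache.addAll([
        '/path/to/somefile',
        '/path/to/someotherfile'
      ]);
    });
}

self.addEventListener('install', function (event) {
  updateOptionalCache();
  event.waitUntil(updateStaticCache());
});

One handy thing about that arrangement: caches.match searches across every cache by default, so the fetch handling code wouldn’t need to change.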

activate

When the activate event fires, it’s a good opportunity to clean up any caches that are out of date (by looking for anything that doesn’t match the current version number). I copied this straight from Nicolas’s code:

self.addEventListener('activate', function (event) {
  event.waitUntil(
    caches.keys()
      .then(function (keys) {
        return Promise.all(keys
          .filter(function (key) {
            return key.indexOf(version) !== 0;
          })
          .map(function (key) {
            return caches.delete(key);
          })
        );
      })
  );
});

fetch

The fetch event is fired every time the browser is going to request a file from my site. The magic of Service Worker is that I can intercept that request before it happens and decide what to do with it:

self.addEventListener('fetch', function (event) {
  var request = event.request;
  ...
});

POST requests

For a start, I’m going to just back off from any requests that aren’t GET requests:

if (request.method !== 'GET') {
  event.respondWith(
      fetch(request)
  );
  return;
}

That’s basically just replicating what the browser would do anyway. But even here I could decide to fall back to my offline page if the request doesn’t succeed. I do that using a catch clause appended to the fetch statement:

if (request.method !== 'GET') {
  event.respondWith(
      fetch(request)
          .catch(function () {
              return caches.match('/offline');
          })
  );
  return;
}

HTML requests

I’m going to treat requests for pages differently to requests for files. If the browser is requesting a page, then here’s the order I want:

  1. Try fetching the page from the network first.
  2. If that doesn’t work, try looking for the page in the cache.
  3. If all else fails, show the offline page.

First of all, I need to test to see if the request is for an HTML document. I’m doing this by sniffing the Accept headers, which probably isn’t the safest method:

if (request.headers.get('Accept').indexOf('text/html') !== -1) {

Now I try to fetch the page from the network:

event.respondWith(
  fetch(request)
);

If the network is working fine, this will return the response from the site and I’ll pass that along.

But if that doesn’t work, I’m going to look for a match in the cache. Time for a catch clause:

.catch(function () {
  return caches.match(request);
})

So now the whole event.respondWith statement looks like this:

event.respondWith(
  fetch(request)
    .catch(function () {
      return caches.match(request)
    })
);

Finally, I need to take care of the situation when the page can’t be fetched from the network and it can’t be found in the cache.

Now, I first tried to do this by adding a catch clause to the caches.match statement, like this:

return caches.match(request)
  .catch(function () {
    return caches.match('/offline');
  })

That didn’t work and for the life of me, I couldn’t figure out why. Then Jake set me straight. It turns out that the promise returned by caches.match always resolves …even if no match is found, in which case it resolves with undefined rather than rejecting. So a catch clause will never be triggered. Instead I need to return the offline page if the response from the cache is falsey:

return caches.match(request)
  .then(function (response) {
    return response || caches.match('/offline');
  })

With that cleared up, my code for handling HTML requests looks like this:

event.respondWith(
  fetch(request, { credentials: 'include' })
    .catch(function () {
      return caches.match(request)
        .then(function (response) {
          return response || caches.match('/offline');
        })
    })
);

Actually, there’s one more thing I’m doing with HTML requests. If the network request succeeds, I stash the response in the cache.

Well, that’s not exactly true. I stash a copy of the response in the cache. That’s because you’re only allowed to read the value of a response once. So if I want to do anything with it, I have to clone it:

var copy = response.clone();
caches.open(version + staticCacheName)
  .then(function (cache) {
    cache.put(request, copy);
  });

I do that right before returning the actual response. Here’s how it fits together:

if (request.headers.get('Accept').indexOf('text/html') !== -1) {
  event.respondWith(
    fetch(request, { credentials: 'include' })
      .then(function (response) {
        var copy = response.clone();
        caches.open(version + staticCacheName)
          .then(function (cache) {
            cache.put(request, copy);
          });
        return response;
      })
      .catch(function () {
        return caches.match(request)
          .then(function (response) {
            return response || caches.match('/offline');
          })
      })
  );
  return;
}

Okay. So that’s requests for pages taken care of.

File requests

I want to handle requests for files differently to requests for pages. Here’s my list of priorities:

  1. Look for the file in the cache first.
  2. If that doesn’t work, make a network request.
  3. If all else fails, and it’s a request for an image, show a placeholder.

Step one: try getting the file from the cache:

event.respondWith(
  caches.match(request)
);

Step two: if that didn’t work, go out to the network. Now remember, I can’t use a catch clause here, because caches.match will always return something: either a response or undefined. So here’s what I do:

event.respondWith(
  caches.match(request)
    .then(function (response) {
      return response || fetch(request);
    })
);

Now that I’m back to dealing with a fetch statement, I can use a catch clause to take care of the third and final step: if the network request doesn’t succeed, check to see if the request was for an image, and if so, display a placeholder:

.catch(function () {
  if (request.headers.get('Accept').indexOf('image') !== -1) {
    return new Response('<svg>...</svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
  }
})

I could point to a placeholder image in the cache, but I’ve decided to send an SVG on the fly using a new Response object.
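In case you’re curious what that on-the-fly SVG might look like, here’s a made-up example—just a grey rectangle with a bit of text standing in for the missing image (the actual markup in my Service Worker is elided above):

return new Response(
  '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">'
  + '<rect width="400" height="300" fill="#ddd"/>'
  + '<text x="200" y="150" text-anchor="middle" fill="#666">offline</text>'
  + '</svg>',
  { headers: { 'Content-Type': 'image/svg+xml' }}
);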

Here’s how the whole thing looks:

event.respondWith(
  caches.match(request)
    .then(function (response) {
      return response || fetch(request)
        .catch(function () {
          if (request.headers.get('Accept').indexOf('image') !== -1) {
            return new Response('<svg>...</svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
          }
        })
    })
);

The overall shape of my code to handle fetch events now looks like this:

self.addEventListener('fetch', function (event) {
  var request = event.request;
  // Non-GET requests
  if (request.method !== 'GET') {
    event.respondWith(
      ... 
    );
    return;
  }
  // HTML requests
  if (request.headers.get('Accept').indexOf('text/html') !== -1) {
    event.respondWith(
      ...
    );
    return;
  }
  // Non-HTML requests
  event.respondWith(
    ...
  );
});

Feel free to peruse the code.

Next steps

The code I’m running now is fine for a first stab, but there’s room for improvement.

Right now I’m stashing any HTML pages the user visits into the cache. I don’t think that will get out of control—I imagine most people only ever visit a handful of pages on my site. But there’s the chance that the cache could get quite bloated. Ideally I’d have some way of keeping the cache nice and lean.

I was thinking: maybe I should have a separate cache for HTML pages, and limit the number in that cache to, say, 20 or 30 items. Every time I push something new into that cache, I could pop the oldest item out.

I could imagine doing something similar for images: keeping a cache of just the most recent 10 or 20.

If you fancy having a go at coding that up, let me know.
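In the meantime, here’s a rough, untested sketch of how that trimming might work. The trimCache function is a name I’ve invented, and it leans on the assumption that cache.keys() returns entries in the order they were added, which may not be something to rely on:

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
    .then(function (cache) {
      cache.keys()
        .then(function (keys) {
          if (keys.length > maxItems) {
            // Pop the oldest item out, then check again.
            cache.delete(keys[0])
              .then(function () {
                trimCache(cacheName, maxItems);
              });
          }
        });
    });
}

Then, every time I stash a new page, I could call something like trimCache(version + 'pages', 25) …assuming I’d moved HTML pages into their own “pages” cache first.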

Lessons learned

There were a few gotchas along the way. I already mentioned the fact that caches.match will always return something so you can’t use catch clauses to handle situations where a file isn’t found in the cache.

Something else worth noting is that this:

fetch(request);

…is functionally equivalent to this:

fetch(request)
  .then(function (response) {
    return response;
  });

That’s probably obvious but it took me a while to realise. Likewise:

caches.match(request);

…is the same as:

caches.match(request)
  .then(function (response) {
    return response;
  });

Here’s another thing… you’ll notice that sometimes I’ve used:

fetch(request);

…but sometimes I’ve used:

fetch(request, { credentials: 'include' });

That’s because, by default, a fetch request doesn’t include cookies. That’s fine if the request is for a static file, but if it’s for a potentially-dynamic HTML page, you probably want to make sure that the Service Worker request is no different from a regular browser request. You can do that by passing through that second (optional) argument.

But probably the trickiest thing is getting your head around the idea of Promises. Writing JavaScript is generally a fairly procedural affair, but once you start dealing with then clauses, you have to come to grips with the fact that the code inside those clauses runs asynchronously. So statements written after the then clause will probably execute before the code inside the clause. It’s kind of hard to explain, but if you find problems with your Service Worker code, check to see if that’s the cause.
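Here’s a trivial illustration of what I mean: the messages get logged in the opposite order to how they’re written:

caches.match(request)
  .then(function (response) {
    console.log('This logs second, once the promise resolves.');
  });
console.log('This logs first, even though it comes after the then clause.');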

And remember, please share your code and your gotchas: it’s early days for Service Workers so every implementation counts.

Updates

I got some very useful feedback from Jake after I published this…

Expires headers

By default, JavaScript files on my server are cached for a month. But a Service Worker script probably shouldn’t be cached at all (or cached for a very, very short time). I’ve updated my .htaccess rules accordingly:

<FilesMatch "serviceworker.js">
  ExpiresDefault "now"
</FilesMatch>

Credentials

If a request is initiated by the browser, I don’t need to say:

fetch(request, { credentials: 'include' });

It’s enough to just say:

fetch(request);

Scope

I set the scope parameter of my Service Worker to be “/” …but because the Service Worker is sitting in the root directory anyway, I don’t really need to do that. I could just register it with:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js');
}

If, on the other hand, the Service Worker file were sitting in a folder, but I wanted it to act on the whole site, then I would need to specify the scope:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/path/to/serviceworker.js', {
    scope: '/'
  });
}

…and I’d also need to send a special header. So it’s probably easiest to just put Service Worker scripts in the root directory.
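For the record, that special header is called Service-Worker-Allowed. If you really did want to serve the script from a subdirectory, I believe something like this in .htaccess would do the trick (untested, and it assumes mod_headers is enabled):

<FilesMatch "serviceworker.js">
  Header set Service-Worker-Allowed "/"
</FilesMatch>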

Storyforming

It was only last week that Ellen and I were brainstorming ideas for a combined workshop. Our enthusiasm got the better of us, and we said “Let’s just do it!” Before we could think better of it, the room was booked, and the calendar invitations were sent.

Workshopping

The topic was “story.”

No wait, maybe it was …“narrative.”

That’s not quite right either.

“Content,” perhaps?

Basically, here’s the issue: at some point everyone at Clearleft needs to communicate something by telling a story. It might be a blog post. It might be a conference talk. It might be a proposal for a potential client. It might be a case study of our work. Whatever form it might take, it involves getting a message across …using words. Words are hard. We wanted to make them a little bit easier.

We did two workshops. Ellen’s was yesterday. Mine was today. They were both just about two hours in length.

Get out of my head!

Ellen’s workshop was all about getting thoughts out of your head and onto paper. But before we could even start to do that, we had to confront our first adversary: the inner critic.

You know the inner critic. It’s that voice inside you that says “You’ve got nothing new to say”, or “You’re rubbish at writing.” Ellen encouraged each of us to drag this inner critic out into the light—much like Paul Ford did with his AnxietyBox.

Each of us drew a cartoon of our inner critic, complete with speech bubbles of things our inner critic says to us.

Drawing our inner critics

In a bizarre coincidence, Chloe and I had exactly the same inner critic, complete with top hat and monocle.

With that foe vanquished, we proceeded with a mind map. The idea was to just dump everything out of our heads and onto paper, without worrying about what order to arrange things in.

I found it to be an immensely valuable exercise. Whenever I’ve tried to do this before, I’d open up a blank text file and start jotting stuff down. But because of the linear nature of a text file, there’s still going to be an order to what gets jotted down; without meaning to, I’ve imposed some kind of priority onto the still-unformed thoughts. Using a mind map allowed me to get everything down first, and then form the connections later.

Mind mapping

There were plenty of other exercises, but the other one that really struck me was a simple framework of five questions. Whatever it is that you’re trying to say, write down the answers to these questions about your audience:

  1. What are they sceptical about?
  2. What problems do they have?
  3. What’s different now that you’ve communicated your message?
  4. Paint a pretty picture of life for them now that you’ve done that.
  5. Finally, what do they need to do next?

They’re straightforward questions, but the answers can really help to make sure you’re covering everything you need to.

There were many more exercises, and by the end of the two hours, everyone had masses of raw material, albeit in an unstructured form. My workshop was supposed to help them take that content and give it some kind of shape.

The structure of stories

Ellen and I have been enjoying some great philosophical discussions about exactly what a story is, and how it differs from a narrative structure, or a plot. I really love Ellen’s working definition: Narrative. In Space. Over Time.

This led me to think that there’s a lot that we can borrow from the world of storytelling—films, novels, fairy tales—not necessarily about the stories themselves, but the kind of narrative structures we could use to tell those stories. After all, the story itself is often the same one that’s been told time and time again—The Hero’s Journey, or some variation thereof.

So I was interested in separating the plot of a story from the narrative device used to tell the story.

To start with, I gave some examples of well-known stories with relatively straightforward plots:

  • Star Wars,
  • Little Red Riding Hood,
  • Your CV,
  • Jurassic Park, and
  • Ghostbusters.

I asked everyone to take a story (either from that list, or think of another one) and write the plot down on post-it notes, one plot point per post-it. Before long, the walls were covered with post-its detailing the plot lines of:

  • Robocop,
  • Toy Story,
  • Back To The Future,
  • Elf,
  • E.T.,
  • The Three Little Pigs, and
  • Pretty Woman.

Okay. That’s plot. Next we looked at narrative frameworks.

Narrative frameworks as Oblique Strategies.

Flashback

Begin at a crucial moment, then back up to explain how you ended up there.

e.g. Citizen Kane “Rosebud!”

Dialogue

Instead of describing the action directly, have characters tell it to one another.

e.g. The Dialogues of Plato …or The Breakfast Club (or one of my favourite sci-fi short stories).

In Medias Res

Begin in the middle of the action. No exposition allowed, but you can drop hints.

e.g. Mad Max: Fury Road (or Star Wars, if it didn’t have the opening crawl).

Backstory

Begin with a looooong zooooom into the past before taking up the story today.

e.g. 2001: A Space Odyssey.

Distancing Effect

Just the facts with no embellishment.

e.g. A police report.

You get the idea.

In a random draw, everyone received a card with a narrative device on it. Now they had to retell the story they chose using that framing. That led to some great results:

  • Toy Story, retold as a conversation between Andy and his psychiatrist (dialogue),
  • E.T., retold as a missing person’s report on an alien planet (distancing effect),
  • Elf, retold with an introduction about the very first Christmas (backstory),
  • Robocop, retold with Murphy already a cyborg, remembering his past (flashback), and
  • The Three Little Pigs, retold with the wolf already at the door and no explanation as to why there’s straw everywhere (in medias res).

Once everyone had the hang of it, I asked them to revisit their mind maps and other materials from the previous day’s workshop. Next, they arranged the “chunks” of their story into a linear narrative …but without worrying about getting it right—it wasn’t going to stay linear for long. Then everyone was once again given a random narrative structure, and had a go at rearranging and restructuring their story to use that framework. If something valuable came out of that, great! If not, well, it was still a fun creative exercise.

And that was pretty much it. I had no idea what I was doing, but it didn’t matter. It wasn’t really about me. It was about helping others take their existing material and play with it.

That said, I’m glad I finally got this process out into the world in some kind of semi-formalised way. I’ve been preparing talks and articles using these narrative exercises for a while, but this workshop was just the motivation I needed to put some structure on the process.

I think I might try to create a proper deck of cards—along the lines of Brian Eno’s Oblique Strategies or Stephen Anderson’s Mental Notes—so that everyone has the option of injecting a random narrative structural idea into the mix whenever they’re stuck.

At the very least, it would be a distraction from listening to that pesky inner critic.

Ice cold in Copenhagen

I went to Copenhagen last week for the Coldfront conference. It was lovely to be back in Denmark’s capital. I used to go over there every year when the Reboot conference was running, but that wrapped up a few years back so it’s been quite a while since I had the opportunity to savour Copenhagen’s architecture, culture, coffee, food, and beer.

Coldfront was fun. Kenneth has modelled the format of the event on Remy’s Full Frontal conference—one day of a single track of front-end dev talks in a comfy cinema.

Going to a focused conference like this is a great way of getting a short sharp shock of what’s hot—like a State of the Union address for the web. At Coldfront there were some very clear themes around building for resilience, and specifically routing around the damage of inconsistent connectivity. There was a very clear message—from Paul, Alex, and Patrick (blog imminent)—that the network is not always on our side. Making our sites work offline should be much more of a priority than it currently is.

On a related note, the technology that was mentioned the most was Service Workers …and Jake wasn’t even there! Heck, even I mentioned it in glowing terms in my own little presentation. I was admiring the way it has been designed specifically to be used in a progressive enhancement kind of way.

So if I were Mr. McGuire in The Graduate, my line to a web developer equivalent of Dustin Hoffman would be “I want to say one word to you, just one word. Are you listening? …Service Workers.”

100 words 069

I made mention already of an exercise that Charlotte and I came up with to help developers think in terms of granular components: using a humble pair of scissors to cut up screenshots or mockups into their constituent parts.

Recently we repeated and added to this exercise. Once the groups of components are gathered together—buttons, form elements, icons, whatever—we go through each group. Everyone writes on a post-it what name they would give this component—button, formField, icon, etc. Then everyone slaps down their post-it notes at the same time. See any overlap? That’s your class name.

100 words 067

There’s something to be said for focusing on one particular kind of work. There’s a mastery that only comes with repeated practice.

On the other hand, I have a tendency to get bored of doing the same thing. Repetition is good for skill-building but it has the downside of being …repetitive.

Today Charlotte and I took part in a client workshop, getting right down to the nitty-gritty of cataloging interface elements at a granular level. Then I published a long rambling post on the nature of the web.

From sea level to the stratosphere—it’s good to mix things up.

100 words 057

It’s UX London week. That’s always a crazy busy time at Clearleft. But it’s also an opportunity. We have this sneaky tactic of kidnapping a speaker from UX London and making them give a workshop just for us. We did it a few years ago with Dave Grey and we got a fantastic few days of sketching out of it.

This time we grabbed Jeff Patton. He spent this afternoon locked in the auditorium at 68 Middle Street teaching us all about user story mapping. ‘Twas most enlightening and really helped validate some of the stuff we’ve been doing lately.