
Code (p)reviews

I’m not a big fan of job titles. I’ve always had trouble defining what I do as a noun—I much prefer verbs (“I make websites” sounds fine, but “website maker” sounds kind of weird).

Mind you, the real issue is not finding the right words to describe what I do, but rather figuring out just what the heck it is that I actually do in the first place.

According to the Clearleft website, I’m a technical director. That doesn’t really say anything about what I do. To be honest, I tend to describe my work these days in terms of what I don’t do: I don’t tend to write a lot of HTML, CSS, and JavaScript on client projects (although I keep my hand in with internal projects, and of course, personal projects).

Instead, I try to make sure that the people doing the actual coding—Mark, Graham, and Danielle—are happy and have everything they need to get on with their work. From outside, it might look like my role is managerial, but I see it as the complete opposite. They’re not in service to me; I’m in service to them. If they’re not happy, I’m not doing my job.

There’s another aspect to this role of technical director, and it’s similar to the role of a creative director. Just as a creative director is responsible for the overall direction and quality of designs being produced, I have an oversight over the quality of front-end output. I don’t want to be a bottleneck in the process though, and to be honest, most of the time I don’t do much checking on the details of what’s being produced because I completely trust Mark, Graham, and Danielle to produce top quality code.

But I feel I should be doing more. Again, it’s not that I want to be a bottleneck where everything needs my approval before it gets delivered, but I hope that I could help improve everyone’s output.

Now the obvious way to do this is with code reviews. I do it a bit, but not nearly as much as I should. And even when I do, I always feel it’s a bit late to be spotting any issues. After all, the code has already been written. Also, who am I to try to review the code produced by people who are demonstrably better at coding than I am?

Instead I think it will be more useful for me to stick my oar in before a line of code has been written; to sit down with someone and talk through how they’re going to approach solving a particular problem, creating a particular pattern, or implementing a particular user story.

I suppose it’s really not that different to rubber ducking. Having someone to talk out loud with about potential solutions can be really valuable in my experience.

So I’m going to start doing more code previews. I think it will also incentivise me to do more code reviews—being involved in the initial discussion of a solution means I’m going to want to see the final result.

But I don’t think this should just apply to front-end code. I’d also like to exercise this role as technical director with the designers on a project.

All too often, decisions are made in the design phase that prove problematic in development. It usually works out okay, but it often means revisiting the designs in light of some technical considerations. I’d like to catch those issues sooner. That means sticking my nose in much earlier in the process, talking through what the designers are planning to do, and keeping an eye out for any potential issues.

So, as technical director, I won’t be giving feedback like “the colour’s not working for me” or “not sure about those type choices” (I’ll leave that to the creative director), but instead I can ask questions like “how will this work without hover?” or “what happens when the user does this?” as well as pointing out solutions that might be tricky or time-consuming to implement from a technical perspective.

What I want to avoid is the swoop’n’poop, when someone seagulls in after something has been designed or built and points out all the problems. The earlier in the process any potential issues can be spotted, the better.

And I think that’s my job.

Progressive Web App questions

I got a nice email recently from Colin van Eenige. He wrote:

For my graduation project I’m researching the development of Progressive Web Apps and found your offline book called resilient web design. I was very impressed by the implementation of the website and it really was a nice experience.

I’m very interested in your vision on progressive web apps and what capabilities are waiting for us regarding offline content. Would it be fine if I’d send you some questions?

I said that would be fine, although I couldn’t promise a swift response. He sent me four questions. I finally got ‘round to sending my answers…

1. https://resilientwebdesign.com/ is an offline web book (progressive web app). What was the primary reason to make it available like this (besides the other formats)?

Well, given the subject matter, it felt right that the canonical version of the book should be not just online, but made with the building blocks of the web. The other formats are all nice to have, but the HTML version feels (to me) like the “real” book.

Interestingly, it wasn’t too much trouble for people to generate other formats from the HTML (ePub, MOBI, PDF), whereas I think trying to go in the other direction would be trickier.

As for the offline part, that felt like a natural fit. I had already done that with a previous book of mine, HTML5 For Web Designers, which I put online a year or two after its print publication. In that case, I used AppCache for the offline functionality. AppCache is horrible, but this use case might be one of the few where it works well: a static book that’s never going to change. Cache invalidation is one of the worst parts of using AppCache so by not having any kinds of updates at all, I dodged that bullet.

But when it came time for Resilient Web Design, a service worker was definitely the right technology. Still, I’ve got AppCache in there as well for the browsers that don’t yet support service workers.

2. What effect do you think Progressive Web Apps will have on content consumption, and do you think they will take over the purpose of some native apps?

The biggest effect that service workers could have is to change the expectations that people have about using the web, especially on mobile devices. Right now, people associate the web on mobile with long waits and horrible spammy overlays. Service workers can help solve that first part.

If people then start adding sites to their home screen, that will be a great sign that the web is really holding its own. But I don’t think we should get too optimistic about that: for a user, there’s no difference between a prompt on their screen saying “add to home screen” and a prompt on their screen saying “download our app”—they’re equally likely to be dismissed because we’ve trained people to dismiss anything that covers up the content they actually came for.

It’s entirely possible that websites could start taking over much of the functionality that previously was only possible in a native app. But I think that inertia and habit will keep people using native apps for quite some time.

The big exception is in markets where storage space on devices is in short supply. That’s where the decision to install a native app isn’t taken lightly (given the choice between your family photos and an app, most people will reject the app). The web can truly shine here if we build lightweight, performant services.

Even in that situation, I’m still not sure how many people will end up adding those sites to their home screen (it might feel so similar to installing a native app that there may be some residual worry about storage space) but I don’t think that’s too much of a problem: if people get to a site via search or typing, that’s fine.

I worry that the messaging around “progressive web apps” is perhaps over-fetishising the home screen. I don’t think that’s the real battleground. The real battleground is in people’s heads; how they perceive the web and how they perceive native.

After all, if the average number of native apps installed in a month is zero, then that’s not exactly a hard target to match. :-)

3. What is your vision regarding Progressive Web Apps?

For me, progressive web apps don’t feel like a separate thing from making websites. I worry that the marketing of them might inflate expectations or confuse people. I like the idea that they’re simply websites that have taken their vitamins.

So my vision for progressive web apps is the same as my vision for the web: something that people use every day for all sorts of tasks.

I find it really discouraging that progressive web apps are becoming conflated with single page apps and the app shell model. Those architectural decisions have nothing to do with service workers, HTTPS, and manifest files. Yet I keep seeing the concepts used interchangeably. It would be a real shame if people chose not to use these great technologies just because they don’t classify what they’re building as an “app.”

If anything, it’s good ol’ fashioned content sites (newspapers, Wikipedia, blogs, and yes, books) that can really benefit from the turbo boost of service worker+HTTPS+manifest.

I was at a conference recently where someone was giving a talk encouraging people to build progressive web apps but discouraging them from doing it for their own personal sites. That’s a horrible, elitist attitude. I worry that this attitude is being codified in the term “progressive web app”.

4. What is the biggest learning you’ve had since working on Progressive Web Apps?

Well, like I said, I think that some people are focusing a bit too much on the home screen and not enough on the benefits that service workers can provide to just about any website.

My biggest learning is that these technologies aren’t for a specific subset of services, but can benefit just about anything that’s on the web. I mean, just using a service worker to explicitly cache static assets like CSS, JS, and some images is a no-brainer for almost any project.
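
To make that concrete, here’s a minimal sketch of that kind of service worker. The cache name and file names are hypothetical placeholders, not code from any of my sites:

// A minimal sketch: pre-cache some static assets when the service worker
// is installed, then serve them cache-first
// (the cache name and file names are hypothetical)
const staticCacheName = 'static-v1';
const staticAssets = [
    '/css/styles.css',
    '/js/scripts.js',
    '/images/logo.svg'
];

self.addEventListener('install', event => {
    event.waitUntil(
        caches.open(staticCacheName)
        .then( cache => cache.addAll(staticAssets) )
    );
});

self.addEventListener('fetch', event => {
    event.respondWith(
        caches.match(event.request)
        .then( responseFromCache => responseFromCache || fetch(event.request) )
    );
});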

So there you go—I’m very excited about the capabilities of these technologies, but very worried about how they’re being “sold”. I’m particularly nervous that in the rush to emulate native apps, we end up losing the very thing that makes the web so powerful: URLs.

Empire State

I’m in New York. Again. This time it’s for Google’s AMP Conf, where I’ll be giving ‘em a piece of my mind on a panel.

The conference starts tomorrow so I’ve had a day or two to acclimatise and explore. Seeing as Google are footing the bill for travel and accommodation, I’m staying at a rather nice hotel close to the conference venue in Tribeca. There’s live jazz in the lounge most evenings, a cinema downstairs, and should I request it, I can even have a goldfish in my room.

Today I realised that my hotel sits in the apex of a triangle of interesting buildings: carrier hotels.

32 Avenue of the Americas: “Telephone wires and radio unite to make neighbors of nations”

Looming above my hotel is 32 Avenue of the Americas. On the outside the building looks like your classic Gozer the Gozerian style of New York building. Inside, the lobby features a mosaic on the ceiling, and another on the wall extolling the connective power of radio and telephone.

The same architects also designed 60 Hudson Street, which has a similar Art Deco feel to it. Inside, there’s a cavernous hallway running through the ground floor but I can’t show you a picture of it. A security guard told me I couldn’t take any photos inside …which is a little strange seeing as it’s splashed across the website of the building.

60 Hudson Street: “Headquarters, The Western Union Telegraph Co., and telegraph capitol of the world, 1930-1973”

I walked around the outside of 60 Hudson, taking more pictures. Another security guard asked me what I was doing. I told her I was interested in the history of the building, which is true; it was the headquarters of Western Union. For much of the twentieth century, it was a world hub of telegraphic communication, in much the same way that a beach hut in Porthcurno was the nexus of the nineteenth century.

For a 21st century hub, there’s the third and final corner of the triangle at 33 Thomas Street. It’s a breathtaking building. It looks like a spaceship from a Chris Foss painting. It was probably designed more like a spacecraft than a traditional building—its primary purpose was to withstand an atomic blast. Gone are niceties like windows. Instead there’s an impenetrable monolith that looks like something straight out of a dystopian sci-fi film.

33 Thomas Street, New York

Brutalist on the outside, its interior is host to even more brutal acts of invasive surveillance. The Snowden papers revealed this AT&T building to be a centrepiece of the Titanpointe programme:

They called it Project X. It was an unusually audacious, highly sensitive assignment: to build a massive skyscraper, capable of withstanding an atomic blast, in the middle of New York City. It would have no windows, 29 floors with three basement levels, and enough food to last 1,500 people two weeks in the event of a catastrophe.

But the building’s primary purpose would not be to protect humans from toxic radiation amid nuclear war. Rather, the fortified skyscraper would safeguard powerful computers, cables, and switchboards. It would house one of the most important telecommunications hubs in the United States…

Looking at the building, it requires very little imagination to picture it as the lair of villainous activity. Laura Poitras’s short film Project X basically consists of a voiceover of someone reading an NSA manual, some ominous background music, and shots of 33 Thomas Street looming in its oh-so-loomy way.

A top-secret handbook takes viewers on an undercover journey to Titanpointe, the site of a hidden partnership. Narrated by Rami Malek and Michelle Williams, and based on classified NSA documents, Project X reveals the inner workings of a windowless skyscraper in downtown Manhattan.

Small steps

The new Clearleft website is live! Huzzah!

Many people have been working very hard on it and it’s all looking rather nice. But, as I said before, the site launch isn’t the end—it’s just the beginning.

There are some obvious next steps: fixing bugs, adding content, tweaking copy, and, oh yeah, that whole “testing with real users” thing. But there’s also an opportunity to have some fun on the front end. Now that the site is out there in the wild, there’s a real incentive to improve its performance.

Off the top of my head, these are some areas where I think we can play around:

  • Font loading. Right now the site is just using @font-face. A smart font-loading strategy—at least for the body copy—could really help improve the perceived performance (see the sketch after this list).
  • Responsive images. A long-term solution will require some wrangling on the back end, but I reckon we can come up with some way of generating different sized images to reference in srcset.
  • Service worker. It’s a no-brainer. Now that the Clearleft site is (finally!) running on HTTPS, having a simple service worker to cache static assets like CSS, JavaScript and some images seems like the obvious next step. The question is: what other offline shenanigans could we get up to?
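
For that first item, here’s one possible approach using the CSS Font Loading API—a sketch only, with a hypothetical font name and class name:

// Load the body copy font, then add a class to the root element
// that the CSS can use to switch over from the fallback font
// (the font name and class name here are hypothetical)
if ('fonts' in document) {
    document.fonts.load('1em "Source Serif"')
    .then( fontFaces => {
        if (fontFaces.length > 0) {
            document.documentElement.classList.add('fonts-loaded');
        }
    });
}

The idea is that the CSS declares the body copy in a fallback font by default, and only applies the web font when that class is present—so the text stays readable while the font loads.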

I’m looking forward to tinkering with some of those technologies. Each one should make an incremental improvement to the site’s performance. There are already some steps on the back end that are making a big difference: upgrading to PHP 7 and using HTTP/2.

Now the real fun begins.

Charlotte

Over the eleven-year (and counting) lifespan of Clearleft, people have come and gone—great people like Nat, Andy, Paul and many more. It’s always a bittersweet feeling. On the one hand, I know I’ll miss having them around, but on the other hand, I totally get why they’d want to try their hand at something different.

It was Charlotte’s last day at Clearleft last Friday. Her husband Tom is being relocated to work in Sydney, which is quite the exciting opportunity for both of them. Charlotte’s already set up with a job at Atlassian—they’re very lucky to have her.

So once again there’s the excitement of seeing someone set out on a new adventure. But this one feels particularly bittersweet to me. Charlotte wasn’t just a co-worker. For a while there, I was her teacher …or coach …or mentor …I’m not really sure what to call it. I wrote about the first year of learning and how it wasn’t just a learning experience for Charlotte, it was very much a learning experience for me.

For the last year though, there’s been less and less of that direct transfer of skills and knowledge. Charlotte is definitely not a “junior” developer any more (whatever that means), which is really good but it’s left a bit of a gap for me when it comes to finding fulfilment.

Just last week I was checking in with Charlotte at the end of a long day she had spent tirelessly working on the new Clearleft site. Mostly I was making sure that she was going to go home and not stay late (something that had happened the week before which I wanted to nip in the bud—that’s not how we do things ‘round here). She was working on a particularly gnarly cross-browser issue and I ended up sitting with her, trying to help work through it. At the end, I remember thinking “I’ve missed this.”

It hasn’t been all about HTML, CSS, and JavaScript. Charlotte really pushed herself to become a public speaker. I did everything I could to support that—offering advice, giving feedback and encouragement—but in the end, it was all down to her.

I can’t describe the immense swell of pride I felt when Charlotte spoke on stage. Watching her deliver her talk at Dot York was one of my highlights of the year.

Thinking about it, this is probably the perfect time for Charlotte to leave the Clearleft nest. After all, I’m not sure there’s anything more I can teach her. But this feels like a particularly sad parting, maybe because she’s going all the way to Australia and not, y’know, starting a new job in London.

In our final one-to-one, my stiff upper lip may have had a slight wobble as I told Charlotte what I thought was her greatest strength. It wasn’t her work ethic (which is incredibly strong), and it wasn’t her CSS skills (‘though she is now an absolute wizard). No, her greatest strength, in my opinion, is her kindness.

I saw her kindness in how she behaved with her colleagues, her peers, and of course in all the fantastic work she’s done at Codebar Brighton.

I’m going to miss her.

Making Resilient Web Design work offline

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing so it’s safe to take a fairly aggressive offline-first approach. In fact, a static unchanging book is one of the few situations that AppCache works for. Of course a service worker is better, but until AppCache is removed from browsers (and until service worker is supported across the board), I’m using both. I wouldn’t recommend that for most sites though—for most sites, use a service worker to enhance it, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers but I knew that there was a good chance that some of the content would be getting tweaked at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything; all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
      // If so, fetch a fresh copy from the network in the background
      // NETWORK
          // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
      // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fall back to a placeholder as a last resort.

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      return responseFromCache;
  }

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      // If so, fetch a fresh copy from the network in the background
      event.waitUntil(
          // NETWORK
          fetch(request)
          .then( responseFromFetch => {
              // Stash the fresh copy in the cache
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      return responseFromCache;
  }

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
.then( responseFromCache => {
  if (responseFromCache) {
      event.waitUntil(
          fetch(request)
          .then( responseFromFetch => {
              console.log('Got a response from the network.');
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      console.log('Got a response from the cache.');
      return responseFromCache;
  }

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all the browsers yet. In browsers that haven’t implemented it, the code I’ve written will fail.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using it when a file isn’t found in the cache, and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
  .then( responseFromFetch => {
    return responseFromFetch;
  })

I want to also store that response in the cache, but I want to do it asynchronously—I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it).

fetch(request)
  .then( responseFromFetch => {
    // Stash a fresh copy in the cache in the background
    let responseCopy = responseFromFetch.clone();
    event.waitUntil(
      caches.open(staticCacheName)
      .then( cache => {
          cache.put(request, responseCopy);
      })
    );
    return responseFromFetch;
  })

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.
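
For reference, an AppCache manifest for a completely static site boils down to something like this—a sketch with hypothetical file names (in reality the book lists all of its assets, and the page loaded in the iframe would point to the manifest via the manifest attribute on its html element):

CACHE MANIFEST
# v1

CACHE:
/index.html
/css/styles.css
/js/scripts.js

NETWORK:
*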

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.

Less JavaScript

Every front-end developer at Clearleft went to FFConf last Friday: me, Mark, Graham, Charlotte, and Danielle. We weren’t about to pass up the opportunity to attend a world-class dev conference right here in our home base of Brighton.

The day was unsurprisingly excellent. All the speakers brought their A-game on a wide range of topics. Of course JavaScript was covered, but there was also plenty of mindfood on CSS, accessibility, progressive enhancement, dev tools, creative coding, and even emoji.

Normally FFConf would be a good opportunity to catch up with some Pauls from the Google devrel team, but because of an unfortunate scheduling clash this year, all the Pauls were at Chrome Dev Summit 2016 on the other side of the Atlantic.

I’ve been catching up on the videos from the event. There’s plenty of tech-related stuff: dev tools, web components, and plenty of talk about progressive web apps. But there was also a very, very heavy focus on performance. I don’t just mean performance at the shallow scale of file size and optimisation, but a genuine questioning of the impact of our developer workflows and tools.

In his talk on service workers (what else?), Jake makes the point that not everything needs to be a single page app, echoing Ada’s talk at FFConf.

He makes the point that if you really want fast rendering, nothing on the client side quite beats a server render.

They’ve written a lot of JavaScript to make this quite slow.

Unfortunately, all too often, I hear people say that a progressive web app must be a single page app. And I am not so sure. You might not need a single page app. A single page app can end up being a lot of work and slower. There’s a lot of cargo-culting around single page apps.

Alex followed up his barnstorming talk from the Polymer Summit with some more uncomfortable truths about how mobile phones work.

Cell networks are basically kryptonite to the protocols and assumptions that the web was built on.

And JavaScript frameworks aren’t helping. Quite the opposite.

But make no mistake: if you’re using one of today’s more popular JavaScript frameworks in the most naive way, you are failing by default. There is no sugarcoating this.

Today’s frameworks are mostly a sign of ignorance, or privilege, or both. The good news is that we can fix the ignorance.

Adoption

Tom wrote a post on Ev’s blog a while back called JavaScript Frameworks: Distribution Channels for Good Ideas (I’ve been hoping he’d publish it on his own site so I’d have a more permanent URL to point to, but so far, no joy). It’s well worth a read.

I don’t really have much of an opinion on his central point that browser makers should work more closely with framework makers. I’m not so sure I agree with the central premise that frameworks are going to be around for the long haul. I think good frameworks—like jQuery—should aim to make themselves redundant.

But anyway, along the way, Tom makes this observation:

Google has an institutional tendency to go it alone.

JavaScript not good enough? Let’s create Dart to replace it. HTML not good enough? Let’s create AMP to replace it. I’m just waiting for them to announce Google Style Sheets.

I don’t really mind these inventions. We’re not forced to adopt them, and generally, we don’t. Tom again:

They poured enormous time and money into Dart, even building an entire IDE, without much to show for it. Contrast Dart’s adoption with the adoption of TypeScript and Flow, which layer improvements on top of JavaScript instead of trying to replace it.

See, that’s a really, really good point. It’s so much easier to get people to adjust their behaviour than to change it completely.

Sass is a really good example of this. You can take any .css file, save it as a .scss file, and now you’re using Sass. Then you can start using features (or not) as needed. Very smart.
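
A tiny illustration—not from any real project:

/* Any valid CSS is already valid SCSS… */
a {
    color: #b52741;
}

/* …and Sass features can then be adopted piecemeal */
$brand-colour: #b52741;
nav {
    a {
        color: $brand-colour;
    }
}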

Incidentally, I’m very curious to know how many people use the scss syntax (which is the same as CSS) compared to how many people use the sass indented syntax (the one with significant whitespace). I don’t think Dan even mentioned the indented syntax in his brilliant book Sass for Web Designers.

Or compare the adoption of Sass to the adoption of HAML. Now, admittedly, the disparity there might be because Sass adds new features, whereas HAML is a purely stylistic choice. But I think the more fundamental difference is that Sass—with its scss syntax—only requires you to slightly adjust your behaviour, whereas something like HAML requires you to go all in right from the start.

This is something that has been on my mind lately while I’ve been preparing my new talk on evaluating technology (the talk went down very well at An Event Apart San Francisco, by the way—that’s a relief). In the talk, I made a reference to one of Grace Hopper’s famous quotes:

Humans are allergic to change.

Now, Grace Hopper subsequently says:

I try to fight that.

I contrast that with the approach that Tim Berners-Lee and Robert Cailliau took with their World Wide Web project. The individual pieces were built on what people were already familiar with. URLs use slashes so they’d feel similar to UNIX file paths. And the first fledgling version of HTML took its vocabulary almost wholesale from a version of SGML already in use at CERN. In fact, you could pretty much take an existing CERN SGML file and open it as an HTML file in a web browser.

Oh, and that browser would ignore any tags it didn’t understand—behaviour that, in my opinion, would prove crucial to the growth and success of HTML. Because of its familiarity, its simplicity, and its forgiving error handling, HTML turned out to be more successful than Tim Berners-Lee expected, as he wrote in his book Weaving The Web:

I expected HTML to be the basic waft and weft of the Web but documents of all types: video, computer aided design, sound, animation and executable programs to be the colored threads that would contain much of the content. It would turn out that HTML would become amazingly popular for the content as well.

HTML and SGML; Sass and CSS; TypeScript and JavaScript. The new technology builds on top of the existing technology instead of wiping the slate clean and starting from scratch.

Humans are allergic to change. And that’s okay.

Assumptions

Last year Benedict Evans wrote about the worldwide proliferation and growth of smartphones. Nolan referenced that post when he extrapolated the kind of experience people will be having:

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

This is the same argument that Tom made in his presentation at Responsive Field Day. The main point is that network conditions are unreliable, and I absolutely agree that we need to be very, very mindful of that. But I’m not so sure about the other conditions either. They smell like assumptions:

Assumptions are the problem. Whether it’s assumptions about screen size, assumptions about being able-bodied, assumptions about network connectivity, or assumptions about browser capabilities, I don’t think any assumptions are a safe bet. Now you might quite reasonably say that we have to make some assumptions when we’re building on the web, and you’d be right. But I think we should still aim to keep them to a minimum.

It’s not necessarily true that all those new web users will be running a WebView browser like Chrome—there are millions of Opera Mini users, and I would expect that number to rise, given all the speed and cost benefits that proxy browsing brings.

I also don’t think that just because a device is a smartphone it necessarily means that it’s a pocket supercomputer. It might seem like a reasonable assumption to make, given the specs of even a low-end smartphone, but the specs don’t tell the whole story.

Alex gave a great presentation at the recent Polymer Summit. He dives deep into exactly how smartphones at the lower end of the market deal with websites.

I don’t normally enjoy listening to talk of hardware and specs, but Alex makes the topic very compelling by tying it directly to how we build websites. In short, we’re using waaaaay too much JavaScript. The message here is not “don’t use JavaScript” but rather “use JavaScript wisely.” Alas, many of the current crop of monolithic frameworks aren’t well suited to this.

Alex’s talk prompted Michael Scharnagl to take a look back at past assumptions and lessons learned on the web, from responsive design to progressive web apps.

We are consistently improving and we often have to realize that our assumptions are wrong.

This is particularly true when we’re making assumptions about how people will access the web.

It’s not enough to talk about the “next billion” in abstract, like an opportunity to reach teeming masses of people ripe for monetization. We need to understand their lives and their priorities with the sort of detail that can build empathy for other people living under vastly different circumstances.

That’s from an article Ethan linked to.

The imitation game

Jason shared some thoughts on designing progressive web apps. One of the things he’s pondering is how much you should try to make your web-based offering look and feel like a native app.

This was prompted by an article by Owen Campbell-Moore over on Ev’s blog called Designing Great UIs for Progressive Web Apps. He begins with this advice:

Start by forgetting everything you know about conventional web design, and instead imagine you’re actually designing a native app.

This makes me squirm. I mean, I’m all for borrowing good ideas from other media—native apps, TV, print—but I don’t think that inspiration should mean imitation. For me, that always results in an interface that sits in a kind of uncanny valley of being almost—but not quite—like the thing it’s imitating.

With that out of the way, most of the recommendations in Owen’s article are sensible ideas about animation, input, and feedback. But then there’s recommendation number eight: Provide an easy way to share content:

PWAs are often shown in a context where the current URL isn’t easily accessible, so it is important to ensure the user can easily share what they’re currently looking at. Implement a share button that allows users to copy the URL to the clipboard, or share it with popular social networks.

See, when a developer has to implement a feature that the browser should be providing, that seems like a bad code smell to me. This is a problem that Opera is solving (and that Google says it is solving, while simultaneously penalising developers who expose the URL to end users).

Anyway, I think my squeamishness about all the advice to imitate native apps is because it feels like a cargo cult. There seems to be an inherent assumption that native is intrinsically “better” than the web, and that the only way that the web can “win” is to match native apps note for note. But that misses out on all the things that only the web can do—instant distribution, low-friction sharing, and the ability to link to any other resource on the web (and be linked to in turn). Turning our beautifully-networked nodes into standalone silos just because that’s the way that native apps have to work feels like the cure that kills the patient.

If anything, my advice for building a progressive web app would be the exact opposite of Owen’s: don’t forget everything you’ve learned about web design. In my opinion, the term “progressive web app” can be read in order of priority:

  1. Progressive—build in a layered way so that anyone can access your content, regardless of what device or browser they’re using, rewarding the more capable browsers with more features.
  2. Web—you’re building for the web. Don’t lose sight of that. URLs matter. Accessibility matters. Performance matters.
  3. App—sure, borrow what works from native apps if it makes sense for your situation.

Jason asks questions about how your progressive web app will behave when it’s added to the home screen. How much do you match the platform? How do you manage going chromeless? And the big one: what do users expect?

Will people expect an experience that maps to native conventions? Or will they be more accepting of deviation because they came to the app via the web and have already seen it before installing it?

These are good questions and I share Jason’s hunch:

My gut says that we can build great experiences without having to make it feel exactly like an iOS or Android app because people will have already experienced the Progressive Web App multiple times in the browser before they are asked to install it.

In all the messaging from Google about progressive web apps, there’s a real feeling that the ability to install to—and launch from—the home screen is a real game changer. I’m not so sure that we should be betting the farm on that feature (the offline possibilities opened up by service workers feel like more of a game-changer to me).

People have been gleefully passing around the statistic that the average number of native apps installed per month is zero. So how exactly will we measure the success of progressive web apps against native apps …when the average number of progressive web apps installed per month is zero?

I like Android’s add-to-home-screen algorithm (although it needs tweaking). It’s a really nice carrot to reward the best websites with. But let’s not get carried away. I think that most people are not going to click that “add to home screen” prompt. Let’s face it, we’ve trained people to ignore prompts like that. When someone is trying to find some information or complete a task, a prompt that pops up saying “sign up to our newsletter” or “download our native app” or “add to home screen” is a distraction to be dismissed. The fact that only the third example is initiated by the operating system, rather than the website, is irrelevant to the person using the website.

Getting the “add to home screen” prompt for https://huffduffer.com/ on Android Chrome.

My hunch is that the majority of people will still interact with your progressive web app via a regular web browser view. If, then, only a minority of people are going to experience your site launched from the home screen in a native-like way, I don’t think it makes sense to prioritise that use case.

The great thing about progressive web apps is that they are first and foremost websites. Literally everyone who interacts with your progressive web app is first going to do so the old-fashioned way, by following a link or typing in a URL. They may later add it to their home screen, but that’s just a bonus. I think it’s important to build progressive web apps accordingly—don’t pretend that it’s just like building a native app just because some people will be visiting via the home screen.

I’m worried that developers are going to think that progressive web apps are something that need to be built from scratch; that you have to start with a blank slate and build something new in a completely new way. Now, there are some good examples of this kind of one-off progressive web app—The Guardian’s RioRun is nicely done. But I don’t think that the majority of progressive web apps should fall into that category. There’s nothing to stop you taking an existing website and transforming it step-by-step into a progressive web app:

  1. Switch over to HTTPS if you aren’t already.
  2. Use a service worker, even if it’s just to provide a custom offline page and cache some static assets.
  3. Make a manifest file to point to an icon and specify some colours (see the sketch after this list).
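
For that third step, a web app manifest can be as small as this—a sketch with made-up values:

{
    "name": "Example Site",
    "short_name": "Example",
    "start_url": "/",
    "display": "browser",
    "background_color": "#ffffff",
    "theme_color": "#b52741",
    "icons": [{
        "src": "/images/icon.png",
        "sizes": "192x192",
        "type": "image/png"
    }]
}

Each page then references it with a link element in the head: <link rel="manifest" href="/manifest.json">.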

See? Not exactly a paradigm shift in how you approach building for the web …but those deceptively straightforward steps will really turbo-boost your site.

I’m really excited about progressive web apps …but mostly for the “progressive” and “web” parts. Maybe I’ll start calling them progressive web sites. Or progressive web thangs.

Backdoor Service Workers

When I was moderating that panel at the Progressive Web App Dev Summit, I brought up this point about twenty minutes in:

Alex, in your talk yesterday you were showing the AMP demo there with the Washington Post. You click through and there’s the Washington Post AMP thing, and it was able to install the Service Worker with that custom element. But I was looking at the URL bar …and that wasn’t the Washington Post. It was on the CDN from AMP. So I talked to Paul Backaus from the AMP team, and he explained that it’s an iframe, and using an iframe you can install a Service Worker from somewhere else.

Alex and Emily explained that, duh, that’s the way iframes work. It makes sense when you think about it—an iframe is pretty much the same as any other browser window. Still, it feels like it might violate the principle of least surprise.

Let’s say you followed my tongue-in-cheek advice to build a progressive web app store. Your homepage might have the latest 10 or 20 progressive web apps. You could also include 10 or 20 iframes so that those sites are “pre-installed” for the person viewing your page.

Enough theory. Here’s a practical example…

Suppose you’ve never visited the website for my book, html5forwebdesigners.com (if you have visited it, and you want to play along with this experiment, go to your browser settings and delete anything stored by that domain).

You happen to visit my website adactio.com. There’s a little blurb buried down on the home page that says “Read my book” with a link through to html5forwebdesigners.com. I’ve added this markup after the link:

<iframe src="https://html5forwebdesigners.com/iframe.html" style="width: 0; height: 0; border: 0">
</iframe>

That hidden iframe pulls in an empty page with a script element:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>HTML5 For Web Designers</title>
<script>
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}
</script>
</head>
</html>

That registers the Service Worker on my book’s site which then proceeds to install all the assets it needs to render the entire site offline.

There you have it. Without ever visiting the domain html5forwebdesigners.com, the site has been pre-loaded onto your device because you visited the domain adactio.com.

A few caveats:

  1. I had to relax the Content Security Policy for html5forwebdesigners.com to allow the iframe to be embedded on adactio.com:

    Header always set Access-Control-Allow-Origin "https://adactio.com"
    
  2. If your browser has “Block third-party cookies and site data” selected in its preferences, the iframe-invoked Service Worker won’t install:

    Uncaught (in promise) DOMException: Failed to register a ServiceWorker: The user denied permission to use Service Worker.
    

The example I’ve put together here is relatively harmless. But it’s possible to imagine more extreme scenarios. Imagine there’s a publishing company that has 50 websites for 50 different publications. Each one of them could have an empty page waiting to be embedded via iframe from the other 49 sites. You only need to visit one page on one of those 50 sites to have 50 Service Workers spun up and caching assets in the background.

There’s the potential here for a tragedy of the commons. I hope we’ll be sensible about how we use this power.

Just don’t tell the advertising industry about this.

The Progressive Web App Dev Summit

I was in Amsterdam again at the start of last week for the Progressive Web App Dev Summit, organised by Google. Most of the talks were given by Google employees, but not all—this wasn’t just a European version of Google I/O. Representatives from Opera, Mozilla, Samsung, and Microsoft were also there, and there were quite a few case studies from independent companies. That was very gratifying to see.

Almost all the talks were related to progressive web apps. I say, “almost all” because there were occasional outliers. There was a talk on web components, which don’t have anything directly to do with progressive web apps (and I hope there won’t be any attempts to suggest otherwise), and another on rendering performance that had good advice for anyone building any kind of website. Most of the talks were about the building blocks of progressive web apps: HTTPS, Service Workers, push notifications, and all that jazz.

I was very pleased to see that there was a move away from suggesting that single-page apps with the app-shell architecture were the only way of building progressive web apps.

There were lots of great examples of progressively enhancing existing sites into progressive web apps. Jeff Posnick’s talk was a step-by-step walkthrough of doing exactly that. Reading through the agenda, I was really happy to see this message repeated again and again:

In this session we’ll take an online-only site and turn it into a fully network-resilient, offline-first installable progressive web app. We’ll also break out of the app shell and look at approaches that better-suit traditional server-driven sites.

Progressive Web Apps should work everywhere for every user. But what happens when the technology and APIs are not available in your users’ browsers? In this talk we will show you how you can think about and build sites that work everywhere.

Progressive Web Apps should load fast, work great offline, and progressively enhance to a better experience in modern browsers.

How do you put the “progressive” into your current web app?

You can (and should!) build for the latest and greatest browsers, but through a collection of fallbacks and progressive enhancements you can bring a lot of tomorrow’s web to yesterday’s browsers.

I think this is a really smart move. It’s a lot easier to sell people on incremental changes than it is to convince them to rip everything out and start from scratch (another reason why I’m dubious about any association between web components and progressive web apps—but I’ll save that for another post).

The other angle that I really liked was the emphasis on emerging markets, not just wealthy westerners. Tal Oppenheimer’s talk Building for Billions was superb, and Alex kicked the whole thing off with some great facts and figures on mobile usage.

In my mind, these two threads are very much related. Progressive enhancement allows us to have our progressive web app cake and eat it too: we can make websites that can be accessed on devices with limited storage and slow networks, while at the same time ensuring those same sites take advantage of all the newest features in the latest and greatest browsers. I talked to a lot of Google devs about ways to measure the quality of a progressive web app, and I’m coming to the conclusion that a truly high-quality site is one that can still be accessed by a proxy browser like Opera Mini, while providing a turbo-charged experience in the latest version of Chrome. If you think that sounds naive or unrealistic, then I think you might want to dive deeper into all the technologies that make progressive web apps so powerful—responsive design, Service Workers, a manifest file, HTTPS, push notifications; all of those features can and should be used in a layered fashion.

Speaking of Opera, Andreas kind of stole the show, demoing the latest interface experiments in Opera Mobile.

That ambient badging that Alex was talking about? Opera is doing it. The importance of being able to access URLs that I’ve been ranting about? Opera is doing it.

Then we had the idea to somehow connect it to the “pull-to-refresh” spinner, as a secondary gesture to the left or right.

Nice! I’m looking forward to seeing what other browsers come up with. It’s genuinely exciting to see all these different browser makers in complete agreement on which standards they want to support, while at the same time differentiating their products by competing on user experience. Microsoft recently announced that progressive web apps will be indexed in their app store just like native apps—a really interesting move.

The Progressive Web App Dev Summit wrapped up with a closing panel that I had the honour of hosting. I thought it was very brave of Paul to ask me to host this, considering my strident criticism of Google’s missteps.

Initially there were going to be six people on the panel. Then it became eight. Then I blinked and it suddenly became twelve. Less of a panel, more of a jury. Half the panelists were from Google and the other half were from Opera, Microsoft, Mozilla, and Samsung. Some of those representatives were a bit too media-trained for my liking: Ali from Microsoft tried to just give a spiel, and Alex Komoroske from Google wouldn’t give me a straight answer about whether he wants Android Instant apps to succeed—Jake was a bit more honest. I should have channelled my inner Paxman a bit more.

Needless to say, nobody from Apple was at the event. No surprise there. They’ve already promised to come to the next event. There won’t be an Apple representative on stage, obviously—that would be asking too much, wouldn’t it? But at least it looks like they’re finally making an effort to engage with the wider developer community.

All in all, the Progressive Web App Dev Summit was good fun. I found the event quite inspiring, although the sausage festiness of the attendees was depressing. It would be good if the marketing for these events reached a wider audience—I met a lot of developers who only found out about it a week or two before the event.

I really hope that people will come away with the message that they can get started with progressive web apps right now without having to re-architect their whole site. Right now the barrier to entry is having your site running on HTTPS. Once you’ve got that up and running, it’s pretty much a no-brainer to add a manifest file and a basic Service Worker—to boost performance if nothing else. From there, you’re in a great position to incrementally add more and more features—an offline-first approach with your Service Worker, perhaps? Or maybe start dabbling in push notifications. The great thing about all of these technologies (with the glaring exception of web components in their current state) is that you don’t need to bet the farm on any of them. Try them out. Use them as enhancements. You’ve literally got nothing to lose …and your users have everything to gain.

The web on my phone

It’s funny how times have changed. Remember back in the 90s when Microsoft—quite rightly—lost an anti-trust case? They were accused of abusing their monopolistic position in the OS world to get an unfair advantage in the browser world. By bundling a copy of Internet Explorer with every copy of Windows, they were able to crush the competition from Netscape.

Mind you, it was still possible to install a Netscape browser on a Windows machine. Could you imagine if Microsoft had tried to make that impossible? There would’ve been hell to pay! They wouldn’t have had a legal leg to stand on.

Yet here we are two decades later and that’s exactly what an Operating System vendor is doing. The Operating System is iOS. It’s impossible to install a non-Apple browser onto an Apple mobile computer. For some reason, the fact that it’s a mobile device (iPhone, iPad) makes it different from a desktop-bound device running OS X. Very odd considering they’re all computers.

“But”, I hear you say, “What about Chrome for iOS? Firefox for iOS? Opera for iOS?”

Chrome for iOS is not Chrome. Firefox for iOS is not Firefox. Opera for iOS is not Opera. They are all using WebKit. They’re effectively the same as Mobile Safari, just with different skins.

But there won’t be any anti-trust case here.

I think it’s a real shame. Partly, I think it’s a shame because as a developer, I see an Operating System being let down by its browser. But mostly, I think it’s a shame because I use an iPhone and I’m being let down by its browser.

It’s kind of ironic, because when the iPhone first launched, it was all about the web apps. Remember, there was no App Store for the first year of the iPhone’s life. If you wanted to build an app, you had to use web technologies. Apple were ahead of their time. Alas, the web technologies weren’t quite up to the task back in 2007. These days, though, there are web technologies landing in browsers that are truly game-changing.

In case you hadn’t noticed, I’m very excited about Service Workers. It’s doubly exciting to see the efforts the Chrome on Android team are making to make the web a first-class citizen. As Remy put it:

If I add this app to my home screen, it will work when I open it.

I’d like to be able to use Chrome, Firefox, or Opera on my iPhone—real Chrome, real Firefox, or real Opera; not a skinned version of Safari. Right now the only way for me to switch browsers is to switch phones. Switching phones is a pain in the ass, but I’m genuinely considering it.

Whereas I’m all talk, Henrik has taken action. Like me, he doesn’t actually care about the Operating System. He cares about the browser:

Android itself bores me, honestly. There’s nothing all that terribly new or exciting here.

save one very important detail…

IT’S CURRENTLY THE BEST MOBILE WEB APP PLATFORM

That’s true for now. The pole position for which browser is “best” is bound to change over time. The point is that locking me into one particular browser on my phone doesn’t sit right with me. It’s not very …webby.

I’m sure that Apple are not quaking in their boots at the thought of myself or Henrik switching phones. We are minuscule canaries in a very niche mine.

But what should give Apple pause for thought is the user experience they can offer for using the web. If they gain a reputation for providing a sub-par web experience compared to the competition, then maybe they’ll have to make the web a first-class citizen.

If I want to work towards that, switching phones probably won’t help. But what might help is following Alex’s advice in his answer to the question “What do we do about Safari?”:

What we do about Safari is we make websites amazing …and then they can’t not implement.

I’ll be doing that here on adactio, over on The Session (and Huffduffer when I get around to overhauling it), making progressively enhanced, accessible, offline-first, performant websites.

I’ll also be doing it at Clearleft. If you work at an organisation that wants a progressively-enhanced, accessible, offline-first, performant website, we should talk.

Design sprinting

James and I went to Ipswich last week for work. But this wasn’t part of an ongoing project—this was a short intense one-week feasibility study.

Leon from Suffolk Libraries got in touch with us about a project they’re planning to carry out soon: replacing their self-service machines with something more up-to-date. But rather than dive into commissioning the project straight away, he wisely decided to start with a one-week sprint to figure out exactly what the project would need to go ahead.

So that’s what James and I did. It was somewhat similar to the design sprint popularised by GV. We ensconced ourselves in the Ipswich library and packed a whole lot of work into five days. There was lots of collaboration, lots of sketching, lots of iterative design, and some rough’n’ready code. It was challenging, but a lot of fun. Also: we stayed in a pretty sweet AirBnB.

Our home for the week. This is a nice AirBnB.

You can read all about it in our case study. You can also read all about it from Leon’s point of view on his blog:

I can’t recommend this kind of research sprint enough. We got a report, detailed technical validation of an idea, mock ups and a plan for how to proceed, while getting staff and stakeholders involved in the project – all in the space of 5 days.

I think this approach makes a lot of sense. By the end of the week, James and I felt pretty confident about estimating times and costs for the full project. Normally trying to estimate that kind of thing can be a real guessing game. But with the small investment of one week’s worth of effort, you get a whole lot more certainty and confidence.

Have a look for yourself.

Handling redirects with a Service Worker

When I wrote about implementing my first Service Worker, I finished with this plea:

And remember, please share your code and your gotchas: it’s early days for Service Workers so every implementation counts.

Well, I ran into a gotcha that was really frustrating but thanks to the generosity of others, I was able to sort it out.

It was all because of an issue in Chrome. Here’s the problem…

Let’s say you’ve got a Service Worker running that takes care of any requests to your site. Now on that site, you’ve got a URL that receives POST data, does something with it, and then redirects to another URL. That’s a fairly common situation—it’s how I handle webmentions here on adactio.com, and it’s how I handle most add/edit/delete actions over on The Session to help prevent duplicate form submissions.

Anyway, it turns out that Chrome’s Service Worker implementation would get confused by that. Instead of following the redirect, it showed the offline page. The fetch wasn’t resolving.

I described the situation to Jake, but rather than just try and explain it in 140 characters, I built a test case.

There’s a Chromium issue filed on this, and it will get fixed, but in the meantime it was really bugging me while I was rolling out a new feature on The Session. Matthew pointed out that the Chromium bug report also contained a workaround that he’s been using on traintimes.org.uk. Adrian posted his expanded workaround in there too. That turned out to be exactly what I needed.

I think the problem is that the redirect means that a body is included in the GET request, which is what’s throwing the Service Worker. So I need to create a duplicate request without the body (and because a request’s mode can’t be set to navigate via the Request constructor, the workaround swaps that value for cors):

// Duplicate the request without its body; url is the URL of the original request
request = new Request(url, {
    method: 'GET',
    headers: request.headers,
    // navigate mode can't be set in the constructor, so swap it for cors
    mode: request.mode == 'navigate' ? 'cors' : request.mode,
    credentials: request.credentials,
    redirect: request.redirect
});

So here’s what I had in my Service Worker before:

// For HTML requests, try the network first, fall back to the cache, finally the offline page
if (request.headers.get('Accept').indexOf('text/html') !== -1) {
    event.respondWith(
        fetch(request)
            .then( response => {
                // NETWORK
                // Stash a copy of this page in the pages cache
                let copy = response.clone();
                stashInCache(pagesCacheName, request, copy);
                return response;
            })
            .catch( () => {
                // CACHE or FALLBACK
                return caches.match(request)
                    .then( response => response || caches.match('/offline') );
                })
        );
    return;
}

And here’s what I have now:

// For HTML requests, try the network first, fall back to the cache, finally the offline page
if (request.headers.get('Accept').indexOf('text/html') !== -1) {
    // Duplicate the request without its body (see above)
    request = new Request(url, {
        method: 'GET',
        headers: request.headers,
        mode: request.mode == 'navigate' ? 'cors' : request.mode,
        credentials: request.credentials,
        redirect: request.redirect
    });
    event.respondWith(
        fetch(request)
            .then( response => {
                // NETWORK
                // Stash a copy of this page in the pages cache
                let copy = response.clone();
                stashInCache(pagesCacheName, request, copy);
                return response;
            })
            .catch( () => {
                // CACHE or FALLBACK
                return caches.match(request)
                    .then( response => response || caches.match('/offline') );
                })
        );
    return;
}

Now the test case is working just fine in Chrome.

On the off-chance that someone out there is struggling with the same issue, I hope that this is useful.

Share what you learn.

Service Worker notes

Here’s a disconnected hodge-podge of things related to Service Workers I’ve noticed recently…

Service Workers have landed in Firefox. When it was first unveiled in a nightly build, a few people spotted some weird character issues on sites like mine that are using Service Workers, but that should all be fixed in the next release.

A while back I voted up Service Workers on Microsoft’s roadmap for Edge. I got an email today to say that the roadmap priority is high:

We intend to begin development soon.

We’re getting there.

Here’s a little gotcha that had me tearing my hair out until I tracked down the culprit: don’t use Header append Vary User-Agent in your site’s Apache config file. I don’t know why it snuck in there in the first place, but once I removed it, it fixed a weird issue that Aaron T. Grogg pointed out to me whereby my offline page would get cached, but not my CSS.

I really like this proposal for:

<link rel="serviceworker" href="/serviceworker.js">

It makes sense to me that I should be able to point to the Service Worker of a page in the same way that I point to a style sheet. It makes sense as a rel value too: “the linked resource has the relationship of ‘serviceworker’ to the current document.”

Also, I’m just generally a fan of declarative solutions. This feels like another good example of functionality that starts life in an imperative language (JavaScript) and then becomes declarative over time (see also: :hover, the required attribute, etc.).
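
For comparison, here’s the imperative equivalent we have today—a line of JavaScript instead of a line of HTML (a minimal sketch, using the same file path as above):

navigator.serviceWorker.register('/serviceworker.js');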

Lyza wrote a fantastic article on Smashing Magazine that goes into all the details of her Service Worker. I must admit to feeling extremely gratified when she wrote:

First, I’m hugely indebted to Jeremy Keith for the implementation of service workers on his own website, which served as the starting point for my own code.

Going through her code, she made this remark:

Note: I use certain ECMAScript6 (or ES2015) features in the sample code for service workers because browsers that support service workers also support these features.

That’s a really good point. I haven’t messed around much with ES6 HipsterScript stuff, partly because I haven’t yet set up a transpiler, so it was nice to know that my Service Worker is a “safe space” to try some stuff out in the browser. I refactored my JavaScript to use const, let, and =>.
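
To give a flavour of that refactoring, here’s a contrived before-and-after sketch of the kind of change involved, based on the fetch handler shown earlier:

// Before: ES5-style anonymous function
fetch(request).then(function (response) {
    var copy = response.clone();
    stashInCache(pagesCacheName, request, copy);
    return response;
});

// After: const and an arrow function
fetch(request).then( response => {
    const copy = response.clone();
    stashInCache(pagesCacheName, request, copy);
    return response;
});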

Jake is looking for feedback on a specific part of Service Worker functionality around URLs. If I can wrap my head around what’s being described, I’ll chime in.

Finally, I had a nice little Service Worker moment earlier today. I was doing some updates on my web server that required a reboot. When I checked in Chrome to see how long adactio.com was down, I was surprised to see that the downtime appeared to be …zero. “That’s odd,” I thought, “How can my site still be reachable if the server is …oh!” That’s when I realised I was seeing a cached version of my homepage. My Service Worker was doing its thing.

I had been thinking of Service Workers as a tool to help in situations where the user agent goes offline. But of course it’s an equally useful tool for when the server goes offline. This was a nice reminder of that.

Shadows and smoke

When I wrote about a year of learning with Charlotte, I made an off-hand remark in parentheses:

Hiring Charlotte was an experiment for Clearleft—could we hire someone in a “junior” position, and then devote enough time and resources to bring them up to a “senior” level? (those quotes are air quotes—I find the practice of labelling people or positions “junior” or “senior” to be laughably reductionist; you might as well try to divide the entire web into “apps” and “sites”).

It breaks my heart to see so many of my colleagues prefix their job titles with “senior” (not least because it becomes completely meaningless when every single Visual Designer is also a “Senior Visual Designer”).

I remember being at a conference after-party a few years ago chatting to a very talented front-end developer. She wasn’t happy with where she was working. I advised her to get a job somewhere else. After all, she lived and worked in San Francisco, where her talents are in high demand. But she was hesitant.

“They’ve promised me that in a few more months, my job title would become ‘Senior Developer’”, she said. “Ah, right,” I said, “and what happens then?” “Well”, she said, “I get to have the word ‘senior’ on my resumé.” That was it. No pay rise. No change in responsibilities. Just a word on a piece of paper.

I had always been suspicious of job titles, but that exchange put me over the edge. Job titles can be downright harmful.

Dan recently wrote about the importance of job titles. I love Dan, but I couldn’t disagree with him more in this instance.

He cites two situations where he believes job titles have value:

Your title tells your colleagues how to interact with you.

No. Talking to your colleagues tells your colleagues how to interact with you. Job titles attempt to short-cut that. They do a terrible job of it.

What you need to know are the verbs that your colleagues are adept in: designing, developing, thinking, communicating, facilitating …all of that gets squashed down into one reductionist noun like “Copywriter” or “Designer”.

At Clearleft, we’ve recently started kicking off projects with an exercise called “Fuzzy Edges” that Boxman has been refining. In it, we look ahead to all the upcoming project roles (e.g. “Who will lead playbacks and demos?”, “Who will run stakeholder interviews?”, “Who will lead design direction?”). Together, everyone on the project comes to a consensus on who has which roles.

It’s really, really important to clarify these roles at the start of each project, and it’s exactly the kind of thing that can’t be summed up in a job title. In fact, the existence of job titles can lead to harmful assumptions like “Oh, I figured you were leading playbacks and demos!” or “Oh, I assumed they were running stakeholder interviews!”, or worse: “Hey, you can’t lead design direction because that’s not in your job title!”

The role assignments can vary hugely from project to project, which is great. People are varied and multi-faceted. Trying to force the same people into the same roles over and over again would be demoralising and counter-productive. I fear that’s exactly what job titles do—they reinforce barriers.

Here’s the second reason Dan gives for the value of job titles:

Your title tells your clients how to interact with you.

Again, no. Talking to your clients tells your clients how to interact with you.

Dan illustrates his point by recounting a tale of deception, demonstrating that a well-placed lie about someone’s job title can mollify the kind of people who place great stock in job titles. That’s not solving the real problem. Again, while job titles might appear to be shortcuts to a shared understanding, they’re actually more like façades covering up trapdoors.

In recounting the perceived value of job titles, there’s an assumption that the titles were arrived at fairly. If someone’s job title is “Senior Designer” and someone’s job title is “Junior Designer”, then the senior person must be the better, more experienced designer, right?

But that isn’t always the case. And that’s when job titles go from being silly pointless phrases to being downright damaging, causing real harm.

Over on Rands in Repose, there’s a great post called Titles are Toxic. His experience mirrors mine:

Never in my life have I ever stared at a fancy title and immediately understood the person’s value. It took time. I spent time with those people — we debated, we discussed, we disagreed — and only then did I decide: “This guy… he really knows his stuff. I have much to learn.” In Toxic Title Douchebag World, titles are designed to document the value of an individual sans proof. They are designed to create an unnecessary social hierarchy based on ego.

See? There’s no shortcut for talking to people. Job titles are an attempt to cut out one of the most important aspects of humans working together.

The unspoken agreement was that these titles were necessary to map to a dimwitted external reality where someone would look at a business card and apply an immediate judgement on ability based on title. It’s absurd when you think about it – the fact that I’d hand you a business card that read “VP” and you’d leap to the immediate assumption: “Since his title is VP, he must be important. I should be talking to him”. I understand this is how a lot of the world works, but it’s precisely this type of reasoning that makes titles toxic.

So it’s not even that I think that job titles are bad at what they’re trying to do …I think that what they’re trying to do is bad.

Cache-limiting in Service Workers …again

Okay, so remember when I was talking about cache-limiting in Service Workers?

It wasn’t quite working:

The cache-limiting seems to be working for pages. But for some reason the images cache has blown past its allotted maximum of 20 (you can see the items in the caches under the “Resources” tab in Chrome under “Cache Storage”).

This is almost certainly because I’m doing something wrong or have completely misunderstood how the caching works.

Sure enough, I was doing something wrong. Thanks to Brandon Rozek and Jonathon Lopes for talking me through the problem.

In a nutshell, I’m mixing up synchronous instructions (like “delete the first item from a cache”) with asynchronous events (pretty much anything to do with fetching and caching with Service Workers).

Instead of trying to clean up a cache at the same time as I’m adding a new item to it, it’s better for me to have a clean-up function that runs at a different time. So I’ve written that function:

var trimCache = function (cacheName, maxItems) {
    caches.open(cacheName)
        .then(function (cache) {
            cache.keys()
                .then(function (keys) {
                    if (keys.length > maxItems) {
                        // Delete the oldest item, then check again
                        cache.delete(keys[0])
                            .then(function () {
                                trimCache(cacheName, maxItems);
                            });
                    }
                });
        });
};

But now the question is …when should I run this function? What’s a good event to trigger a clean-up? I don’t think the activate event is going to work. I probably want something like background sync but I don’t think that’s quite ready for primetime yet.

In the meantime, if you can think of a good way of doing a periodic clean-up like this, please let me know.

Anyone? Anyone? Bueller?

In other Service Worker news, I’ve added a basic Service Worker to The Session. It caches static assets—CSS and JavaScript—and keeps another cache of site section index pages topped up. If the network connection drops (or the server goes down), there’s an offline page that gives a few basic options. Nothing too advanced, but better than nothing.
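
The static-asset caching there follows the usual install-time precaching pattern. Here’s a minimal sketch of that approach—the cache name and file paths are hypothetical placeholders, not the actual code:

var staticCacheName = 'static';

self.addEventListener('install', function (event) {
    event.waitUntil(
        caches.open(staticCacheName)
            .then(function (cache) {
                // Hypothetical paths: substitute your site's actual assets
                return cache.addAll([
                    '/css/styles.css',
                    '/js/scripts.js',
                    '/offline'
                ]);
            })
    );
});

Once those files are cached at install time, the fetch handler can fall back to them whenever the network is unavailable.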

Update: Brandon has been tackling this problem and it looks like he’s found the solution: use the page load event to fire a postMessage payload to the active Service Worker:

window.addEventListener('load', function() {
    if (navigator.serviceWorker.controller) {
        navigator.serviceWorker.controller.postMessage({'command': 'trimCaches'});
    }
});

Then inside the Service Worker, I can listen for that message event and run my cache-trimming function:

self.addEventListener('message', function(event) {
    if (event.data.command == 'trimCaches') {
        trimCache(pagesCacheName, 35);
        trimCache(imagesCacheName, 20);
    }
});

So what happens is you visit a page, and the caching happens as usual. But then, once the page and all its assets are loaded, a message is fired off and the caches get trimmed.

I’ve updated my Service Worker and it looks like it’s working a treat.

Cache-limiting in Service Workers

When I was documenting my first Service Worker I mentioned that every time a user requests a page, I store that page in a cache for later (offline) use:

Right now I’m stashing any HTML pages the user visits into the cache. I don’t think that will get out of control—I imagine most people only ever visit just a handful of pages on my site. But there’s the chance that the cache could get quite bloated. Ideally I’d have some way of keeping the cache nice and lean.

I was thinking: maybe I should have a separate cache for HTML pages, and limit the number in that cache to, say, 20 or 30 items. Every time I push something new into that cache, I could pop the oldest item out.

I could imagine doing something similar for images: keeping a cache of just the most recent 10 or 20.

Well I’ve done that now. Here’s the updated Service Worker code.

I’ve got a function in there called stashInCache that takes a few arguments: which cache to use, the maximum number of items that should be in there, the request (URL), and the response:

var stashInCache = function(cacheName, maxItems, request, response) {
    caches.open(cacheName)
        .then(function (cache) {
            cache.keys()
                .then(function (keys) {
                    if (keys.length < maxItems) {
                        cache.put(request, response);
                    } else {
                        cache.delete(keys[0])
                            .then(function() {
                                cache.put(request, response);
                            });
                    }
                })
        });
};

It looks to see if the current number of items in the cache is less than the specified maximum:

if (keys.length < maxItems)

If so, go ahead and cache the item:

cache.put(request, response);

Otherwise, delete the first item from the cache and then put the item in the cache:

cache.delete(keys[0])
  .then(function() {
    cache.put(request, response);
  });

For HTML requests, I limit the cache to 35 items:

var copy = response.clone();
var cacheName = version + pagesCacheName;
var maxItems = 35;
stashInCache(cacheName, maxItems, request, copy);
return response;

For images, I’m limiting the cache to 20 items:

var copy = response.clone();
var cacheName = version + imagesCacheName;
var maxItems = 20;
stashInCache(cacheName, maxItems, request, copy);
return response;

Here’s my updated Service Worker.

The cache-limiting seems to be working for pages. But for some reason the images cache has blown past its allotted maximum of 20 (you can see the items in the caches under the “Resources” tab in Chrome under “Cache Storage”).

This is almost certainly because I’m doing something wrong or have completely misunderstood how the caching works. If you can spot what I’m doing wrong, please let me know.

Home screen

Remy posted a screenshot to Twitter last week.

A screenshot of adactio.com on an Android device showing an Add To Home Screen prompt.

That “Add To Home Screen” dialogue is not something that Remy explicitly requested (though, of course, you can—and should—choose to add adactio.com to your home screen). That prompt appears in Chrome on Android as the result of a fairly simple algorithm based on a few factors:

  1. The website is served over HTTPS. My site is.
  2. The website has a manifest file. Here’s my JSON manifest file (a minimal sketch follows after this list).
  3. The website has a Service Worker. Here’s my site’s Service Worker script (although a little birdie told me that the Service Worker script can be as basic as a blank file).
  4. The user visits the website a few times over the course of a few days.
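
If you haven’t written a manifest file before, here’s a minimal sketch of the kind of thing it contains—the values here are hypothetical placeholders rather than my actual manifest:

{
    "name": "adactio.com",
    "short_name": "adactio",
    "start_url": "/",
    "display": "browser",
    "icons": [{
        "src": "/icon.png",
        "sizes": "192x192",
        "type": "image/png"
    }]
}

A single link element in the head of your pages points to it: <link rel="manifest" href="/manifest.json">.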

I think that’s a reasonable set of circumstances. I particularly like that there is no way of forcing the prompt to appear.

There are some carrots in there: Want to have the user prompted to add your site to their home screen? Well, then you need to be serving on a secure connection, and you’d better get on board that Service Worker train.

Speaking of which, after I published a walkthrough of my first Service Worker, I got an email bemoaning the lack of browser support:

I was very much interested myself in this topic, until I checked on the “Can I use…” site the availability of this technology. In one word “limited”. Neither Safari nor iOS Safari support it, at least now, so I cannot use it for implementing mobile applications.

I don’t think this is the right way to think about Service Workers. You don’t build your site on top of a Service Worker—you add a Service Worker on top of your existing site. It has been explicitly designed that way: you can’t make it the bedrock of your site’s functionality; you can only add it as an enhancement.
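
The registration step itself reflects that enhancement-first design: a simple feature check means non-supporting browsers never even know the Service Worker exists.

// Only register if the browser supports Service Workers
if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/serviceworker.js');
}

Browsers without support skip right past that if statement and carry on exactly as before.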

I think that’s really, really smart. It means that you can start implementing Service Workers today and as more and more browsers add support, your site will appear to get better and better. My site worked fine for fifteen years before I added a Service Worker, and on the day I added that Service Worker, it had no ill effect on non-supporting browsers.

Oh, and according to the WebKit five-year plan, Service Worker support is on its way. This doesn’t surprise me. I can’t imagine that Apple would let Google upstage them for too long with that nice “add to home screen” flow.

Alas, Mobile Safari’s glacial update cycle means that the earliest we’ll see improvements like Service Workers will probably be September or October of next year. In the age of evergreen browsers, Apple’s feast-or-famine approach to releasing updates is practically indistinguishable from stagnation.

Still, slowly but surely, game-changing technologies are landing in browsers. At the same time, the long-term problems with betting on native apps are starting to become clearer. Native apps are still ahead of what can be accomplished on the web, but it was ever thus:

The web will always be lagging behind some other technology. I’m okay with that. If anything, I see these other technologies as the research and development arm of the web. CD-ROMs, Flash, and now native apps show us what authors want to be able to do on the web. Slowly but surely, those abilities start becoming available in web browsers.

The pace of this standardisation can seem infuriatingly slow. Sometimes it is too slow. But it’s important that we get it right—the web should hold itself to a higher standard. And so the web plays the tortoise while other technologies race ahead as the hare.

It’s interesting to see how the web could take the desirable features of native—offline support, smooth animations, an icon on the home screen—without sacrificing the strengths of the web—linking, responsiveness, the lack of App Store gatekeepers. That kind of future is what Alex is calling progressive apps:

Critically, these apps can deliver an even better user experience than traditional web apps. Because it’s also possible to build this performance in as progressive enhancement, the tangible improvements make it worth building this way regardless of “appy” intent.

Flipkart recently launched something along those lines, although it’s somewhat lacking in the “enhancement” department; the core content is delivered via JavaScript—a fragile approach.

What excites me is the prospect of building services that work just fine on low-powered devices with basic browsers, but that also take advantage of all the great possibilities offered by the latest browsers running on the newest devices. Backwards compatible and future friendly.

And if that sounds like a naïve hope, then I humbly suggest that Service Workers are a textbook example of exactly that approach.