
Going offline at Indie Web Camp Düsseldorf

I’ve just come back from a ten-day trip to Germany. The trip kicked off with Indie Web Camp Düsseldorf over the course of a weekend.


Once again the wonderful people at Sipgate hosted us in their beautiful building, and once again Aaron and I helped facilitate the two days.


Saturday was the BarCamp-like discussion day. Plenty of interesting topics were covered. I led a session on service workers, and that’s also what I decided to work on for the second day—that’s when the talking is done and we get down to making.


I like what Ethan is doing on his offline page. He shows a list of pages that have been cached, but instead of just listing URLs, he shows a title and description for each page.

I’ve already got a separate cache for pages that gets added to as the user browses around my site. I needed to figure out a way to store the metadata for those pages so that I could then display it on the offline page. I came up with a workable solution, and interestingly, it involved no changes to the service worker script at all.

When you visit any blog post, I put metadata about the page into localStorage (after first checking that there’s an active service worker):

if (navigator.serviceWorker && navigator.serviceWorker.controller) {
  window.addEventListener('load', function() {
    var data = {
      "title": "A minority report on artificial intelligence",
      "description": "Revisiting Spielberg’s films after a decade and a half.",
      "published": "May 7th, 2017",
      "timestamp": "1494171049"
    };
    localStorage.setItem(
      window.location.href,
      JSON.stringify(data)
    );
  });
}

In my case, I’m outputting the metadata from the server, but you could just as easily grab some from the DOM like this:

var data = {
  "title": document.querySelector("title").innerText,
  "description": document.querySelector("meta[name='description']").getAttribute("content")
}

Meanwhile in my service worker, when you visit that same page, it gets added to a cache called “pages”. Both localStorage and the cache API are using URLs as keys. I take advantage of that on my offline page.

The nice thing about writing JavaScript on my offline page is that I know the page will only be seen by modern browsers that support service workers, so I can use all sorts of fancy features from ES6, or whatever we’re calling it now.

I start by looping through the keys of the “pages” cache (that’s right—the cache API isn’t just for service workers; you can access it from any script). Then I check to see if there is a corresponding localStorage key with the same string (a URL). If there is, I pull the metadata out of local storage and add it to an array called browsingHistory:

const browsingHistory = [];
caches.open('pages')
.then( cache => {
  cache.keys()
  .then(keys => {
    keys.forEach( request => {
      let data = JSON.parse(localStorage.getItem(request.url));
      if (data) {
        data['url'] = request.url;
        browsingHistory.push(data);
      }
    });

Then I sort the list of pages in reverse chronological order:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I loop through each page in the browsing history list and construct a link to each URL, complete with title and description:

let markup = '';
browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

Finally I dump the constructed markup into a waiting div in the page with an ID of “history”:

let container = document.getElementById('history');
container.insertAdjacentHTML('beforeend', markup);

All those steps need to be wrapped inside the then clause attached to caches.open("pages") because the cache API is asynchronous.
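
Stitched together, the code on my offline page ends up looking something like this—a condensed sketch assembled from the snippets above, nothing more:

const browsingHistory = [];
caches.open('pages')
.then( cache => {
  cache.keys()
  .then( keys => {
    // Gather the metadata for every cached page
    keys.forEach( request => {
      let data = JSON.parse(localStorage.getItem(request.url));
      if (data) {
        data['url'] = request.url;
        browsingHistory.push(data);
      }
    });
    // Newest first
    browsingHistory.sort( (a,b) => {
      return b.timestamp - a.timestamp;
    });
    // Construct a link for each page, complete with title and description
    let markup = '';
    browsingHistory.forEach( data => {
      markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
    });
    // Dump the markup into the waiting div
    document.getElementById('history')
    .insertAdjacentHTML('beforeend', markup);
  });
});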

There you have it. Now if you’re browsing adactio.com and your network connection drops (or my server goes offline), you can choose from a list of pages you’ve previously visited.

The current situation isn’t ideal though. I’ve got a clean-up operation in my service worker to limit the number of items stored in my “pages” cache. The cache never gets bigger than 35 items. But there’s no corresponding clean-up of metadata stored in localStorage. So there could be a lot more bits of metadata in local storage than there are pages in the cache. It’s not harmful, but it’s a bit wasteful.

I can’t do a clean-up of localStorage from my service worker because service workers can’t access localStorage. There’s a very good reason for that: the localStorage API is synchronous, and everything that happens in a service worker needs to be asynchronous.

Service workers can access indexedDB: it’s asynchronous. I could use indexedDB instead of localStorage, but I’m not a masochist. My best bet would be to use the localForage library, which wraps indexedDB in the simple syntax of localStorage.
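
In the meantime, a stopgap would be to do the clean-up from a regular page rather than the service worker—the offline page already knows which URLs are in the cache, so it could throw away any stored metadata without a matching cache entry. Here’s a rough sketch of that idea (not something I’ve actually implemented):

// A rough sketch: prune localStorage entries whose URLs are no
// longer in the "pages" cache. This runs in a page, not in the
// service worker (where localStorage is off-limits).
caches.open('pages')
.then( cache => cache.keys() )
.then( keys => {
  const cachedUrls = keys.map( request => request.url );
  Object.keys(localStorage).forEach( storedUrl => {
    // Only touch keys that look like URLs on this site
    if (storedUrl.indexOf(window.location.origin) === 0 && cachedUrls.indexOf(storedUrl) === -1) {
      localStorage.removeItem(storedUrl);
    }
  });
});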

Maybe I’ll do that at the next Homebrew Website Club here in Brighton.

A minority report on artificial intelligence

Want to feel old? Steven Spielberg’s Minority Report was released fifteen years ago.

It casts a long shadow. For a decade after the film’s release, it was referenced at least once at every conference relating to human-computer interaction. Unsurprisingly, most of the focus has been on the technology in the film. The hardware and interfaces in Minority Report came out of a think tank assembled in pre-production. It provided plenty of fodder for technologists to mock and praise in subsequent years: gestural interfaces, autonomous cars, miniature drones, airpods, ubiquitous advertising and surveillance.

At the time of the film’s release, a lot of the discussion centred on picking apart the plot. The discussions had the same tone as those about time-travel paradoxes, the kind thrown up by films like Looper and Interstellar. But Minority Report isn’t a film about time travel, it’s a film about prediction.

Or rather, the plot is about prediction. The film—like so many great works of cinema—is about seeing. It’s packed with images of eyes, visions, fragments, and reflections.

The theme of prediction was rarely referenced by technologists in the subsequent years. After all, that aspect of the story—as opposed to the gadgets, gizmos, and interfaces—was one rooted in a fantastical conceit; the idea of people with precognitive abilities.

But if you replace that human element with machines, the central conceit starts to look all too plausible. It’s suggested right there in the film:

It helps not to think of them as human.

To which the response is:

No, they’re so much more than that.

Suppose that Agatha, Arthur, and Dashiell weren’t people in a floatation tank, but banks of servers packed with neural nets: the kinds of machines that are already making predictions on trading stocks and shares, traffic flows, mortgage applications …and, yes, crime.

Precogs are pattern recognition filters, that’s all.

Rewatching Minority Report now, it holds up very well indeed. Apart from the misstep of the final ten minutes, it’s a fast-paced twisty noir thriller. For all the attention to detail in its world-building and technology, the idea that may yet prove to be most prescient is the concept of Precrime, introduced in the original Philip K. Dick short story, The Minority Report.

Minority Report works today as a commentary on Artificial Intelligence …which is ironic given that Spielberg directed a film one year earlier ostensibly about A.I. In truth, that film has little to say about technology …but much to say about humanity.

Like Minority Report, A.I. was very loosely based on an existing short story: Super-Toys Last All Summer Long by Brian Aldiss. It’s a perfectly-crafted short story that is deeply, almost unbearably, sad.

When I had the great privilege of interviewing Brian Aldiss, I tried to convey how much the story affected me.

Jeremy: …the short story is so sad, there’s such an incredible sadness to it that…

Brian: Well it’s psychological, that’s why. But I didn’t think it works as a movie; sadly, I have to say.

At the time of its release, the general consensus was that A.I. was a mess. It’s true. The film is a mess, but I think that, like Minority Report, it’s worth revisiting.

Watching now, A.I. feels like a horror film to me. The horror comes not—as we first suspect—from the artificial intelligence. The horror comes from the humans. I don’t mean the cruelty of the flesh fairs. I’m talking about the cruelty of Monica, who activates David’s unconditional love only to reject it (watching now, both scenes—the activation and the rejection—are equally horrific). Then there’s the cruelty of the people who created an artificial person capable of deep, never-ending love, without considering the implications.

There is no robot uprising in the film. The machines want only to fulfil their purpose. But by the end of the film, the human race is gone and the descendants of the machines remain. Based on the conduct of humanity that we’re shown, it’s hard to mourn our species’ extinction. For a film that was panned for being overly sentimental, it is a thoroughly bleak assessment of what makes us human.

The question of what makes us human underpins A.I., Minority Report, and the short stories that spawned them. With distance, it gets easier to brush aside the technological trappings and see the bigger questions beneath. As Al Robertson writes, it’s about leaving the future behind:

SF’s most enduring works don’t live on because they accurately predict tomorrow. In fact, technologically speaking they’re very often wrong about it. They stay readable because they think about what change does to people and how we cope with it.

Patterns Day speakers

Ticket sales for Patterns Day are going quite, quite briskly. If you’d like to come along, but you don’t yet have a ticket, you might want to remedy that. Especially when you hear about who else is going to be speaking…

Sareh Heidari works at the BBC building websites for a global audience, in as many as twenty different languages. If you want to know about strategies for using CSS at scale, you definitely want to hear this talk. She just stepped off stage at the excellent CSSconf EU in Berlin, and I’m so happy that Sareh’s coming to Brighton!

Patterns Day isn’t the first conference about design systems and pattern libraries on the web. That honour goes to the Clarity conference, organised by the brilliant Jina Anne. I was gutted I couldn’t make it to Clarity last year. By all accounts, it was excellent. When I started to form the vague idea of putting on an event here in the UK, I immediately contacted Jina to make sure she was okay with it—I didn’t want to step on her toes. Not only was she okay with it, but she really wanted to come along to attend. Well, never mind attending, I said, how about speaking?

I couldn’t be happier that Jina agreed to speak. She has had such a huge impact on the world of pattern libraries through her work with the Lightning design system, Clarity, and the Design Systems Slack channel.

The line-up is now complete. Looking at the speakers, I find myself grinning from ear to ear—it’s going to be an honour to introduce each and every one of them.

This is going to be such an excellent day of fun and knowledge. I can’t wait for June 30th!

Code (p)reviews

I’m not a big fan of job titles. I’ve always had trouble defining what I do as a noun—I much prefer verbs (“I make websites” sounds fine, but “website maker” sounds kind of weird).

Mind you, the real issue is not finding the right words to describe what I do, but rather figuring out just what the heck it is that I actually do in the first place.

According to the Clearleft website, I’m a technical director. That doesn’t really say anything about what I do. To be honest, I tend to describe my work these days in terms of what I don’t do: I don’t tend to write a lot of HTML, CSS, and JavaScript on client projects (although I keep my hand in with internal projects, and of course, personal projects).

Instead, I try to make sure that the people doing the actual coding—Mark, Graham, and Danielle—are happy and have everything they need to get on with their work. From outside, it might look like my role is managerial, but I see it as the complete opposite. They’re not in service to me; I’m in service to them. If they’re not happy, I’m not doing my job.

There’s another aspect to this role of technical director, and it’s similar to the role of a creative director. Just as a creative director is responsible for the overall direction and quality of designs being produced, I have oversight of the quality of front-end output. I don’t want to be a bottleneck in the process though, and to be honest, most of the time I don’t do much checking on the details of what’s being produced because I completely trust Mark, Graham, and Danielle to produce top quality code.

But I feel I should be doing more. Again, it’s not that I want to be a bottleneck where everything needs my approval before it gets delivered, but I hope that I could help improve everyone’s output.

Now the obvious way to do this is with code reviews. I do it a bit, but not nearly as much as I should. And even when I do, I always feel it’s a bit late to be spotting any issues. After all, the code has already been written. Also, who am I to try to review the code produced by people who are demonstrably better at coding than I am?

Instead I think it will be more useful for me to stick my oar in before a line of code has been written; to sit down with someone and talk through how they’re going to approach solving a particular problem, creating a particular pattern, or implementing a particular user story.

I suppose it’s really not that different to rubber ducking. Having someone to talk out loud with about potential solutions can be really valuable in my experience.

So I’m going to start doing more code previews. I think it will also incentivise me to do more code reviews—being involved in the initial discussion of a solution means I’m going to want to see the final result.

But I don’t think this should just apply to front-end code. I’d also like to exercise this role as technical director with the designers on a project.

All too often, decisions are made in the design phase that prove problematic in development. It usually works out okay, but it often means revisiting the designs in light of some technical considerations. I’d like to catch those issues sooner. That means sticking my nose in much earlier in the process, talking through what the designers are planning to do, and keeping an eye out for any potential issues.

So, as technical director, I won’t be giving feedback like “the colour’s not working for me” or “not sure about those type choices” (I’ll leave that to the creative director), but instead I can ask questions like “how will this work without hover?” or “what happens when the user does this?” as well as pointing out solutions that might be tricky or time-consuming to implement from a technical perspective.

What I want to avoid is the swoop’n’poop, when someone seagulls in after something has been designed or built and points out all the problems. The earlier in the process any potential issues can be spotted, the better.

And I think that’s my job.

Writing on the web

Some people have been putting Paul’s crazy idea into practice.

  • Mike revived his site a while back and he’s been posting gold dust ever since. I enjoy his no-holds-barred perspective on his time in San Francisco.
  • Garrett’s writing goes all the way back to 2005. The cumulative result is two fascinating interweaving narratives—one about his health, another about his business.
  • Charlotte has been documenting her move from Brighton to Sydney. Much as I love her articles about front-end development, I’m liking the slice-of-life updates on life down under even more.
  • Amber has a great way with words. As well as regularly writing on her blog, she’s two-thirds of the way through writing 100 words every day for 100 days.
  • Ethan has been writing about responsive design—of course—but it’s his more personal posts that make me really grateful for his site.
  • Jeffrey and Eric never stopped writing on their own sites. Sure, there’s good stuff on their sites about web design and development, but it’s the writing about their non-web lives that’s so powerful.

There are more people I could mention …but, to be honest, not that many more. Seems like most people are happy to only publish on Ev’s blog or not at all.

I know not everybody wants to write on the web, and that’s fine. But it makes me sad when people choose not to publish their thoughts because they think no-one will be interested, or that it’s all been said before. I understand where those worries come from, but I believe—no, I know—that they are unfounded.

It’s a world wide web out there. There’s plenty of room for everyone. And I, for one, love reading the words of others.

Progressive Web App questions

I got a nice email recently from Colin van Eenige. He wrote:

For my graduation project I’m researching the development of Progressive Web Apps and found your offline book called resilient web design. I was very impressed by the implementation of the website and it really was a nice experience.

I’m very interested in your vision on progressive web apps and what capabilities are waiting for us regarding offline content. Would it be fine if I’d send you some questions?

I said that would be fine, although I couldn’t promise a swift response. He sent me four questions. I finally got ‘round to sending my answers…

1. https://resilientwebdesign.com/ is an offline web book (progressive web app). What was the primary reason to make it available like this (besides the other formats)?

Well, given the subject matter, it felt right that the canonical version of the book should be not just online, but made with the building blocks of the web. The other formats are all nice to have, but the HTML version feels (to me) like the “real” book.

Interestingly, it wasn’t too much trouble for people to generate other formats from the HTML (ePub, MOBI, PDF), whereas I think trying to go in the other direction would be trickier.

As for the offline part, that felt like a natural fit. I had already done that with a previous book of mine, HTML5 For Web Designers, which I put online a year or two after its print publication. In that case, I used AppCache for the offline functionality. AppCache is horrible, but this use case might be one of the few where it works well: a static book that’s never going to change. Cache invalidation is one of the worst parts of using AppCache so by not having any kinds of updates at all, I dodged that bullet.

But when it came time for Resilient Web Design, a service worker was definitely the right technology. Still, I’ve got AppCache in there as well for the browsers that don’t yet support service workers.

2. What effect do you think Progressive Web Apps will have on content consuming and do you think these will take over the purpose of some Native Apps?

The biggest effect that service workers could have is to change the expectations that people have about using the web, especially on mobile devices. Right now, people associate the web on mobile with long waits and horrible spammy overlays. Service workers can help solve that first part.

If people then start adding sites to their home screen, that will be a great sign that the web is really holding its own. But I don’t think we should get too optimistic about that: for a user, there’s no difference between a prompt on their screen saying “add to home screen” and a prompt on their screen saying “download our app”—they’re equally likely to be dismissed because we’ve trained people to dismiss anything that covers up the content they actually came for.

It’s entirely possible that websites could start taking over much of the functionality that previously was only possible in a native app. But I think that inertia and habit will keep people using native apps for quite some time.

The big exception is in markets where storage space on devices is in short supply. That’s where the decision to install a native app isn’t taken lightly (given the choice between your family photos and an app, most people will reject the app). The web can truly shine here if we build lightweight, performant services.

Even in that situation, I’m still not sure how many people will end up adding those sites to their home screen (it might feel so similar to installing a native app that there may be some residual worry about storage space) but I don’t think that’s too much of a problem: if people get to a site via search or typing, that’s fine.

I worry that the messaging around “progressive web apps” is perhaps over-fetishising the home screen. I don’t think that’s the real battleground. The real battleground is in people’s heads; how they perceive the web and how they perceive native.

After all, if the average number of native apps installed in a month is zero, then that’s not exactly a hard target to match. :-)

3. What is your vision regarding Progressive Web Apps?

For me, progressive web apps don’t feel like a separate thing from making websites. I worry that the marketing of them might inflate expectations or confuse people. I like the idea that they’re simply websites that have taken their vitamins.

So my vision for progressive web apps is the same as my vision for the web: something that people use every day for all sorts of tasks.

I find it really discouraging that progressive web apps are becoming conflated with single page apps and the app shell model. Those architectural decisions have nothing to do with service workers, HTTPS, and manifest files. Yet I keep seeing the concepts used interchangeably. It would be a real shame if people chose not to use these great technologies just because they don’t classify what they’re building as an “app.”

If anything, it’s good ol’ fashioned content sites (newspapers, wikipedia, blogs, and yes, books) that can really benefit from the turbo boost of service worker+HTTPS+manifest.

I was at a conference recently where someone was giving a talk encouraging people to build progressive web apps but discouraging people from doing it for their own personal sites. That’s a horrible, elitist attitude. I worry that this attitude is being codified in the term “progressive web app”.

4. What is the biggest learning you’ve had since working on Progressive Web Apps?

Well, like I said, I think that some people are focusing a bit too much on the home screen and not enough on the benefits that service workers can provide to just about any website.

My biggest learning is that these technologies aren’t for a specific subset of services, but can benefit just about anything that’s on the web. I mean, just using a service worker to explicitly cache static assets like CSS, JS, and some images is a no-brainer for almost any project.
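
As a rough illustration—this is a generic sketch, not the service worker from any particular site of mine, and the cache name and file paths are placeholders—that kind of explicit caching of static assets can be as simple as this:

// A generic sketch: pre-cache some static assets at install time
// and serve them cache-first, falling back to the network.
// The cache name and file paths here are placeholders.
const staticCacheName = 'static-v1';
const staticAssets = [
  '/css/styles.css',
  '/js/main.js',
  '/images/logo.png'
];

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(staticCacheName)
    .then( cache => cache.addAll(staticAssets) )
  );
});

self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request)
    .then( responseFromCache => responseFromCache || fetch(event.request) )
  );
});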

So there you go—I’m very excited about the capabilities of these technologies, but very worried about how they’re being “sold”. I’m particularly nervous that in the rush to emulate native apps, we end up losing the very thing that makes the web so powerful: URLs.

Empire State

I’m in New York. Again. This time it’s for Google’s AMP Conf, where I’ll be giving ‘em a piece of my mind on a panel.

The conference starts tomorrow so I’ve had a day or two to acclimatise and explore. Seeing as Google are footing the bill for travel and accommodation, I’m staying at a rather nice hotel close to the conference venue in Tribeca. There’s live jazz in the lounge most evenings, a cinema downstairs, and should I request it, I can even have a goldfish in my room.

Today I realised that my hotel sits in the apex of a triangle of interesting buildings: carrier hotels.

32 Avenue Of The Americas. Telephone wires and radio unite to make neighbors of nations.

Looming above my hotel is 32 Avenue of the Americas. On the outside the building looks like your classic Gozer the Gozerian style of New York building. Inside, the lobby features a mosaic on the ceiling, and another on the wall extolling the connective power of radio and telephone.

The same architects also designed 60 Hudson Street, which has a similar Art Deco feel to it. Inside, there’s a cavernous hallway running through the ground floor but I can’t show you a picture of it. A security guard told me I couldn’t take any photos inside …which is a little strange seeing as it’s splashed across the website of the building.

60 Hudson. HEADQUARTERS The Western Union Telegraph Co. and telegraph capitol of the world, 1930–1973.

I walked around the outside of 60 Hudson, taking more pictures. Another security guard asked me what I was doing. I told her I was interested in the history of the building, which is true; it was the headquarters of Western Union. For much of the twentieth century, it was a world hub of telegraphic communication, in much the same way that a beach hut in Porthcurno was the nexus of the nineteenth century.

For a 21st century hub, there’s the third and final corner of the triangle at 33 Thomas Street. It’s a breathtaking building. It looks like a spaceship from a Chris Foss painting. It was probably designed more like a spacecraft than a traditional building—its primary purpose was to withstand an atomic blast. Gone are niceties like windows. Instead there’s an impenetrable monolith that looks like something straight out of a dystopian sci-fi film.

33 Thomas Street, New York.

Brutalist on the outside, its interior is host to even more brutal acts of invasive surveillance. The Snowden papers revealed this AT&T building to be a centrepiece of the Titanpointe programme:

They called it Project X. It was an unusually audacious, highly sensitive assignment: to build a massive skyscraper, capable of withstanding an atomic blast, in the middle of New York City. It would have no windows, 29 floors with three basement levels, and enough food to last 1,500 people two weeks in the event of a catastrophe.

But the building’s primary purpose would not be to protect humans from toxic radiation amid nuclear war. Rather, the fortified skyscraper would safeguard powerful computers, cables, and switchboards. It would house one of the most important telecommunications hubs in the United States…

Looking at the building, it requires very little imagination to picture it as the lair of villainous activity. Laura Poitras’s short film Project X basically consists of a voiceover of someone reading an NSA manual, some ominous background music, and shots of 33 Thomas Street looming in its oh-so-loomy way.

A top-secret handbook takes viewers on an undercover journey to Titanpointe, the site of a hidden partnership. Narrated by Rami Malek and Michelle Williams, and based on classified NSA documents, Project X reveals the inner workings of a windowless skyscraper in downtown Manhattan.

Audio book

I’ve recorded each chapter of Resilient Web Design as MP3 files that I’ve been releasing once a week. The final chapter is recorded and released so my audio work is done here.

If you want to subscribe to the podcast, pop this RSS feed into your podcast software of choice.

Or you can have it as one single MP3 file to listen to as an audio book. It’s two hours long.

So, for those keeping count, the book is now available as HTML, PDF, EPUB, MOBI, and MP3.

Making Resilient Web Design work offline

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing, so it’s safe to take a fairly aggressive offline-first approach. In fact, a static unchanging book is one of the few situations that AppCache works for. Of course a service worker is better, but until AppCache is removed from browsers (and until service worker is supported across the board), I’m using both. I wouldn’t recommend that for most sites though—for most sites, use a service worker as an enhancement, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers but I knew that there was a good chance that some of the content would be getting tweaked at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything; all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
      // If so, fetch a fresh copy from the network in the background
      // NETWORK
          // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
      // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fall back to a placeholder as a last resort.

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      return responseFromCache;
  }

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      // If so, fetch a fresh copy from the network in the background
      event.waitUntil(
          // NETWORK
          fetch(request)
          .then( responseFromFetch => {
              // Stash the fresh copy in the cache
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      return responseFromCache;
  }

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
.then( responseFromCache => {
  if (responseFromCache) {
      event.waitUntil(
          fetch(request)
          .then( responseFromFetch => {
              console.log('Got a response from the network.');
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      console.log('Got a response from the cache.');
      return responseFromCache;
  }

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all the browsers yet. The code I’ve written will fail.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using it when a file isn’t found in the cache, and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
  .then( responseFromFetch => {
    return responseFromFetch;
  })

I want to also store that response in the cache, but I want to do it asynchronously—I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it).

fetch(request)
  .then( responseFromFetch => {
    // Stash a fresh copy in the cache in the background
    let responseCopy = responseFromFetch.clone();
    event.waitUntil(
      caches.open(staticCacheName)
      .then( cache => {
          cache.put(request, responseCopy);
      })
    );
    return responseFromFetch;
  })
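
For completeness, the third part—the offline fallback I left aside earlier—hangs off the end of that network request as a catch clause. Here’s a sketch of the shape it takes; the file paths are placeholders standing in for whatever offline assets get cached at install time:

fetch(request)
  .then( responseFromFetch => {
    // ...stash a copy in the cache, as above...
    return responseFromFetch;
  })
  .catch( () => {
    // OFFLINE
    // If the request is for an image, show an offline placeholder
    if (request.headers.get('Accept').indexOf('image') !== -1) {
      return caches.match('/path/to/placeholder.svg');
    }
    // If the request is for a page, show an offline message
    return caches.match('/path/to/offline.html');
  })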

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.

Contact

I left the office one evening a few weeks back, and while I was walking up the street, James Box cycled past, waving a hearty good evening to me. I didn’t see him at first. I was in a state of maximum distraction. For one thing, there was someone walking down the street with a magnificent Irish wolfhound. If that weren’t enough to dominate my brain, I also had headphones in my ears through which I was listening to an audio version of a TED talk by Donald Hoffman called Do we really see reality as it is?

It’s fascinating—if mind-bending—stuff. It sounds like the kind of thing that’s used to justify Deepak Chopra style adventures in la-la land, but Hoffman is deliberately taking a rigorous approach. He knows his claims are outrageous, but he welcomes all attempts to falsify his hypotheses.

I’m not noticing this just from a short TED talk. It’s been one of those strange examples of synchronicity where his work has been popping up on my radar multiple times. There’s an article in Quanta magazine that was also republished in The Atlantic. And there’s a really good interview on the You Are Not So Smart podcast that I huffduffed a while back.

But the most unexpected place that Hoffman popped up was when I was diving down a SETI (or METI) rabbit hole. There I was reading about the Cosmic Call project and Lincos when I came across this article: Why ‘Arrival’ Is Wrong About the Possibility of Talking with Space Aliens, with its subtitle “Human efforts to communicate with extraterrestrials are doomed to failure, expert says.” The expert in question pulling apart the numbers in the Drake equation turned out to be none other than Donald Hoffman.

A few years ago, at a SETI Institute conference on interstellar communication, Hoffman appeared on the bill after a presentation by radio astronomer Frank Drake, who pioneered the search for alien civilizations in 1960. Drake showed the audience dozens of images that had been launched into space aboard NASA’s Voyager probes in the 1970s. Each picture was carefully chosen to be clearly and easily understood by other intelligent beings, he told the crowd.

After Drake spoke, Hoffman took the stage and “politely explained how every one of the images would be infinitely ambiguous to extraterrestrials,” he recalls.

I’m sure he’s quite right. But let’s face it, the Voyager golden record was never really about communicating with an alien intelligence …it was about how we present ourselves.

Vertical limit

When I was first styling Resilient Web Design, I made heavy use of vh units. The vertical spacing between elements—headings, paragraphs, images—was all proportional to the overall viewport height. It looked great!

Then I tested it on real devices.

Here’s the problem: when a page loads up in a mobile browser—like, say, Chrome on an Android device—the URL bar is at the top of the screen. The height of that piece of the browser interface isn’t taken into account for the viewport height. That makes sense: the viewport height is the amount of screen real estate available for the content. The content doesn’t extend into the URL bar, therefore the height of the URL bar shouldn’t be part of the viewport height.

But then if you start scrolling down, the URL bar scrolls away off the top of the screen. So now it’s behaving as though it is part of the content rather than part of the browser interface. At this point, the value of the viewport height changes: now it’s the previous value plus the height of the URL bar that was previously there but which has now disappeared.

I totally understand why the URL bar is squirrelled away once the user starts scrolling—it frees up some valuable vertical space. But because that necessarily means recalculating the viewport height, it effectively makes the vh unit in CSS very, very limited in scope.

In my initial implementation of Resilient Web Design, the one where I was styling almost everything with vh, the site was unusable. Every time you started scrolling, things would jump around. I had to go back to the drawing board and remove almost all instances of vh from the styles.

I’ve left it in for one use case and I think it’s the most common use of vh: making an element take up exactly the height of the viewport. The front page of the web book uses min-height: 100vh for the title.


But as soon as you scroll down from there, that element changes height. The content below it suddenly moves.

Let’s say the overall height of the browser window is 600 pixels, of which 50 pixels are taken up by the URL bar. When the page loads, 100vh is 550 pixels. But as soon as you scroll down and the URL bar floats away, the value of 100vh becomes 600 pixels.

(This also causes problems if you’re using vertical media queries. If you choose the wrong vertical breakpoint, then the media query won’t kick in when the page loads but will kick in once the user starts scrolling …or vice-versa.)

There’s a mixed message here. On the one hand, the browser is declaring that the URL bar is part of its interface; that the space is off-limits for content. But then, once scrolling starts, that is invalidated. Now the URL bar is behaving as though it is part of the content scrolling off the top of the viewport.

The result of this messiness is that the vh unit is practically useless for real-world situations with real-world devices. It works great for desktop browsers if you’re grabbing the browser window and resizing, but that’s not exactly a common scenario for anyone other than web developers.

I’m sure there’s a way of solving it with JavaScript but that feels like using an atomic bomb to crack a walnut—the whole point of having this in CSS is that we don’t need to use JavaScript for something related to styling.
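
For what it’s worth, the atomic bomb would look something like this: measure the real viewport height in JavaScript and hand it to the CSS as a custom property. This is just a sketch of the idea, not something I’m using:

// A sketch of the JavaScript workaround: expose the real viewport
// height as a custom property that CSS can use instead of 100vh,
// e.g. min-height: var(--real-vh, 100vh);
function setRealViewportHeight() {
  document.documentElement.style.setProperty('--real-vh', window.innerHeight + 'px');
}
window.addEventListener('resize', setRealViewportHeight);
setRealViewportHeight();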

It’s such a shame. A piece of CSS that’s great in theory, and is really well supported, just falls apart where it matters most.

Update: There’s a two-year-old bug report on this for Chrome, and it looks like it might actually get fixed in February.

Twenty sixteen

When I took a look at back at 2015, it was to remark on how nicely uneventful it was. I wish I could say the same about 2016. Instead, this was the year that too damned much kept happening.

The big picture was dominated by Brexit and Trump, disasters that are sure to shape events for years to come. I try to keep the even bigger picture in perspective and remind myself that our species is doing well, and that we’re successfully battling poverty, illiteracy, violence, pollution, and disease. But it’s so hard sometimes. I still think the overall trend for this decade will be two steps forward, but the closing half is almost certain to be one step back.

Some people close to me have had a really shitty year. More than anything, I wish I could do more to help them.

Right now I’m thinking that one of the best things I could wish for 2017 is for it to be an uneventful year. I’d really like it if the end-of-year round-up in 365 days time had no world-changing events.

But for me personally? 2016 was fine. I didn’t accomplish any big goals—although I’m very proud to have published Resilient Web Design—but I’ve had fun at work, and as always, I’m very grateful for all the opportunities that came my way.

I ate some delicious food…

Short rib. Seabass with carrot-top pesto on beet greens and carrot purée. Bratwurst. Sausage and sauerkraut. Short ribs. Homemade pappardelle with pig cheek ragu. Barbecued Thai chicken. Daily oyster. Kebab. Chicharrones. Nightfireburger. Ribeye.

I went to beautiful places…

Popped in to see Caravaggio and Holbein. Our home for the week. Bodleian go where no one has gone before. Tram. Amsterdam’s looking lovely this morning. Stockholm street. Mauer. Ah, Venice! Barcelona. Malibu sunset. Cuskinny.

And I got to hang out with some lovely doggies…

Mia! Archie is my favourite @EnhanceConf speaker. Mesa, Lola, and @wordridden. Rainier McChedderton! I met Zero! Yay! Thanks, @wilto. On the bright side, Huxley is in the @Clearleft office today. The day Herbie came to visit @Clearleft. It’s Daphne. Poppy’s on patrol. Morty! Scribble is a good dog. Sleepy.

Have a happy—and uneventful—new year!

The many formats of Resilient Web Design

If you don’t like reading in a web browser, you might like to know that Resilient Web Design is now available in more formats.

Jiminy Panoz created a lovely EPUB version. I tried it out in Apple’s iBooks app and it looks great. I tried to submit it to the iBooks store too, but that process threw up a few too many roadblocks.

Oliver Williams has created a MOBI version. That means you can read it on a Kindle. I plugged my old Kindle into my computer, dragged that file onto its disc image, and it worked a treat.

And there’s always the PDF versions; one in portrait and another in landscape format. Those were generated straight from the print styles.

Oh, and there’s the podcast. I’ve only released two chapters so far. The Christmas break and an untimely cold have slowed down the release schedule a little bit.

I’d love to make a physical, print-on-demand version of Resilient Web Design available—maybe through Lulu—but my InDesign skills are non-existent.

If you think the book should be available in any other formats, and you fancy having a crack at it, please feel free to use the source files.

Deep linking with fragmentions

When I was marking up Resilient Web Design I wanted to make sure that people could link to individual sections within a chapter. So I added IDs to all the headings. There’s no UI to expose that though—like the hover pattern that some sites use to show that something is linkable—so unless you know the IDs are there, there’s no way of getting at them other than “view source.”

But if you’re reading a passage in Resilient Web Design and you highlight some text, you’ll notice that the URL updates to include that text after a hash symbol. If that updated URL gets shared, then anyone following it should be sent straight to that string of text within the page. That’s fragmentions in action:

Fragmentions find the first matching word or phrase in a document and focus its closest surrounding element. The match is determined by the case-sensitive string following the # (or ## double-hash)

It’s a similar idea to Eric and Simon’s proposal to use CSS selectors as fragment identifiers, but using plain text instead. You can find out more about the genesis of fragmentions from Kevin. I’m using Jonathon Neal’s script with some handy updates from Matthew.
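
The core idea is small enough to sketch out in a few lines. This is just a naïve illustration of the concept, not the script I’m actually using:

// A naive sketch of the fragmention idea: take the text after the
// hash, find the first text node containing it, and scroll there.
function findFragmention() {
  var fragment = decodeURIComponent(window.location.hash.replace(/^#+/, ''));
  if (!fragment || document.getElementById(fragment)) {
    return; // no fragment, or it's an ordinary ID-based fragment
  }
  var walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  var node;
  while ((node = walker.nextNode())) {
    if (node.textContent.indexOf(fragment) !== -1) {
      node.parentElement.scrollIntoView();
      return;
    }
  }
}
window.addEventListener('DOMContentLoaded', findFragmention);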

I’m using the fragmention support to power the index of the book. It relies on JavaScript to work though, so Matthew has come to the rescue again and created a version of the site with IDs for each item linked from the index (I must get around to merging that).

The fragmention functionality is ticking along nicely with one problem…

I’ve tweaked the typography of Resilient Web Design to within an inch of its life, including a crude but effective technique to avoid widowed words at the end of a paragraph. The last two words of every paragraph are separated by a UTF-8 no-break space character instead of a regular space.

That solves the widowed words problem, but it confuses the fragmention script. Any selected text that includes the last two words of a paragraph fails to match. I’ve tried tweaking the script, but I’m stumped. If you fancy having a go, please have at it.

Update: And fixed! Thanks to Lee.

Print styles

I really wanted to make sure that the print styles for Resilient Web Design were pretty good—or at least as good as they could be given the everlasting lack of support for many print properties in browsers.

Here’s the first thing I added:

@media print {
  @page {
    margin: 1in 0.5in 0.5in;
    orphans: 4;
    widows: 3;
  }
}

That sets the margins of printed pages in inches (I could’ve used centimetres but the numbers were nice and round in inches). The orphans: 4 declaration says that if fewer than four lines of a paragraph would be left at the bottom of a page, shunt them onto the next page. And widows: 3 declares that there shouldn’t be fewer than three lines of a paragraph left alone at the top of a page (instead more lines will be carried over from the previous page).

I always get widows and orphans confused so I remind myself that orphans are left alone at the start; widows are left alone at the end.

I try to make sure that some elements don’t get their content split up over page breaks:

@media print {
  p, li, pre, figure, blockquote {
    page-break-inside: avoid;
  }
}

I don’t want headings appearing at the end of a page with no content after them:

@media print {
  h1,h2,h3,h4,h5 {
    page-break-after: avoid;
  }
}

But sections should always start with a fresh page:

@media print {
  section {
    page-break-before: always;
  }
}

There are a few other little tweaks to hide some content from printing, but that’s pretty much it. Using print preview in browsers showed some pretty decent formatting. In fact, I used the “Save as PDF” option to create the PDF versions of the book. The portrait version comes from Chrome’s preview. The landscape version comes from Firefox, which offers more options under “Layout”.

For some more print style suggestions, have a look at the article I totally forgot about print style sheets. There’s also tips and tricks for print style sheets on Smashing Magazine. That includes a clever little trick for generating QR codes that only appear when a document is printed. I’ve used that technique for some page types over on The Session.

The typography of a web book

I’m a sucker for classic old-style serif typefaces: Caslon, Baskerville, Bembo, Garamond …I love ‘em. That’s probably why I’ve always found the typesetting in Edward Tufte’s books so appealing—he always uses a combination of Bembo for body copy and Gill Sans for headings.

Earlier this year I stumbled on a screen version of Bembo used for Tufte’s digital releases called ET Book. Best of all, it’s open source:

ET Book is a Bembo-like font for the computer designed by Dmitry Krasny, Bonnie Scranton, and Edward Tufte. It is free and open-source.

When I was styling Resilient Web Design, I knew that the choice of typeface would be one of the most important decisions I would make. Remembering that open source ET Book font, I plugged it in to see how it looked. I liked what I saw. I found it particularly appealing when it’s full black on full white at a nice big size (with lower contrast or sizes, it starts to get a bit fuzzy).

I love, love, love the old-style numerals of ET Book. But I was disappointed to see that ligatures didn’t seem to be coming through (even when I had enabled them in CSS). I mentioned this to Rich and of course he couldn’t resist doing a bit of typographic sleuthing. It turns out that the ligature glyphs are there in the source files but the files needed a little tweaking to enable them. Because the files are open source, Rich was able to tweak away to his heart’s content. I then took the tweaked open type files and ran them through Font Squirrel to generate WOFF and WOFF2 files. I’ve put them on Github.

For this book, I decided that the measure would be the priority. I settled on a measure of around 55 to 60 characters—about 10 or 11 words per line. I used a max-width of 27em combined with Mike’s brilliant fluid type technique to maintain a consistent measure.

It looks great on small-screen devices and tablets. On large screens, the font size starts to get really, really big. Personally, I like that. Lots of other people like it too. But some people really don’t like it. I should probably add a font-resizing widget for those who find the font size too shocking on luxuriously large screens. In the meantime, their only recourse is to fork the CSS to make their own version of the book with more familiar font sizes.

The visceral reaction a few people have expressed to the font size reminds me of the flak Jeffrey received when he redesigned his personal site a few years back:

Many people who’ve visited this site since the redesign have commented on the big type. It’s hard to miss. After all, words are practically the only feature I haven’t removed. Some of the people say they love it. Others are undecided. Many are still processing. A few say they hate it and suggest I’ve lost my mind.

I wonder how the people who complained then are feeling now, a few years on, in a world with Medium in it? Jeffrey’s redesign doesn’t look so extreme any more.

Resilient Web Design will be on the web for a very, very, very long time. I’m curious to see if its type size will still look shockingly large in years to come.

Introducing Resilient Web Design

I wrote a thing. The thing is a book. But the book is not published on paper. This book is on the web. It’s a web book. Or “wook” if you prefer …please don’t prefer. Here it is:

Resilient Web Design.

It’s yours for free.

Much of the subject matter will be familiar if you’ve seen my conference talks from the past couple of years, particularly Enhance! and Resilience. But the book ended up taking some twists and turns that surprised me. It turned out to be a bit of a history book: the history of design, the history of the web.

Resilient Web Design is a short book. It’s between sixteen and seventeen thousand words long. You could read the whole thing in a couple of hours. Or—because the book has seven chapters—you could take fifteen to twenty minutes out of a day to read one chapter and you’d have the whole thing read in a week.

If you make websites in any capacity, I hope that this book will resonate with you. Even if you don’t make websites, I still hope there’s an interesting story in there for you.

You can read the whole book on the web, but if you’d rather have a single file to carry around, I’ve made some PDFs as well: one in portrait, one in landscape.

I’ve licensed the book quite liberally. It’s released under a Creative Commons attribution share-alike licence. That means you can re-use the material in any way you want (even commercial usage) as long as you provide some attribution and use the same licence. So if you’d like to release the book in some other format like ePub or anything, go for it.

I’m currently making an audio version of Resilient Web Design. I’ll be releasing it one chapter at a time as a podcast. Here’s the RSS feed if you want to subscribe to it. Or you can subscribe directly from iTunes.

I took my sweet time writing this book. I wrote the first chapter in March 2015. I wrote the last chapter in May 2016. Then I sat on it for a while, figuring out what to do with it. Eventually I decided to just put the whole thing up on the web—it seems fitting.

Whereas the writing took over a year of solid procrastination, making the website went surprisingly quickly. After one weekend of marking up and styling, I had most of it ready to go. Then I spent a while tweaking. The source files are on Github.

I’m pretty happy with the end result. I’ll write a bit more about some of the details over the next while—the typography, the offline functionality, print styles, and stuff like that. In the meantime, I hope you’ll peruse this little book at your leisure…

Resilient Web Design.

If you like it, please spread the word.

Between the braces

In a post called Side Effects in CSS that he wrote a while back, Philip Walton talks about different kinds of challenges in writing CSS:

There are two types of problems in CSS: cosmetic problems and architectural problems.

The cosmetic problems are solved by making something look the way you want it to. The architectural problems are trickier because they have more long-term effects—maintainability, modularity, encapsulation; all that tricky stuff. Philip goes on to say:

If I had to choose between hiring an amazing designer who could replicate even the most complicated visual challenges easily in code and someone who understood the nuances of predictable and maintainable CSS, I’d choose the latter in a heartbeat.

This resonates with something I noticed a while back while I was doing some code reviews. Most of the time when I’m analysing CSS and trying to figure out how “good” it is—and I know that’s very subjective—I’m concerned with what’s on the outside of the curly braces.

selector {
    property: value;
}

The stuff inside the curly braces—the properties and values—that’s where the cosmetic problems get solved. It’s also the stuff that you can look up; I certainly don’t try to store all possible CSS properties and values in my head. It’s also easy to evaluate: Does it make the thing look like you want it to look? Yes? Good. It works.

The stuff outside the curly braces—the selectors—that’s harder to judge. It needs to be evaluated with lots of “what ifs”: What if this selects something you didn’t intend to? What if the markup changes? What if someone else writes some CSS that negates this?

I find it fascinating that most of the innovation in CSS from the browser makers and standards bodies arrives in the form of new properties and values—flexbox, grid, shapes, viewport units, and so on. Meanwhile there’s a whole other world of problems to be solved outside the curly braces. There’s not much that the browser makers or standards bodies can do to help us there. I think that’s why most of the really interesting ideas and thoughts around CSS in recent years have focused on that challenge.

Resilience retires

I spoke at the GOTO conference in Berlin this week. It was the final outing of a talk I’ve been giving for about a year now called Resilience.

Looking back over my speaking engagements, I reckon I must have given this talk—in one form or another—about sixteen times. If by some statistical fluke or through skilled avoidance strategies you managed not to see the talk, you can still have it rammed down your throat by reading a transcript of the presentation.

That particular outing is from Beyond Tellerrand earlier this year in Düsseldorf. That’s one of the events that recorded a video of the talk. Here are all the videos of it I could find:

Or, if you prefer, here’s an audio file. And here are the slides but they won’t make much sense by themselves.

Resilience is a mixture of history lesson and design strategy. The history lesson is about the origins of the internet and the World Wide Web. The design strategy is a three-pronged approach:

  1. Identify core functionality.
  2. Make that functionality available using the simplest technology.
  3. Enhance!

And if you like that tweet-sized strategy, you can get it on a poster. Oh, and check this out: Belgian student Sébastian Seghers published a school project on the talk.

Now, you might be thinking that the three-headed strategy sounds an awful lot like progressive enhancement, and you’d be right. I think every talk I’ve ever given has been about progressive enhancement to some degree. But with this presentation I set myself a challenge: to talk about progressive enhancement without ever using the phrase “progressive enhancement”. This is something I wrote about last year—if the term “progressive enhancement” is commonly misunderstood by the very people who would benefit from hearing this message, maybe it’s best to not mention that term and talk about the benefits of progressive enhancement instead: robustness, resilience, and technical credit. I think that little semantic experiment was pretty successful.

While the time has definitely come to retire the presentation, I’m pretty pleased with it, and I feel like it got better with time as I adjusted the material. The most common format for the talk was 40 to 45 minutes long, but there was an extended hour-long “director’s cut” that only appeared at An Event Apart. That included an entire subplot about Arthur C. Clarke and the invention of the telegraph (I’m still pretty pleased with the segue I found to weave those particular threads together).

Anyway, with the Resilience talk behind me, my mind is now occupied with the sequel: Evaluating Technology. I recently shared my research material for this one and, as you may have gathered, it takes me a loooong time to put a presentation like this together (which, by the same token, is one of the reasons why I end up giving the same talk multiple times within a year).

This new talk had its debut at An Event Apart in San Francisco two weeks ago. Jeffrey wrote about it and I’m happy to say he liked it. This bodes well—I’m already booked in for An Event Apart Seattle in April. I’ll also be giving an abridged version of this new talk at next year’s Render conference.

But that’s it for my speaking schedule for now. 2016 is all done and dusted, and 2017 is looking wide open. I hope I’ll get some more opportunities to refine and adjust the Evaluating Technology talk at some more events. If you’re a conference organiser and it sounds like something you’d be interested in, get in touch.

In the meantime, it’s time for me to pack away the Resilience talk and wheel it down into the archives, just like the closing scene of Raiders Of The Lost Ark. The music swells. The credits roll. The image fades to black.

The road to Indie Web Camp LA

After An Event Apart San Francisco, which was—as always—excellent, it was time for me to get to the next event: Indie Web Camp Los Angeles. But I wasn’t going alone. Tantek was going too, and seeing as he has a car—a convertible, even—what better way to travel from San Francisco to LA than on the Pacific Coast Highway?

It was great—travelling through the land of Steinbeck and Guthrie at the speed of Kerouac and Springsteen. We stopped for the night at Pismo Beach and then continued on, rolling into Santa Monica at sunset.

Half Moon Bay. Roadtripping with @t. Pomponio beach. Windswept. Salinas. Refueling. Driving through the Californian night. Pismo Beach. On the beach. On the beach with @t. Stopping for a coffee in Santa Barbara. Leaving Pismo Beach. Chevron. Santa Barbara steps. On the road. Driving through Malibu. Malibu sunset. Sun worshippers. Sunset in Santa Monica.

The weekend was spent in the usual Indie Web Camp fashion: a day of BarCamp-style discussions, followed by a day of hacking on our personal websites.

I decided to follow on from what I did at the Brighton Indie Web Camp. There, I made a combined tag view—a way of seeing, for example, everything tagged with “indieweb” instead of just journal entries tagged with “indieweb” or links tagged with “indieweb”. I wanted to do the same thing with my archives. I have separate archives for my journal, my links, and my notes. What I wanted was a combined view.

After some hacking, I got it working. So now you can see combined archives by year, month, and day (I managed to add a sparkline to the month view as well).

I did face a bit of a conundrum. Both my home page stream and my tag pages show posts in reverse chronological order, with the newest posts at the top. I’ve decided to replicate that for the archive view, but I’m not sure if that’s the right decision. Maybe the list of years should begin with 2001 and end with 2016, instead of the other way around. And maybe when you’re looking at a month of posts, you should see the first posts in that month at the top.

Anyway, I’ll live with it in reverse chronological order for a while and see how it feels. I’m just glad I managed to get it done—I’ve been meaning to do it for quite a while. Once again, I’m amazed by how much gets accomplished when you’re in the same physical space as other helpful, motivated people all working on improving their indie web presence, little by little.

Greetings from Indie Web Camp LA. Indie Web Camping. Hacking away. Day two of Indie Web Camp LA.