Brian Aldiss

After the eclipse I climbed down from the hilltop and reconnected with the world. That’s when I heard the news. Brian Aldiss had passed away.

He had a good innings. A very good innings. He lived to 92 and was writing right up to the end.

I’m trying to remember the first thing I read by Brian Aldiss. I think it might have been Billion Year Spree, his history of science fiction. The library in my hometown had a copy when I was growing up, and I was devouring everything SF-related.

Decades later I had the great pleasure of meeting the man. It was 2012 and I was in charge of putting together the line-up for that year’s dConstruct. I had the brilliant Lauren Beukes on the line-up all the way from South Africa and I thought it would be fun to organise some kind of sci-fi author event the evening before. Well, one thing led to another: Rifa introduced me to Tim Aldiss, who passed along a request to his father, who kindly agreed to come to Brighton for the event. Then Brighton-based Jeff Noon came on board. The end result was an hour and a half in the company of three fantastic—and fantastically different—authors.

I had the huge honour of moderating the event. Here’s the transcript of that evening and here’s the audio.

That evening and the subsequent dConstruct talks—including the mighty James Burke—combined to create one of the greatest weekends of my life. Seriously. I thought it was just me, but Chris has also written about how special that author event was.

Brian Aldiss, Jeff Noon, and Lauren Beukes on the Brighton SF panel, chaired by Jeremy Keith

Brian Aldiss was simply wonderful that evening. He regaled us with the most marvellous stories, at times hilarious, at other times incredibly touching. He was a true gentleman.

I’m so grateful that I’ll always have the memory of that evening. I’m also very grateful that I have so many Brian Aldiss books still to read.

I’ve barely made a dent in the ludicrously prolific output of the man. I’ve read just some of his books:

  • Non-stop—I’m a sucker for generation starship stories,
  • Hothouse—ludicrously lush and trippy,
  • Greybeard—a grim vision of a childless world before Children Of Men,
  • The Hand-reared Boy—filthy, honest and beautifully written,
  • Helliconia Spring—a deep-time epic …and I haven’t even read the next two books in the series!

Then there are the short stories. Hundreds of ‘em! Most famously Super-Toys Last All Summer Long—inspiration for the Kubrick/Spielberg A.I. film. It’s one of the most incredibly sad stories I’ve ever read. I find it hard to read it without weeping.

Passed by a second-hand book stall on the way into work. My defences were down. Not a bad haul for a fiver.

Whenever a great artist dies, it has become a cliché to say that they will live on through their work. In the case of Brian Aldiss and his astounding output, it’s quite literally true. I’m looking forward to many, many years of reading his words.

My sincerest condolences to his son Tim, his partner Alison, and everyone who knew and loved Brian Aldiss.

60 seconds over Idaho

I lived in Germany for the latter half of the nineties. On August 11th, 1999, parts of Germany were in the path of a total eclipse of the sun. Freiburg—the town where I was living—wasn’t in the path, so Jessica and I travelled north with some friends to Karlsruhe.

The weather wasn’t great. There was quite a bit of cloud cover, but at the moment of totality, the clouds had thinned out enough for us to experience the incredible sight of a black sun.

(The experience was only slightly marred by the nearby idiot who took a picture with the flash on right before totality. Had my eyesight not adjusted in time, he would still be carrying that camera around with him in an anatomically uncomfortable place.)

Eighteen years and eleven days later, Jessica and I climbed up a hill to see our second total eclipse of the sun. The hill is in Sun Valley, Idaho.

Here comes the sun.

Travelling thousands of miles just to witness something that lasts for a minute might seem disproportionate, but if you’ve ever been in the path of totality, you’ll know what an awe-inspiring sight it is (if you’ve only seen a partial eclipse, trust me—there’s no comparison). There’s a primitive part of your brain screaming at you that something is horribly, horribly wrong with the world, while another part of your brain is simply stunned and amazed. Then there’s the logical part of your brain which is trying to grasp the incredible good fortune of this cosmic coincidence—that the sun is 400 times bigger than the moon and also happens to be 400 times the distance away.

This time viewing conditions were ideal. Not a cloud in the sky. It was beautiful. We even got a diamond ring.

I like to think I can be fairly articulate, but at the moment of totality all I could say was “Oh! Wow! Oh! Holy shit! Woah!”

Totality

Our two eclipses were separated by eighteen years, but they’re connected. The Saros 145 cycle has been repeating since 1639 and will continue until 3009, although the number of total eclipses only runs from 1927 to 2648.

Eighteen years and twelve days ago, we saw the eclipse in Germany. Yesterday we saw the eclipse in Idaho. In eighteen years and ten days’ time, we plan to be in Japan or China.

Posting to my site

I was idly thinking about the different ways I can post to adactio.com. I decided to count the ways.

Admin interface

This is the classic CMS approach. In my case the CMS is a crufty hand-rolled affair using PHP and MySQL that I wrote years ago. I log in to an admin interface and fill in a form, putting the text of my posts into a textarea. In truth, I usually write in a desktop text editor first, and then paste that into the textarea. That’s what I’m doing now—copying and pasting Markdown from the Typed app.

Directly from my site

If I’m logged in, I get a stripped down posting interface in the notes section of my site.

Notes posting interface

Bookmarklet

This is how I post links. When I’m at a URL I want to bookmark, I hit the “Bookmark it” bookmarklet in my browser’s bookmarks bar. That pops open a version of the admin interface tailored specifically for links. I really, really like bookmarklets. The one big downside is that they don’t work on mobile.
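
A bookmarklet is just a little bit of JavaScript saved as a bookmark. Something along these lines would do the trick—the admin URL and parameter names here are placeholders, not the actual ones on adactio.com:

javascript:(function () {
  // Hypothetical admin URL and query parameters — purely illustrative.
  var admin = 'https://example.com/admin/links/new';
  window.open(
    admin +
    '?url=' + encodeURIComponent(window.location.href) +
    '&title=' + encodeURIComponent(document.title)
  );
})();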

Text message

This is something I knocked together at Indie Web Camp Brighton 2015 using the Twilio API. It’s handy for posting notes if I’m travelling somewhere and data is at a premium. But I don’t use it that often.

Instagram

Thanks to Aaron’s OwnYourGram service—and the fact that my site has a micropub endpoint—I can post images from Instagram to my site. This used to happen instantaneously but Instagram changed their API rules for the worse. Between that and their shitty “algorithmic” timeline, I find myself using the service less and less. At this point I’m only on there for the doggos.

Swarm

Like OwnYourGram, Aaron’s OwnYourSwarm allows me to post check-ins and photos from the Swarm app to my site. Again, micropub makes it all possible.

OwnYourGram and OwnYourSwarm are very similar and could probably be abstracted into a generic service for posting from third-party apps to micropub endpoints. I’d quite like to post my check-ins on Untappd to my site.

Other people’s admin interfaces

Thanks to rel="me" and IndieAuth, I can log into other people’s posting interfaces using my own website as the log-in, and post to my micropub endpoint, like this. Quill is a good example of this. I don’t use it that much, but I really should—the editor interface is quite Medium-like in its design.

Anyway, those are the different ways I can update my website that I can think of right now.

Syndication

In terms of output, I’ve got a few different ways of syndicating what I post here:

Just so you know, if you comment on one of my posts on Facebook, I probably won’t see it. But if you reply to a copy of one of my posts on Twitter or Instagram, it will show up over here on adactio.com thanks to the magic of Brid.gy and webmention.

Container queries

Every single browser maker has the same stance when it comes to features—they want to hear from developers at the coalface.

“Tell us what you want! We’re listening. We want to know which features to prioritise based on real-world feedback from developers like you.”

“How about container quer—”

“Not that.”

I don’t think it’s an exaggeration to say that literally every web developer I know would love to have container queries. If you’ve worked on any responsive project of any size, you’re bound to have bumped up against the problem of only being able to respond to viewport size, rather than the size of the containing element. Without container queries, our design systems can never be truly modular.

But there’s a divide growing between what our responsive designs need to do, and the tools CSS gives us to meet those needs. We’re making design decisions at smaller and smaller levels, but our code asks us to bind those decisions to a larger, often-irrelevant abstraction of a “page.”

But the message from browser makers has consistently been “it’s simply too hard.”

At the Frontend United conference in Athens a little while back, Jonathan gave a whole talk on the need for container queries. At the same event, Serg gave a talk on Houdini.

Now, as I understand it, Houdini is the CSS arm of the extensible web. Just as web components will allow us to create powerful new HTML without lobbying browser makers, Houdini will allow us to create powerful new CSS features without going cap-in-hand to standards bodies.

At this year’s CSS Day there were two Houdini talks. Tab gave a deep dive, and Philip talked specifically about Houdini as a breakthrough for polyfilling.

During the talks, you could send questions over Twitter that the speaker could be quizzed on afterwards. As Philip was talking, I began to tap out a question: “Could this be used to polyfill container queries?” My thumb was hovering over the tweet button at the very moment that Philip said in his talk, “This could be used to polyfill container queries.”

For that to happen, browsers need to implement the layout API for Houdini. But I’m betting that browser makers will be far more receptive to calls to implement the layout API than calls for container queries directly.

Once we have that, there are two possible outcomes:

  1. We try to polyfill container queries and find out that the browser makers were right—it’s simply too hard. This certainty is itself a useful outcome.
  2. We successfully polyfill container queries, and then instead of asking browser makers to figure out implementation, we can hand it to them for standardisation.

But, as Eric Portis points out in his talk on container queries, Houdini is still a ways off (by the way, browser makers, that’s two different conference talks I’ve mentioned about container queries, just in case you were keeping track of how much developers want this).

Still, there are some CSS features that are Houdini-like in their extensibility. Custom properties feel like they could be wrangled to help with the container query problem. While it’s easy to think of custom properties as being like Sass variables, they’re much more powerful than that—the fact they can be a real-time bridge between JavaScript and CSS makes them scriptable. Alas, custom properties can’t be used in media queries but maybe some clever person can figure out a way to get the effect of container queries without a query-like syntax.
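
As a rough sketch of that scriptability—nothing like a full container query solution—you could imagine measuring elements with JavaScript and exposing their widths as custom properties for the stylesheet to respond to (the class name here is hypothetical):

function updateContainerWidths() {
  document.querySelectorAll('.component').forEach(function (el) {
    // Expose the component's current width to its CSS as a custom property.
    el.style.setProperty('--container-width', el.offsetWidth + 'px');
  });
}
updateContainerWidths();
window.addEventListener('resize', updateContainerWidths);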

However it happens, I’d just love to see some movement on container queries. I’m not alone.

I know container queries would revolutionize my design practice, and better prepare responsive design for mobile, desktop, tablet—and whatever’s coming next.

Patterns Day

Patterns Day is over. It was all I hoped it would be and more.

I’ve got that weird post-conference feeling now, where that all-consuming thing that was ahead of you is now behind you, and you’re not quite sure what to do. Although, comparatively speaking, Patterns Day came together pretty quickly. I announced it less than three months ago. It sold out just over a month later. Now it’s over and done with, it feels like a whirlwind.

The day itself was also somewhat whirlwind-like. It was simultaneously packed to the brim with great talks, and yet over in the blink of an eye. Everyone who attended seemed to have a good time, which makes me very happy indeed. Although, as I said on the day, while it’s nice that everyone came along, I put the line-up together for purely selfish reasons—it was my dream line-up of people I wanted to see speak.

Boy, oh boy, did they deliver the goods! Every talk was great. And I must admit, I was pleased with how I had structured the event. The day started and finished with high-level, almost philosophical talks; the mid section was packed with hands-on nitty-gritty practical examples.

Thanks to sponsorship from Amazon UK, Craig was videoing all the talks. I’ll get them online as soon as I can. But in the meantime, Drew got hold of the audio and made mp3s of each talk. They are all available in handy podcast form for your listening and huffduffing pleasure:

  1. Laura Elizabeth
  2. Ellen de Vries
  3. Sareh Heidari
  4. Rachel Andrew
  5. Alice Bartlett
  6. Jina Anne
  7. Paul Lloyd
  8. Alla Kholmatova

If you’re feeling adventurous, you can play the Patterns Day drinking game while you listen to the talks:

  • Any time someone says “Lego”, take a drink,
  • Any time someone references Christopher Alexander, take a drink,
  • Any time someone says that naming things is hard, take a drink,
  • Any time someone says “atomic design”, take a drink, and
  • Any time someone says “Bootstrap”, puke the drink back up.

In between the talks, the music was provided courtesy of some Brighton-based artists.

Hidde de Vries has written up an account of the day. Stu Robson has also published his notes from each talk. Sarah Drummond wrote down her thoughts on Ev’s blog.

I began the day by predicting that Patterns Day would leave us with more questions than answers …but that they would be the right questions. I think that’s pretty much what happened. Quite a few people compared it to the first Responsive Day Out in tone. I remember a wave of relief flowing across the audience when Sarah opened the show by saying:

I think if we were all to be a little more honest when we talk to each other than we are at the moment, the phrase “winging it” would be something that would come up a lot more often. If you actually speak to people, not very many people have a process for this at the moment. Most of us are kind of winging it.

  • This is hard.
  • No one knows exactly what they’re doing.
  • Nobody has figured this out yet.

Those sentiments were true of responsive design in 2013, and they’re certainly true of design systems in 2017. That’s why I think it’s so important that we share our experiences—good and bad—as we struggle to come to grips with these challenges. That’s why I put Patterns Day together. That’s also why, at the end of the day, I thanked everyone who has ever written about, spoken about, or otherwise shared their experience with design systems, pattern libraries, style guides, and components. And of course I made sure that everyone gave Anna a great big round of applause for her years of dedicated service—I wish she could’ve been there.

There were a few more “thank you”s at the end of the day, and all of them were heartfelt. Thank you to Felicity and everyone else at the Duke of York’s for the fantastic venue and making sure everything went so smoothly. Thank you to AVT for all the audio/visual wrangling. Thanks to Amazon for sponsoring the video recordings, and thanks to Deliveroo for sponsoring the tea, coffee, pastries, and popcorn (they’re hiring, by the way). Huge thanks to Alison and everyone from Clearleft who helped out on the day—Hana, James, Rowena, Chris, Benjamin, Seb, Jerlyn, and most especially Alis who worked behind the scenes to make everything go so smoothly. Thanks to Kai for providing copies of Offscreen Magazine for the taking. Thanks to Marc and Drew for taking lots of pictures. Thanks to everyone who came to Patterns Day, especially the students and organisers from Codebar Brighton—you are my heroes.

Most of all thank you, thank you, thank you, to the eight fantastic speakers who made Patterns Day so, so great—I love you all.

Laura Ellen Sareh Rachel Alice Jina Paul Alla

Talking with the tall man about poetry

When I started making websites in the 1990s, I had plenty of help. The biggest help came from the ability to view source on any web page—the web was a teacher of itself. I also got plenty of help from people who generously shared their knowledge and experience. There was Jeffrey’s Ask Dr. Web, Steve Champeon’s WebDesign-L mailing list, and Jeff Veen’s articles on Webmonkey. Years later, I was able to meet those people. That was a real privilege.

I’ve known Jeff for over a decade now. He’s gone from Adaptive Path to Google to TypeKit to Adobe to True Ventures, and it’s always fascinating to catch up with him and get his perspective on life, the universe, and everything.

He started up a podcast called Presentable about a year ago. It’s worth having a dig through the archives to have a listen to his chats with people like Andy, Jason, Anna, and Jessica. I was honoured when Jeff asked me to be on the show.

We ended up having a really good chat. It’s out now as Episode 25: The Tenuous Resilience of the Open Web. I really enjoyed having a good ol’ natter, and I hope you might enjoy listening to it.

‘Sfunny, but I feel like a few unplanned themes came up a few times. We ended up talking about art, but also about the scientific aspects of design. I couldn’t help but be reminded of the title of Jeff’s classic book, The Art and Science of Web Design.

We also talked about my most recent book, Resilient Web Design, and that’s when I noticed another theme. When discussing the web-first nature of publishing the book, I described the web version as the canonical version and all the other formats as copies that were generated from that. That sounds a lot like how I describe the indie web—something else we discussed—where you have the canonical instance on your own site but share copies on social networks: Publish on Own Site, Syndicate Elsewhere—POSSE.

We also talked about technologies, and it’s entirely possible that we sound like two old codgers on the front porch haranguing those damn kids on the lawn. You can be the judge of that. The audio is available for your huffduffing pleasure. If you enjoy listening to it half as much as I enjoyed doing it, then I enjoyed it twice as much as you.

eLife goes live

The World Wide Web was forged in the crucible of science. Tim Berners-Lee was working at CERN, the European Organization for Nuclear Research, a remarkable place where the pursuit of knowledge—rather than the pursuit of profit—is the driving force.

I often wonder whether the web as we know it—an open, decentralised system—could’ve been born anywhere else. These days it’s easy to focus on the success stories of the web in the worlds of commerce and social networking, but I still find there’s something about science that really “clicks” with the web (Zooniverse being a classic example).

At Clearleft we’ve been lucky enough to work on science-driven projects like the Wellcome Library and the Wellcome Trust. It’s incredibly rewarding to work on projects where the bottom line is measured in knowledge-sharing rather than moolah. So when we were approached by eLife to help them with an upcoming redesign, we jumped at the chance.

We usually help organisations through our expertise in user-centred design, but in this case the design and UX were already in hand. The challenge was in the implementation. The team at eLife knew that they wanted a modular pattern library to keep their front-end components documented and easily reusable. Given Clearleft’s extensive experience with building pattern libraries, this was a match made in heaven (or whatever the scientific non-theistic equivalent of heaven is).

A group of us travelled up from Brighton to Cambridge to kick things off with a workshop. Before diving into code, it was important to set out the aims for the redesign, and figure out how a pattern library could best support those aims.

Right away, I was struck by the great working relationship between design and front-end development within eLife—there was a great collaborative spirit to the endeavour.

Some goals for the redesign soon emerged:

  • Promote the HTML reading experience as a first choice for readers.
  • Align the online experience with the eLife visual identity.

That led to some design principles:

  • Focus on content not site furniture.
  • Remove visual clutter and provide no more than the user needs at any stage of the experience.
  • Aid discovery of value added content beyond the manuscript.

Those design principles then informed the front-end development process. Together we came up with a priority of concerns:

  1. Access
  2. Maintainability
  3. Performance
  4. Taking advantage of browser capabilities
  5. Visual appeal

It’s interesting that maintainability was such a high priority that it superseded even performance, but we also proposed a hypothesis at the same time:

Maintainability doesn’t negatively impact performance.

The combination of the design principles and priorities led us to formulate approaches that could be used throughout the project:

  • Progressive enhancement.
  • Small-screen first responsive images.
  • Only add libraries as needed.

Then we dived into the tech stack: build tools, version control approaches, and naming methodologies. BEM was the winner there.

None of those decisions were set in stone, but they really helped to build a solid foundation for the work ahead. Graham camped out in Cambridge for a while, embedding himself in the team there as they began the process of identifying, naming, and building the components.

The work continued after Clearleft’s involvement wrapped up, and I’m happy to say that it all paid off. The new eLife site has just gone live. It’s looking—and performing—beautifully.

What a great combination: the best of the web and the best of science!

eLife is a non-profit organisation inspired by research funders and led by scientists. Our mission is to help scientists accelerate discovery by operating a platform for research communication that encourages and recognises the most responsible behaviours in science.

Month maps

One of the topics I enjoy discussing at Indie Web Camps is how we can use design to display activity over time on personal websites. That’s how I ended up with sparklines on my site—it was a direct result of a discussion at Indie Web Camp Nuremberg a year ago:

During the discussion at Indie Web Camp, we started looking at how silos design their profile pages to see what we could learn from them. Looking at my Twitter profile, my Instagram profile, my Untappd profile, or just about any other profile, it’s a mixture of bio and stream, with the addition of stats showing activity on the site—signs of life.

Perhaps the most interesting visual example of my activity over time is on my GitHub profile. Halfway down the page there’s a calendar heatmap that uses colour to indicate the amount of activity. What I find interesting is that it’s using two axes of time over a year: weeks of the year across the X axis and days of the week down the Y axis.

I wanted to try something similar, but showing activity by time of day down the Y axis. A month of activity feels like the right range to display, so I set about adding a calendar heatmap to monthly archives. I already had the data I needed—timestamps of posts. That’s what I was already using to display sparklines. I wrote some code to loop over those timestamps and organise them by day and by hour. Then I spat out a table with days for the columns and clumps of hours for the rows.
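
The real code lives in my creaky PHP CMS, but the binning step boils down to something like this (sketched here in JavaScript, with a hypothetical clump size of four hours):

function binTimestamps(timestamps, clumpSize) {
  clumpSize = clumpSize || 4; // hours per row
  var bins = {}; // bins[day][clump] = number of posts
  timestamps.forEach(function (timestamp) {
    var date = new Date(timestamp * 1000); // assuming Unix timestamps
    var day = date.getDate();
    var clump = Math.floor(date.getHours() / clumpSize);
    bins[day] = bins[day] || {};
    bins[day][clump] = (bins[day][clump] || 0) + 1;
  });
  return bins; // loop over this to spit out the table cells
}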

Calendar heatmap on Dribbble

I’m using colour (well, different shades of grey) to indicate the relative amounts of activity, but I decided to use size as well. So it’s also a bubble chart.

It doesn’t work very elegantly on small screens: the table is clipped horizontally and can be swiped left and right. Ideally the visualisation itself would change to accommodate smaller screens.

Still, I kind of like the end result. Here’s last month’s activity on my site. Here’s the same time period ten years ago. I’ve also added month heatmaps to the monthly archives for my journal, links, and notes. They’re kind of like an expanded view of the sparklines that are shown with each month.

From one year ago, here’s the daily distribution of my journal posts, my links, and my notes.

And then here’s the daily distribution of everything in that month all together.

I realise that the data being displayed is probably only of interest to me, but then, that’s one of the perks of having your own website—you can do whatever you feel like.

Aurora

I remember when I was first recommended to read Kim Stanley Robinson. I was chatting with Jon Tan about science fiction, and I was bemoaning the fact that dystopias seem to be the default setting. Asking "what’s the worst that could happen?" is the over-riding pre-occupation of most sci-fi. Black Mirror is the perfect example of this. Mind you, that’s probably why the ambiguous San Junipero is one of my favourites—utopia? dystopia? dystutopia? You decide.

Anyway, Jon told me I should check out Kim Stanley Robinson’s Three Californias; one book describes a dystopia, one book describes a utopia, and the other—his debut, The Wild Shore—is more ambiguous. I liked the sound of that, but I decided that if I were going to read Kim Stanley Robinson, I should start with his most famous work, the Mars trilogy.

So I read Red Mars. I liked it, but I found it tough going. It’s not exactly a light read. I still haven’t read Green Mars or Blue Mars, though I plan to. I can see why Red Mars is regarded as a classic of hard sci-fi, but it left me somewhat cold. Jessica read The Years of Rice and Salt and had a similar reaction—good premise, thoroughly researched, but tough going.

When I heard about 2312, I couldn’t resist its promise of a jaunt around the solar system. Again, I enjoyed it, but the plot—such as it was—didn’t grab me. I loved the ideas presented in the book. Heck, it inspired one of my Science Hack Day projects. Still, I found that its literary conceit wasn’t enough to carry the book—a character from Saturn who’s saturnian in nature meets a character from Mercury who’s mercurial in nature.

So I was kind of bracing myself for Aurora. Again, the subject matter really appealed to me. I’m a sucker for generation starships. Brian Aldiss’s Non-Stop was a fun read, although in typical Aldiss style, it was weird to the point of psychedelia (even if it looks positively tame next to the batshit crazy world of Hothouse). I was looking forward to reading Robinson’s hard science take on the space ark idea, but I was worried about how much of a slog the writing might be. I read some reviews and listened to some podcasts, and my heart sank when I heard about how the story is partly told by the ship’s AI, who is simultaneously trying to work out how to tell a story. It sounded just like one of those ideas that would be fine for a brief period, but which I could imagine Kim Stanley Robinson dragging out for hundreds of pages.

Imagine my surprise when Aurora turned out to be an absolute pleasure. Not only does it have the thoroughly-researched hard science angle of Robinson’s other books, it’s also a rip-roaring tale, in my opinion. I had read of misgivings with the structure of the book—complaints that the story climaxes before the book is halfway done—but I think that misses the point of the story. This is not your typical tale of colonisation. Far from it. Kim Stanley Robinson is quite open about the underlying idea here, that there are certain endeavours that are simply beyond our capacity.

I know that sounds like a very pessimistic view, but I found the book to be a real testament to human ingenuity. But it certainly ruffled quite a few feathers. Like I said, the default setting for most sci-fi is to go negative, but for a sci-fi writer to claim outright that something cannot be done is audacious, and flies in the face of sci-fi tradition.

Gregory Benford wrote a review over on one of my favourite blogs, Centauri Dreams. He takes Robinson to task for stacking the deck against the crew of the ship in Aurora—an inversion of the usual deus ex machina plot devices. I find that criticism puzzling when another review, also on Centauri Dreams, by Stephen Baxter, James Benford and Joseph Miller, takes the book to task for being scientifically naïve.

For me, Aurora was perfectly balanced. It simultaneously captured the wonder of scientific exploration and our own insignificance in the universe. Best of all, it featured central characters that I was utterly invested in—one human, and one artificial. Given my previous experiences with Kim Stanley Robinson books, that was perhaps its greatest achievement. Whereas I might have previously recommended something like 2312, I would have certainly caveated the recommendation. But I wholeheartedly recommend Aurora. It’s easily the best Kim Stanley Robinson book I’ve read so far, and one of the finest science fiction books of recent years. It makes a great companion piece to Neal Stephenson’s Seveneves—not only are they both dealing with space arks, they’ve also got some in-depth descriptions of angular momentum in action, and they’re both thoroughly enjoyable stories that stretch beyond a single human lifespan.

I’m looking forward to digging back through Kim Stanley Robinson’s back catalogue, and I’m very intrigued by his newest book, New York 2140. From listening to his Long Now talk at The Interval, it sounds like the book has as much to say about near-future economics as it does about climate change.

It’s ironic though. Kim Stanley Robinson was first recommended to me because he was one of the few sci-fi writers unafraid to depict a utopia. But his writing never clicked with me until I read Aurora, whose central message sounds like the ultimate downer …that some scientific achievements will forever remain out of reach for humanity.

Checking in at Indie Web Camp Nuremberg

Once I finished my workshop on evaluating technology I stayed in Nuremberg for that weekend’s Indie Web Camp.

IndieWebCamp Nuremberg

Just as with Indie Web Camp Düsseldorf the weekend before, it was a fun two days—one day of discussions, followed by one day of making.

IndieWebCamp Nuremberg IndieWebCamp Nuremberg IndieWebCamp Nuremberg IndieWebCamp Nuremberg

I spent most of the second day playing around with a new service that Aaron created called OwnYourSwarm. It’s very similar to his other service, OwnYourGram. Whereas OwnYourGram is all about posting pictures from Instagram to your own site, OwnYourSwarm is all about posting Swarm check-ins to your own site.

Usually I prefer to publish on my own site and then push copies out to other services like Twitter, Flickr, etc. (POSSE—Publish on Own Site, Syndicate Elsewhere). In the case of Instagram, that’s impossible because of their ludicrously restrictive API, so I have to go the other way around (PESOS—Publish Elsewhere, Syndicate to Own Site). When it comes to check-ins, I could do it from my own site, but I’d have to create my own databases of places to check into. I don’t fancy that much (yet) so I’m using OwnYourSwarm to PESOS check-ins.

The great thing about OwnYourSwarm is that I didn’t have to do anything. I already had the building blocks in place.

First of all, I needed some way to authenticate as my website. IndieAuth takes care of all that. All I needed was rel="me" attributes pointing from my website to my profiles on Twitter, Flickr, GitHub, or any other services that provide OAuth. Then I can piggyback on their authentication flow (this is also how you sign in to the Indie Web wiki).

The other step is more involved. My site needs to provide an API endpoint so that services like OwnYourGram and OwnYourSwarm can post to it. That’s where micropub comes in. You can see the code for my minimal micropub endpoint if you like. If you want to test your own micropub endpoint, check out micropub.rocks—the companion to webmention.rocks.
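
My actual endpoint is a small PHP script, but the gist of what any minimal micropub endpoint does can be sketched out like this (an illustrative Node/Express version—the saveNote function is imaginary):

const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: true }));

app.post('/micropub', (req, res) => {
  // 1. Check for an access token issued via IndieAuth (verification omitted).
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  if (!token) {
    return res.status(401).send('No access token provided');
  }
  // 2. Accept a simple form-encoded h=entry request and create a note.
  if (req.body.h === 'entry' && req.body.content) {
    const url = saveNote(req.body.content); // imaginary CMS function
    return res.status(201).location(url).end();
  }
  return res.status(400).send('Unsupported micropub request');
});

app.listen(8080);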

Anyway, I already had IndieAuth and micropub set up on my site, so all I had to do was log in to OwnYourSwarm and I immediately started to get check-ins posted to my own site. They show up the same as any other note, so I decided to spend my time at Indie Web Camp Nuremberg making them look a bit different. I used Mapbox’s static map API to show an image of the location of the check-in. What’s really nice is that if I post a photo on Swarm, that gets posted to my own site too. I had fun playing around with the display of photo+map on my home page stream. I’ve made a page for keeping track of check-ins too.
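
Generating the map image is really just a matter of constructing a URL for Mapbox’s static map API—something roughly like this sketch (the style ID, zoom level, and dimensions are assumptions; check the Mapbox documentation for the exact format):

function staticMapUrl(latitude, longitude, accessToken) {
  // Longitude comes before latitude in Mapbox URLs.
  return 'https://api.mapbox.com/styles/v1/mapbox/streets-v11/static/' +
    longitude + ',' + latitude + ',15/600x300' +
    '?access_token=' + accessToken;
}

// e.g. use it as the src of an img element alongside the check-in:
// img.src = staticMapUrl(49.45, 11.07, 'YOUR_MAPBOX_TOKEN');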

All in all, a fun way to spend Indie Web Camp Nuremberg. But when it came time to demo, the one that really impressed me was Amber’s. She worked flat out on her site, getting to the second level on IndieWebify.me …including sending a webmention to my site!

IndieWebCamp Nuremberg

Going offline at Indie Web Camp Düsseldorf

I’ve just come back from a ten-day trip to Germany. The trip kicked off with Indie Web Camp Düsseldorf over the course of a weekend.

IndieWebCamp Düsseldorf 2017

Once again the wonderful people at Sipgate hosted us in their beautiful building, and once again myself and Aaron helped facilitate the two days.

IndieWebCamp Düsseldorf 2017

Saturday was the BarCamp-like discussion day. Plenty of interesting topics were covered. I led a session on service workers, and that’s also what I decided to work on for the second day—that’s when the talking is done and we get down to making.

IndieWebCamp Düsseldorf 2017 IndieWebCamp Düsseldorf 2017 IndieWebCamp Düsseldorf 2017 IndieWebCamp Düsseldorf 2017

I like what Ethan is doing on his offline page. He shows a list of pages that have been cached, but instead of just listing URLs, he shows a title and description for each page.

I’ve already got a separate cache for pages that gets added to as the user browses around my site. I needed to figure out a way to store the metadata for those pages so that I could then display it on the offline page. I came up with a workable solution, and interestingly, it involved no changes to the service worker script at all.

When you visit any blog post, I put metadata about the page into localStorage (after first checking that there’s an active service worker):

if (navigator.serviceWorker && navigator.serviceWorker.controller) {
  window.addEventListener('load', function() {
    var data = {
      "title": "A minority report on artificial intelligence",
      "description": "Revisiting Spielberg’s films after a decade and a half.",
      "published": "May 7th, 2017",
      "timestamp": "1494171049"
    };
    localStorage.setItem(
      window.location.href,
      JSON.stringify(data)
    );
  });
}

In my case, I’m outputting the metadata from the server, but you could just as easily grab some from the DOM like this:

var data = {
  "title": document.querySelector("title").innerText,
  "description": document.querySelector("meta[name='description']").getAttribute("contents")
}

Meanwhile in my service worker, when you visit that same page, it gets added to a cache called “pages”. Both localStorage and the cache API are using URLs as keys. I take advantage of that on my offline page.

The nice thing about writing JavaScript on my offline page is that I know the page will only be seen by modern browsers that support service workers, so I can use all sorts of fancy features from ES6, or whatever we’re calling it now.

I start by looping through the keys of the “pages” cache (that’s right—the cache API isn’t just for service workers; you can access it from any script). Then I check to see if there is a corresponding localStorage key with the same string (a URL). If there is, I pull the metadata out of local storage and add it to an array called browsingHistory:

const browsingHistory = [];
caches.open('pages')
.then( cache => {
  cache.keys()
  .then(keys => {
    keys.forEach( request => {
      let data = JSON.parse(localStorage.getItem(request.url));
      if (data) {
        data['url'] = request.url;
        browsingHistory.push(data);
      }
    });

Then I sort the list of pages in reverse chronological order:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I loop through each page in the browsing history list and construct a link to each URL, complete with title and description:

let markup = '';
browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

Finally I dump the constructed markup into a waiting div in the page with an ID of “history”:

let container = document.getElementById('history');
container.insertAdjacentHTML('beforeend', markup);

All those steps need to be wrapped inside the then clause attached to caches.open("pages") because the cache API is asynchronous.

There you have it. Now if you’re browsing adactio.com and your network connection drops (or my server goes offline), you can choose from a list of pages you’ve previously visited.

The current situation isn’t ideal though. I’ve got a clean-up operation in my service worker to limit the number of items stored in my “pages” cache. The cache never gets bigger than 35 items. But there’s no corresponding clean-up of metadata stored in localStorage. So there could be a lot more bits of metadata in local storage than there are pages in the cache. It’s not harmful, but it’s a bit wasteful.
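
That clean-up is the usual cache-trimming pattern—something along these lines (a sketch, not the exact code in my service worker):

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
  .then( cache => {
    cache.keys()
    .then( keys => {
      if (keys.length > maxItems) {
        // Delete the oldest entry, then check again.
        cache.delete(keys[0])
        .then( () => {
          trimCache(cacheName, maxItems);
        });
      }
    });
  });
}

// e.g. trimCache('pages', 35);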

I can’t do a clean-up of localStorage from my service worker because service workers can’t access localStorage. There’s a very good reason for that: the localStorage API is synchronous, and everything that happens in a service worker needs to be asynchronous.

Service workers can access indexedDB: it’s asynchronous. I could use indexedDB instead of localStorage, but I’m not a masochist. My best bet would be to use the localForage library, which wraps indexedDB in the simple syntax of localStorage.
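
If the metadata were stored with localForage instead, the corresponding clean-up could happen inside the service worker—something like this hypothetical sketch (the script path is made up, and it assumes localForage’s IndexedDB driver is happy running in a service worker):

importScripts('/js/localforage.min.js'); // hypothetical path

function pruneMetadata() {
  // Remove any stored metadata whose URL is no longer in the "pages" cache.
  return caches.open('pages')
  .then( cache => cache.keys())
  .then( requests => {
    const cachedUrls = requests.map( request => request.url);
    return localforage.keys()
    .then( keys => Promise.all(
      keys
      .filter( key => cachedUrls.indexOf(key) === -1)
      .map( key => localforage.removeItem(key))
    ));
  });
}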

Maybe I’ll do that at the next Homebrew Website Club here in Brighton.

A minority report on artificial intelligence

Want to feel old? Steven Spielberg’s Minority Report was released fifteen years ago.

It casts a long shadow. For a decade after the film’s release, it was referenced at least once at every conference relating to human-computer interaction. Unsurprisingly, most of the focus has been on the technology in the film. The hardware and interfaces in Minority Report came out of a think tank assembled in pre-production. It provided plenty of fodder for technologists to mock and praise in subsequent years: gestural interfaces, autonomous cars, miniature drones, airpods, ubiquitous advertising and surveillance.

At the time of the film’s release, a lot of the discussion centred on picking apart the plot. The discussions had the same tone as those about time-travel paradoxes, the kind thrown up by films like Looper and Interstellar. But Minority Report isn’t a film about time travel, it’s a film about prediction.

Or rather, the plot is about prediction. The film—like so many great works of cinema—is about seeing. It’s packed with images of eyes, visions, fragments, and reflections.

The theme of prediction was rarely referenced by technologists in the subsequent years. After all, that aspect of the story—as opposed to the gadgets, gizmos, and interfaces—was one rooted in a fantastical conceit; the idea of people with precognitive abilities.

But if you replace that human element with machines, the central conceit starts to look all too plausible. It’s suggested right there in the film:

It helps not to think of them as human.

To which the response is:

No, they’re so much more than that.

Suppose that Agatha, Arthur, and Dashiell weren’t people in a floatation tank, but banks of servers packed with neural nets: the kinds of machines that are already making predictions on trading stocks and shares, traffic flows, mortgage applications …and, yes, crime.

Precogs are pattern recognition filters, that’s all.

Rewatching Minority Report now, it holds up very well indeed. Apart from the misstep of the final ten minutes, it’s a fast-paced twisty noir thriller. For all the attention to detail in its world-building and technology, the idea that may yet prove to be most prescient is the concept of Precrime, introduced in the original Philip K. Dick short story, The Minority Report.

Minority Report works today as a commentary on Artificial Intelligence …which is ironic given that Spielberg directed a film one year earlier ostensibly about A.I. In truth, that film has little to say about technology …but much to say about humanity.

Like Minority Report, A.I. was very loosely based on an existing short story: Super-Toys Last All Summer Long by Brian Aldiss. It’s a perfectly-crafted short story that is deeply, almost unbearably, sad.

When I had the great privilege of interviewing Brian Aldiss, I tried to convey how much the story affected me.

Jeremy: …the short story is so sad, there’s such an incredible sadness to it that…

Brian: Well it’s psychological, that’s why. But I didn’t think it works as a movie; sadly, I have to say.

At the time of its release, the general consensus was that A.I. was a mess. It’s true. The film is a mess, but I think that, like Minority Report, it’s worth revisiting.

Watching now, A.I. feels like a horror film to me. The horror comes not—as we first suspect—from the artificial intelligence. The horror comes from the humans. I don’t mean the cruelty of the flesh fairs. I’m talking about the cruelty of Monica, who activates David’s unconditional love only to reject it (watching now, both scenes—the activation and the rejection—are equally horrific). Then there’s the cruelty of the people who created an artificial person capable of deep, never-ending love, without considering the implications.

There is no robot uprising in the film. The machines want only to fulfil their purpose. But by the end of the film, the human race is gone and the descendants of the machines remain. Based on the conduct of humanity that we’re shown, it’s hard to mourn our species’ extinction. For a film that was panned for being overly sentimental, it is a thoroughly bleak assessment of what makes us human.

The question of what makes us human underpins A.I., Minority Report, and the short stories that spawned them. With distance, it gets easier to brush aside the technological trappings and see the bigger questions beneath. As Al Robertson writes, it’s about leaving the future behind:

SF’s most enduring works don’t live on because they accurately predict tomorrow. In fact, technologically speaking they’re very often wrong about it. They stay readable because they think about what change does to people and how we cope with it.

Patterns Day speakers

Ticket sales for Patterns Day are going quite, quite briskly. If you’d like to come along, but you don’t yet have a ticket, you might want to remedy that. Especially when you hear about who else is going to be speaking…

Sareh Heidari works at the BBC building websites for a global audience, in as many as twenty different languages. If you want to know about strategies for using CSS at scale, you definitely want to hear this talk. She just stepped off stage at the excellent CSSconf EU in Berlin, and I’m so happy that Sareh’s coming to Brighton!

Patterns Day isn’t the first conference about design systems and pattern libraries on the web. That honour goes to the Clarity conference, organised by the brilliant Jina Anne. I was gutted I couldn’t make it to Clarity last year. By all accounts, it was excellent. When I started to form the vague idea of putting on an event here in the UK, I immediately contacted Jina to make sure she was okay with it—I didn’t want to step on her toes. Not only was she okay with it, but she really wanted to come along to attend. Well, never mind attending, I said, how about speaking?

I couldn’t be happier that Jina agreed to speak. She has had such a huge impact on the world of pattern libraries through her work with the Lightning Design System, Clarity, and the Design Systems Slack channel.

The line-up is now complete. Looking at the speakers, I find myself grinning from ear to ear—it’s going to be an honour to introduce each and every one of them.

This is going to be such an excellent day of fun and knowledge. I can’t wait for June 30th!

Code (p)reviews

I’m not a big fan of job titles. I’ve always had trouble defining what I do as a noun—I much prefer verbs (“I make websites” sounds fine, but “website maker” sounds kind of weird).

Mind you, the real issue is not finding the right words to describe what I do, but rather figuring out just what the heck it is that I actually do in the first place.

According to the Clearleft website, I’m a technical director. That doesn’t really say anything about what I do. To be honest, I tend to describe my work these days in terms of what I don’t do: I don’t tend to write a lot of HTML, CSS, and JavaScript on client projects (although I keep my hand in with internal projects, and of course, personal projects).

Instead, I try to make sure that the people doing the actual coding—Mark, Graham, and Danielle—are happy and have everything they need to get on with their work. From outside, it might look like my role is managerial, but I see it as the complete opposite. They’re not in service to me; I’m in service to them. If they’re not happy, I’m not doing my job.

There’s another aspect to this role of technical director, and it’s similar to the role of a creative director. Just as a creative director is responsible for the overall direction and quality of designs being produced, I have an oversight over the quality of front-end output. I don’t want to be a bottleneck in the process though, and to be honest, most of the time I don’t do much checking on the details of what’s being produced because I completely trust Mark, Graham, and Danielle to produce top quality code.

But I feel I should be doing more. Again, it’s not that I want to be a bottleneck where everything needs my approval before it gets delivered, but I hope that I could help improve everyone’s output.

Now the obvious way to do this is with code reviews. I do it a bit, but not nearly as much as I should. And even when I do, I always feel it’s a bit late to be spotting any issues. After all, the code has already been written. Also, who am I to try to review the code produced by people who are demonstrably better at coding than I am?

Instead I think it will be more useful for me to stick my oar in before a line of code has been written; to sit down with someone and talk through how they’re going to approach solving a particular problem, creating a particular pattern, or implementing a particular user story.

I suppose it’s really not that different to rubber ducking. Having someone to talk out loud with about potential solutions can be really valuable in my experience.

So I’m going to start doing more code previews. I think it will also incentivise me to do more code reviews—being involved in the initial discussion of a solution means I’m going to want to see the final result.

But I don’t think this should just apply to front-end code. I’d also like to exercise this role as technical director with the designers on a project.

All too often, decisions are made in the design phase that prove problematic in development. It usually works out okay, but it often means revisiting the designs in light of some technical considerations. I’d like to catch those issues sooner. That means sticking my nose in much earlier in the process, talking through what the designers are planning to do, and keeping an eye out for any potential issues.

So, as technical director, I won’t be giving feedback like “the colour’s not working for me” or “not sure about those type choices” (I’ll leave that to the creative director), but instead I can ask questions like “how will this work without hover?” or “what happens when the user does this?” as well as pointing out solutions that might be tricky or time-consuming to implement from a technical perspective.

What I want to avoid is the swoop’n’poop, when someone seagulls in after something has been designed or built and points out all the problems. The earlier in the process any potential issues can be spotted, the better.

And I think that’s my job.

Writing on the web

Some people have been putting Paul’s crazy idea into practice.

  • Mike revived his site a while back and he’s been posting gold dust ever since. I enjoy his no-holds-barred perspective on his time in San Francisco.
  • Garrett’s writing goes all the way back to 2005. The cumulative result is two fascinating interweaving narratives—one about his health, another about his business.
  • Charlotte has been documenting her move from Brighton to Sydney. Much as I love her articles about front-end development, I’m liking the slice-of-life updates on life down under even more.
  • Amber has a great way with words. As well as regularly writing on her blog, she’s two-thirds of the way through writing 100 words every day for 100 days.
  • Ethan has been writing about responsive design—of course—but it’s his more personal posts that make me really grateful for his site.
  • Jeffrey and Eric never stopped writing on their own sites. Sure, there’s good stuff on there about web design and development, but it’s the writing about their non-web lives that’s so powerful.

There are more people I could mention …but, to be honest, not that many more. Seems like most people are happy to only publish on Ev’s blog or not at all.

I know not everybody wants to write on the web, and that’s fine. But it makes me sad when people choose not to publish their thoughts because they think no-one will be interested, or that it’s all been said before. I understand where those worries come from, but I believe—no, I know—that they are unfounded.

It’s a world wide web out there. There’s plenty of room for everyone. And I, for one, love reading the words of others.

Progressive Web App questions

I got a nice email recently from Colin van Eenige. He wrote:

For my graduation project I’m researching the development of Progressive Web Apps and found your offline book called resilient web design. I was very impressed by the implementation of the website and it really was a nice experience.

I’m very interested in your vision on progressive web apps and what capabilities are waiting for us regarding offline content. Would it be fine if I’d send you some questions?

I said that would be fine, although I couldn’t promise a swift response. He sent me four questions. I finally got ‘round to sending my answers…

1. https://resilientwebdesign.com/ is an offline web book (progressive web app). What was the primary reason make it available like this (besides the other formats)?

Well, given the subject matter, it felt right that the canonical version of the book should be not just online, but made with the building blocks of the web. The other formats are all nice to have, but the HTML version feels (to me) like the “real” book.

Interestingly, it wasn’t too much trouble for people to generate other formats from the HTML (ePub, MOBI, PDF), whereas I think trying to go in the other direction would be trickier.

As for the offline part, that felt like a natural fit. I had already done that with a previous book of mine, HTML5 For Web Designers, which I put online a year or two after its print publication. In that case, I used AppCache for the offline functionality. AppCache is horrible, but this use case might be one of the few where it works well: a static book that’s never going to change. Cache invalidation is one of the worst parts of using AppCache so by not having any kinds of updates at all, I dodged that bullet.

But when it came time for Resilient Web Design, a service worker was definitely the right technology. Still, I’ve got AppCache in there as well for the browsers that don’t yet support service workers.

2. What effect do you think Progressive Web Apps will have on content consuming and do you think these will take over the purpose of some Native Apps?

The biggest effect that service workers could have is to change the expectations that people have about using the web, especially on mobile devices. Right now, people associate the web on mobile with long waits and horrible spammy overlays. Service workers can help solve that first part.

If people then start adding sites to their home screen, that will be a great sign that the web is really holding its own. But I don’t think we should get too optimistic about that: for a user, there’s no difference between a prompt on their screen saying “add to home screen” and a prompt on their screen saying “download our app”—they’re equally likely to be dismissed because we’ve trained people to dismiss anything that covers up the content they actually came for.

It’s entirely possible that websites could start taking over much of the functionality that previously was only possible in a native app. But I think that inertia and habit will keep people using native apps for quite some time.

The big exception is in markets where storage space on devices is in short supply. That’s where the decision to install a native app isn’t taken lightly (given the choice between your family photos and an app, most people will reject the app). The web can truly shine here if we build lightweight, performant services.

Even in that situation, I’m still not sure how many people will end up adding those sites to their home screen (it might feel so similar to installing a native app that there may be some residual worry about storage space) but I don’t think that’s too much of a problem: if people get to a site via search or typing, that’s fine.

I worry that the messaging around “progressive web apps” is perhaps over-fetishising the home screen. I don’t think that’s the real battleground. The real battleground is in people’s heads; how they perceive the web and how they perceive native.

After all, if the average number of native apps installed in a month is zero, then that’s not exactly a hard target to match. :-)

3. What is your vision regarding Progressive Web Apps?

For me, progressive web apps don’t feel like a separate thing from making websites. I worry that the marketing of them might inflate expectations or confuse people. I like the idea that they’re simply websites that have taken their vitamins.

So my vision for progressive web apps is the same as my vision for the web: something that people use every day for all sorts of tasks.

I find it really discouraging that progressive web apps are becoming conflated with single page apps and the app shell model. Those architectural decisions have nothing to do with service workers, HTTPS, and manifest files. Yet I keep seeing the concepts used interchangeably. It would be a real shame if people chose not to use these great technologies just because they don’t classify what they’re building as an “app.”

If anything, it’s good ol’ fashioned content sites (newspapers, Wikipedia, blogs, and yes, books) that can really benefit from the turbo boost of service worker+HTTPS+manifest.

I was at a conference recently where someone gave a talk encouraging people to build progressive web apps but discouraging people from doing it for their own personal sites. That’s a horrible, elitist attitude. I worry that this attitude is being codified in the term “progressive web app”.

4. What is the biggest learning you’ve had since working on Progressive Web Apps?

Well, like I said, I think that some people are focusing a bit too much on the home screen and not enough on the benefits that service workers can provide to just about any website.

My biggest learning is that these technologies aren’t for a specific subset of services, but can benefit just about anything that’s on the web. I mean, just using a service worker to explicitly cache static assets like CSS, JS, and some images is a no-brainer for almost any project.
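
That no-brainer case really is only a few lines of service worker—something like this sketch, with hypothetical file names:

const staticCacheName = 'static-v1';
const staticAssets = [
  '/css/styles.css',
  '/js/main.js',
  '/images/logo.svg'
];

addEventListener('install', event => {
  // Cache the static assets when the service worker is installed.
  event.waitUntil(
    caches.open(staticCacheName)
    .then( cache => cache.addAll(staticAssets))
  );
});

addEventListener('fetch', event => {
  // Serve from the cache first, falling back to the network.
  event.respondWith(
    caches.match(event.request)
    .then( response => response || fetch(event.request))
  );
});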

So there you go—I’m very excited about the capabilities of these technologies, but very worried about how they’re being “sold”. I’m particularly nervous that in the rush to emulate native apps, we end up losing the very thing that makes the web so powerful: URLs.

Empire State

I’m in New York. Again. This time it’s for Google’s AMP Conf, where I’ll be giving ‘em a piece of my mind on a panel.

The conference starts tomorrow so I’ve had a day or two to acclimatise and explore. Seeing as Google are footing the bill for travel and accommodation, I’m staying at a rather nice hotel close to the conference venue in Tribeca. There’s live jazz in the lounge most evenings, a cinema downstairs, and should I request it, I can even have a goldfish in my room.

Today I realised that my hotel sits in the apex of a triangle of interesting buildings: carrier hotels.

32 Avenue of the Americas: “Telephone wires and radio unite to make neighbors of nations”

Looming above my hotel is 32 Avenue of the Americas. On the outside the building looks like your classic Gozer the Gozerian style of New York building. Inside, the lobby features a mosaic on the ceiling, and another on the wall extolling the connective power of radio and telephone.

The same architects also designed 60 Hudson Street, which has a similar Art Deco feel to it. Inside, there’s a cavernous hallway running through the ground floor but I can’t show you a picture of it. A security guard told me I couldn’t take any photos inside …which is a little strange seeing as it’s splashed across the website of the building.

60 Hudson Street: “HEADQUARTERS The Western Union Telegraph Co. and telegraph capitol of the world 1930-1973”

I walked around the outside of 60 Hudson, taking more pictures. Another security guard asked me what I was doing. I told her I was interested in the history of the building, which is true; it was the headquarters of Western Union. For much of the twentieth century, it was a world hub of telegraphic communication, in much the same way that a beach hut in Porthcurno was the nexus of the nineteenth century.

For a 21st-century hub, there’s the third and final corner of the triangle at 33 Thomas Street. It’s a breathtaking building. It looks like a spaceship from a Chris Foss painting. It was probably designed more like a spacecraft than a traditional building—its primary purpose was to withstand an atomic blast. Gone are niceties like windows. Instead there’s an impenetrable monolith that looks like something straight out of a dystopian sci-fi film.

33 Thomas Street, New York

Brutalist on the outside, the building is host on the inside to even more brutal acts of invasive surveillance. The Snowden papers revealed this AT&T building to be a centrepiece of the Titanpointe programme:

They called it Project X. It was an unusually audacious, highly sensitive assignment: to build a massive skyscraper, capable of withstanding an atomic blast, in the middle of New York City. It would have no windows, 29 floors with three basement levels, and enough food to last 1,500 people two weeks in the event of a catastrophe.

But the building’s primary purpose would not be to protect humans from toxic radiation amid nuclear war. Rather, the fortified skyscraper would safeguard powerful computers, cables, and switchboards. It would house one of the most important telecommunications hubs in the United States…

Looking at the building, it requires very little imagination to picture it as the lair of villainous activity. Laura Poitras’s short film Project X basically consists of a voiceover of someone reading an NSA manual, some ominous background music, and shots of 33 Thomas Street looming in its oh-so-loomy way.

A top-secret handbook takes viewers on an undercover journey to Titanpointe, the site of a hidden partnership. Narrated by Rami Malek and Michelle Williams, and based on classified NSA documents, Project X reveals the inner workings of a windowless skyscraper in downtown Manhattan.

Audio book

I’ve recorded each chapter of Resilient Web Design as MP3 files that I’ve been releasing once a week. The final chapter is now recorded and released, so my audio work here is done.

If you want to subscribe to the podcast, pop this RSS feed into your podcast software of choice. Or use one of these links:

Or you can have it as one single MP3 file to listen to as an audio book. It’s two hours long.

So, for those keeping count, the book is now available as HTML, PDF, EPUB, MOBI, and MP3.

Making Resilient Web Design work offline

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing, so it’s safe to take a fairly aggressive offline-first approach. In fact, a static unchanging book is one of the few situations that AppCache works well for. Of course a service worker is better, but until AppCache is removed from browsers (and until service workers are supported across the board), I’m using both. I wouldn’t recommend that for most sites though: for most sites, use a service worker to enhance the site, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers but I knew that there was a good chance that some of the content would be getting tweaked at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything; all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
      // If so, fetch a fresh copy from the network in the background
      // NETWORK
          // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
      // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message
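
For orientation, all of that logic sits inside the fetch event handler, wrapped in event.respondWith. Stripped right back to a simplified sketch (not the full worker), the overall shape is:

// A simplified sketch of the overall shape of the fetch handler
self.addEventListener('fetch', event => {
  const request = event.request;
  event.respondWith(
    // Look in the cache first, fall back to the network
    caches.match(request)
    .then( responseFromCache => {
      // (the cache/network/offline logic described above goes here;
      // at its simplest: serve from the cache, otherwise hit the network)
      return responseFromCache || fetch(request);
    })
  );
});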

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fall back to a placeholder as a last resort.

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      return responseFromCache;
  }

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      // If so, fetch a fresh copy from the network in the background
      event.waitUntil(
          // NETWORK
          fetch(request)
          .then( responseFromFetch => {
              // Stash the fresh copy in the cache
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      return responseFromCache;
  }

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
.then( responseFromCache => {
  if (responseFromCache) {
      event.waitUntil(
          fetch(request)
          .then( responseFromFetch => {
              console.log('Got a response from the network.');
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      console.log('Got a response from the cache.');
      return responseFromCache;
  }

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all the browsers yet. In browsers without support, the code I’ve written will fail.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using it when a file isn’t found in the cache, and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
  .then( responseFromFetch => {
    return responseFromFetch;
  })

I want to also store that response in the cache, but I want to do it asynchronously—I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it):

fetch(request)
  .then( responseFromFetch => {
    // Stash a fresh copy in the cache in the background
    let responseCopy = responseFromFetch.clone();
    event.waitUntil(
      caches.open(staticCacheName)
      .then( cache => {
          cache.put(request, responseCopy);
      })
    );
    return responseFromFetch;
  })
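
The third step from my list, the offline fallbacks, isn’t shown in these excerpts. It hangs off the network request as a catch. Here’s a rough sketch (the placeholder file paths are made up for illustration):

fetch(request)
  .then( responseFromFetch => {
    return responseFromFetch;
  })
  .catch( () => {
    // OFFLINE
    const accept = request.headers.get('Accept') || '';
    // If the request is for an image, show an offline placeholder
    if (accept.includes('image')) {
      return caches.match('/images/offline-placeholder.png');
    }
    // If the request is for a page, show an offline message
    return caches.match('/offline.html');
  });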

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.

Contact

I left the office one evening a few weeks back, and while I was walking up the street, James Box cycled past, waving a hearty good evening to me. I didn’t see him at first. I was in a state of maximum distraction. For one thing, there was someone walking down the street with a magnificent Irish wolfhound. If that weren’t enough to dominate my brain, I also had headphones in my ears through which I was listening to an audio version of a TED talk by Donald Hoffman called Do we really see reality as it is?

It’s fascinating—if mind-bending—stuff. It sounds like the kind of thing that’s used to justify Deepak Chopra style adventures in la-la land, but Hoffman is deliberately taking a rigorous approach. He knows his claims are outrageous, but he welcomes all attempts to falsify his hypotheses.

And it’s not just from a short TED talk that I’ve been noticing his work. It’s been one of those strange examples of synchronicity where his work has been popping up on my radar multiple times. There’s an article in Quanta magazine that was also republished in The Atlantic. And there’s a really good interview on the You Are Not So Smart podcast that I huffduffed a while back.

But the most unexpected place that Hoffman popped up was when I was diving down a SETI (or METI) rabbit hole. There I was, reading about the Cosmic Call project and Lincos, when I came across this article: Why ‘Arrival’ Is Wrong About the Possibility of Talking with Space Aliens, with its subtitle “Human efforts to communicate with extraterrestrials are doomed to failure, expert says.” The expert in question, pulling apart the numbers in the Drake equation, turned out to be none other than Donald Hoffman.

A few years ago, at a SETI Institute conference on interstellar communication, Hoffman appeared on the bill after a presentation by radio astronomer Frank Drake, who pioneered the search for alien civilizations in 1960. Drake showed the audience dozens of images that had been launched into space aboard NASA’s Voyager probes in the 1970s. Each picture was carefully chosen to be clearly and easily understood by other intelligent beings, he told the crowd.

After Drake spoke, Hoffman took the stage and “politely explained how every one of the images would be infinitely ambiguous to extraterrestrials,” he recalls.

I’m sure he’s quite right. But let’s face it, the Voyager golden record was never really about communicating with an alien intelligence …it was about how we present ourselves.