Journal tags: sharing

Web Share API test

Remember a while back I wrote about some odd behaviour with the Web Share API in Safari on iOS?

When the share() method is triggered, iOS provides multiple ways of sharing: Messages, Airdrop, email, and so on. But the simplest option is the one labelled “copy”, which copies to the clipboard.

Here’s the thing: if you’ve provided a text parameter to the share() method then that’s what’s going to get copied to the clipboard—not the URL.

That’s a shame. Personally, I think the url field should take precedence.

Tess filed a bug soon after, which was very gratifying to see.

Now Phil has put together a test case:

  1. Share URL, title, and text
  2. Share URL and title
  3. Share URL and text
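
Roughly speaking, each of those boils down to a different combination of fields passed to the share() method. Here’s a rough sketch of the three calls (the values are placeholders, and in practice each call would need to be triggered by a user gesture and wrapped in feature detection):

// 1. URL, title, and text
navigator.share({ url: 'https://example.com/', title: 'Example', text: 'A description' });
// 2. URL and title
navigator.share({ url: 'https://example.com/', title: 'Example' });
// 3. URL and text
navigator.share({ url: 'https://example.com/', text: 'A description' });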

Very handy! The results (using the “copy” to clipboard action) are somewhat like rock, paper, scissors:

  • URL beats title,
  • text beats URL,
  • nothing beats text.

So it’s more like rock, paper, high explosives.

Design systems roundup

When I started writing a post about architects, gardeners, and design systems, it was going to be a quick follow-up to my post about web standards, dictionaries, and design systems. I had spotted an interesting metaphor in one of Frank’s posts, and I thought it was worth jotting down.

But after making that connection, I kept writing. I wanted to point out the fetishism we have for creation over curation; building over maintenance.

Then the post took a bit of a dark turn. I wrote about how the most commonly cited reasons for creating a design system—efficiency and consistency—are the same processes that have led to automation and dehumanisation in the past.

That’s where I left things. Others have picked up the baton.

Dave wrote a post called The Web is Industrialized and I helped industrialize it. What I said resonated with him:

This kills me, but it’s true. We’ve industrialized design and are relegated to squeezing efficiencies out of it through our design systems. All CSS changes must now have a business value and user story ticket attached to it. We operate more like Taylor and his stopwatch and Gantt and his charts, maximizing effort and impact rather than focusing on the human aspects of product development.

But he also points out the many benefits of systematising:

At the same time, I have seen first hand how design systems can yield improvements in accessibility, performance, and shared knowledge across a willing team. I’ve seen them illuminate problems in design and code. I’ve seen them speed up design and development allowing teams to build, share, and validate prototypes or A/B tests before undergoing costly guesswork in production. There’s value in these tools, these processes.

Emphasis mine. I think that’s a key phrase: “a willing team.”

Ethan tackles this in his post The design systems we swim in:

A design system that optimizes for consistency relies on compliance: specifically, the people using the system have to comply with the system’s rules, in order to deliver on that promised consistency. And this is why that, as a way of doing something, a design system can be pretty dehumanizing.

But a design system need not be a constraining straitjacket—a means of enforcing consistency by keeping creators from colouring outside the lines. Used well, a design system can be a tool to give creators more freedom:

Does the system you work with allow you to control the process of your work, to make situational decisions? Or is it simply a set of rules you have to follow?

This is key. A design system is the product of an organisation’s culture. That’s something that Brad digs into in his post, Design Systems, Agile, and Industrialization:

I definitely share Jeremy’s concern, but also think it’s important to stress that this isn’t an intrinsic issue with design systems, but rather the organizational culture that exists or gets built up around the design system. There’s a big difference between having smart, reusable patterns at your disposal and creating a dictatorial culture designed to enforce conformity and swat down anyone coloring outside the lines.

Brad makes a very apt comparison with Agile:

Not Agile the idea, but the actual Agile reality so many have to suffer through.

Agile can be a liberating empowering process, when done well. But all too often it’s a quagmire of requirements, burn rates, and story points. We need to make sure that design systems don’t suffer the same fate.

Jeremy’s thoughts on industrialization definitely struck a nerve. Sure, design systems have the ability to dehumanize and that’s something to actively watch out for. But I’d also say to pay close attention to the processes and organizational culture we take part in and contribute to.

Matthew Ström weighed in with a beautifully-written piece called Breaking looms. He provides historical context to the question of automation by relaying the story of the Luddite uprising. Automation may indeed be inevitable, according to his post, but he also provides advice on how to approach design systems today:

We can create ethical systems based in detailed user research. We can insist on environmental impact statements, diversity and inclusion initiatives, and human rights reports. We can write design principles, document dark patterns, and educate our colleagues about accessibility.

Finally, the ouroboros was complete when Frank wrote down his thoughts in a post called Who cares?. For him, the issue of maintenance and care is crucial:

Care applies to the built environment, and especially to digital technology, as social media becomes the weather and the tools we create determine the expectations of work to be done and the economic value of the people who use those tools. A well-made design system created for the right reasons is reparative. One created for the wrong reasons becomes a weapon for displacement. Tools are always beholden to values. This is well-trodden territory.

Well-trodden territory indeed. Back in 2015, Travis Gertz wrote about Design Machines:

Designing better systems and treating our content with respect are two wonderful ideals to strive for, but they can’t happen without institutional change. If we want to design with more expression and variation, we need to change how we work together, build design teams, and forge our tools.

Also on the topic of automation, in 2018 Cameron wrote about Design systems and technological disruption:

Design systems are certainly a new way of thinking about product development, and introduce a different set of tools to the design process, but design systems are not going to lessen the need for designers. They will instead increase the number of products that can be created, and hence increase the demand for designers.

And in 2019, Kaelig wrote:

In order to be fulfilled at work, Marx wrote that workers need “to see themselves in the objects they have created”.

When “improving productivity”, design systems tooling must be mindful of not turning their users’ craft into commodities, alienating them, like cogs in a machine.

All of this is reminding me of Kranzberg’s first law:

Technology is neither good nor bad; nor is it neutral.

I worry that sometimes the messaging around design systems paints them as an inherently positive thing. But design systems won’t fix your problems:

Just stay away from folks who try to convince you that having a design system alone will solve something.

It won’t.

It’s just the beginning.

At the same time, a design system need not be the gateway drug to some kind of post-singularity future where our jobs have been automated away.

As always, it depends.

Remember what Frank said:

A well-made design system created for the right reasons is reparative. One created for the wrong reasons becomes a weapon for displacement.

The reasons for creating a design system matter. Those reasons will probably reflect the values of the company creating the system. At the level of reasons and values, we’ve gone beyond the bounds of the hyperobject of design systems. We’re dealing in the area of design ops—the whys of systemising design.

This is why I’m so wary of selling the benefits of design systems in terms of consistency and efficiency. Those are obviously tempting money-saving benefits, but followed to their conclusion, they lead down the dark path of enforced compliance and eventually, automation.

But if the reason you create a design system is to empower people to be more creative, then say that loud and proud! I know that creativity, autonomy and empowerment is a tougher package to sell than consistency and efficiency, but I think it’s a battle worth fighting.

Design systems are neither good nor bad (nor are they neutral).

Addendum: I’d just like to say how invigorating it’s been to read the responses from Dave, Ethan, Brad, Matthew, and Frank …all of them writing on their own websites. Rumours of the demise of blogging may have been greatly exaggerated.

2019 in numbers

I posted to adactio.com 1,600 times in 2019.

In amongst those notes were:

If you like, you can watch all that activity plotted on a map.

Away from this website in 2019:

Words I wrote in 2019

I wrote just over one hundred blog posts in 2019. That’s even more than I wrote in 2018, which I’m very happy with.

Here are eight posts from the year that I think are a good representative sample. I like how these turned out.

I hope that I’ll write as many blog posts in 2020.

I’m pretty sure that I will also continue to refer to them as blog posts, not blogs. I may be the last holdout of this nomenclature in 2020. I never planned to die on this hill, but here we are.

Actually, seeing as this is technically my journal rather than my blog, I’ll just call them journal entries.

Here’s to another year of journal entries.

The Web Share API in Safari on iOS

I implemented the Web Share API over on The Session back when it was first available in Chrome on Android. It’s a nifty and quite straightforward API that allows websites to make use of the “sharing drawer” that mobile operating systems provide from within a web browser.

I already had sharing buttons that popped open links to Twitter, Facebook, and email. You can see these sharing buttons on individual pages for tunes, recordings, sessions, and so on.

I was already intercepting clicks on those buttons. I didn’t have to add too much to also check for support for the Web Share API and trigger that instead:

if (navigator.share) {
  navigator.share(
    {
      title: document.querySelector('title').textContent,
      text: document.querySelector('meta[name="description"]').getAttribute('content'),
      url: document.querySelector('link[rel="canonical"]').getAttribute('href')
    }
  );
}

That worked a treat. As you can see, there are three fields you can pass to the share() method: title, text, and url. You don’t have to provide all three.

Earlier this year, Safari on iOS shipped support for the Web Share API. I didn’t need to do anything. ‘Cause that’s how standards work. You can make use of APIs before every browser supports them, and then your website gets better and better as more and more browsers add support.

But I recently discovered something interesting about the iOS implementation.

When the share() method is triggered, iOS provides multiple ways of sharing: Messages, Airdrop, email, and so on. But the simplest option is the one labelled “copy”, which copies to the clipboard.

Here’s the thing: if you’ve provided a text parameter to the share() method then that’s what’s going to get copied to the clipboard—not the URL.

That’s a shame. Personally, I think the url field should take precedence. But I don’t think this is a bug, per se. There’s nothing in the spec to say how operating systems should handle the data sent via the Web Share API. Still, I think it’s a bit counterintuitive. If I’m looking at a web page, and I opt to share it, then surely the URL is the most important piece of data?

I’m not even sure where to direct this feedback. I guess it’s under the purview of the Safari team, but it also touches on OS-level interactions. Either way, I hope that somebody at Apple will consider changing the current behaviour for copying Web Share data to the clipboard.

In the meantime, I’ve decided to update my code to remove the text parameter:

if (navigator.share) {
  navigator.share(
    {
      title: document.querySelector('title').textContent,
      url: document.querySelector('link[rel="canonical"]').getAttribute('href')
    }
  );
}

If the behaviour of Safari on iOS changes, I’ll reinstate the missing field.

By the way, if you’re making progressive web apps that have display: standalone in the web app manifest, please consider using the Web Share API. When you remove the browser chrome, you’re removing the ability for users to easily share URLs. The Web Share API gives you a way to reinstate that functionality.
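
For what it’s worth, the wiring needed is tiny. Here’s a rough sketch along the same lines as my code above (the button and its class name are hypothetical):

// Hypothetical share button, hidden by default in the markup
var shareButton = document.querySelector('.share-button');
if (shareButton && navigator.share) {
  shareButton.hidden = false;
  shareButton.addEventListener('click', function () {
    navigator.share({
      title: document.title,
      url: window.location.href
    });
  });
}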

A tiny lesson in query selection

We have a saying at Clearleft:

Everything is a tiny lesson.

I bet you learn something new every day, even if it’s something small. These small tips and techniques can easily get lost. They seem almost not worth sharing. But it’s the small stuff that takes the least effort to share, and often provides the most reward for someone else out there. Take, for example, this great tip for getting assets out of Sketch that Cassie shared with me.

Cassie was working on a piece of JavaScript yesterday when we spotted a tiny lesson that tripped up both of us. The script was a fairly straightforward piece of DOM scripting. As a general rule, we do a sort of feature detection near the start of the script. Let’s say you’re using querySelector to get a reference to an element in the DOM:

var someElement = document.querySelector('.someClass');

Before going any further, check to make sure that the reference isn’t falsey (in other words, make sure that DOM node actually exists):

if (!someElement) return;

That will exit the script if there’s no element with a class of someClass on the page.

The situation that tripped us up was like this:

var myLinks = document.querySelectorAll('a.someClass');

if (!myLinks) return;

That should exit the script if there are no A elements with a class of someClass, right?

As it turns out, querySelectorAll is subtly different to querySelector. If the selector you pass to querySelector doesn’t match any element, it will return a value of null. But querySelectorAll always returns an array (well, technically it’s a NodeList, but same difference mostly). So if the selector you pass to querySelectorAll doesn’t match anything, it still returns an array, but the array is empty. That means instead of just testing for its existence, you need to test that it’s not empty by checking its length property:

if (!myLinks.length) return;
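
Putting the two checks side by side (the class names are just for illustration, and as before this assumes the script exits early from whatever function it’s running in):

// querySelector: returns null when nothing matches
var someElement = document.querySelector('.someClass');
if (!someElement) return;

// querySelectorAll: always returns a NodeList, possibly empty, never null
var myLinks = document.querySelectorAll('a.someClass');
if (!myLinks.length) return;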

That’s a tiny lesson.

2018 in numbers

I posted to adactio.com 1,387 times in 2018.

In amongst those notes were:

In my blog posts, the top tags were:

  1. frontend and development (42 posts),
  2. serviceworkers (27 posts),
  3. design (20 posts),
  4. writing and publishing (19 posts),
  5. javascript (18 posts).

In my links, the top tags were:

  1. development (305 links),
  2. frontend (289 links),
  3. design (178 links),
  4. css (110 links),
  5. javascript (106 links).

When I wasn’t updating this site:

But these are just numbers. To get some real end-of-year thoughts, read posts by Remy, Andy, Ana, or Bill Gates.

Words I wrote in 2018

I wrote just shy of a hundred blog posts in 2018. That’s an increase from 2017. I’m happy about that.

Here are some posts that turned out okay…

A lot of my writing in 2018 was on technical topics—front-end development, service workers, and so on—but I should really make more of an effort to write about a wider range of topics. I always like when Zeldman writes about his glamorous life. Maybe in 2019 I’ll spend more time letting you know what I had for lunch.

I really enjoy writing words on this website. If I go too long between blog posts, I start to feel antsy. The only relief is to move my fingers up and down on the keyboard and publish something. Sounds like a bit of an addiction, doesn’t it? Well, as habits go, this is probably one of my healthier ones.

Thanks for reading my words in 2018. I didn’t write them for you—I wrote them for me—but it’s always nice when they resonate with others. I’ll keep on writing my brains out in 2019.

Service workers and videos in Safari

Alright, so I’ve already talked about some gotchas when debugging service worker issues. But what if you don’t even realise the problem has anything to do with your service worker?

This is not a hypothetical situation. I encountered this very thing myself. Gather ‘round the campfire, children…

One of the latest case studies on the Clearleft site is a nice write-up by Luke of designing a mobile app for Virgin Holidays. The case study includes a lovely video that demonstrates the log-in flow. I implemented that using a video element (with a poster image). Nice and straightforward. Super easy. All good.

But I hadn’t done my due diligence in browser testing (I guess I didn’t even think of it in this case). Hana informed me that the video wasn’t working at all in Safari. The poster image appeared just fine, but when you clicked on it, the video didn’t load.

I ducked, ducked, and went, uncovering what appeared to be the root of the problem. It seems that Safari is fussy about having servers support something called “byte-range requests”.

I had put the video in question on an Amazon S3 server. I came to the conclusion that S3 mustn’t support these kinds of headers correctly, or something.

Now I had a diagnosis. The next step was figuring out a solution. I thought I might have to move the video off of S3 and onto a server that I could configure a bit more.

Luckily, I never got ‘round to even starting that process. That’s good. Because it turns out that my diagnosis was completely wrong.

I came across a recent post by Phil Nash called Service workers: beware Safari’s range request. The title immediately grabbed my attention. Safari: yes! Video: yes! But service workers …wait a minute!

There’s a section in Phil’s post entitled “Diagnosing the problem”, in which he says:

I first thought it could have something to do with the CDN I’m using. There were some false positives regarding streaming video through a CDN that resulted in some extra research that was ultimately fruitless.

That described my situation exactly. Except Phil went further and nailed down the real cause of the problem:

Nginx was serving correct responses to Range requests. So was the CDN. The only other problem? The service worker. And this broke the video in Safari.

Doh! I hadn’t even thought about service workers!

Phil came up with a solution, and he has kindly shared his code.

I decided to go for a dumber solution:

if ( request.url.match(/\.(mp4)$/) ) {
  return;
}

That tells the service worker to just step out of the way when it comes to video requests. Now the video plays just fine in Safari. It’s a bit of a shame, because I’m kind of penalising all browsers for Safari’s bug, but the Clearleft site isn’t using much video at all, and in any case, it might be good not to fill up the cache with large video files.
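
In context, that check just sits near the top of the fetch handler, before any of the caching logic kicks in. A rough sketch (the shape of the handler here is generic, not the actual Clearleft code):

self.addEventListener('fetch', function (event) {
  var request = event.request;
  // Step aside for video: let the browser make these requests itself,
  // so Safari's byte-range requests go straight to the network.
  if (request.url.match(/\.(mp4)$/)) {
    return;
  }
  // ...the usual caching strategies for everything else...
});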

But what’s more important than any particular solution is correctly identifying the problem. I’m quite sure I never would’ve been able to fix this issue if Phil hadn’t gone to the trouble of sharing his experience. I’m very, very grateful that he did.

That’s the bigger lesson here: if you solve a problem—even if you think it’s hardly worth mentioning—please, please share your solution. It could make all the difference for someone out there.

Several people are writing

Anne Gibson writes:

It sounds easy to make writing a habit, but like every other habit that doesn’t involve addictive substances like nicotine or dopamine it’s hard to start and easy to quit.

Alice Bartlett writes:

Anyway, here we are, on my blog, or in your RSS reader. I think I’ll do weaknotes. Some collections of notes. Sometimes. Not very well written probably. Generally written with the urgency of someone who is waiting for a baby to wake up.

Patrick Rhone writes:

Bottom line; please place any idea worth more than 280 characters and the value Twitter places on them (which is zero) on a blog that you own and/or can easily take your important/valuable/life-changing ideas with you and make them easy for others to read and share.

Sara Soueidan writes:

What you write might help someone understand a concept that you may think has been covered enough before. We each have our own unique perspectives and writing styles. One writing style might be more approachable to some, and can therefore help and benefit a large (or even small) number of people in ways you might not expect.

Just write.

Even if only one person learns something from your article, you’ll feel great, and that you’ve contributed — even if just a little bit — to this amazing community that we’re all constantly learning from. And if no one reads your article, then that’s also okay. That voice telling you that people are just sitting somewhere watching our every step and judging us based on the popularity of our writing is a big fat pathetic attention-needing liar.

Laura Kalbag writes:

The web can be used to find common connections with folks you find interesting, and who don’t make you feel like so much of a weirdo. It’d be nice to be able to do this in a safe space that is not being surveilled.

Owning your own content, and publishing to a space you own can break through some of these barriers. Sharing your own weird scraps on your own site makes you easier to find by like-minded folks.

Brendan Dawes writes:

At times I think “will anyone read this, does anyone care?”, but I always publish it anyway — and that’s for two reasons. First it’s a place for me to find stuff I may have forgotten how to do. Secondly, whilst some of this stuff is seemingly super-niche, if one person finds it helpful out there on the web, then that’s good enough for me. After all I’ve lost count of how many times I’ve read similar posts that have helped me out.

Robin Rendle writes:

My advice after learning from so many helpful people this weekend is this: if you’re thinking of writing something that explains a weird thing you struggled with on the Internet, do it! Don’t worry about the views and likes and Internet hugs. If you’ve struggled with figuring out this thing then be sure to jot it down, even if it’s unedited and it uses too many commas and you don’t like the tone of it.

Khoi Vinh writes:

Maybe you feel more comfortable writing in short, concise bullets than at protracted, grandiose length. Or maybe you feel more at ease with sarcasm and dry wit than with sober, exhaustive argumentation. Or perhaps you prefer to knock out a solitary first draft and never look back rather than polishing and tweaking endlessly. Whatever the approach, if you can do the work to find a genuine passion for writing, what a powerful tool you’ll have.

Amber Wilson writes:

I want to finally begin writing about psychology. A friend of mine shared his opinion that writing about this is probably best left to experts. I tried to tell him I think that people should write about whatever they want. He argued that whatever he could write about psychology has probably already been written about a thousand times. I told him that I’m going to be writer number 1001, and I’m going to write something great that nobody has written before.

Austin Kleon writes:

Maybe I’m weird, but it just feels good. It feels good to reclaim my turf. It feels good to have a spot to think out loud in public where people aren’t spitting and shitting all over the place.

Tim Kadlec writes:

I write to understand and remember. Sometimes that will be interesting to others, often it won’t be.

But it’s going to happen. Here, on my own site.

You write…

Service worker resources

At the end of my new book, Going Offline, I have a little collection of resources relating to service workers. Here’s how I introduce them:

If this book were a podcast, then this would be the point at which I would be imploring you to rate me on iTunes (or I’d be telling you about a really good mattress). Instead, I’d like to give you some hyperlinks so that you can explore some of the topics in this brief book in more detail.

It always feels a little strange to publish a list of hyperlinks in a physical book, so I figured I’d republish them here for easy access…

Explanations

Guides

Examples

Progressive web apps

Tools

Documentation

Posting to my site

I was idly thinking about the different ways I can post to adactio.com. I decided to count the ways.

Admin interface

This is the classic CMS approach. In my case the CMS is a crufty hand-rolled affair using PHP and MySQL that I wrote years ago. I log in to an admin interface and fill in a form, putting the text of my posts into a textarea. In truth, I usually write in a desktop text editor first, and then paste that into the textarea. That’s what I’m doing now—copying and pasting Markdown from the Typed app.

Directly from my site

If I’m logged in, I get a stripped down posting interface in the notes section of my site.

Notes posting interface

Bookmarklet

This is how I post links. When I’m at a URL I want to bookmark, I hit the “Bookmark it” bookmarklet in my browser’s bookmarks bar. That pops open a version of the admin interface tailored specifically for links. I really, really like bookmarklets. The one big downside is that they don’t work on mobile.

Text message

This is something I knocked together at Indie Web Camp Brighton 2015 using the Twilio API. It’s handy for posting notes if I’m travelling somewhere and data is at a premium. But I don’t use it that often.

Instagram

Thanks to Aaron’s OwnYourGram service—and the fact that my site has a micropub endpoint—I can post images from Instagram to my site. This used to happen instantaneously but Instagram changed their API rules for the worse. Between that and their shitty “algorithmic” timeline, I find myself using the service less and less. At this point I’m only on there for the doggos.

Swarm

Like OwnYourGram, Aaron’s OwnYourSwarm allows me to post check-ins and photos from the Swarm app to my site. Again, micropub makes it all possible.

OwnYourGram and OwnYourSwarm are very similar and could probably be abstracted into a generic service for posting from third-party apps to micropub endpoints. I’d quite like to post my check-ins on Untappd to my site.

Other people’s admin interfaces

Thanks to rel="me" and IndieAuth, I can log into other people’s posting interfaces using my own website as the log-in, and post to my micropub endpoint, like this. Quill is a good example of this. I don’t use it that much, but I really should—the editor interface is quite Medium-like in its design.
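
All of these routes end up making the same kind of request to a micropub endpoint. A bare-bones sketch of what that looks like, with a placeholder endpoint and token (a real client would discover the endpoint from the site’s head and obtain a token via IndieAuth):

fetch('https://example.com/micropub', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer XXXXXXXX',
    'Content-Type': 'application/x-www-form-urlencoded'
  },
  body: 'h=entry&content=' + encodeURIComponent('Hello, indie web!')
});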

Anyway, those are the different ways I can update my website that I can think of right now.

Syndication

In terms of output, I’ve got a few different ways of syndicating what I post here:

Just so you know, if you comment on one of my posts on Facebook, I probably won’t see it. But if you reply to a copy of one of my posts on Twitter or Instagram, it will show up over here on adactio.com thanks to the magic of Brid.gy and webmention.

Open source

Building and maintaining an open-source project is hard work. That observation is about as insightful as noting the religious affiliation of the pope or the scatological habits of woodland bears.

Nolan Lawson wrote a lengthy post describing what it feels like to be an open-source maintainer.

Outside your door stands a line of a few hundred people. They are patiently waiting for you to answer their questions, complaints, pull requests, and feature requests.

You want to help all of them, but for now you’re putting it off. Maybe you had a hard day at work, or you’re tired, or you’re just trying to enjoy a weekend with your family and friends.

But if you go to github.com/notifications, there’s a constant reminder of how many people are waiting

Most of the comments on the post are from people saying “Yup, I hear ya!”

Jan wrote a follow-up post called Sustainable Open Source: The Maintainers Perspective or: How I Learned to Stop Caring and Love Open Source:

Just because there are people with problems in front of your door, that doesn’t mean they are your problems. You can choose to make them yours, but you want to be very careful about what to care about.

There’s also help at hand in the shape of Open Source Guides created by Nadia Eghbal:

A collection of resources for individuals, communities, and companies who want to learn how to run and contribute to an open source project.

I’m sure Mark can relate to all of the tales of toil that come with being an open-source project maintainer. He’s been working flat-out on Fractal, sometimes at work, but often at home too.

Fractal isn’t really a Clearleft project, at least not in the same way that something like Silverback or UX London is. We’re sponsoring Fractal as much as we can, but an open-source project doesn’t really belong to anyone; everyone is free to fork it and take it. But I still want to make sure that Mark and Danielle have time at work to contribute to Fractal. It’s hard to balance that with the bill-paying client work though.

I invited Remy around to chat with them last week. It was really valuable. Mind you, Remy was echoing many of the same observations made in Nolan’s post about how draining this can be.

So nobody here is under any illusions that this open-source lark is to be entered into lightly. It can be a gruelling exercise. But then it can also be very, very rewarding. One kind word from somebody using your software can make your day. I was genuinely pleased as punch when Danish agency Shift sent Mark a gift to thank him for all his hard work on Fractal.

People can be pretty darn great (which I guess is an underlying principle of open source).

Deep linking with fragmentions

When I was marking up Resilient Web Design I wanted to make sure that people could link to individual sections within a chapter. So I added IDs to all the headings. There’s no UI to expose that though—like the hover pattern that some sites use to show that something is linkable—so unless you know the IDs are there, there’s no way of getting at them other than “view source.”

But if you’re reading a passage in Resilient Web Design and you highlight some text, you’ll notice that the URL updates to include that text after a hash symbol. If that updated URL gets shared, then anyone following it should be sent straight to that string of text within the page. That’s fragmentions in action:

Fragmentions find the first matching word or phrase in a document and focus its closest surrounding element. The match is determined by the case-sensitive string following the # (or ## double-hash)

It’s a similar idea to Eric and Simon’s proposal to use CSS selectors as fragment identifiers, but using plain text instead. You can find out more about the genesis of fragmentions from Kevin. I’m using Jonathon Neal’s script with some handy updates from Matthew.
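
The core of the idea is small enough to sketch. This isn’t Jonathon Neal’s actual script, just a bare-bones illustration of the concept:

// Take the text after the # and scroll to the first element whose text contains it.
function findFragmention() {
  var fragment = decodeURIComponent(window.location.hash.replace(/^#+/, ''));
  if (!fragment || document.getElementById(fragment)) return; // ordinary fragment identifiers still win
  var walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  while (walker.nextNode()) {
    if (walker.currentNode.textContent.indexOf(fragment) !== -1) {
      walker.currentNode.parentElement.scrollIntoView();
      return;
    }
  }
}
window.addEventListener('hashchange', findFragmention);
findFragmention();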

I’m using the fragmention support to power the index of the book. It relies on JavaScript to work though, so Matthew has come to the rescue again and created a version of the site with IDs for each item linked from the index (I must get around to merging that).

The fragmention functionality is ticking along nicely with one problem…

I’ve tweaked the typography of Resilient Web Design to within an inch of its life, including a crude but effective technique to avoid widowed words at the end of a paragraph. The last two words of every paragraph are separated by a UTF-8 no-break space character instead of a regular space.
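
The gist of that substitution, sketched here in JavaScript purely for illustration (in practice it’s the kind of thing that’s easiest to do at templating time):

// Crudely join the last two words of each paragraph with a no-break space (U+00A0).
document.querySelectorAll('p').forEach(function (paragraph) {
  paragraph.innerHTML = paragraph.innerHTML.replace(/ (\S+\s*)$/, '\u00A0$1');
});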

That solves the widowed words problem, but it confuses the fragmention script. Any selected text that includes the last two words of a paragraph fails to match. I’ve tried tweaking the script, but I’m stumped. If you fancy having a go, please have at it.

Update: And fixed! Thanks to Lee.

Exploring web technologies

Last week, I had two really enjoyable experiences discussing completely opposite ends of the web technology stack.

Tuesday is Codebar day here in Brighton. Clearleft hosted it at 68 Middle Street last week. I really, really enjoy coaching at Codebar. I particularly like teaching the absolute basics of HTML and CSS. There’s something so rewarding about seeing the “a-ha!” moments when concepts click with people. I also love answering the inevitable questions that arise, like “why is it like that?”, or “how do I do this?”

Fantastic coding tonight! Great to see you all. Thanks for coming and thanks @68MiddleSt & @clearleft for having us.

Thursday was devoted to the opposite end of the spectrum. I ran a workshop at Clearleft with some developers from one of our clients. The whole day was dedicated to exploring and evaluating up-and-coming web technologies. Basically, it was a chance to geek out about all the stuff I’ve been linking to or writing about. During the workshop I ended up making a lot of use of my tagging system here on adactio.com:

Prioritising topics for discussion.

Web components and service workers ended up at the top of the list of technologies to tackle, which was fortuitous, given my recent thoughts on comparing the two:

First of all, ask the question “who benefits from this technology?” In the case of service workers, it’s the end users. They get faster websites that handle network failure better. In the case of web components, there are no direct end-user benefits. Web components exist to make developers lives easier. That’s absolutely fine, but any developer convenience gained by the use of web components can’t come at the expense of the user—that price is too high.

The next question we usually ask when we’re evaluating a technology is “how well does it work?” Personally, I think it’s just as important to ask “how well does it fail?”

Those two questions turned out to be a good framework for the whole workshop. The question of how to evaluate technologies is something I’ve been thinking about a lot lately. I’m pretty sure it will be what my next conference talk is going to be all about.

You can read more about the structure of the workshop over on the Clearleft site. I’m looking forward to running it again sometime. But I’m equally looking forward to getting back to the basics at the next Codebar.

Independently published

Jessica writes about The Heroine’s Journey.

Remy explains Why I love working with the web.

Ludwig dreams of designers and developers working Together.

Charlotte documents her technique Teaching the order of margins in CSS.

Craig field-tests The Leica Q.

Robin thinks about The New Web Typography.

Michael dives deep into A Complete History of the Millennium Falcon.

What do they all have in common? Nothing …other than the fact that each person chose to write on their own website. I’m grateful for that. These are all wonderful pieces of writing—they deserve a long life.

Huffduffing for podcasters

I was pointed to this discussion thread which is talking about how to make podcast episodes findable for services like Huffduffer.

The logic behind Huffduffer’s bookmarklet goes something like this…

  1. Find any a elements that have href values ending in “.mp3” or “.m4a”.
  2. If there’s just one audio file on the page, use that.
  3. If there are multiple audio files, offer a list to the user and have them choose.

If that doesn’t work…

  1. Look for a link element with a rel value of “enclosure”.
  2. Look for a meta element with a property value of “og:audio”.
  3. Look for audio elements and grab either the src attribute of the element itself, or the src attribute of any source elements within the audio element.

If that doesn’t work…

  1. Try to find a link to an RSS feed (a link that looks like “rss” or “feed” or “atom”).
  2. If there is a feed, parse that for enclosure elements and present that list to the user.

That covers 80-90% of use cases. There are still situations where the actual audio file for a podcast episode is heavily obfuscated—either with clickjacking JavaScript “download” links, or links that point to a redirection to the actual file.
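
The first couple of steps in that logic are small enough to sketch (simplified, and not Huffduffer’s actual code):

// Step one: any direct links to audio files on the page?
var audioLinks = document.querySelectorAll('a[href$=".mp3"], a[href$=".m4a"]');

if (!audioLinks.length) {
  // Fall back to metadata in the head of the document
  var enclosure = document.querySelector('link[rel="enclosure"]');
  var ogAudio = document.querySelector('meta[property="og:audio"]');
  var audioUrl = (enclosure && enclosure.href) ||
    (ogAudio && ogAudio.getAttribute('content'));
}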

If you have a podcast and you want your episodes to be sharable and huffduffable, you have a few options:

Have a link to the audio file for the episode somewhere on the page, something like:

<a href="/path/to/file.mp3">download</a>

That’s the simplest option. If you’re hosting with Soundcloud, this is pretty much impossible to accomplish: they deliberately obfuscate and time-limit the audio file, even if you want it to be downloadable (that “download” link literally only allows a user to download that file in that moment).

If you don’t want a visible link on the page, you could use metadata in the head of your document. Either:

<link rel="enclosure" href="/path/to/file.mp3">

Or:

<meta property="og:audio" content="/path/to/file.mp3">

And if you want to encourage people to huffduff an episode of your podcast, you can also include a “huffduff it” link, like this:

<a href="https://huffduffer.com/add?page=referrer">huffduff it</a>

You can also use ?page=referer—that misspelling has become canonised thanks to HTTP.

There you go, my podcasting friends. However you decide to do it, I hope you’ll make your episodes sharable.

Small lessons, loosely learned

When Charlotte published her end-of-year report, she outlined her plans for 2016, which included “Document my JavaScript learning journey.”

I want to get into the habit of writing one JavaScript post every week to make sure I keep up with learning it. Even if it’s just a few words about some relevant terminology; it can be as long or short as desired or time allows, as long as it happens.

An excellent plan! If you really want to make sure you’ve understood something, write down an explanation of it. There’s nothing quite like writing to really test your grasp of an idea. Even if nobody else ever reads it, it’s still an extremely valuable exercise for yourself.

Charlotte has already started. Here’s a short post on using JavaScript to pick an item at random from an array:

Math.round(Math.random() * (array.length - 1))

It might seem like a small thing, but look what you have to understand:

  • How Math.round works (pretty straightforward—it rounds a floating point number to the nearest whole number).
  • How Math.random works (less straightforward—it gives a random number between zero and one, meaning you have to multiply it to do anything useful with the result).
  • How array.length works (seems straightforward—it gives you the total number of items in an array, but then catches you out when you realise there can never be an index with that total value because the indices are counted from zero …which gives rise to an entire class of programming error).
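
Wrapped up as a little function, Charlotte’s expression looks something like this (the example array is just a placeholder):

function randomItem(array) {
  // Charlotte's approach, using Math.round
  return array[Math.round(Math.random() * (array.length - 1))];
  // A common alternative is array[Math.floor(Math.random() * array.length)],
  // which gives every item exactly the same chance of being picked.
}

randomItem(['reel', 'jig', 'hornpipe', 'polka']);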

I really like this approach to learning: document each small thing as you go instead of waiting until all those individual pieces click together. That’s the approach Andy took when he was learning CSS and it led to him writing a book on the subject.

When it comes to problem-solving in general (and JavaScript in particular), I have a similar bias towards single-purpose solutions. Rather than creating a monolithic framework that attempts to solve all possible problems, I prefer a collection of smaller scripts that only do one thing, but do it really well; the UNIX philosophy.

Take, for example, Remy’s bind.js. It does two-way data-binding and nothing else. If you only need one-way data-binding, there’s Simulacra.js, which takes a similar minimalist, hands-off approach.

This approach of breaking large problems down into a collection of smaller problems also came up in a completely unrelated discussion at work recently. I floated the idea of starting a book club at Clearleft. Quite a few people are into the idea, but they’re not sure they could commit the time to reading a book on a deadline. Fair enough. Perhaps we could have the book club on a chapter-by-chapter basis? I don’t think that would work all that well for novels, but it did make me think of something related to Charlotte’s stated goal of learning more JavaScript…

Graham has been raving about the You Don’t Know JS book series by the supersmart Kyle Simpson. I suggested to Charlotte that we read through the books at the rate of one chapter a week. The first book is called Up and Going and our first chapter will be Into Programming, starting this week. Then at the end of the week we’ll get together to compare notes.

I’m hoping that by doing this together, there’s more chance that we’ll actually see it through to completion:

Why can I hit deadlines imposed by others, but not those imposed by myself?

Notice

We’ve been doing a lot of soul-searching at Clearleft recently; examining our values; trying to make implicit unspoken assumptions explicit and spoken. That process has unearthed some activities that have been at the heart of our company from the very start—sharing, teaching, and nurturing. After all, Clearleft would never have been formed if it weren’t for the generosity of people out there on the web sharing with myself, Andy, and Richard.

One of the values/mottos/watchwords that’s emerging is “Share what you learn.” I like that a lot. It echoes the original slogan of the World Wide Web project, “Share what you know.” It’s been a driving force behind our writing, speaking, and events.

In the same spirit, we’ve been running internship programmes for many years now. John is the latest of a long line of alumni that includes Anna, Emil, and James.

By the way—and this should go without saying, but apparently it still needs to be said—the internships are always, always paid. I know that there are other industries where unpaid internships are the norm. I’ve even heard otherwise-intelligent people defend those unpaid internships for the experience they offer. But what kind of message does it send to someone about the worth of their work when you withhold payment for it? Our industry is young. Let’s not fall foul of the pernicious traps set by older industries that have habitualised exploitation.

In the past couple of years, Andy concocted a new internship scheme:

So this year we decided to try a different approach by scouring the end of year degree shows for hot new talent. We found them not in the interaction courses as we’d expected, but from the worlds of Product Design, Digital Design and Robotics. We assembled a team of three interns, with a range of complementary skills, gave them a space on the mezzanine floor of our new building, and set them a high level brief.

The first such programme resulted in Chüne. The latest Clearleft internship project has just come to an end. The result is Notice.

This time ‘round, the three young graduates were Chloe, Chris and Monika. They each have differing but complementary skill sets: Chloe is a user interface designer; Chris is a product designer; Monika is an artist who knows her way around hardware hacking and coding.

I’ll miss having this lot in the Clearleft office.

Once again, they were set a fairly loose brief. They should come up with something “to enrich the lives of local residents” and it should have a physical and digital component to it.

They got stuck in to researching and brainstorming ideas. At the end of each week, we’d all gather together to get a playback of what they were coming up with. It was at these playbacks that the interns were introduced to a concept that they will no doubt encounter again in their professional lives: seagulling AKA the swoop and poop. For once, it was the Clearlefties who were in the position of being swoop-and-poopers, rather than swoop-and-poopies.

Playback at Clearleft

As the midway point of the internship approached, there were some interesting ideas, but no clear “winner” to pursue. Something else was happening around this time too: dConstruct 2015.

Chloe, Monika and Chris at dConstruct

The interns pitched in with helping out at the event, and in return, we kidnapped some of the speakers—namely John Willshire and Chris Noessel—to offer them some guidance.

There was also plenty of inspiration to be had from the dConstruct talks themselves. One talk in particular struck a chord: Dan Hill’s The City Of Things …especially the bit where he railed against the terrible state of planning application notices:

Most of the time, it ends up down the bottom of the lamppost—soiled and soggy and forgotten. This should be an amazing thing!

Hmm… sounds like something that could enrich the lives of local residents.

Not long after that, Matt Webb came to visit. He encouraged the interns to focus in on just the two ideas that really excited them rather than the 5 or 6 that they were considering. So at the next playback, they presented two potential projects—one about biking and the other about city planning. They put it to a vote and the second project won by a landslide.

That was the genesis of Notice. After that, they pulled out all the stops.

Exciting things are afoot with the @Clearleftintern project.

Not content with designing one device, they came up with a range of three devices to match the differing scope of planning applications. They set about making a working prototype of the device intended for the most common applications.

Monika and Chris, hacking

Last week marked the end of the project and the grand unveiling.

Playing with the @notice_city prototype. Chris breaks it down. Playback time. Unveiling.

They’ve done a great job. All the details are on the website, including this little note I wrote about the project:

This internship programme was an experiment for Clearleft. We wanted to see what would happen if you put three talented young people in a room together for three months to work on a fairly loose brief. Crucially, we wanted to see work that wasn’t directly related to our day-to-day dealings with web design.

We offered feedback and advice, but we received so much more in return. Monika, Chloe, and Chris brought an energy and enthusiasm to the Clearleft office that was invigorating. And the quality of the work they produced together exceeded our wildest expectations.

We hereby declare this experiment a success!

Personally, I think the work they’ve produced is very strong indeed. It would be a shame for it to end now. Perhaps there’s a way that it could be funded for further development. Here’s hoping.

Out on the streets of Brighton Prototype

As impressed as I am with the work, I’m even more impressed with the people. They’re not just talented and hard-working—they’re a jolly nice bunch to have around.

I’m going to miss them.

The terrific trio!

My first Service Worker

I’ve made no secret of the fact that I’m really excited about Service Workers. I’m not alone. At the Coldfront conference in Copenhagen, pretty much every talk mentioned Service Workers.

Obviously I’m excited about what Service Workers enable: offline caching, background processes, push notifications, and all sorts of other goodies that allow the web to compete with native. But more than that, I’m really excited about the way that the Service Worker spec has been designed. Instead of being an all-or-nothing technology that you have to bet the farm on, it has been deliberately crafted to be used as an enhancement on top of existing sites (oh, how I wish that web components would follow a similar path).

I’ve got plenty of ideas on how Service Workers could be used to enhance a community site like The Session or the kind of events sites that we produce at Clearleft, but to begin with, I figured it would make sense to use my own personal site as a playground.

To start with, I’ve already conquered the first hurdle: serving my site over HTTPS. Service Workers require a secure connection. But you can play around with running a Service Worker locally if you run a copy of your site on localhost.

That’s how I started experimenting with Service Workers: serving on localhost, and stopping and starting my local Apache server with apachectl stop and apachectl start on the command line.

That reminds me of another interesting use case for Service Workers: it’s not just about the user’s network connection failing (say, going into a train tunnel); it’s also about your web server not always being available. Both scenarios are covered equally.

I would never have even attempted to start if it weren’t for the existing examples from people who have been generous enough to share their work:

Also, I knew that Jake was coming to FF Conf so if I got stumped, I could pester him. That’s exactly what ended up happening (thanks, Jake!).

So if you decide to play around with Service Workers, please, please share your experience.

It’s entirely up to you how you use Service Workers. I figured for a personal site like this, it would be nice to:

  1. Explicitly cache resources like CSS, JavaScript, and some images.
  2. Cache the homepage so it can be displayed even when the network connection fails.
  3. For other pages, have a fallback “offline” page to display when the network connection fails.

So now I’ve got a Service Worker up and running on adactio.com. It will only work in Chrome, Android, Opera, and the forthcoming version of Firefox …and that’s just fine. It’s an enhancement. As more and more browsers start supporting it, this Service Worker will become more and more useful.

How very future friendly!

The code

If you’re interested in the nitty-gritty of what my Service Worker is doing, read on. If, on the other hand, code is not your bag, now would be a good time to bow out.

If you want to jump straight to the finished code, here’s a gist. Feel free to take it, break it, copy it, improve it, or do anything else you want with it.

To start with, let’s establish exactly what a Service Worker is. I like this definition by Matt Gaunt:

A service worker is a script that is run by your browser in the background, separate from a web page, opening the door to features which don’t need a web page or user interaction.

register

From inside my site’s global JavaScript file—or I could do this from a script element inside my pages—I’m going to do a quick bit of feature detection for Service Workers. If the browser supports it, then I’m going to register my Service Worker by pointing to another JavaScript file, which sits at the root of my site:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js', {
    scope: '/'
  });
}

The serviceworker.js file sits in the root of my site so that it can act on any requests to my domain. If I put it somewhere like /js/serviceworker.js, then it would only be able to act on requests to the /js directory.

Once that file has been loaded, the installation of the Service Worker can begin. That means the script will be installed in the user’s browser …and it will live there even after the user has left my website.

install

I’m making the installation of the Service Worker dependent on a function called updateStaticCache that will populate a cache with the files I want to store:

self.addEventListener('install', function (event) {
  event.waitUntil(updateStaticCache());
});

That updateStaticCache function will be used for storing items in a cache. I’m going to make sure that the cache has a version number in its name, exactly as described in the Guardian’s use case. That way, when I want to update the cache, I only need to update the version number.

var staticCacheName = 'static';
var version = 'v1::';

Here’s the updateStaticCache function that puts the items I want into the cache. I’m storing my JavaScript, my CSS, some images referenced in the CSS, the home page of my site, and a page for displaying when offline.

function updateStaticCache() {
  return caches.open(version + staticCacheName)
    .then(function (cache) {
      return cache.addAll([
        '/path/to/javascript.js',
        '/path/to/stylesheet.css',
        '/path/to/someimage.png',
        '/path/to/someotherimage.png',
        '/',
        '/offline'
      ]);
    });
};

Because those items are part of the return statement for the Promise created by caches.open, the Service Worker won’t install until all of those items are in the cache. So you might want to keep them to a minimum.

You can still put other items in the cache, and not make them part of the return statement. That way, they’ll get added to the cache in their own good time, and the installation of the Service Worker won’t be delayed:

function updateStaticCache() {
  return caches.open(version + staticCacheName)
    .then(function (cache) {
      cache.addAll([
        '/path/to/somefile',
        '/path/to/someotherfile'
      ]);
      return cache.addAll([
        '/path/to/javascript.js',
        '/path/to/stylesheet.css',
        '/path/to/someimage.png',
        '/path/to/someotherimage.png',
        '/',
        '/offline'
      ]);
    });
}

Another option is to use completely different caches, but I’ve decided to just use one cache for now.

activate

When the activate event fires, it’s a good opportunity to clean up any caches that are out of date (by looking for anything that doesn’t match the current version number). I copied this straight from Nicolas’s code:

self.addEventListener('activate', function (event) {
  event.waitUntil(
    caches.keys()
      .then(function (keys) {
        return Promise.all(keys
          .filter(function (key) {
            return key.indexOf(version) !== 0;
          })
          .map(function (key) {
            return caches.delete(key);
          })
        );
      })
  );
});

fetch

The fetch event is fired every time the browser is going to request a file from my site. The magic of Service Worker is that I can intercept that request before it happens and decide what to do with it:

self.addEventListener('fetch', function (event) {
  var request = event.request;
  ...
});

POST requests

For a start, I’m going to just back off from any requests that aren’t GET requests:

if (request.method !== 'GET') {
  event.respondWith(
      fetch(request)
  );
  return;
}

That’s basically just replicating what the browser would do anyway. But even here I could decide to fall back to my offline page if the request doesn’t succeed. I do that using a catch clause appended to the fetch statement:

if (request.method !== 'GET') {
  event.respondWith(
      fetch(request)
          .catch(function () {
              return caches.match('/offline');
          })
  );
  return;
}

HTML requests

I’m going to treat requests for pages differently to requests for files. If the browser is requesting a page, then here’s the order I want:

  1. Try fetching the page from the network first.
  2. If that doesn’t work, try looking for the page in the cache.
  3. If all else fails, show the offline page.

First of all, I need to test to see if the request is for an HTML document. I’m doing this by sniffing the Accept headers, which probably isn’t the safest method:

if (request.headers.get('Accept').indexOf('text/html') !== -1) {

Now I try to fetch the page from the network:

event.respondWith(
  fetch(request)
);

If the network is working fine, this will return the response from the site and I’ll pass that along.

But if that doesn’t work, I’m going to look for a match in the cache. Time for a catch clause:

.catch(function () {
  return caches.match(request);
})

So now the whole event.respondWith statement looks like this:

event.respondWith(
  fetch(request)
    .catch(function () {
      return caches.match(request)
    })
);

Finally, I need to take care of the situation when the page can’t be fetched from the network and it can’t be found in the cache.

Now, I first tried to do this by adding a catch clause to the caches.match statement, like this:

return caches.match(request)
  .catch(function () {
    return caches.match('/offline');
  })

That didn’t work and for the life of me, I couldn’t figure out why. Then Jake set me straight. It turns out that the promise returned by caches.match always resolves …even when there’s no match, it resolves with undefined rather than rejecting. So a catch clause will never be triggered. Instead I need to return the offline page if the response from the cache is falsey:

return caches.match(request)
  .then(function (response) {
    return response || caches.match('/offline');
  })

With that cleared up, my code for handling HTML requests looks like this:

event.respondWith(
  fetch(request, { credentials: 'include' })
    .catch(function () {
      return caches.match(request)
        .then(function (response) {
          return response || caches.match('/offline');
        })
    })
);

Actually, there’s one more thing I’m doing with HTML requests. If the network request succeeds, I stash the response in the cache.

Well, that’s not exactly true. I stash a copy of the response in the cache. That’s because the body of a response can only be read once. So if I want to do anything with it, I have to clone it:

var copy = response.clone();
caches.open(version + staticCacheName)
  .then(function (cache) {
    cache.put(request, copy);
  });

I do that right before returning the actual response. Here’s how it fits together:

if (request.headers.get('Accept').indexOf('text/html') !== -1) {
  event.respondWith(
    fetch(request, { credentials: 'include' })
      .then(function (response) {
        var copy = response.clone();
        caches.open(version + staticCacheName)
          .then(function (cache) {
            cache.put(request, copy);
          });
        return response;
      })
      .catch(function () {
        return caches.match(request)
          .then(function (response) {
            return response || caches.match('/offline');
          })
      })
  );
  return;
}

Okay. So that’s requests for pages taken care of.

File requests

I want to handle requests for files differently to requests for pages. Here’s my list of priorities:

  1. Look for the file in the cache first.
  2. If that doesn’t work, make a network request.
  3. If all else fails, and it’s a request for an image, show a placeholder.

Step one: try getting the file from the cache:

event.respondWith(
  caches.match(request)
);

Step two: if that didn’t work, go out to the network. Now remember, I can’t use a catch clause here, because caches.match will always return something: either a response or undefined. So here’s what I do:

event.respondWith(
  caches.match(request)
    .then(function (response) {
      return response || fetch(request);
    })
);

Now that I’m back to dealing with a fetch statement, I can use a catch clause to take care of the third and final step: if the network request doesn’t succeed, check to see if the request was for an image, and if so, display a placeholder:

.catch(function () {
  if (request.headers.get('Accept').indexOf('image') !== -1) {
    return new Response('<svg>...</svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
  }
})

I could point to a placeholder image in the cache, but I’ve decided to send an SVG on the fly using a new Response object.
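
Just to illustrate the idea (this isn’t my actual placeholder markup), an on-the-fly SVG response could be as simple as a grey rectangle:

// A hypothetical placeholder: a plain grey rectangle, generated on the fly.
return new Response(
  '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300"><rect width="100%" height="100%" fill="#ccc"/></svg>',
  { headers: { 'Content-Type': 'image/svg+xml' } }
);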

Here’s how the whole thing looks:

event.respondWith(
  caches.match(request)
    .then(function (response) {
      return response || fetch(request)
        .catch(function () {
          if (request.headers.get('Accept').indexOf('image') !== -1) {
            return new Response('<svg>...</svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
          }
        })
    })
);

The overall shape of my code to handle fetch events now looks like this:

self.addEventListener('fetch', function (event) {
  var request = event.request;
  // Non-GET requests
  if (request.method !== 'GET') {
    event.respondWith(
      ... 
    );
    return;
  }
  // HTML requests
  if (request.headers.get('Accept').indexOf('text/html') !== -1) {
    event.respondWith(
      ...
    );
    return;
  }
  // Non-HTML requests
  event.respondWith(
    ...
  );
});

Feel free to peruse the code.

Next steps

The code I’m running now is fine for a first stab, but there’s room for improvement.

Right now I’m stashing any HTML pages the user visits into the cache. I don’t think that will get out of control—I imagine most people only ever visit just a handful of pages on my site. But there’s the chance that the cache could get quite bloated. Ideally I’d have some way of keeping the cache nice and lean.

I was thinking: maybe I should have a separate cache for HTML pages, and limit the number in that cache to, say, 20 or 30 items. Every time I push something new into that cache, I could pop the oldest item out.

I could imagine doing something similar for images: keeping a cache of just the most recent 10 or 20.

If you fancy having a go at coding that up, let me know.
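
For what it’s worth, here’s a rough, untested sketch of the kind of thing I have in mind—the trimCache function and the maxItems limit are just made-up names for illustration:

// Sketch: keep a cache down to a maximum number of items
// by deleting the oldest entries first.
function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
    .then(function (cache) {
      cache.keys()
        .then(function (keys) {
          if (keys.length > maxItems) {
            // Delete the oldest entry, then check again.
            cache.delete(keys[0])
              .then(function () {
                trimCache(cacheName, maxItems);
              });
          }
        });
    });
}

Something like trimCache(version + 'pages', 25) could then be called from the activate handler, or in response to a message posted from the page.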

Lessons learned

There were a few gotchas along the way. I already mentioned the fact that caches.match will always return something, so you can’t use catch clauses to handle situations where a file isn’t found in the cache.

Something else worth noting is that this:

fetch(request);

…is functionally equivalent to this:

fetch(request)
  .then(function (response) {
    return response;
  });

That’s probably obvious but it took me a while to realise. Likewise:

caches.match(request);

…is the same as:

caches.match(request)
  .then(function (response) {
    return response;
  });

Here’s another thing… you’ll notice that sometimes I’ve used:

fetch(request);

…but sometimes I’ve used:

fetch(request, { credentials: 'include' });

That’s because, by default, a fetch request doesn’t include cookies. That’s fine if the request is for a static file, but if it’s for a potentially-dynamic HTML page, you probably want to make sure that the Service Worker request is no different from a regular browser request. You can do that by passing through that second (optional) argument.

But probably the trickiest thing is getting your head around the idea of Promises. Writing JavaScript is generally a fairly procedural affair, but once you start dealing with then clauses, you have to get to grips with the fact that the code inside those clauses executes asynchronously. So statements written after the then clause will probably execute before the code inside the clause. It’s kind of hard to explain, but if you find problems with your Service Worker code, check to see if that’s the cause.
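
Here’s a contrived example—nothing to do with my actual Service Worker code—just to show the order of execution:

console.log('one');
fetch('/offline')
  .then(function (response) {
    console.log('two');
  });
console.log('three');
// The console shows 'one', then 'three' …and only later 'two'.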

And remember, please share your code and your gotchas: it’s early days for Service Workers so every implementation counts.

Updates

I got some very useful feedback from Jake after I published this…

Expires headers

By default, JavaScript files on my server are cached for a month. But a Service Worker script probably shouldn’t be cached at all (or cached for a very, very short time). I’ve updated my .htaccess rules accordingly:

<FilesMatch "serviceworker.js">
  ExpiresDefault "now"
</FilesMatch>

Credentials

If a request is initiated by the browser, I don’t need to say:

fetch(request, { credentials: 'include' });

It’s enough to just say:

fetch(request);

Scope

I set the scope parameter of my Service Worker to be “/” …but because the Service Worker is sitting in the root directory anyway, I don’t really need to do that. I could just register it with:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js');
}

If, on the other hand, the Service Worker file were sitting in a folder, but I wanted it to act on the whole site, then I would need to specify the scope:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/path/to/serviceworker.js', {
    scope: '/'
  });
}

…and I’d also need to send a special header. So it’s probably easiest to just put Service Worker scripts in the root directory.
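
For the record, the header in question is Service-Worker-Allowed. If I really did want to keep the script in a subdirectory, the .htaccess rule might look something like this—just a sketch, assuming Apache’s mod_headers is enabled:

<FilesMatch "serviceworker.js">
  Header set Service-Worker-Allowed "/"
</FilesMatch>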