Photograph

Do you have a favourite non-personal photograph?

By non-personal, I mean one that isn’t directly related to your life: photographs of family members, friends, travel (remember travel?).

Even discounting those photographs, there’s still a vast pool of candidates. There are all the amazing pictures taken by photojournalists like Lee Miller. There’s all the awe-inspiring wildlife photography out there. Then there are the kind of posters that end up on bedroom walls, like Robert Doisneau’s The Kiss.

One of my favourite photographs of all time has music as its subject matter. No, not Johnny Cash flipping the bird, although I believe this picture to be just as rock’n’roll.

In the foreground, Séamus Ennis sits with his pipes. In the background, Jean Ritchie is leaning intently over her recording equipment.

This is a photograph of Séamus Ennis and Jean Ritchie. It was probably taken around 1952 or 1953 by Ritchie’s husband, George Pickow, when Jean Ritchie and Alan Lomax were in Ireland to do field recordings.

I love everything about it.

Séamus Ennis looks genuinely larger than life (which, by all accounts, he was). And just look at the length of those fingers! Meanwhile Jean Ritchie is equally indomitable, just as much a part of the story as the musician she’s there to record.

Both of them have expressions that convey how intent they are on their machines—Ennis’s uilleann pipes and Ritchie’s tape recorder. It’s positively steampunk!

What a perfect snapshot of tradition and technology meeting slap bang in the middle of the twentieth century.

Maybe that’s why I love it so much. One single photograph is filled with so much that’s dear to me—traditional Irish music meets long-term archival preservation.

Future Sync 2020

I was supposed to be in Plymouth yesterday, giving the opening talk at this year’s Future Sync conference. Obviously, that train journey never happened, but the conference did.

The organisers gave us speakers the option of pre-recording our talks, which I jumped on. It meant that I wouldn’t be reliant on a good internet connection at the crucial moment. It also meant that I was available to provide additional context—mostly in the form of a deluge of hyperlinks—in the chat window that accompanied the livestream.

The whole thing went very smoothly indeed. Here’s the video of my talk. It was The Layers Of The Web, which I’ve only given once before, at Beyond Tellerrand Berlin last November (in the Before Times).

As well as answering questions in the chat room, people were also asking questions in Sli.do. But rather than answering those questions there, I was supposed to respond in a social medium of my choosing. I chose my own website, with copies syndicated to Twitter.

Here are those questions and answers…

The first few questions were about last year’s CERN project, which opens the talk:

Based on what you now know from the CERN 2019 WorldWideWeb Rebuild project—what would you have done differently if you had been part of the original 1989 Team?

I responded:

Actually, I think the original WWW project got things mostly right. If anything, I’d correct what came later: cookies and JavaScript—those two technologies (which didn’t exist on the web originally) are the source of tracking & surveillance.

The one thing I wish had been done differently is I wish that JavaScript were a same-origin technology from day one:

https://adactio.com/journal/16099

Next question:

How excited were you when you initially got the call for such an amazing project?

My predictable response:

It was an unbelievable privilege! I was so excited the whole time—I still can hardly believe it really happened!

https://adactio.com/journal/14803

https://adactio.com/journal/14821

Later in the presentation, I talked about service workers and progressive web apps. I got a technical question about that:

Is there a limit to the amount of local storage a PWA can use?

I answered:

Great question! Yes, there are limits, but we’re generally talking megabytes here. It varies from browser to browser and depends on the available space on the device.

But files stored using the Cache API are less likely to be deleted than files stored in the browser cache.

More worrying is the announcement from Apple to only store files for a week of browser use:

https://adactio.com/journal/16619
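
As an aside, you can ask the browser how much storage your origin has to play with, using the Storage API’s estimate() method. A minimal sketch (support varies from browser to browser):

if (navigator.storage && navigator.storage.estimate) {
  navigator.storage.estimate().then(({usage, quota}) => {
    // usage: bytes currently stored; quota: a rough upper bound
    console.log(`Using ${usage} of approximately ${quota} bytes.`);
  });
}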

Finally, there was a question about the over-arching theme of the talk…

Great talk, Jeremy. Do you encounter push-back when using the term “Progressive Enhancement”?

My response:

Yes! …And that’s why I never once used the phrase “progressive enhancement” in my talk. 🙂

There’s a lot of misunderstanding of the term. Rather than correct it, I now avoid it:

https://adactio.com/journal/9195

Instead of using the phrase “progressive enhancement”, I now talk about the benefits and effects of the technique: resilience, universality, etc.

Future Sync Distributed 2020

A reading of The Enormous Space by J.G. Ballard

Staying at home triggered a memory for me. I remembered reading a short story many years ago. It was by J.G. Ballard, and it described a man who makes the decision not to leave the house.

Being a J.G. Ballard story, it doesn’t end there. Over the course of the story, the house grows and grows in size, forcing the protagonist into ever-smaller refuges within his own home. It really stuck with me.

I tried tracking it down with some Duck Duck Going. Searching for “j.g. ballard weird short story” doesn’t exactly narrow things down, but eventually I spotted the book that I had read the story in. It was called War Fever. I think I read it back when I was living in Germany, so that would’ve been in the ’90s. I certainly don’t have a copy of the book any more.

But I was able to look up a table of contents and find a title for the story that was stuck in my head. It’s called The Enormous Space.

Alas, I couldn’t find any downloadable versions—War Fever doesn’t seem to be available for the Kindle.

Then I remembered the recent announcement from the Internet Archive that it was opening up the National Emergency Library. The usual limits on “checking out” books online are being waived while physical libraries remain closed.

I found The Complete Stories of J.G. Ballard and borrowed it just long enough to re-read The Enormous Space.

If anything, it’s creepier and weirder than I remembered. And it’s laced with even more black comedy too.

I thought you might like to hear this story, so I made a recording of myself reading The Enormous Space.

Apple’s attack on service workers

Apple aren’t the best at developer relations. But, bad as their communications can be, I’m willing to cut them some slack. After all, they’re not used to talking with the developer community.

John Wilander wrote a blog post that starts with some excellent news: Full Third-Party Cookie Blocking and More. Safari is catching up to Firefox and disabling third-party cookies by default. Wonderful! I’ve had third-party cookies disabled for a few years now, and while something occasionally breaks, it’s honestly a pretty great experience all around. Denying companies the ability to track users across sites is A Good Thing.

In the same blog post, John said that client-side cookies will be capped to a seven-day lifespan, as previously announced. Just to be clear, this only applies to client-side cookies. If you’re setting a cookie on the server, using PHP or some other server-side language, it won’t be affected. So persistent logins are still doable.
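
To illustrate the distinction (a sketch, not code from the blog post): a cookie set with JavaScript’s document.cookie falls under the seven-day cap, whereas a cookie set in an HTTP response header does not.

// Client-side: set from JavaScript with document.cookie.
// Safari caps this cookie’s lifespan at seven days, whatever you request.
document.cookie = 'theme=dark; max-age=' + 60 * 60 * 24 * 365;

// Server-side: sent as a Set-Cookie response header, e.g. from PHP:
//   setcookie('session', $id, time() + 60 * 60 * 24 * 365);
// This cookie is not affected by the seven-day cap.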

Then, in an audacious example of burying the lede, towards the end of the blog post, John announces that a whole bunch of other client-side storage technologies will also be capped to seven days. Most of the technologies are APIs that, like cookies, can be used to store data: Indexed DB, Local Storage, and Session Storage (though there’s no mention of the Cache API). At the bottom of the list is this:

Service Worker registrations

Okay, let’s clear up a few things here (because they have been so poorly communicated in the blog post)…

The seven day timer refers to seven days of Safari usage, not seven calendar days (although, given how often most people use their phones, the two are probably interchangeable). So if someone returns to your site within a seven day period of using Safari, the timer resets to zero, and your service worker gets a stay of execution. Lucky you.

This only applies to Safari. So if your site has been added to the home screen and your web app manifest has a value for the “display” property like “standalone” or “fullscreen”, the seven day timer doesn’t apply.

That piece of information was missing from the initial blog post. Since the blog post was updated to include this clarification, some people have taken this to mean that progressive web apps aren’t affected by the upcoming change. Not true. Only progressive web apps that have been added to the home screen (and that have an appropriate “display” value) will be spared. That’s a vanishingly small percentage of progressive web apps, especially on iOS. To add a site to the home screen on iOS, you need to dig and scroll through the share menu to find the right option. And you need to do this unprompted. There is no ambient badging in Safari to indicate that a site is installable. Chrome’s install banner isn’t perfect, but it’s better than nothing.

Just a reminder: a progressive web app is a website that

  • runs on HTTPS,
  • has a service worker,
  • and a web manifest.

Adding to the home screen is something you can do with a progressive web app (or any other website). It is not what defines progressive web apps.
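
Incidentally, the service worker part of that recipe is a one-line registration—a minimal sketch, assuming a serviceworker.js file at the root of the site (the filename and path are up to you):

// Register a service worker if the browser supports them.
// The logic for caching and offline behaviour lives in the script itself.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}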

In any case, this move to delete service workers after seven days of using Safari is very odd, and I’m struggling to find the connection to the rest of the blog post, which is about technologies that can store data.

As I understand it, with the crackdown on setting third-party cookies, trackers are moving to first-party technologies. So whereas in the past, a tracking company could tell its customers “Add this script element to your pages”, now they have to say “Add this script element and this script file to your pages.” That JavaScript file can then store a unique identifier on the client. This could be done with a cookie, with Local Storage, or with Indexed DB, for example. But I’m struggling to understand how a service worker script could be used in this way. I’d really like to see some examples of this actually happening.
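
For illustration, here’s the kind of first-party script I mean—a hypothetical sketch of a tracker storing an identifier, not code from any real vendor:

// Hypothetical tracker: persist a unique identifier using client-side
// storage (Local Storage here; a cookie or Indexed DB would do equally well).
function getTrackingId() {
  let id = localStorage.getItem('_tracker_id');
  if (!id) {
    id = Date.now() + '-' + Math.random().toString(36).slice(2);
    localStorage.setItem('_tracker_id', id);
  }
  return id;
}

// The identifier is then sent home with every page view.
navigator.sendBeacon('https://tracker.example.com/collect', getTrackingId());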

The best explanation I can come up with for this move by Apple is that it feels like the neatest solution. That’s neat as in tidy, not as in nifty. It is definitely not a nifty solution.

If some technologies set by a specific domain are being purged after seven days, then the tidy thing to do is purge all technologies from that domain. Service workers are getting included in that dragnet.

Now, to be fair, browsers and operating systems are free to clean up storage space as they see fit. Caches, Local Storage, Indexed DB—all of those are subject to eventually getting cleaned up.

So I was curious. Wanting to give Apple the benefit of the doubt, I set about trying to find out how long service worker registrations currently last before getting deleted. Maybe this announcement of a seven day time limit would turn out to be not such a big change from current behaviour. Maybe currently service workers last for 90 days, or 60, or just 30.

Nope:

There was no time limit previously.

This is not a minor change. This is a crippling attack on service workers, a technology specifically designed to improve the user experience for return visits, whether it’s through improved performance or offline access.

I wouldn’t be so stunned had this announcement come with an accompanying feature that would allow Safari users to know when a website is a progressive web app that can be added to the home screen. But Safari continues to ignore the existence of progressive web apps. And now it will actively discourage people from using service workers.

If you’d like to give feedback on this ludicrous development, you can file a bug (down in the cellar in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying “Beware of the Leopard”).

No doubt there will still be plenty of Apple apologists telling us why it’s good that Safari has wished service workers into the cornfield. But make no mistake. This is a terrible move by Apple.

I will say this though: given The Situation we’re all living in right now, some good ol’ fashioned Hot Drama by a browser vendor behaving badly feels almost comforting.

Telling the story of performance

At Clearleft, we’ve worked with quite a few clients on site redesigns. It’s always a fascinating process, particularly in the discovery phase. There’s that excitement of figuring out what’s currently working, what’s not working, and what’s missing completely.

The bulk of this early research phase is spent diving into the current offering. But it’s also the perfect time to do some competitor analysis—especially if we want some answers to the “what’s missing?” question.

It’s not all about missing features though. Execution is equally important. Our clients want to know how their users’ experience shapes up compared to the competition. And when it comes to user experience, performance is a huge factor. As Andy says, performance is a UX problem.

There’s no shortage of great tools out there for measuring (and monitoring) performance metrics, but they’re mostly aimed at developers. Quite rightly. Developers are the ones who can solve most performance issues. But that does make the tools somewhat impenetrable if you don’t speak the language of “time to first byte” and “first contentful paint”.

When we’re trying to show our clients the performance of their site—or their competitors—we need to tell a story.

Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.

You can go to webpagetest.org/easy if you don’t need to tweak settings much beyond the typical site visit (slow 3G on mobile). Pop in your client’s URL and, when the test is done, you get a valuable but impenetrable waterfall chart. It’s not exactly the kind of thing I’d want to present to a client.

Fortunately there’s an attention-grabbing output from each test: video. Download the video of your client’s site loading. Then repeat the test with the URL of a competitor. Download that video too. Repeat for as many competitor URLs as you think appropriate.

Now take those videos and play them side by side. Presentation software like Keynote is perfect for showing multiple videos like this.

This is so much more effective than showing a table of numbers! Clients get to really feel the performance difference between their site and their competitors.

Running all those tests can take time though. But there are some other tools out there that can give a quick dose of performance information.

SpeedCurve recently unveiled Page Speed Benchmarks. You can compare the performance of sites within a particular sector like travel, retail, or finance. By default, you’ll get a filmstrip view of all the sites loading side by side. Click through on each one and you can get the video too. It might take a little while to gather all those videos, but it’s quicker than using Web Page Test directly. And it might be that the filmstrip view is impactful enough for telling your performance story.

If, during your discovery phase, you find that performance is being badly affected by third-party scripts, you’ll need some way to communicate that. Request Map Generator is fantastic for telling that story in a striking visual way. Pop the URL in there and then take a screenshot of the resulting visualisation.

The beginning of a redesign project is also the time to take stock of current performance metrics so that you can compare the numbers after your redesign launches. Crux.run is really great for tracking performance over time. You won’t get any videos but you will get some very appealing charts and graphs.

Web Page Test, Page Speed Benchmarks, and Request Map Generator are great for telling the story of what’s happening with performance right now. Crux.run balances that with the story of performance over time.

Measuring performance is important. Communicating the story of performance is equally important.

Install prompt

There’s an interesting thread on Github about the tongue-twistingly named beforeinstallprompt JavaScript event.

Let me back up…

Progressive web apps. You know what they are, right? They’re websites that have taken their vitamins. Specifically, they’re responsive websites that:

  1. are served over HTTPS,
  2. have a web app manifest, and
  3. have a service worker handling the offline scenario.

The web app manifest—a JSON file of metadata—is particularly useful for describing how your site should behave if someone adds it to their home screen. You can specify what icon should be used. You can specify whether the site should launch in a browser or as a standalone app (practically indistinguishable from a native app). You can specify which URL on the site should be used as the starting point when the site is launched from the home screen.
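
A minimal manifest might look something like this (an illustrative sketch—the member names come from the spec; the values are made up):

{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/?source=homescreen",
  "display": "standalone",
  "icons": [
    {
      "src": "/icon-512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ]
}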

So progressive web apps work just fine when you visit them in a browser, but they really shine when you add them to your home screen. It seems like pretty much everyone is in agreement that adding a progressive web app to your home screen shouldn’t be an onerous task. But how does the browser let the user know that it might be a good idea to “install” the web site they’re looking at?

The Samsung Internet browser does ambient badging—a + symbol shows up to indicate that a website can be installed. This is a great approach!

I hope that Chrome on Android will also use ambient badging at some point. To start with though, Chrome notified users that a site was installable by popping up a notification at the bottom of the screen. I think these might be called “toasts”.

Screenshots: the “add to home screen” prompt for https://huffduffer.com/ and for https://html5forwebdesigners.com/ in Chrome on Android. HTTPS + manifest.json + service worker = “add to home screen” prompt.

Needless to say, the toast notification wasn’t very effective. That’s because we web designers and developers have spent years teaching people to immediately dismiss those notifications without even reading them. Accept our cookies! Sign up to our newsletter! Install our native app! Just about anything that’s user-hostile gets put in a notification (either a toast or an overlay) and shoved straight in the user’s face before they’ve even had time to start reading the content they came for in the first place. Users will then either:

  1. turn around and leave, or
  2. use muscle memory to reach for that X in the corner of the notification.

A tiny fraction of users might actually click on the call to action, possibly by mistake.

Chrome didn’t abandon the toast notification for progressive web apps, but it did change when they would appear. Rather than the browser deciding when to show the prompt—usually when the user has just arrived on the site—a new JavaScript event called beforeinstallprompt can be used.

It’s a bit weird though. You have to “capture” the event that fires when the prompt would have normally been shown, subdue it, hold on to that event, and then re-release it when you think it should be shown (like when the user has completed a transaction, for example, and having your site on the home screen would genuinely be useful). That’s a lot of hoops. Here’s the code I use on The Session to only show the installation prompt to users who are logged in.
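
That code is specific to The Session, but the general pattern looks something like this (a sketch of the capture-and-release dance rather than the actual code):

// Capture the event when the browser decides the site is installable…
let deferredPrompt;
window.addEventListener('beforeinstallprompt', (event) => {
  event.preventDefault(); // …subdue it: no prompt just yet…
  deferredPrompt = event; // …and hold on to it for later.
});

// Then, at a moment of your choosing (after a completed transaction, say):
function showInstallPrompt() {
  if (!deferredPrompt) return;
  deferredPrompt.prompt(); // re-release the event to show the prompt
  deferredPrompt.userChoice.then((choice) => {
    console.log('User response:', choice.outcome); // "accepted" or "dismissed"
    deferredPrompt = null;
  });
}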

The end result is that the user is still shown a toast notification, but at least this time it’s the site owner who has decided when it will be shown. The Chrome team call this notification “the mini-info bar”, and Pete acknowledges that it’s not ideal:

The mini-infobar is an interim experience for Chrome on Android as we work towards creating a consistent experience across all platforms that includes an install button into the omnibox.

I think “an install button in the omnibox” means ambient badging in the browser interface, which would be great!

Anyway, back to that thread on Github. Basically, neither Apple nor Mozilla are going to implement the beforeinstallprompt event (well, technically Mozilla have implemented it but they’re not going to ship it). That’s fair enough. It’s an interim solution that’s not ideal for all the reasons I’ve already covered.

But there’s a lot of pushback. Even if the details of beforeinstallprompt are troublesome, surely there should be some way for site owners to let users know that they can—or should—install a progressive web app? As a site owner, I have a lot of sympathy for that viewpoint. But I also understand the security and usability issues that can arise from bad actors abusing this mechanism.

Still, I have to hand it to Chrome: even if we put the beforeinstallprompt event to one side, the browser still has a mechanism for letting users know that a progressive web app can be installed—the mini info bar. It’s not a great mechanism, but it’s better than nothing. Nothing is precisely what Firefox and Safari currently offer (though Firefox is experimenting with something).

In the case of Safari, not only do they not provide a mechanism for letting the user know that a site can be installed, but since the last iOS update, they’ve buried the “add to home screen” option even deeper in the “sharing sheet” (the list of options that comes up when you press the incomprehensible rectangle-with-arrow-emerging-from-it icon). You now have to scroll below the fold just to find the “add to home screen” option.

So while I totally get the misgivings about beforeinstallprompt, I feel that a constructive alternative wouldn’t go amiss.

And that’s all I have to say about that.

Except… there’s another interesting angle to that Github thread. There’s talk of allowing sites that are launched from the home screen to have access to more features than a site inside a web browser. Usually permissions on the web are explicitly granted or denied on a case-by-case basis: geolocation; notifications; camera access, etc. I think this is the first time I’ve heard of one action—adding to the home screen—being used as a proxy for implicitly granting more access. Very interesting. Although that idea seems to be roundly rejected here:

A key argument for using installation in this manner is that some APIs are simply so powerful that the drive-by web should not be able to ask for them. However, this document takes the position that installation alone as a restriction is undesirable.

Then again:

I understand that Chromium or Google may hold such a position but Apple’s WebKit team may not necessarily agree with such a position.

Mental models

I’ve found that the older I get, the less I care about looking stupid. This is remarkably freeing. I no longer have any hesitancy about raising my hand in a meeting to ask “What’s that acronym you just mentioned?” This sometimes has the added benefit of clarifying something for others in the room who might have been too shy to ask.

I remember a few years back being really confused about npm. Fortunately, someone who was working at npm at the time came to Brighton for FFConf, so I asked them to explain it to me.

As I understood it, npm was intended to be used for managing packages of code for Node. Wasn’t it actually called “Node Package Manager” at one point, or did I imagine that?

Anyway, the mental model I had of npm was: npm is to Node as PEAR is to PHP. A central repository of open source code projects that you could easily add to your codebase …for your server-side code.

But then I saw people talking about using npm to manage client-side JavaScript. That really confused me. That’s why I was asking for clarification.

It turns out that my confusion was somewhat warranted. The npm project had indeed started life as a repo for server-side code but had since expanded to encompass client-side code too.

I understand how it happened, but it confirmed a worrying trend I had noticed. Developers were writing front-end code as though it were back-end code.

On the one hand, that makes total sense when you consider that the code is literally in the same programming language: JavaScript.

On the other hand, it makes no sense at all! If your code’s run-time is on the server, then the size of the codebase doesn’t matter that much. Whether it’s hundreds or thousands of lines of code, the execution happens more or less independently of the network. But that’s not how front-end development works. Every byte matters. The more code you write that needs to be executed on the user’s device, the worse the experience is for that user. You need to limit how much you’re using the network. That means leaning on what the browser gives you by default (that’s your run-time environment) and keeping your code as lean as possible.

Dave echoes my concerns in his end-of-the-year piece called The Kind of Development I Like:

I now think about npm and wonder if it’s somewhat responsible for some of the pain points of modern web development today. Fact is, npm is a server-side technology that we’ve co-opted on the client and I think we’re feeling those repercussions in the browser.

Writing back-end and writing front-end code require very different approaches, in my opinion. But those differences have been erased in “modern” JavaScript.

The Unix Philosophy encourages us to write small micro libraries that do one thing and do it well. The Node.js Ecosystem did this in spades. This works great on the server where importing a small file has a very small cost. On the client, however, this has enormous costs.

In a funny way, this situation reminds me of something I saw happening over twenty years ago. Print designers were starting to do web design. They had a wealth of experience and knowledge around colour theory, typography, hierarchy and contrast. That was all very valuable to bring to the world of the web. But the web also has fundamental differences to print design. In print, you can use as many typefaces as you want, whereas on the web, to this day, you need to be judicious in the range of fonts you use. But in print, you might have to limit your colour palette for cost reasons (depending on the printing process), whereas on the web, colours are basically free. And then there’s the biggest difference of all: working within known dimensions of a fixed page in print compared to working within the unknowable dimensions of flexible viewports on the web.

Fast forward to today and we’ve got a lot of Computer Science graduates moving into front-end development. They’re bringing with them a treasure trove of experience in writing robust scalable code. But web browsers aren’t like web servers. If your back-end code is getting so big that it’s starting to run noticeably slowly, you can throw more computing power at it by scaling up your server. That’s not an option on the front-end where you don’t really have one run-time environment—your end users have their own run-time environment with its own constraints around computing power and network connectivity.

That’s a very, very challenging world to get your head around. The safer option is to stick to the mental model you’re familiar with, whether you’re a print designer or a Computer Science graduate. But that does a disservice to end users who are relying on you to deliver a good experience on the World Wide Web.

Periodic background sync

Yesterday I wrote about how much I’d like to see silent push for the web:

I’d really like silent push for the web—the ability to update a cache with fresh content as soon as it’s published; that would be nifty! At the same time, I understand the concerns. It feels more powerful than other permission-based APIs like notifications.

Today, John Holt Ripley responded on Twitter:

hi there, just read your blog post about Silent Push for the web, and wondering if Periodic Background Sync would cover a few of those use cases?

Periodic background sync looks very interesting indeed!

It’s not the same as silent push. As the name suggests, this is about your service worker waking up periodically and potentially fetching (and caching) fresh content from the network. So the service worker is polling rather than receiving a push. But I’ll take it! It’s definitely close enough for the kind of use-cases I’ve been thinking about.
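
The API shape looks something like this (a sketch based on the Chromium proposal—details could well change, and the tag name here is made up):

// In the page: register a periodic sync task once the service worker is ready.
async function registerPeriodicSync() {
  const registration = await navigator.serviceWorker.ready;
  if ('periodicSync' in registration) {
    await registration.periodicSync.register('fresh-content', {
      minInterval: 24 * 60 * 60 * 1000 // at most once a day
    });
  }
}

// In the service worker: wake up periodically, fetch and cache fresh content.
self.addEventListener('periodicsync', (event) => {
  if (event.tag === 'fresh-content') {
    event.waitUntil(
      caches.open('content').then((cache) => cache.add('/latest'))
    );
  }
});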

Interestingly, periodic background sync also ties into the other part of what I was writing about: permissions. I mentioned that adding a site to the home screen could be interpreted as a signal to potentially allow more permissions (or at least allow prompts for more permissions).

Well, Chromium has a document outlining metrics for attempting to gauge site engagement. There’s some good thinking in there.

Indy web

It was Indie Web Camp Brighton on the weekend. After a day of thought-provoking discussions, I thoroughly enjoyed spending the second day tinkering on my website.

For a while now, I’ve wanted to add maps to my monthly archive pages (to accompany the calendar heatmaps I added at a previous Indie Web Camp). Whenever I post anything to my site—a blog post, a note, a link—it’s timestamped and geotagged. I thought it would be fun to expose that in a glanceable way. A map seems like the right medium for that, but I wanted to avoid the obvious route of dropping a load of pins on a map. Instead I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I talked to Aaron about this and his advice was that a client-side JavaScript embedded map would be the easiest option. But that seemed like overkill to me. This map didn’t need to be pannable or zoomable; just glanceable. So I decided to see how far I could get with a static map. I timeboxed two hours for it.

After two hours, I admitted defeat.

I was able to find the kind of static maps I wanted from Mapbox—I’m already using them for my check-ins. I could even add a polyline, which is exactly what I wanted. But instead of passing latitude and longitude co-ordinates for the points on the polyline, the docs explain that I needed to provide …cue ominous thunder and lightning… The Encoded Polyline Algorithm Format.

Go to that link. I’ll wait.

Did you read through the eleven steps of instructions? Did you also think it was a piss take?

  1. Take the initial signed value.
  2. Multiply it by 1e5.
  3. Convert that decimal value to binary.
  4. Left-shift the binary value one bit.
  5. If the original decimal value is negative, invert this encoding.
  6. Break the binary value out into 5-bit chunks.
  7. Place the 5-bit chunks into reverse order.
  8. OR each value with 0x20 if another bit chunk follows.
  9. Convert each value to decimal.
  10. Add 63 to each value.
  11. Convert each value to its ASCII equivalent.

This was way beyond my brain’s pay grade. But surely someone else had written the code I needed? I did some Duck Duck Going and found a piece of PHP code to do the encoding. It didn’t work. I Ducked Ducked and Went some more. I found a different piece of PHP code. That didn’t work either.

At this point, my allotted time was up. If I wanted to have something to demo by the end of the day, I needed to switch gears. So I did.

I used Leaflet.js to create the maps I wanted using client-side JavaScript. Here’s the JavaScript code I wrote.

It waits until the page has finished loading, then it searches for any instances of the h-geo microformat (a way of encoding latitude and longitude coordinates in HTML). If there are three or more, it generates a script element to pull in the Leaflet library, and a corresponding style element. Then it draws the map with the polyline on it. I ended up using Stamen’s beautiful watercolour map tiles.
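
In outline, the approach looks something like this—a condensed sketch rather than the actual code, assuming an empty element with an id of “map” to draw into:

// Wait for the page to load, then look for h-geo microformat markup.
// Assumes markup along the lines of:
// <span class="h-geo"><span class="p-latitude">50.8</span>
// <span class="p-longitude">-0.1</span></span>
window.addEventListener('load', () => {
  const geos = document.querySelectorAll('.h-geo');
  if (geos.length < 3) return; // only bother for three or more points
  const points = Array.from(geos, (el) => [
    parseFloat(el.querySelector('.p-latitude').textContent),
    parseFloat(el.querySelector('.p-longitude').textContent)
  ]);
  // Pull in Leaflet’s stylesheet and script on demand.
  const style = document.createElement('link');
  style.rel = 'stylesheet';
  style.href = 'https://unpkg.com/leaflet/dist/leaflet.css';
  document.head.appendChild(style);
  const script = document.createElement('script');
  script.src = 'https://unpkg.com/leaflet/dist/leaflet.js';
  script.onload = () => {
    const map = L.map('map', {zoomControl: false, dragging: false});
    L.tileLayer('https://stamen-tiles.a.ssl.fastly.net/watercolor/{z}/{x}/{y}.jpg').addTo(map);
    const line = L.polyline(points).addTo(map);
    map.fitBounds(line.getBounds()); // frame the whole journey
  };
  document.head.appendChild(script);
});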

Had some fun at Indie Web Camp Brighton on the weekend messing around with @Stamen’s lovely watercolour map tiles. (I was trying to create Indiana Jones style travel maps for my site …a different kind of Indy web.)

That’s what I demoed at the end of the day.

But I wasn’t happy with it.

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles. I made sure that it didn’t hold up the loading of the rest of the page, but it still felt wasteful.

So after Indie Web Camp, I went back to investigate static maps again. This time I did finally manage to find some PHP code for encoding lat/lon coordinates into a polyline that worked. Finally I was able to construct URLs for a static map image that displays a line connecting multiple points.
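
For the curious, those eleven steps boil down to something like this in JavaScript—a sketch of the published algorithm; the PHP I found does the equivalent:

// Encode one signed integer (a coordinate delta already multiplied by 1e5).
function encodeSigned(num) {
  let sgn = num << 1;      // left-shift one bit
  if (num < 0) sgn = ~sgn; // invert the encoding for negative values
  let out = '';
  while (sgn >= 0x20) {    // emit 5-bit chunks, lowest bits first
    out += String.fromCharCode((0x20 | (sgn & 0x1f)) + 63);
    sgn >>>= 5;
  }
  return out + String.fromCharCode(sgn + 63);
}

// Encode an array of [latitude, longitude] pairs as a polyline string.
function encodePolyline(points) {
  let lastLat = 0, lastLon = 0, result = '';
  for (const [lat, lon] of points) {
    const latE5 = Math.round(lat * 1e5);
    const lonE5 = Math.round(lon * 1e5);
    result += encodeSigned(latE5 - lastLat) + encodeSigned(lonE5 - lastLon);
    lastLat = latE5;
    lastLon = lonE5;
  }
  return result;
}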

I’ve put these maps on any of the archive pages that also have calendar heat maps. Some examples:

If you go back much further than that, the maps start to trail off. That’s because I wasn’t geotagging everything from the start.

I’m pretty happy with the final results. It’s certainly far more responsible from a performance point of view. Oh, and I’ve also got the maps inside a picture element so that I can swap out the tiles if you switch to dark mode.

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

The Web Share API in Safari on iOS

I implemented the Web Share API over on The Session back when it was first available in Chrome on Android. It’s a nifty and quite straightforward API that allows websites to make use of the “sharing drawer” that mobile operating systems provide from within a web browser.

I already had sharing buttons that popped open links to Twitter, Facebook, and email. You can see these sharing buttons on individual pages for tunes, recordings, sessions, and so on.

I was already intercepting clicks on those buttons. I didn’t have to add too much to also check for support for the Web Share API and trigger that instead:

if (navigator.share) {
  navigator.share(
    {
      title: document.querySelector('title').textContent,
      text: document.querySelector('meta[name="description"]').getAttribute('content'),
      url: document.querySelector('link[rel="canonical"]').getAttribute('href')
    }
  );
}

That worked a treat. As you can see, there are three fields you can pass to the share() method: title, text, and url. You don’t have to provide all three.

Earlier this year, Safari on iOS shipped support for the Web Share API. I didn’t need to do anything. ‘Cause that’s how standards work. You can make use of APIs before every browser supports them, and then your website gets better and better as more and more browsers add support.

But I recently discovered something interesting about the iOS implementation.

When the share() method is triggered, iOS provides multiple ways of sharing: Messages, AirDrop, email, and so on. But the simplest option is the one labelled “copy”, which copies to the clipboard.

Here’s the thing: if you’ve provided a text parameter to the share() method then that’s what’s going to get copied to the clipboard—not the URL.

That’s a shame. Personally, I think the url field should take precedence. But I don’t think this is a bug, per se. There’s nothing in the spec to say how operating systems should handle the data sent via the Web Share API. Still, I think it’s a bit counterintuitive. If I’m looking at a web page, and I opt to share it, then surely the URL is the most important piece of data?

I’m not even sure where to direct this feedback. I guess it’s under the purview of the Safari team, but it also touches on OS-level interactions. Either way, I hope that somebody at Apple will consider changing the current behaviour for copying Web Share data to the clipboard.

In the meantime, I’ve decided to update my code to remove the text parameter:

if (navigator.share) {
  navigator.share(
    {
      title: document.querySelector('title').textContent,
      url: document.querySelector('link[rel="canonical"]').getAttribute('href')
    }
  );
}

If the behaviour of Safari on iOS changes, I’ll reinstate the missing field.

By the way, if you’re making progressive web apps that have display: standalone in the web app manifest, please consider using the Web Share API. When you remove the browser chrome, you’re removing the ability for users to easily share URLs. The Web Share API gives you a way to reinstate that functionality.

The Weight of the WWWorld is Up to Us by Patty Toland

It’s Patty Toland’s first time at An Event Apart! She’s from the fantabulous Filament Group. They’re dedicated to making the web work for everyone.

A few years ago, a good friend of Patty’s had a medical diagnosis that required everyone to pull together. Another friend shared an article about how not to say the wrong thing. This is ring theory. In a moment of crisis, the person involved is in the centre. You need to understand where you are in this ring structure, and only ever help and comfort inwards and dump concerns and problems outwards.

At the same time, Patty spent time with her family at the beach. Everyone read the same books together. There was a book about a platoon leader in Vietnam. 80% of the story was literally a litany of stuff—what everyone was carrying. This was peppered with the psychic and emotional loads that they were carrying.

A month later there was a lot of coverage of Syrian refugees arriving in Europe. People were outraged to see refugees carrying smartphones as though that somehow showed they weren’t in a desperate situation. But smartphones are absolutely a necessity in that situation, and most of the phones were less expensive, lower-end devices. Refugeeinfo.eu was a useful site for people in crisis, but the navigation was designed to require JavaScript.

When people think about mobile, they think about freedom and mobility. But with that JavaScript decision, the developers piled baggage on to the users.

There was a common assertion that slow networks were a third-world challenge. Remember Facebook’s network challenges? They always talked about new markets in India and Africa. The implication is that this isn’t our problem in, say, Omaha or New York.

Pew Research provided a lot of data back then that showed that this thinking was wrong. Use of cell phones, especially smartphones and tablets, escalated dramatically in the United States. There was a trend towards mobile-only usage. This was in low-income households—about one third of the population. Among 5,400 panelists, 15% did not have a JavaScript-enabled device.

Pew Research provided updated data this year. The research shows an increase in those trends. Half of the population access the web primarily on mobile. The cost of a broadband subscription is too expensive for many people. Sometimes broadband access simply isn’t available.

There’s a term called “the homework gap.” Two thirds of teachers assign broadband-dependent homework, while one third of students have no access to broadband.

At most 37% of people have unlimited data. Most people run out of data on a frequent basis.

Speed also varies wildly. 4G doesn’t really mean anything. The data is all over the place.

This shows that network issues are definitely not just a third world challenge.

On the 25th anniversary of the web, Tim Berners-Lee said the web’s potential was only just beginning to be glimpsed. Everyone has a role to play to ensure that the web serves all of humanity. In his contract for the web, Tim outlined what governments, companies, and users need to do. This reminded Patty of ring theory. The user is at the centre. Designers and developers are in the next circle out. Then there’s the circle of companies. Then there are platforms, browsers, and frameworks. Finally there’s the outer circle of governments.

Are we helping in or dumping in? If you look at the data for the average web page size (2 megabytes), we are definitely dumping in. The size of third-party JavaScript has octupled.

There’s no way for a user to know before clicking a link how big and bloated the page is going to be. Even if they abandon the page load, they’ve still used (and wasted) a lot of data.

Third-party scripts—like ads—are especially guilty of dumping in (to use the ring theory model). The best practices for ads suggest that up to 100 additional HTTP requests is totally acceptable. Unbelievable! It doesn’t matter how performant you’ve made a site when this crap gets piled on top of it.

In 2018, the internet’s data centres alone may already have had the same carbon footprint as all global air travel. This will probably triple in the next seven years. The amount of carbon it takes to train a single AI algorithm is more than the entire life cycle of a car. Then there’s fucking Bitcoin. A single Bitcoin transaction could power 21 US households. It is designed to use—specifically, waste—more and more energy over time.

What should we be doing?

Accessibility should be at the heart of what we build. Plan, test, educate, and advocate. If advocacy doesn’t work, fear can be a motivator. There’s an increase in accessibility lawsuits.

Our websites should be as light as possible. Ask, measure, monitor, and optimise. RequestMap is a great tool for visualising requests. You can see the size and scale of third-party requests. You can also see when images are far, far bigger than they need to be.

Take a critical eye to everything and pare everything down. Set performance budgets—file size budgets, for example. Optimise images, subset custom fonts, lazyload images and videos, get third-party tools out of the critical path (or out completely), and seek out lighter frameworks.

Test on real devices that real people are using. See Alex Russell’s data on the differences between the kind of devices we use and typical low-end devices. We literally need to stop drowning people in JavaScript.

Push the boundaries. See the amazing work that Adrian Holovaty did with Soundslice. He had to make on-the-fly sheet music generation work on old iPads that musicians like to use. He recommends keeping old devices around to see how poorly your product works on them.

If you have some power, then your job is to empower somebody else.

—Toni Morrison

Voice User Interface Design by Cheryl Platz

Cheryl Platz is speaking at An Event Apart Chicago. Her inaugural An Event Apart presentation is all about voice interfaces, and I’m going to attempt to liveblog it…

Why make a voice interface?

Successful voice interfaces aren’t necessarily solving new problems. They’re used to solve problems that other devices have already solved. Think about kitchen timers. There are lots of ways to set a timer. Your oven might have one. Your phone has one. Why use a $200 device to solve this mundane problem? Same goes for listening to music, news, and weather.

People are using voice interfaces for solving ordinary problems. Why? Context matters. If you’re carrying a toddler, then setting a kitchen timer can be tricky, so a voice-activated timer is quite appealing. But why is voice happening now?

Humans have been developing the art of conversation for thousands of years. It’s one of the first skills we learn. It’s deeply instinctual. Most humans use speech instinctively every day. You can’t necessarily say that about using a keyboard or a mouse.

Voice-based user interfaces are not new. Not just the idea—which we’ve seen in Star Trek—but the actual implementation. Bell Labs had Audrey back in 1952. It recognised ten words—the digits zero through nine. Why did it take so long to get to Alexa?

In the late 70s, DARPA issued a challenge to create a voice-activated system. Carnegie Mellon came up with Harpy (with a thousand-word grammar). But none of the solutions could respond in real time. In conversation, we expect a break of no more than 200 or 300 milliseconds.

In the 1980s, computing power couldn’t keep up with voice technology, so progress kind of stopped. Time passed. Things finally started to catch up in the 90s with things like Dragon Naturally Speaking. But that was still about vocabulary, not grammar. By the 2000s, small grammars were starting to show up—starting an Xbox or pausing Netflix. In 2008, Google Voice Search arrived on the iPhone and natural language interaction began to arrive.

What makes natural language interactions so special? It requires minimal training because it uses the conversational muscles we’ve been working for a lifetime. It unlocks the ability to have more forgiving, less robotic conversations with devices. There might be ten different ways to set a timer.

Natural language interactions can also free us from “screen magnetism”—that tendency to stay on a device even when our original task is complete. Voice also enables fast and forgiving searches of huge catalogues without time spent typing or browsing. You can pick a needle straight out of a haystack.

Natural language interactions are excellent for older customers. These interfaces don’t intimidate people without dexterity, vision, or digital experience. Voice input often leads to more inclusive experiences. Many customers with visual or physical disabilities can’t use traditional graphical interfaces. Voice experiences throw open the door of opportunity for some people. However, voice experience can exclude people with speech difficulties.

Making the case for voice interfaces

There’s a misconception that you need to work at Amazon, Google, or Apple to work on a voice interface, or at least that you need to have a big product team. But Cheryl was able to make her first Alexa “skill” in a week. If you’re a web developer, you’re good to go. Your voice “interaction model” is just JSON.
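
For instance, an Alexa-style interaction model is a JSON file along these lines (a hypothetical snippet—the intent name and sample utterances are made up):

{
  "interactionModel": {
    "languageModel": {
      "invocationName": "kitchen timer",
      "intents": [
        {
          "name": "SetTimerIntent",
          "slots": [
            {"name": "duration", "type": "AMAZON.DURATION"}
          ],
          "samples": [
            "set a {duration} timer",
            "start a timer for {duration}",
            "count down {duration}"
          ]
        }
      ]
    }
  }
}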

How do you get your product team on board? Find the customers (and situations) you might have excluded with traditional input. Tell the stories of people whose hands are full, or who are vision impaired. You can also point to the adoption rate numbers for smart speakers.

You’ll need to show your scenario in context. Otherwise people will ask, “why can’t we just build an app for this?” Conduct research to demonstrate the appeal of a voice interface. Storyboarding is very useful for visualising the context of use and highlighting existing pain points.

Getting started with voice interfaces

You’ve got to understand how the technology works in order to adapt to how it fails. Here are a few basic concepts.

Utterance. A word, phrase, or sentence spoken by a customer. This is the true form of what the customer provides.

Intent. This is the meaning behind a customer’s request. This is an important distinction because one intent could have thousands of different utterances.

Prompt. The text of a system response that will be provided to a customer. The audio version of a prompt, if needed, is generated separately using text to speech.

Grammar. A finite set of expected utterances. It’s a list. Usually, each entry in a grammar is paired with an intent. Many interfaces start out as being simple grammars before moving on to a machine-learning model later once the concept has been proven.

Here’s the general idea with “artificial intelligence”…

There’s a human with a core intent to do something in the real world, like knowing when the cookies in the oven are done. This gets expressed as an utterance like “set a 15 minute timer.” That utterance is captured as a string, but it hasn’t yet been parsed as language. The string is passed into a natural language understanding system. What comes out is a data structure that represents the customer’s goal e.g. intent=timer; duration=15 minutes. That’s sent to the business logic where a timer is actually set. For a good voice interface, you also want to send back a response e.g. “setting timer for 15 minutes starting now.”

That seems simple enough, right? What’s so hard about designing for voice?

Natural language interfaces are a form of artificial intelligence, so they’re not deterministic. There’s a lot of ruling out false positives. Unlike graphical interfaces, voice interfaces are driven by probability.

How do you turn a sound wave into an understandable instruction? It’s a lot like teaching a child. You feed a lot of data into a statistical model. That’s how machine learning works. It’s a probability game. That’s where it gets interesting for design—given a bunch of possible options, we need to use context to zero in on the most correct choice. This is where confidence ratings come in: the system will return the probability that a response is correct. Effectively, the system is telling you how sure or not it is about possible results. If the customer makes a request in an unusual or unexpected way, our system is likely to guess incorrectly. That’s because the system is being given something new.

Designing a conversation is relatively straightforward. But 80% of your voice design time will be spent designing for what happens when things go wrong. In voice recognition, edge cases are front and centre.

Here’s another challenge. Interaction with most voice interfaces is part conversation, part performance. Most interactions are not private.

Humans don’t distinguish digital speech from human speech. That means these devices are intrinsically social. Our brains are wired to try to extract social information, even from digital speech. See, for example, why it’s such a big question as to what gender a voice interface has.

Delivering a voice interface

Storyboards help depict the context of use. Sample dialogues are your new wireframes. These are little scripts that cover not only the happy path, but also your edge cases. Then you reverse engineer from there.

Flow diagrams communicate customer states, but don’t use the actual text in them.

Prompt lists are your final deliverable.

Functional prototypes are really important for voice interfaces. You’ll learn the real way that customers will ask for things.

If you build a working prototype, you’ll be building two things: a natural language interaction model (often a JSON file) and custom business logic (in a programming language).

Eventually voice design will become a core competency, much like mobile, which was once separate.

Ask yourself what tasks your customers complete on your site that feel clunky. Remember that voice design is almost never about new scenarios. Start your journey into voice interfaces by tackling old problems in new, more inclusive ways.

May the voice be with you!

Passenger’s log, Queen Mary 2, August 2019

Passenger’s log, day one: Sunday, August 11, 2019

We took the surprisingly busy train from Brighton to Southampton, with our plentiful luggage in tow. As well as the clothes we’d need for three weeks of hot summer locations in the United States, Jessica and I were also carrying our glad rags for the shipboard frou-frou evenings.

Once the train arrived in Southampton, we transferred our many bags into the back of a taxi and made our way to the terminal. It looked like all the docks were occupied with cargo ships, cruise ships, or—in the case of the Queen Mary 2—the world’s last ocean liner to be built.

Check in. Security. Then it was time to bid farewell to dry land as we boarded the ship. We settled into our room—excuse me, stateroom—on the eighth deck. That’s the deck that also has the lifeboats, but our balcony is handily positioned between two boats, giving us a nice clear view.

We’d be sailing in a few hours, so that gave us plenty of time to explore the ship. We grabbed a surprisingly tasty bite to eat in the buffet restaurant, and then went out on deck (the promenade deck is deck seven, just one deck below our room).

It was a blustery day. All weekend, the UK newspaper headlines had been full of dramatic stories of high winds. Not exactly sailing weather. But the Queen Mary 2 is solid, sturdy, and just downright big, so once we were underway, the wind was hardly noticeable …indoors. Out on the deck, it could get pretty breezy.

By pure coincidence, we happened to be sailing on a fortuitous day: the meeting of the queens. The Queen Elizabeth, the Queen Victoria, and the Queen Mary 2 were all departing Southampton at the same time. It was a veritable Cunard convoy. With the yacht race on as well, it was a very busy afternoon in the Solent.

We stayed out on the deck as our ship powered out of Southampton, and around the Isle of Wight, passing a refurbished Palmerston sea fort on the way.

Alas, Jessica had a migraine brewing all day, so we weren’t in the mood to dive into any social activities. We had a low-key dinner from the buffet—again, surprisingly tasty—and retired for the evening.

Passenger’s log, day two: Monday, August 12, 2019

Jessica’s migraine passed like a fog bank in the night, and we woke to a bright, blustery day. The Queen Mary 2 was just passing the Scilly Isles, marking the traditional start of an Atlantic crossing.

Breakfast was blissfully quiet and chilled out—we elected to try the somewhat less-trafficked Carinthia lounge, the location of a decent espresso-based coffee (for a price). Then it was time to feed our minds.

We watched a talk on the Bolshoi Ballet, filled with shocking tales of scandal. Here I am on holiday, and I’m sitting watching a presentation as though I were at a conference. The presenter in me approved of some of the stylistic choices: tasteful transitions in Keynote, and suitably legible typography for on-screen quotes.

Soon after that, there was a question-and-answer session with a dance teacher from the English National Ballet. We balanced out the arts with some science by taking a trip to the planetarium, where the dulcet voice of Neil deGrasse Tyson told the tale of dark matter. A malfunctioning projector somewhat tainted the experience, leaving a segment of the dome unilluminated.

It was a full morning of activities, but after lunch, there was just one time and place that mattered: sign ups for the week’s ballet workshops would take place at 3pm on deck two. We wandered by at 2pm, and there was already a line! Jessica quickly took her place in the queue, hoping that she’d make it into the workshops, which have a capacity of just 30 people. The line continued to grow. The Cunard staff were clearly not prepared for the level of interest in these ballet workshops. They quickly introduced some emergency measures: this line would only be for the next two days’ workshops, rather than the whole week. So there’d be more queueing later in the week for anyone looking to take more than one workshop.

Anyway, the most important outcome was that Jessica did manage to sign up for a workshop. After all that standing in line, Jessica was ready for a nice sit down so we headed to the area designated for crafters and knitters. As Jessica worked on the knitting project she had brought along, we had our first proper social interactions of the voyage, getting to know the other makers. There was much bonding over the shared love of the excellent Ravelry website.

Next up: a pub quiz at sea in a pub at sea. I ordered the flight of craft beers and we put our heads together for twenty quickfire trivia questions. We came third.

After that, we rested up for a while in our room, before donning our glad rags for the evening’s gala dinner. I bought a tuxedo just for this trip, and now it was time to put it into action. Jessica donned a ballgown. We both looked the part for the black-and-white themed evening.

We headed out for pre-dinner drinks in the ballroom, complete with big band. At one entrance, there was a receiving line to meet the captain. Having had enough of queueing for one day, we went in the other entrance. With glasses of sparkling wine in hand, we surveyed our fellow dressed-up guests who were looking in equal measure dashingly cool and slightly uncomfortable.

After some amusing words from the captain, it was time for dinner. Having missed the proper sit-down dinner the evening before, this was our first time finding out what table we had. We were bracing ourselves for an evening of being sociable, chit-chatting with whoever we’d been seated with. Your table assignment was the same for the whole week, so you’d better get on well with your tablemates. If you’re stuck with a bunch of obnoxious Brexiteers, tough luck; you just have to suck it up. Much like Brexit.

We were shown to our table, which was …a table for two! Oh, the relief! Even better, we were sitting quite close to the table of ballet dancers. From our table, Jessica could creepily stalk them, and observe them behaving just like mere mortals.

We settled in for a thoroughly enjoyable meal. I opted for an array of pale-coloured foods: Cullen skink, followed by seared scallops, accompanied by a Chablis Premier Cru. All this while wearing a bow tie, to the sounds of a string quartet. It felt like peak Titanic.

After dinner, we had a nightcap in the elegant Chart Room bar before calling it a night.

Passenger’s log, day three: Tuesday, August 13, 2019

We were woken early by the ship’s horn. This wasn’t the seven-short-and-one-long blast that would signal an emergency. This was more like the sustained booming of a foghorn. In fact, it effectively was a foghorn, because we were in fog.

Below us was the undersea mountain range of the Maxwell Fracture Zone. Outside was a thick Atlantic fog. And inside, we were nursing some slightly sore heads from the previous evening’s intake of wine.

But as a nice bonus, we had an extra hour of sleep. As long as the ship is sailing west, the clocks get put back by an hour every night. Slowly but surely, we’ll get on New York time. Sure beats jetlag.

After a slow start, we sauntered downstairs for some breakfast and a decent coffee. Then, to blow out the cobwebs, we walked a circuit of the promenade deck, thereby swapping out bed head for deck head.

It was then time for Jessica and me to briefly part ways. She went to watch the ballet dancers in their morning practice. I went to a lecture by Charlie Barclay from the Royal Astronomical Society, and most edifying it was too (I wonder if I can convince him to come down to give a talk at Brighton Astro sometime?).

After the lecture was done, I tracked down Jessica in the theatre, where she was enraptured by the dancers doing their company class. We stayed there as it segued into the dancers doing a dress rehearsal for their upcoming performance. It was fascinating, not least because it was clear that the dancers were having to cope with being on a slightly swaying moving vessel. That got me wondering: has ballet ever been performed on a ship before? For all I know, it might have been a common entertainment back in the golden age of ocean liners.

We slipped out of the dress rehearsal when hunger got the better of us, and we managed to grab a late lunch right before the buffet closed. After that, we decided it was time to check out the dog kennels up on the twelfth deck. There are 24 dogs travelling on the ship. They are all good dogs. We met Dillinger, a good dog on his way to a new life in Vancouver. Poor Dillinger was struggling with the circumstances of the voyage. But it’s better than being in the cargo hold of an airplane.

While we were up there on the top of the ship, we took a walk around the observation deck right above the bridge. The wind made that quite a tricky perambulation.

The rest of our day was quite relaxed. We did the pub quiz again. We got exactly the same score as we did the day before. We had a nice dinner, although this time a tuxedo was not required (but a jacket still was). Lamb for me; beef for Jessica; a bottle of Gigondas for both of us.

After dinner, we retired to our room, putting our clocks and watches back an hour before climbing into bed.

Passenger’s log, day four: Wednesday, August 14, 2019

After a good night’s sleep, we were sauntering towards breakfast when a ship’s announcement was made. This is unusual. Ship’s announcements usually happen at noon, when the captain gives us an update on the journey and our position.

This announcement was dance-related. Contradicting the listed 5pm time, sign-ups for the next ballet workshops would be happening at 9am …which was in 10 minutes’ time. Registration was on deck two. There we were, examining the breakfast options on deck seven. Cue a frantic rush down the stairwells and across the ship, not helped by me confusing our relative position to fore and aft. But we made it. Jessica got in line, and she was able to register for the workshop she wanted. Crisis averted.

We made our way back up to breakfast, and our daily dose of decent coffee. Then it was time for a lecture that was equally fascinating for me and Jessica. It was Physics En Pointe by Dr. Merritt Moore, ballet dancer and quantum physicist. This was a scene-setting talk, with her describing her life’s journey so far. She’ll be giving more talks throughout the voyage, so I’m hoping for some juicy tales of quantum entanglement (she works in quantum optics, generating entangled photons).

After that, it was time for Jessica’s first workshop. It was a general ballet technique workshop, and they weren’t messing around. I sat off to the side, with a view out on the middle of the Atlantic Ocean, tinkering with some code for The Session, while Jessica and the other students were put through their paces.

Then it was time to briefly part ways again. While Jessica went to watch the ballet dancers doing their company class, I was once again attending a lecture by Charles Barclay of the Royal Astronomical Society. This time it was archaeoastronomy …or maybe it was astroarchaeology. Either way, it was about how astronomical knowledge was passed on in pre-writing cultures, with a particular emphasis on neolithic sites like Avebury.

When the lecture was done, I rejoined Jessica and we watched the dancers finish their company class. Then it was time for lunch. We ate from the buffet, but deliberately avoided the heavier items, opting for a relatively light salad and sushi combo. This good deed would later be completely undone with a late afternoon cake snack.

We went to one more lecture. Three in one day! It really is like being at a conference. This one, by John Cooper, was on the Elizabethan settlers of Roanoke Island. So in one day, I managed to get a dose of history, science, and culture.

With the day’s workshops and lectures done, it was once again time to put on our best garb for the evening’s gala dinner. All tux’d up, I escorted Jessica downstairs. Tonight was the premiere of the ballet performance. But before that, we wandered around drinking champagne and looking fabulous. I even sat at an otherwise empty blackjack table and promptly lost some money. I was a rubbish gambler, but—and this is important—I was a rubbish gambler wearing a tuxedo.

We got good seats for the ballet and settled in for an hour’s entertainment. There were six pieces, mostly classical. Some Swan Lake, some Nutcracker, and some Le Corsaire. But there was also something more modern in there—a magnificent performance from Akram Khan’s Dust. We had been to see Dust at Sadler’s Wells, but I had forgotten quite how powerful it is.

After the performance, we had a quick cocktail, and then dinner. The sommelier is getting chattier and chattier with us each evening. I think he approves of our wine choices. This time, we left the vineyards of France, opting for a Pinot Noir from Central Otago.

After one or two nightcaps, we went back to our cabin and before crashing out, we set our clocks back an hour.

Passenger’s log, day five: Thursday, August 15, 2019

We woke to another foggy morning. The Queen Mary 2 was now sailing through the shallower waters of the Grand Banks of Newfoundland. Closer and closer to North America.

This would be my fifth day with virtually no internet access. I could buy WiFi internet access at exorbitant satellite prices, but I hadn’t felt any need to do that. I could also get a maritime mobile phone signal—very slow and very expensive.

I’ve been keeping my phone in airplane mode. Once a day, I connect to the mobile network and check just one website—thesession.org—to make sure nothing’s on fire there. Fortunately, because I made the site, I know that the data transfer will be minimal. Each page of HTML is between 30K and 90K. There are no images to speak of. And because I’ve got the site’s service worker installed on my phone, I know that CSS and JavaScript are coming straight from a cache.

I’m not missing Twitter. I’m certainly not missing email. The only thing that took some getting used to was not being able to look things up. On the first few days of the crossing, both Jessica and I found ourselves reaching for our phones to look up something about ships or ballet or history …only to remember that we were enveloped in a fog of analogue ignorance, with no sign of terra firma digitalis.

It makes the daily quiz quite challenging. Every morning, twenty questions are listed on sheets of paper that appear at the entrance to the library. This library, by the way, is the largest at sea. As Jessica noted, you can tell a lot about the on-board priorities when the ship’s library is larger than the ship’s casino.

Answers to the quiz are to be handed in by 4pm. In the event of a tie, the team who hands in their answers earliest wins. You’re not supposed to use the internet, but you are positively encouraged to look up answers in the library. Jessica and I have been enjoying this old-fashioned investigative challenge.

With breakfast done before 9am, we had a good hour to spend in the library researching answers to the day’s quiz before Jessica needed to be at her 10am ballet workshop. Jessica got started with the research, but I quickly nipped downstairs to grab a couple of tickets for the planetarium show later that day.

Tickets for the planetarium shows are released every morning at 9am. I sauntered downstairs and arrived at the designated ticket-release location a few minutes before nine, where I waited for someone to put the tickets out. When no tickets appeared five minutes after nine, I wasn’t too worried. But when there were still no tickets at ten past nine, I grew concerned. By quarter past nine, I was getting a bit miffed. Had someone forgotten their planetarium ticket duties?

I found a crewmember at a nearby desk and asked if anyone was going to put out planetarium tickets. No, I was told. The tickets all went shortly after 9am. But I’ve been here since before 9am, I said! Then it dawned on me. The ship’s clocks didn’t go back last night after all. We just assumed they did, and dutifully changed our watches and phones accordingly.

Oh, crap—Jessica’s workshop! I raced back up five decks to the library where Jessica was perusing reference books at her leisure. I told her the bad news. We dashed down to the workshop ballroom anyway, but of course the class was now well underway. After all the frantic dashing and patient queueing that Jessica did yesterday to secure her place in the workshop! Our plans for the day were undone by our being too habitual with our timepieces. No ballet workshop. No planetarium show. I felt like such an idiot.

Well, we still had a full day of activities. There was a talk with ballet dancer James Streeter (during which we found out that the captain had deployed all the ship’s stabilisers during the previous evening’s performance). We once again watched the ballet dancers doing their company class for an hour and a half. We went for afternoon tea, complete with string quartet and beautiful view out on the ocean, now mercifully free of fog.

We attended another astronomy lecture, this time on eclipses. But right before the lecture was about to begin, there was a ship-wide announcement. It wasn’t midday, so this had to be something unusual. The captain informed us that a passenger was seriously ill, and the Canadian coastguard was going to attempt a rescue. The ship was diverting closer to Newfoundland to get in helicopter range. The helicopter wouldn’t be landing, but instead attempting a tricky airlift in about twenty minutes’ time. And so we were told to literally clear the decks. I assume the rescue was successful, and I hope the patient recovers.

After that exciting interlude, things returned to normal. The lecture on eclipses was great, focusing in particular on the magnificent 2017 solar eclipse across America.

It’s funny—Jessica and I are on this crossing because it was a fortunate convergence of ballet and being on a ship. And in 2017 we were in Sun Valley, Idaho because of a fortunate convergence of ballet and experiencing a total eclipse of the sun.

I’m starting to sense a theme here.

Anyway, after all the day’s dancing and talks were done, we sat down to dinner, where Jessica could once again surreptitiously spy on the dancers at a nearby table. We cemented our bond with the sommelier by ordering a bottle of the excellent Lebanese Château Musar.

When we got back to our room, there was a note waiting for us. It was an invitation for Jessica to take part in the next day’s ballet workshop! And, looking at the schedule for the next day, there were going to be repeats of the planetarium shows we missed today. All’s well that ends well.

Before going to bed, we did not set our clocks back.

Passenger’s log, day six: Friday, August 16, 2019

This morning was balletastic:

  • Jessica’s ballet workshop.
  • Watching the ballet dancers doing their company class.
  • Watching a rehearsal of the ballet performance.

The workshop was quite something. Jennie Harrington—who retired from dancing with Dust—took the 30 or so attendees through some of the moves from Akram Khan’s masterpiece. It looked great!

While all this was happening inside the ship, the weather outside was warming up. As we travel further south, the atmosphere is getting balmier. I spent an hour out on a deckchair, dozing and reading.

At one point, a large aircraft buzzed us—the Canadian coastguard perhaps? We can’t be that far from land. I think we’re still in international waters, but these waters have a Canadian accent.

After soaking up the salty sea air out on the bright deck, I entered the darkness of the planetarium, having successfully obtained tickets that morning by not having my watch on a different time to the rest of the ship.

That evening, there was a gala dinner with a 1920s theme. Jessica really looked the part—like a real flapper. I didn’t really make an effort. I just wore my tuxedo again. It was really fun wandering the ship and seeing all the ornate outfits, especially during the big band dance after dinner. I felt like I was in a photo on the wall of the Overlook Hotel.

Dressed for the 1920s.

Passenger’s log, day seven: Saturday, August 17, 2019

Today was the last full day of the voyage. Tomorrow we disembark.

We had a relaxed day, with the usual activities: a lecture or two; sitting in on the ballet company class.

Instead of getting a buffet lunch, we decided to do a sit-down lunch in the restaurant. That meant sitting at a table with other people, which could’ve been awkward, but turned out to be fine. But now that we’ve done the small talk, that’s probably all our social capital used up.

The main event today was always going to be the reprise and final performance from the English National Ballet. It was an afternoon performance this time. It was as good the second time around, if not better. Bravo!

Best of all, after the performance, Jessica got to meet James Streeter and Erina Takahashi. Their performance from Dust was amazing, and we gushed with praise. They were very gracious and generous with their time. Needless to say, Jessica was very, very happy.

Shortly before the ballet performance, the captain made another unscheduled announcement. This time it was about a mechanical issue. There was a potential fault that needed to be investigated, which required stopping the ship for a while. Good news for the ballet dancers!

Jessica and I spent some time out on the deck while the ship was stopped. It was a lot warmer out there compared to just a day or two before. It was quite humid too—that’ll help us start to acclimatise for New York.

We could tell that we were getting closer to land. There were more ships on the horizon. From the number of tankers we saw today, the ship must have passed close to a shipping lane.

We’re going to have a very early start tomorrow—although luckily the clocks will go back an hour again. So we did as much of our re-packing as we could this evening.

With the packing done, we still had some time to kill before dinner. We wandered over to the swanky Commodore Club cocktail bar at the fore of the ship. Our timing was perfect. There were two free seats positioned right by a window looking out onto the beautiful sunset we were sailing towards. The combination of ocean waves, gorgeous sunset, and very nice drinks ensured we were very relaxed when we made our way down to dinner.

Sailing into the sunset.

At the entrance of the dining hall—and at the entrance of any food-bearing establishment on board—there are automatic hand sanitiser dispensers. And just in case the automated solution isn’t enough, there’s also a person standing there with a bottle of hand sanitiser, catching your eye and just daring you to refuse an anti-bacterial benediction. As the line of smartly dressed guests enters the restaurant, this dutiful dispenser of cleanliness anoints the hands of each one; a priest of hygiene delivering a slightly sticky sacrament.

The paranoia is justified. A ship is a potential petri dish at sea. In my hometown of Cobh in Ireland, the old cemetery is filled with the bodies of foreign sailors whose ships were quarantined in the harbour at the first sign of cholera or smallpox. While those diseases aren’t likely to show up on the Queen Mary 2, if norovirus were to break out on the ship, it could potentially spread quickly. Hence the war on hand-based microbes.

Maybe it’s because I’ve just finished reading Ed Yong’s excellent book I Contain Multitudes, but I can’t help but wonder about our microbiomes on board this ship. Given enough time, would the microbiomes of the passengers begin to sync up? Maybe on a longer voyage, but this crossing almost certainly doesn’t afford enough time for gut synchronisation. This crossing is almost done.

Passenger’s log, day eight: Sunday, August 18, 2019

Jessica and I got up at 4:15am. This is an extremely unusual occurrence for us. But we were about to experience something very out of the ordinary.

We dressed, looked unsuccessfully for coffee, and made our way on to the observation deck at the top of the ship. Land ho! The lights of New Jersey were shining off the port side of the ship. The lights of Long Island were shining off the starboard side. And dead ahead was the string of lights marking the Verrazano-Narrows Bridge.

The Queen Mary 2 was deliberately designed to pass under this bridge …just. The bridge has a clearance of 228 feet. The Queen Mary 2 is 236.2 feet, keel to funnel. That’s a difference of just 8.2 feet. Believe me, that doesn’t look like much when you’re on the top deck of the ship, standing right by the tallest mast.

The distant glow of New York was matched by the more localised glow of mobile phone screens on the deck. Passengers took photos constantly. Sometimes they took photos with flash, demonstrating a fundamental misunderstanding of how you photograph distant objects.

The distant object that everyone was taking pictures of was getting less and less distant. The Statue of Liberty was coming up on our port side.

I probably should’ve felt more of a stirring at the sight of this iconic harbour sculpture. The familiarity of its image might have dulled my appreciation. But not far from the statue was a dark area, one of the few pieces of land without lights. This was Ellis Island. If the Statue of Liberty was a symbol of welcome for your tired, your poor, your huddled masses yearning to breathe free, then Ellis Island was where the immigration rubber met the administrative road. This was where countless Irish migrants first entered the United States of America, bringing with them their songs, their stories, and their unhealthy appreciation for potatoes.

Before long, the sun was rising and the Queen Mary 2 was parallel parking at the Red Hook terminal in Brooklyn. We went back belowdecks and gathered our bags from our room. Rather than avail of baggage assistance—which would require us to wait a few hours before disembarking—we opted for “self help” disembarkation. Shortly after 7am, our time on board the Queen Mary 2 was at an end. We were in the first group of passengers off the ship, and we sailed through customs and immigration.

Within moments of being back on dry land, we were in a cab heading for our hotel in Tribeca. The cab driver took us over the Brooklyn Bridge, explaining along the way how a cash payment would really be better for everyone in this arrangement. I didn’t have many American dollars, but after a bit of currency haggling, we agreed that I could give him the last of the Canadian dollars I had in my wallet from my recent trip to Vancouver. He’s got family in Canada, so this is a win-win situation.

It being a Sunday morning, there was no traffic to speak of. We were at our hotel in no time. I assumed we wouldn’t be able to check in for hours, but at least we’d be able to leave our bags there. I was pleasantly surprised when I was told that they had a room available! We checked in, dropped our bags, and promptly went in search of coffee and breakfast. We were tired, sure, but we had no jetlag. That felt good.

I connected to the hotel’s WiFi and went online for the first time in eight days. I had a lot of spam to delete, mostly about cryptocurrencies. I was back in the 21st century.

After a week at sea, where the empty horizon was visible in all directions, I was now in a teeming mass of human habitation where distant horizons are rare indeed. After New York, I’ll be heading to Saint Augustine in Florida, then Chicago, and finally Boston. My arrival into Manhattan marks the beginning of this two-week American odyssey. But this also marks the end of my voyage from Southampton to New York, and with it, this passenger’s log.

Crossing

I’m going to America. But this time it’s going to be a bit different.

Here’s the backstory: I need to get to Chicago for An Event Apart in a couple of weeks. Jessica and I were talking about maybe going to Florida first to hang out with her family on the beach for a bit. We just needed to figure out the travel logistics.

Here’s the next variable to add in to the mix: Jessica is really into ballet. Like, really into ballet. She also likes boats, ships, and all things nautical.

Those two things are normally unrelated, but then a while back, Jessica tweeted this:

OMG @ENBallet on a SHIP crossing the ATLANTIC.

Dance the Atlantic 2019 Cruise

I chuckled at that, and almost immediately dismissed it as being something from another world. But then I looked at the dates, and wouldn’t you know it, it would work out perfectly for our planned travel to Florida and Chicago.

Sooo… we’re crossing the Atlantic Ocean on the Queen Mary 2. With ballet dancers.

It’s not a cruise. It’s a crossing:

The first rule about traveling between America and England aboard the Queen Mary 2, the flagship of the Cunard Line and the world’s largest ocean liner, is to never refer to your adventure as a cruise. You are, it is understood, making a crossing. The second rule is to refrain, when speaking to those who travel frequently on Cunard’s ships, from calling them regulars. The term of art — it is best pronounced while approximating Maggie Smith’s cut-glass accent on “Downton Abbey” — is Cunardists.

Because of the black-tie gala dinners taking place during the voyage, I am now the owner of a tuxedo. I think all this dressing up is kind of like cosplay for the class system. This should be …interesting.

By all accounts, internet connectivity is non-existent on the crossing, so I’m going to be incommunicado. Don’t bother sending me any email—I won’t see it.

We sail from Southampton tomorrow. We arrive in New York a week later.

See you on the other side!

Server Timing

Harry wrote a really good article all about the performance metric Time To First Byte. It’s called Time To First Byte: What It Is and Why It Matters:

While a good TTFB doesn’t necessarily mean you will have a fast website, a bad TTFB almost certainly guarantees a slow one.

Time To First Byte has been the chink in my armour over at thesession.org, especially on the home page. Every time I ran Lighthouse, or some other performance testing tool, I’d get a high score …with some points deducted for taking too long to get that first byte from the server.

Harry’s proposed solution is to set up some Server Timing headers:

With a little bit of extra work spent implementing the Server Timing API, we can begin to measure and surface intricate timings to the front-end, allowing web developers to identify and debug potential bottlenecks previously obscured from view.

I remembered that Drew wrote an excellent article on Smashing Magazine last year called Measuring Performance With Server Timing:

The job of Server Timing is not to help you actually time activity on your server. You’ll need to do the timing yourself using whatever toolset your backend platform makes available to you. Rather, the purpose of Server Timing is to specify how those measurements can be communicated to the browser.

He even provides some PHP code, which I was able to take wholesale and drop into the codebase for thesession.org. That let me put start/stop points in my code to measure how long certain operations were taking, and then output the results of those measurements in Server Timing headers, which I could inspect in the “Network” tab of a browser’s dev tools (Chrome is particularly good at displaying Server Timing, so that’s what I used while conducting this experiment).
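A Server-Timing header is just a comma-separated list of named metrics, each of which can carry a duration and a description. The response header ends up looking something like this (the values here are made up, purely for illustration):

Server-Timing: db;dur=120.0;desc="Database", templating;dur=12.3;desc="Templating"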

I started with overall database requests. Sure enough, that was where most of the time in time-to-first-byte was being spent.

Then I got more granular. I put start/stop points around specific database calls. By doing this, I was able to zero in on which operations were particularly costly. Once I had done that, I had to figure out how to make the database calls go faster.

Spoiler: I did it by adding an extra index on one particular table. It’s almost always indexes, in my experience, that make the biggest difference to database performance.

I don’t know why it took me so long to get around to messing with Server Timing headers. It has paid off in spades. I wish I had done it sooner.

And now thesession.org is positively zipping along!

Movie Knight

I mentioned how much I enjoyed Mike Hill’s talk at Beyond Tellerrand in Düsseldorf:

Mike gave a talk called The Power of Metaphor and it’s absolutely brilliant. It covers the monomyth (the hero’s journey) and Jungian archetypes, illustrated with examples from Star Wars, The Dark Knight, and Jurassic Park.

At Clearleft, I’m planning to reprise the workshop I did a few years ago about narrative structure—very handy for anyone preparing a conference talk, blog post, case study, or anything really:

Ellen and I have been enjoying some great philosophical discussions about exactly what a story is, and how it differs from a narrative structure, or a plot. I really love Ellen’s working definition: Narrative. In Space. Over Time.

This led me to think that there’s a lot that we can borrow from the world of storytelling—films, novels, fairy tales—not necessarily about the stories themselves, but the kind of narrative structures we could use to tell those stories. After all, the story itself is often the same one that’s been told time and time again—The Hero’s Journey, or some variation thereof.

I realised that Mike’s monomyth talk aligns nicely with my workshop. So I decided to prep my fellow Clearlefties for the workshop with a movie night.

Popcorn was popped, pizza was ordered, and comfy chairs were suitably arranged. Then we watched Mike’s talk. Everyone loved it. Then it was decision time. Which of three films covered in the talk would we watch? We put it to a vote.

It came out as a tie between Jurassic Park and The Dark Knight. How would we resolve this? A coin toss!

The toss went to The Dark Knight. In retrospect, a coin toss was a supremely fitting way to decide to watch that film.

It was fun to watch it again, particularly through the lens of Mike’s analysis of its Jungian archetypes.

But I still think the film is about game theory.

The trimCache function in Going Offline …again

It seems that some code that I wrote in Going Offline is haunted. It’s the trimCache function.

First, there was the issue of a typo. Or maybe it’s more of a brainfart than a typo, but either way, there’s a mistake in the syntax that was published in the book.

Now it turns out that there’s also a problem with my logic.

To recap, this is a function that takes two arguments: the name of a cache, and the maximum number of items that cache should hold.

function trimCache(cacheName, maxItems) {

First, we open up the cache:

caches.open(cacheName)
.then( cache => {

Then, we get the items (keys) in that cache:

cache.keys()
.then(keys => {

Now we compare the number of items (keys.length) to the maximum number of items allowed:

if (keys.length > maxItems) {

If there are too many items, delete the first item in the cache—that should be the oldest item:

cache.delete(keys[0])

And then run the function again:

.then(
    trimCache(cacheName, maxItems)
);

A-ha! See the problem?

Neither did I.

It turns out that, even though I’m using then, the function will be invoked immediately, instead of waiting until the first item has been deleted.

Trys helped me understand what was going on by making a useful analogy. You know when you use setTimeout, you can’t put a function—complete with parentheses—as the first argument?

window.setTimeout(doSomething(someValue), 1000);

In that example, doSomething(someValue) will be invoked immediately—not after 1000 milliseconds. Instead, you need to create an anonymous function like this:

window.setTimeout( function() {
    doSomething(someValue)
}, 1000);

Well, it’s the same in my trimCache function. Instead of this:

cache.delete(keys[0])
.then(
    trimCache(cacheName, maxItems)
);

I need to do this:

cache.delete(keys[0])
.then( function() {
    trimCache(cacheName, maxItems)
});

Or, if you prefer the more modern arrow function syntax:

cache.delete(keys[0])
.then( () => {
    trimCache(cacheName, maxItems)
});

Either way, I have to wrap the recursive function call in an anonymous function.
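Putting all those pieces together, the updated function looks like this:

function trimCache(cacheName, maxItems) {
    caches.open(cacheName)
    .then( cache => {
        cache.keys()
        .then( keys => {
            if (keys.length > maxItems) {
                cache.delete(keys[0])
                .then( () => {
                    trimCache(cacheName, maxItems)
                });
            }
        });
    });
}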

Here’s a gist with the updated trimCache function.

What’s annoying is that this mistake wasn’t throwing an error. Instead, it was causing a performance problem. I’m using this pattern right here on my own site, and whenever my cache of pages or images got too big, the trimCache function would get called …and then wouldn’t stop running.

I’m very glad that—with the help of Trys at last week’s Homebrew Website Club Brighton—I was finally able to get to the bottom of this. If you’re using the trimCache function in your service worker, please update the code accordingly.

Management regrets the error.

Indie web events in Brighton

Homebrew Website Club is a regular gathering of people getting together to tinker on their own websites. It’s a play on the original Homebrew Computer Club from the ’70s. It shares a similar spirit of sharing and collaboration.

Homebrew Website Clubs happen at various locations: London, San Francisco, Portland, Nuremberg, and more. Most of them meet every second Wednesday.

I started running Homebrew Website Club Brighton a while back. I tried the “every second Wednesday” thing, but it was tricky to make that work. People found it hard to keep track of which Wednesdays were Homebrew days and which weren’t. And if you missed one, then it would potentially be weeks between attending.

So I’ve made it a weekly gathering. On Thursdays. That’s mostly because Thursdays work for me: that’s one of the evenings when Jessica has her ballet class, so it’s the perfect time for me to spend a while in the company of fellow website owners.

If you’re in Brighton and you have your own website (or you want to have your own website), you should come along. It’s every Thursday from 6pm to 7:30pm ’round at the Clearleft studio, 68 Middle Street. Add it to your calendar.

There might be a Thursday when I’m not around, but it’s highly likely that Homebrew Website Club Brighton will happen anyway because either Trys, Benjamin or Cassie will be here.

(I’m at Homebrew Website Club Brighton right now, writing this. Remy is here too, working on some very cool webmention stuff.)

There’s something else you should add to your calendar. We’re going to have an Indie Web Camp in Brighton on October 19th and 20th. I realise that’s quite a way off, but I’m giving you plenty of advance warning so you can block out that weekend (and plan travel if you’re coming from outside Brighton).

If you’ve never been to an Indie Web Camp before, you should definitely come! It’s indescribably fun and inspiring. The first day—Saturday—is a BarCamp-style day of discussions to really get the ideas flowing. Then the second day—Sunday—is all about designing, building, and making. The whole thing wraps up with demos.

It’s been a while since we’ve had an Indie Web Camp in Brighton. You can catch up on the Brighton Indie Web Camps we had in 2014, 2015, and 2016. Since then I’ve been to Indie Web Camps in Berlin, Nuremberg, and Düsseldorf, but it’s going to be really nice to bring it back home.

Photos from previous camps: Indie Web Camp UK attendees; an Indie Web Camp Brighton group photo; Indie Web Camp Brighton 2016.

The event will be free to attend, but I’ll set up an official ticket page on Ti.to to keep track of who’s coming. I’ll let you know when that’s up and ready. In the meantime, you can register your interest in attending on the 2019 Indie Webcamp Brighton page on the Indie Web wiki.

Inlining SVG background images in CSS with custom properties

Here’s a tiny lesson that I picked up from Trys that I’d like to share with you…

I was working on some upcoming changes to the Clearleft site recently. One particular component needed some SVG background images. I decided I’d inline the SVGs in the CSS to avoid extra network requests. It’s pretty straightforward:

.myComponent {
    background-image: url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

You can basically paste your SVG in there, although you need to do a little bit of URL encoding: I found that converting # to %23 was enough for my needs.
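So, for example, an SVG fill colour of #bada55 (a made-up value, purely for illustration) would be written inside the data URI as:

fill="%23bada55"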

But here’s the thing. My component had some variations. One of the variations had multiple background images. There was a second background image in addition to the first. There’s no way in CSS to add an additional background image without writing a whole background-image declaration:

.myComponent--variant {
    background-image: url('data:image/svg+xml;utf8,<svg> ... </svg>'), url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

So now I’ve got the same SVG source inlined in two places. That negates any performance benefits I was getting from inlining in the first place.

That’s where Trys comes in. He shared a nifty technique he uses in this exact situation: put the SVG source into a custom property!

:root {
    --firstSVG: url('data:image/svg+xml;utf8,<svg> ... </svg>');
    --secondSVG: url('data:image/svg+xml;utf8,<svg> ... </svg>');
}

Then you can reference those in your background-image declarations:

.myComponent {
    background-image: var(--firstSVG);
}
.myComponent--variant {
    background-image: var(--firstSVG), var(--secondSVG);
}

Brilliant! Not only does this remove any duplication of the SVG source, it also makes your CSS nice and readable: no more big blobs of SVG source code in the middle of your style sheet.

You might be wondering what will happen in older browsers that don’t support CSS custom properties (that would be Internet Explorer 11). Those browsers won’t get any background image. Which is fine. It’s a background image. Therefore it’s decoration. If it were an important image, it wouldn’t be in the background.

Progressive enhancement, innit?

Drag’n’drop revisited

I got a message from a screen-reader user of The Session recently, letting me know of a problem they were having. I love getting any kind of feedback around accessibility, so this was like gold dust to me.

They pointed out that the drag’n’drop interface for rearranging the order of tunes in a set was inaccessible.

Drag and drop

Of course! I slapped my forehead. How could I have missed this?

It had been a while since I had implemented that functionality, so before even looking at the existing code, I started to think about how I could improve the situation. Maybe I could capture keystroke events from the arrow keys and announce changes via ARIA values? That sounded a bit heavy-handed though: mess with people’s native keyboard functionality at your peril.

Then I looked at the code. That was when I realised that the fix was going to be much, much easier than I thought.

I documented my process of adding the drag’n’drop functionality back in 2016. Past me had his progressive enhancement hat on:

One of the interfaces needed for this feature was a form to re-order items in a list. So I thought to myself, “what’s the simplest technology to enable this functionality?” I came up with a series of select elements within a form.

Reordering

The problem was in my feature detection:

There’s a little bit of mustard-cutting going on: does the dragula object exist, and does the browser understand querySelector? If so, the select elements are hidden and the drag’n’drop is enabled.
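In code, that mustard-cutting looks something like this (a simplified sketch; the real class names and selectors on The Session are different):

if (window.dragula && 'querySelector' in document) {
    // This browser cuts the mustard:
    // hide the select-based fallback form...
    document.querySelector('.reorder-form').style.display = 'none';
    // ...and hand the list of tunes over to dragula for drag'n'drop.
    dragula([document.querySelector('.tune-list')]);
}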

The logic was fine, but the execution was flawed. I was being lazy and hiding the select elements with display: none. That hides them visually, but it also hides them from screen readers. I swapped out that style declaration for one that visually hides the elements, but keeps them accessible and focusable.
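If you’re curious, that visually-hiding style is a well-worn pattern rather than anything specific to The Session. A commonly used version looks something like this (the class name is just for illustration):

.visuallyhidden {
    position: absolute;
    width: 1px;
    height: 1px;
    padding: 0;
    margin: -1px;
    overflow: hidden;
    clip: rect(0, 0, 0, 0);
    white-space: nowrap;
    border: 0;
}

Elements styled this way are invisible on screen, but they remain exposed to screen readers, and they can still receive focus.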

It was a very quick fix. I had the odd sensation of wanting to thank Past Me for making things easy for Present Me. But I don’t want to talk about time travel because if we start talking about it then we’re going to be here all day talking about it, making diagrams with straws.

I pushed the fix, told the screen-reader user who originally contacted me, and got a reply back saying that everything was working great now. Success!