Journal tags: os


Accent all areas

Whenever a new version of Chrome comes out, there’s an accompanying blog post listing what’s new. Chrome 93 just came out and, sure enough, Pete has written a blog post about it.

But what I think is the most exciting addition to the browser isn’t listed.

What is this feature that’s got me so excited?

Okay, I’ve probably oversold it now because actually, it looks like a rather small, trivial addition. It’s the accent-color property in CSS.

Up until now, accent colour was controlled by the operating system. If you’re on a Mac, go to “System Preferences” and then “General”. There you’ll see an option to change your accent colour. Try picking a different colour. You’ll see that change cascade down into the other form fields in that preference pane: checkboxes, radio buttons, and dropdowns.

Your choice will also cascade down into web pages. Any web page that uses native checkboxes, radio buttons and other interface elements will inherit that colour.

This is how interface elements are supposed to work. The browser inherits the look’n’feel of the inputs from the operating system.

That’s the theory anyway. In practice, form elements—such as dropdowns—can look different from browser to browser, something that shouldn’t be happening if the browsers are all inheriting from the operating system.

Anyway, it’s probably this supposed separation of responsibility between browser and operating system which has led to the current situation with form fields and CSS. Authors can style form fields up to a point, but there’s always a line that you don’t get to cross.

The accent colour of a selected radio button or a checkbox has historically been on the other side of that line. You either had to accept that you couldn’t change the colour, or you had to make your own checkbox or radio button interface. You could use CSS to hide the native element and replace it with an image instead.

That feels a bit over-engineered and frankly kind of hacky. It reminds me of the bad old days of image replacement for text before we had web fonts.

Now, with the accent-color property in CSS, authors can override the choice that the user has set at the operating system level.

On the one hand, this doesn’t feel great to me. Who are we to make that decision? Shouldn’t the user’s choice take primacy over our choices?

But then again, where do we draw the line? We’re allowed to override link colours. We’re allowed to override font choices.

Ultimately I think it’s a good thing that authors can now specify an accent colour. What makes me think that is the behaviour that authors have shown if they don’t have this ability—they do it anyway, and in a hackier manner. This is why I think the work of the Open UI group is so important. If developers don’t get a standardised way to customise native form controls, they’ll just recreate their own over-engineered versions.

The purpose of Open UI to the web platform is to allow web developers to style and extend built-in web UI controls, such as select dropdowns, checkboxes, radio buttons, and date/color pickers.

Trying to stop developers from styling checkboxes and radio buttons is like trying to stop teenagers from having sex. You might as well accept that it’s going to happen and give them contraception so they can at least do it safely.

So I welcome this new CSS condom.

You can see accent-color in action in this demo. Change the value of the accent-color property to see the form fields update:

:root {
  accent-color: rebeccapurple;
}

Applying it at the document level like that will make it universal, but you can also use the property on an element-by-element basis using whatever selector you want.
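
For instance, you could scope it to a single form. Here’s a hypothetical sketch—the class name and colour are mine, just for illustration:

/* accent-color is inherited, so this covers every checkbox and radio button in the form */
form.newsletter {
  accent-color: #b13131;
}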

That demo works in Chrome and Edge 93, the current release. It also works in Firefox 92, which literally just landed (like as I was writing this blog post, support for accent-color magically arrived!).

As for Safari, well, who knows? If Apple published a roadmap, then developers would have a clue when to expect a property like this to land. But we mere mortals cannot be trusted with such important hush-hush information.

In the meantime, keep an eye on Can I Use. And lack of support in one browser is no reason not to use accent-color anyway. It’s a progressive enhancement. Add it to your CSS today and it will work in more browsers in the future.

Browsers

I mentioned recently that there might be quite a difference in tone between my links and my journal here on my website:

’Sfunny, when I look back at older journal entries they’re often written out of frustration, usually when something in the dev world is bugging me. But when I look back at all the links I’ve bookmarked the vibe is much more enthusiastic, like I’m excitedly pointing at something and saying “Check this out!” I feel like sentiment analyses of those two sections of my site would yield two different results.

My journal entries have been even more specifically negative of late. I’ve been bitchin’ and moanin’ about web browsers. But at least I’m an equal-opportunities bitcher and moaner.

I wish my journal weren’t so negative, but my mithering behaviour has been encouraged. On more than one occasion, someone I know at a browser company has taken me aside to let me know that I should blog about any complaints I might have with their browser. It sounds counterintuitive, I know. But these blog posts can give engineers some ammunition to get those issues prioritised and fixed.

So my message to you is this: if there’s something about a web browser that you’re not happy with (or, indeed, if there’s something you’re really happy with), take the time to write it down and publish it.

Publish it on your website. You could post your gripes on Twitter but whinging on Jack’s website is just pissing in the wind. And I suspect you also might put a bit more thought into a blog post on your own site.

I know it’s a cliché to say that browser makers want to hear from developers—and I’m often cynical about it myself—but they really do want to know what we think. Share your thoughts. I’ll probably end up linking to what you write.

Hosting online events

Back in 2014 Vitaly asked me if I’d be the host for Smashing Conference in Freiburg. I jumped at the chance. I thought it would be an easy gig. All of the advantages of speaking at a conference without the troublesome need to actually give a talk.

As it turned out, it was quite a bit of work:

It wasn’t just a matter of introducing each speaker—there was also a little chat with each speaker after their talk, so I had to make sure I was paying close attention to each and every talk, thinking of potential questions and conversation points. After two days of that, I was a bit knackered.

Last month, I hosted another event, but this time it was online: UX Fest. Doing the post-talk interviews was definitely a little weirder online. It’s not quite the same as literally sitting down with someone. But the online nature of the event did provide one big advantage…

To minimise technical hitches on the day, and to ensure that the talks were properly captioned, all the speakers recorded their talks ahead of time. That meant I had an opportunity to get a sneak peek at the talks and prepare questions accordingly.

UX Fest had a day of talks every Thursday in June. There were four talks per Thursday. I started prepping on the Monday.

First of all, I just watched all the talks and let them wash over me. At this point, I’d often think “I’m not sure if I can come up with any questions for this one!” but I’d let the talks sit there in my subconscious for a while. This was also a time to let connections between talks bubble up.

Then on the Tuesday and Wednesday, I went through the talks more methodically, pausing the video every time I thought of a possible question. After a few rounds of this, I inevitably ended up with plenty of questions, some better than others. So I then re-ordered them in descending levels of quality. That way if I didn’t get to the questions at the bottom of the list, it was no great loss.

In theory, I might not get to any of my questions. That’s because attendees could also ask questions on the day via a chat window. I prioritised those questions over my own. Because it’s not about me.

On some days there was a good mix of audience questions and my own pre-prepared questions. On other days it was mostly my own questions.

Either way, it was important that I didn’t treat the interview like a laundry list of questions to get through. It was meant to be a conversation. So the answer to one question might touch on something that I had made a note of further down the list, in which case I’d run with that. Or the conversation might go in a really interesting direction completely unrelated to the questions or indeed the talk.

Above all, these segments needed to be engaging and entertaining in a personable way, more like a chat show than a post-game press conference. So even though I had done lots of prep for interviewing each speaker, I didn’t want to show my homework. I wanted each interview to feel like a natural flow.

To quote the old saw, this kind of spontaneity takes years of practice.

There was an added complication when two speakers shared an interview slot for a joint Q&A. Not only did I have to think of questions for each speaker, I also had to think of questions that would work for both speakers. And I had to keep track of how much time each person was speaking so that the chat wasn’t dominated by one person more than the other. This was very much like moderating a panel, something that I enjoy very much.

In the end, all of the prep paid off. The conversations flowed smoothly and I was happy with some of the more thought-provoking questions that I had researched ahead of time. The speakers seemed happy too.

Y’know, there are not many things I’m really good at. I’m a mediocre developer, and an even worse designer. I’m okay at writing. But I’m really good at public speaking. And I think I’m pretty darn good at this hosting lark too.

Safari 15

If you download Safari Technology Preview you can test drive features that are on their way in Safari 15. One of those features, announced at Apple’s World Wide Developer Conference, is coloured browser chrome via support for the meta value of “theme-color.” Chrome on Android has supported this for a while but I believe Safari is the first desktop browser to add support. They’ve also added support for the media attribute on that meta element to handle “prefers-color-scheme.”
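
Here’s a sketch of what that markup looks like—the colour values are placeholders of my own choosing:

<!-- one colour by default, a darker one when the user prefers a dark scheme -->
<meta name="theme-color" content="#d6e2ff">
<meta name="theme-color" content="#082840" media="(prefers-color-scheme: dark)">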

This is all very welcome, although it does remind me a bit of when Internet Explorer came out with the ability to make coloured scrollbars. I mean, they’re nice features’n’all, but maybe not the most pressing? Safari is still refusing to acknowledge progressive web apps.

That’s not quite true. In her WWDC video Jen demonstrates how you can add a progressive web app like Resilient Web Design to your home screen. I’m chuffed that my little web book made an appearance, but when you see how you add a site to your home screen in iOS, it’s somewhat depressing.

The steps to add a website to your home screen are:

  1. Tap the “share” icon. It’s not labelled “share.” It’s a square with an arrow coming out of the top of it.
  2. A drawer pops up. The option to “add to home screen” is nowhere to be seen. You have to pull the drawer up further to see the hidden options.
  3. Now you must find “add to home screen” in the list:
  • Copy
  • Add to Reading List
  • Add Bookmark
  • Add to Favourites
  • Find on Page
  • Add to Home Screen
  • Markup
  • Print

It reminds me of this exchange in The Hitchhiker’s Guide To The Galaxy:

“You hadn’t exactly gone out of your way to call attention to them had you? I mean like actually telling anyone or anything.”

“But the plans were on display…”

“On display? I eventually had to go down to the cellar to find them.”

“That’s the display department.”

“With a torch.”

“Ah, well the lights had probably gone.”

“So had the stairs.”

“But look you found the notice didn’t you?”

“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of The Leopard.’”

Safari’s current “support” for adding progressive web apps to the home screen feels like the minimum possible …just enough to use it as a legal argument if you happen to be litigated against for having a monopoly on app distribution. “Hey, you can always make a web app!” It’s true in theory. In practice it’s …suboptimal, to put it mildly.

Still, those coloured tab bars are very nice.

It’s a little bit weird that this stylistic information is handled by HTML rather than CSS. It’s similar to the meta viewport value in that sense. I always thought that the plan was to migrate that to CSS at some point, but here we are a decade later and it’s still very much part of our boilerplate markup.

Some people have remarked that the coloured browser chrome can make the URL bar look like part of the site so people might expect it to operate like a site-specific search.

I also wonder if it might blur “the line of death”; that point in the UI where the browser chrome ends and the website begins. Does the unified colour make it easier to spoof browser UI?

Probably not. You can already kind of spoof browser UI by using the right shade of grey. Although the removal of any kind of actual line in Safari does give me pause for thought.

I tend not to think of security implications like this by default. My first thought tends to be more about how I can use the feature. It’s only after a while that I think about how bad actors might abuse the same feature. I should probably try to narrow the gap between those thoughts.

Notified

I got a notification on my phone on Monday.

For most people this would be an unremarkable occurrence. For me it’s quite unusual. I’ve written before about my relationship with notifications:

If I install an app on my phone, the first thing I do is switch off all notifications. That saves battery life and sanity.

The only time my phone is allowed to ask for my attention is for phone calls, SMS, or FaceTime (all rare occurrences). I initiate every other interaction—Twitter, Instagram, Foursquare, the web. My phone is a tool that I control, not the other way around.

In short, I allow notifications from humans but never from machines. I am sometimes horrified when I see other people’s phones lighting up with notifications about email, tweets, or—God help us—news stories. I try not to be judgemental, but honestly, how does anyone live like that?

The next version of iOS will feature focus modes allowing you to toggle when certain notifications are allowed. That’s a welcome addition, but it’s kind of horrifying that it’s even necessary. It’s like announcing a new padded helmet that will help reduce the pain the next time you choose to hit your own head with a hammer. It doesn’t really address the underlying problem.

Anyway, I made an exception to my rule about not allowing notifications from non-humans. Given the exceptional circumstances of The Situation, I have allowed notifications from the NHS COVID-19 app.

And that’s why I got a notification on my phone on Monday.

It said that I had come into contact with someone who tested positive for COVID-19 and that I would need to self-isolate until midnight on Friday. I haven’t been out of the house much at all—and never indoors—so I think it must be because I checked into a seafront bar last week for an outdoor drink; the QR code for the venue would’ve encompassed both the indoor and outdoor areas.

Even though it wasn’t part of the advice for self-isolation, I got a lateral flow test to see if I was actually infected.

I did the test and I can confidently say that I would very much like to never repeat that experience.

The test was negative. But I’m still going to stick to the instructions I’ve been given. In fact, that’s probably why testing isn’t part of the recommended advice; I can imagine a lot of people getting a negative test and saying, “I’m sure I’m fine—I don’t need to self-isolate.”

So I won’t be leaving the house until Saturday. This is not a great inconvenience. It’s not like I had many plans. But…

This is why, for the final day of UX Fest, I will be performing my hosting duties from the comfort of my own home instead of the swankier, more professional environment of the Clearleft studio. I hope I don’t bring the tone down too much.

I also had to turn down an invitation to play some tunes with two fully vaccinated fellow musicians on Friday. It felt a bit strange to explain why. “I’d love to, but my phone says I have to stay in.”

I feel like I’m in that Bruce Sterling short story Maneki Neko, obediently taking orders from my pokkecon.

Weighing up UX

You can listen to an audio version of Weighing up UX.

This is the month of UX Fest 2021—this year’s online version of UX London. The festival continues with masterclasses every Tuesday in June and a festival day of talks every Thursday (tickets for both are still available). But it all kicked off with the conference part last week: three back-to-back days of talks.

I have the great pleasure of hosting the event so not only do I get to see a whole lot of great talks, I also get to quiz the speakers afterwards.

Right from day one, a theme emerged that continued throughout the conference and I suspect will continue for the rest of the festival too. That topic was metrics. Kind of.

See, metrics come up when we’re talking about A/B testing, growth design, and all of the practices that help designers get their seat at the table (to use the well-worn cliché). But while metrics are very useful for measuring design’s benefit to the business, they’re not really cut out for measuring user experience.

People have tried to quantify user experience benefits using measurements like Net Promoter Score, which is about as useful as reading tea leaves or chicken entrails.

So we tend to equate user experience gains with business gains. That makes sense. Happy users should be good for business. That’s a reasonable hypothesis. But it gets tricky when you need to make the case for improving the user experience if you can’t tie it directly to some business metric. That’s when we run into the McNamara fallacy:

Making a decision based solely on quantitative observations (or metrics) and ignoring all others.

The way out of this quantitative blind spot is to use qualitative research. But another theme of UX Fest was just how woefully under-represented researchers are in most organisations. And even when you’ve gone and talked to users and you’ve got their stories, you still need to play that back in a way that makes sense to the business folks. These are stories. They don’t lend themselves to being converted into charts’n’graphs.

And so we tend to fall back on more traditional metrics, based on that assumption that what’s good for user experience is good for business. But it’s a short step from making that equivalency to flipping the equation: what’s good for the business must, by definition, be good user experience. That’s where things get dicey.

Broadly speaking, the talks at UX Fest could be put into two categories. You’ve got talks covering practical subjects like product design, content design, research, growth design, and so on. Then you’ve got the higher-level, almost philosophical talks looking at the big picture and questioning the industry’s direction of travel.

The tension between these two categories was the highlight of the conference for me. It worked particularly well when there were back-to-back talks (and joint Q&A) featuring a hands-on case study that successfully pushed the needle on business metrics followed by a more cautionary talk asking whether our priorities are out of whack.

For example, there was a case study on growth design, which emphasised the importance of A/B testing for validation, immediately followed by a talk on deceptive dark patterns. Now, I suspect that if you were to A/B test a deceptive dark pattern, the test would validate its use (at least in the short term). It’s no coincidence that a company like Booking.com, which lives by the A/B sword, is also one of the companies sued for using distressing design patterns.

Using A/B tests alone is like using a loaded weapon without supervision. They only tell you what people do. And again, the solution is to make sure you’re also doing qualitative research—that’s how you find out why people are doing what they do.

But as I’ve pondered the lessons from last week’s conference, I’ve come to realise that there’s also a danger of focusing purely on the user experience. Hear me out…

At one point, the question came up as to whether deceptive dark patterns were ever justified. What if it’s for a good cause? What if the deceptive dark pattern is being used by an organisation actively campaigning to do good in the world?

In my mind, there was no question. A deceptive dark pattern is wrong, no matter who’s doing it.

(There’s also the problem of organisations that think they’re doing good in the world: I’m sure that every talented engineer that worked on Google AMP honestly believed they were acting in the best interests of the open web even as they worked to destroy it.)

Where it gets interesting is when you flip the question around.

Suppose you’re a designer working at an organisation that is decidedly not a force for good in the world. Say you’re working at Facebook, a company that prioritises data-gathering and engagement so much that they’ll tolerate insurrectionists and even genocidal movements. Now let’s say there’s talk in your department of implementing a deceptive dark pattern that will drive user engagement. But you, being a good designer who fights for the user, take a stand against this and you successfully find a way to ensure that Facebook doesn’t deploy that deceptive dark pattern.

Yay?

Does that count as being a good user experience designer? Yes, you’ve done good work at the coalface. But the overall business goal is like a deceptive dark pattern that’s so big you can’t take it in. Is it even possible to do “good” design when you’re inside the belly of that beast?

Facebook is a relatively straightforward case. Anyone who’s still working at Facebook can’t claim ignorance. They know full well where that company’s priorities lie. No doubt they sleep at night by convincing themselves they can accomplish more from the inside than without. But what about companies that exist in the grey area of being imperfect? Frankly, what about any company that relies on surveillance capitalism for its success? Is it still possible to do “good” design there?

There are no easy answers and that’s why it so often comes down to individual choice. I know many designers who wouldn’t work at certain companies …but they also wouldn’t judge anyone else who chooses to work at those companies.

At Clearleft, every staff member has two levels of veto on client work. You can say “I’m not comfortable working on this”, in which case, the work may still happen but we’ll make sure the resourcing works out so you don’t have anything to do with that project. Or you can say “I’m not comfortable with Clearleft working on this”, in which case the work won’t go ahead (this usually happens before we even get to the pitching stage although there have been one or two examples over the years where we’ve pulled out of the running for certain projects).

Going back to the question of whether it’s ever okay to use a deceptive dark pattern, here’s what I think…

It makes no difference whether it’s implemented by ProPublica or Breitbart; using a deceptive dark pattern is wrong.

But there is a world of difference in being a designer who works at ProPublica and being a designer who works at Breitbart.

That’s what I’m getting at when I say there’s a danger to focusing purely on user experience. That focus can be used as a way of avoiding responsibility for the larger business goals. Then designers are like the soldiers on the eve of battle in Henry V:

For we know enough, if we know we are the king’s subjects: if his cause be wrong, our obedience to the king wipes the crime of it out of us.

Two decades of thesession.org

On June 3rd, 2001, I launched thesession.org. Happy twentieth birthday to The Session!

Although actually The Session predates its domain name by a few years. It originally launched as a subdirectory here on adactio.com with the unwieldy URL /session/session.shtml

A screenshot of the first version of The Session

That incarnation was more like a blog. I’d post the sheet music for a tune every week with a little bit of commentary. That worked fine until I started to run out of tunes. That’s when I made the site dynamic. People could sign up to become members of The Session. Then they could also post tunes and add comments.

A screenshot of the second version of The Session

That’s the version that is two decades old today.

The last really big change to the site happened in 2012. As well as a complete redesign, I introduced lots of new functionality.

A screenshot of the current version of The Session

In all of those incarnations, the layout was fluid …long before responsive design swept the web. That was quite unusual twenty years ago, but I knew it was the webby thing to do.

What’s also unusual is just keeping a website going for twenty years. Keeping a community website going for twenty years is practically unheard of. I’m very proud of The Session. Although, really, I’m just the caretaker. The site would literally be nothing without all the contributions that people have made.

I’ve more or less adopted a Wikipedia model for contributions. Some things, like tune settings, can only be edited by the person who submitted them. But other things, like the track listing of a recording, or the details of a session, can be edited by any member of the site. And of course anyone can add a comment to any listing. There’s a certain amount of risk to that, but after testing it for two decades, it’s working out very nicely.

What’s really nice is when I get to meet my fellow members of The Session in meatspace. If I’m travelling somewhere and there’s a local session happening, I always get a warm welcome. I mean, presumably everyone would get a warm welcome at those sessions, but I’ve also had my fair share of free pints thanks to The Session.

I feel a great sense of responsibility with The Session. But it’s not a weight of responsibility—the way that many open source maintainers describe how their unpaid labour feels. The sense of responsibility I feel drives me. It gives me a sense of purpose.

The Session is older than any client work I’ve ever done. It’s older than any books I’ve written. It’s even older than Clearleft by a few years. Heck, it’s even older than this blog (just).

I’m 50 years old now. The Session is 20 years old. That’s quite a chunk of my life. I think it’s fair to say that it’s part of me now. Of all the things I’ve made so far in my life, The Session is the one I’m proudest of.

I’m looking forward to stewarding the site through the next twenty years.

Hosting UX Fest

I quite enjoy interviewing people. I don’t mean job interviews. I mean, like, talk show interviews. I’ve had a lot of fun over the years moderating panel discussions: @media Ajax in 2007, SxSW in 2008, Mobilism in 2011, the Progressive Web App Dev Summit and EnhanceConf in 2016.

I’ve even got transcripts of some panels I’ve moderated.

I enjoyed each and every one. I also had the pleasure of interviewing the speakers at every Responsive Day Out. Hosting events like that is a blast, but what with The Situation and all, there hasn’t been much opportunity for hosting conferences.

Well, I’m going to be hosting an event next month: UX Fest. It’s this year’s online version of UX London.

An online celebration of digital design, taking place throughout June 2021.

I am simultaneously excited and nervous. I’m excited because I’ll have the chance to interview a whole bunch of really smart people. I’m nervous because it’s all happening online and that might feel quite different to an in-person discussion.

But I have an advantage. While the interviews will be live, the preceding talks will be pre-recorded. That means I have time to watch and rewatch each talk, spot connections between them, and think about thought-provoking questions for each speaker.

So that’s what I’m doing between now and the beginning of June. If you’d like to bear witness to the final results, I encourage you to get a ticket for UX Fest. You can come to the three-day conference in the first week of June, or you can get a ticket for the festival spread out over the following three Thursdays in June, or you can get a combo ticket for both and save some money.

There’s an inclusion programme for the conference and festival days:

Anyone from an underrepresented group is invited to apply. We especially invite and welcome Black, indigenous & people of colour, LGBTQIA+ people and people with disabilities.

Here’s the application form.

There’ll also be a whole bunch of hands-on masterclasses throughout June that you can book individually. I won’t be hosting those though. I’ll have plenty to keep me occupied hosting the conference and the festival.

I hope you’ll join me along with Krystal Higgins, David Dylan Thomas, Catt Small, Scott Kubie, Temi Adeniyi, Teresa Torres, Tobias Ahlin and many more wonderful speakers—it’s going to be fun!

Get safe

The verbs of the web are GET and POST. In theory there’s also PUT, DELETE, and PATCH but in practice POST often does those jobs.

I’m always surprised when front-end developers don’t think about these verbs (or request methods, to use the technical term). Knowing when to use GET and when to use POST is crucial to having a solid foundation for whatever you’re building on the web.

Luckily it’s not hard to know when to use each one. If the user is requesting something, use GET. If the user is changing something, use POST.

That’s why links are GET requests by default. A link “gets” a resource and delivers it to the user.

<a href="/items/id">

Most forms use the POST method because they’re changing something—creating, editing, deleting, updating.

<form method="post" action="/items/id/edit">

But not all forms should use POST. A search form should use GET.

<form method="get" action="/search">
<input type="search" name="term">

When a user performs a search, they’re still requesting a resource (a page of search results). It’s just that they need to provide some specific details for the GET request. Those details get translated into a query string appended to the URL specified in the action attribute.

/search?term=value

I sometimes see the GET method used incorrectly:

  • “Log out” links that should be forms with a “log out” button—you can always style it to look like a link if you want (see the sketch after this list).
  • “Unsubscribe” links in emails that immediately trigger the action of unsubscribing instead of going to a form where the POST method does the unsubscribing. I realise that this turns unsubscribing into a two-step process, which is a bit annoying from a usability point of view, but a destructive action should never be baked into a GET request.
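
For the first of those, the markup might look something like this—a sketch, with a hypothetical action URL:

<!-- logging out changes state on the server, so it gets a form and a POST -->
<form method="post" action="/logout">
  <button>Log out</button>
</form>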

When it was first created, the World Wide Web was stateless by design. If you requested one web page, and then subsequently requested another web page, the server had no way of knowing that the same user was making both requests. After serving up a page in response to a GET request, the server promptly forgot all about it.

That’s how web browsing should still work. In fact, it’s one of the Web Platform Design Principles: It should be safe to visit a web page:

The Web is named for its hyperlinked structure. In order for the web to remain vibrant, users need to be able to expect that merely visiting any given link won’t have implications for the security of their computer, or for any essential aspects of their privacy.

The expectation of safe stateless browsing has been eroded over time. Every time you click on a search result in Google, or you tap on a recommended video in YouTube, or—heaven help us—you actually click on an advertisement, you just know that you’re adding to a dossier of your online profile. That’s not how the web is supposed to work.

Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies? With POST actions—fave, rate, save—the user is in charge. With GET requests, no one is supposed to be in charge—it’s meant to be a neutral transaction. Alas, the reality of today’s web is that many GET requests give more power to the dossier-building servers at the expense of the user’s agency.

The very first of the Web Platform Design Principles is Put user needs first:

If a trade-off needs to be made, always put user needs above all.

The current abuse of GET requests is damage that the web needs to route around.

Browsers are helping to a certain extent. Most browsers have the concept of private browsing, allowing you some level of statelessness, or at least time-limited statefulness. But it’s kind of messed up that private browsing is the exception, while surveillance is the default. It should be the other way around.

Firefox and Safari are taking steps to reduce tracking and fingerprinting. Rejecting third-party cookies by default is a good move. I’d love it if third-party JavaScript were also rejected by default:

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

I imagine if JavaScript were being specced today, it would almost certainly be restricted to the same origin by default.

Chrome has different priorities, which is understandable given that it comes from a company with a business model that is currently tied to tracking and surveillance (though it needn’t remain that way). With anti-trust proceedings rumbling in the background, there’s talk of breaking up Google to avoid monopolistic abuses of power. I honestly think it would be the best thing that could happen to Chrome if it were an independent browser that could fully focus on user needs without having to consider the surveillance needs of an advertising broker.

But we needn’t wait for the browsers to make the web a safer place for users.

Developers write the code that updates those dossiers. Developers add those oh-so-harmless-looking third-party scripts to page templates.

What if we refused?

Front-end developers in particular should be the last line of defence for users. The entire field of front-end development is supposed to be predicated on the prioritisation of user needs.

And if the moral argument isn’t enough, perhaps the technical argument can get through. Tracking users based on their GET requests violates the very bedrock of the web’s architecture. Stop doing that.

Web Audio API weirdness on iOS

I told you about how I’m using the Web Audio API on The Session to generate synthesised audio of each tune setting. I also said:

Except for some weirdness on iOS that I had to fix.

Here’s that weirdness…

Let me start by saying that this isn’t anything to do with requiring a user interaction (the Web Audio API insists on some kind of user interaction to prevent developers from having auto-playing sound on websites). All of my code related to the Web Audio API is inside a click event handler. This is a different kind of weirdness.

First of all, I noticed that if you press play on the audio player when your iOS device is on mute, you don’t hear any audio. Seems logical, right? Except that if, on the same device, still set to mute, you press play on a video or audio element, the sound plays just fine. You can confirm this by going to Huffduffer and pressing play on any of the audio elements there, even when your iOS device is set to mute.

So it seems that iOS has different criteria for the Web Audio API than it does for audio or video. Except it isn’t quite that straightforward.

On some pages of The Session, as well as the audio player for tunes (using the Web Audio API) there are also embedded YouTube videos (using the video element). Press play on the audio player; no sound. Press play on the YouTube video; you get sound. Now go back to the audio player and suddenly you do get sound!

It’s almost like playing a video or audio element “kicks” the browser into realising it should be playing the sound from the Web Audio API too.

This was happening on iOS devices set to mute, but I was also getting reports of it happening on devices with the sound on. But it’s that annoyingly intermittent kind of bug that’s really hard to reproduce consistently. Sometimes the sound doesn’t play. Sometimes it does.

Following my theory that the browser needs a “kick” to get into the right frame of mind for the Web Audio API, I resorted to a messy little hack.

In the event handler for the audio player, I generate the “kick” by playing a second of silence using the JavaScript equivalent of the audio element:

var audio = new Audio('1-second-of-silence.mp3');
audio.play();

I’m not proud of that. It’s so hacky that I’ve even wrapped the code in some user-agent sniffing on the server, and I never do user-agent sniffing!
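
For the record, the shape of the whole thing is something like this—a sketch rather than my exact code; isIOS and playTune() are stand-ins:

var playButton = document.querySelector('.play'); // stand-in selector
playButton.addEventListener('click', function () {
  // the "kick": a second of silence via the audio element nudges iOS
  // into routing sound from the Web Audio API correctly
  if (isIOS) { // stand-in for the server-side user-agent sniffing
    var audio = new Audio('1-second-of-silence.mp3');
    audio.play();
  }
  // ...then carry on with the Web Audio API as usual
  var context = new (window.AudioContext || window.webkitAudioContext)();
  playTune(context); // stand-in for the code that synthesises the tune
});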

Still, if you ever find yourself getting weird but inconsistent behaviour on iOS using the Web Audio API, this nasty little hack could help.

The Correct Material

I’ve been watching The Right Stuff on Disney Plus. It’s a modern remake of the ’80s film of the ’70s Tom Wolfe book of ’60s events.

It’s okay. The main challenge, as a viewer, is keeping track of which of the seven homogenous white guys is which. It’s like Merry, Pippin, Ant, Dec, and then some.

It’s kind of fun watching it after watching For All Mankind which has some of the same characters following a different counterfactual history.

The story being told is interesting enough (although Tom has pointed out that removing the Chuck Yeager angle really diminishes the narrative). But ultimately the tension is manufactured around a single event—the launch of Freedom 7—that was very much in the shadow of Gagarin’s historic Vostok 1 flight.

There are juicier stories to be told, but those stories come from Russia.

Some of these stories have been told in film. The Spacewalker told the amazing story of Alexei Leonov’s mission, though it messes with the truth about what happened with the landing and recovery—a real shame, considering that the true story is remarkable enough.

Imagine an alternative to The Right Stuff that relayed the drama of Soyuz 1—it’s got everything: friendship, rivalries, politics, tragedy…

I’d watch the heck out of that.

Upgrades and polyfills

I started getting some emails recently from people having issues using The Session. The issues sounded similar—an interactive component that wasn’t, well …interacting.

When I asked what device or browser they were using, the answer came back the same: Safari on iPad. But not a new iPad. These were older iPads running older operating systems.

Now, remember, even if I wanted to recommend that they use a different browser, that’s not an option:

Safari is the only browser on iOS devices.

I don’t mean it’s the only browser that ships with iOS devices. I mean it’s the only browser that can be installed on iOS devices.

You can install something called Chrome. You can install something called Firefox. Those aren’t different web browsers. Under the hood they’re using Safari’s rendering engine. They have to.

It gets worse. Not only is there no choice when it comes to rendering engines on iOS, but the rendering engine is also tied to the operating system.

If you’re on an old Apple laptop, you can at least install an up-to-date version of Firefox or Chrome. But you can’t install an up-to-date version of Safari. An up-to-date version of Safari requires an up-to-date version of the operating system.

It’s the same on iOS devices—you can’t install a newer version of Safari without installing a newer version of iOS. But unlike the laptop scenario, you can’t install any version of Firefox or Chrome.

It’s disgraceful.

It’s particularly frustrating when an older device can’t upgrade its operating system. Operating system upgrades generally have some hardware requirements. If your device doesn’t meet those requirements, you can’t upgrade your operating system. That wouldn’t matter so much except for the Safari issue. Without an upgraded operating system, your web browsing experience stagnates unnecessarily.

For want of a nail

  • A website feature isn’t working so
  • you need to upgrade your browser which means
  • you need to upgrade your operating system but
  • you can’t upgrade your operating system so
  • you need to buy a new device.

Apple doesn’t allow other browsers to be installed on iOS devices so people have to buy new devices if they want to use the web. Handy for Apple. Bad for users. Really bad for the planet.

It’s particularly galling when it comes to iPads. Those are exactly the kind of casual-use devices that shouldn’t need to be caught in the wasteful cycle of being used for a while before getting thrown away. I mean, I get why you might want to have a relatively modern phone—a device that’s constantly with you that you use all the time—but an iPad is the perfect device to just have lying around. You shouldn’t feel pressured to have the latest model if the older version still does the job:

An older tablet makes a great tableside companion in your living room, an effective e-book reader, or a light-duty device for reading mail or checking your favorite websites.

Hang on, though. There’s another angle to this. Why should a website demand an up-to-date browser? If the website has been built using the tried and tested approach of progressive enhancement, then everyone should be able to achieve their goals regardless of what browser or device or operating system they’re using.

On The Session, I’m using progressive enhancement and feature detection everywhere I can. If, for example, I’ve got some JavaScript that’s going to use querySelectorAll and addEventListener, I’ll first test that those methods are available.

if (!document.querySelectorAll || !window.addEventListener) {
  // doesn't cut the mustard.
  return;
}

I try not to assume that anything is supported. So why was I getting emails from people with older iPads describing an interaction that wasn’t working? A JavaScript error was being thrown somewhere and—because of JavaScript’s brittle error-handling—that was causing all the subsequent JavaScript to fail.

I tracked the problem down to a function that was using some DOM methods—matches and closest—as well as the relatively recent JavaScript forEach method. But I had polyfills in place for all of those. Here’s the polyfill I’m using for matches and closest. And here’s the polyfill I’m using for forEach.

Then I spotted the problem. I was using forEach to loop through the results of querySelectorAll. But the polyfill works on arrays. Technically, the output of querySelectorAll isn’t an array. It looks like an array, it quacks like an array, but it’s actually a node list.

So I added this polyfill from Chris Ferdinandi.
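
I won’t reproduce his exact code here, but the gist of a forEach polyfill for node lists is something like this:

// give NodeList a forEach method in browsers that lack one
if (window.NodeList && !NodeList.prototype.forEach) {
  NodeList.prototype.forEach = function (callback, thisArg) {
    thisArg = thisArg || window;
    for (var i = 0; i < this.length; i++) {
      callback.call(thisArg, this[i], i, this);
    }
  };
}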

That did the trick. I checked with the people with those older iPads and everything is now working just fine.

For the record, here’s the small collection of polyfills I’m using. Polyfills are supposed to be temporary. At some stage, as everyone upgrades their browsers, I should be able to remove them. But as long as some people are stuck with using an older browser, I have to keep those polyfills around.

I wish that Apple would allow other rendering engines to be installed on iOS devices. But if that’s a hell-freezing-over prospect, I wish that Safari updates weren’t tied to operating system updates.

Apple may argue that their browser rendering engine and their operating system are deeply intertwingled. That line of defence worked out great for Microsoft in the ’90s.

Standards processing

I’ve been like a dog with a bone the way I’ve been pushing for a declarative option for the Web Share API in the shape of button type="share". It’s been an interesting window into the world of web standards.

The story so far…

That’s the situation currently. The general consensus seems to be that it’s probably too soon to be talking about implementation at this stage—the Web Share API itself is still pretty new—but gathering data to inform future work is good.

In planning for the next TPAC meeting (the big web standards gathering), Marcos summarised the situation like this:

Not blocking: but a proposal was made by @adactio to come up with a declarative solution, but at least two implementers have said that now is not the appropriate time to add such a thing to the spec (we need more implementation experience + and also to see how devs use the API) - but it would be great to see a proposal incubated at the WICG.

Now this is where things can get a little confusing because it used to be that if you wanted to incubate a proposal, you’d have to do it on Discourse, which is a steaming pile of crap that requires JavaScript in order to put text on a screen. But Šime pointed out that proposals can now be submitted on Github.

So that’s where I’ve submitted my proposal, linking through to the explainer document.

Like I said, I’m not expecting anything to happen anytime soon, but it would be really good to gather as much data as possible around existing usage of the Web Share API. If you’re using it, or you know anyone who’s using it, please, please, please take a moment to provide a quick description. And if you could help spread the word to get that issue in front of as many devs as possible, I’d be very grateful.

(Many thanks to everyone who’s already contributed to that issue—much appreciated!)

The reason for a share button type

If you’re at all interested in what I wrote about a declarative Web Share API—and its sequel, a polyfill for button type="share"—then you might be interested in an explainer document I’ve put together.

It’s a useful exercise for me to enumerate the reasoning for button type="share" in one place. If you have any feedback, feel free to fork it or create an issue.

The document is based on my initial blog posts and the discussion that followed in this issue on the repo for the Web Share API. In that thread I got some pushback from Marcos. There are three points he makes. I think that two of them lack merit, but the third one is actually spot on.

Here’s the first bit of pushback:

Apart from placing a button in the content, I’m not sure what the proposal offers over what (at least one) browser already provides? For instance, Safari UI already provides a share button by default on every page

But that is addressed in the explainer document for the Web Share API itself:

The browser UI may not always be available, e.g., when a web app has been installed as a standalone/fullscreen app.

That’s exactly what I wanted to address. Browser UI is not always available and as progressive web apps become more popular, authors will need to provide a way for users to share the current URL—something that previously was handled by browsers.

That use-case of sharing the current page leads nicely into the second bit of pushback:

The API is specialized… using it to share the same page is kinda pointless.

But again, the explainer document for the Web Share API directly contradicts this:

Sharing the page’s own URL (a very common case)…

Rather than being a difference of opinion, this is something that could be resolved with data. I’d really like to find out how people are currently using the Web Share API. How much of the current usage falls into the category of “share the current page”? I don’t know the best way to gather this data though. If you have any ideas, let me know. I’ve started an issue where you can share how you’re using the Web Share API. Or if you’re not using the Web Share API, but you know someone who is, please let them know.

Okay, so those first two bits of pushback directly contradict what’s in the explainer document for the Web Share API. The third bit of pushback is more philosophical and, I think, more interesting.

The Web Share API explainer document does a good job of explaining why a declarative solution is desirable:

The link can be placed declaratively on the page, with no need for a JavaScript click event handler.

That’s also my justification for having a declarative alternative: it would be easier for more people to use. I said:

At a fundamental level, declarative technologies have a lower barrier to entry than imperative technologies.

But Marcos wrote:

That’s demonstrably false and a common misconception: See OWL, XForms, SVG, or any XML+namespace spec. Even HTML is poorly understood, but it just happens to have extremely robust error recovery (giving the illusion of it being easy). However, that’s not a function of it being “declarative”.

He’s absolutely right.

It’s not so much that I want a declarative option—I want an option that has robust error recovery. After all, XML is a declarative language but its error handling is as strict as an imperative language like JavaScript: make one syntactical error and nothing works. XML has a brittle error-handling model by design. HTML and CSS have extremely robust error recovery by design. It’s that error-handling model that gives HTML and CSS their robustness.

I’ve been using the word “declarative” when I actually meant “robust in handling errors”.

I guess that when I’ve been talking about “a declarative solution”, I’ve been thinking in terms of the three languages parsed by browsers: HTML, CSS, and JavaScript. Two of those languages are declarative, and those two also happen to have much more forgiving error-handling than the third language. That’s the important part—the error handling—not the fact that they’re declarative.

I’ve been using “declarative” as a shorthand for “either HTML or CSS”, but really I should try to be more precise in my language. The word “declarative” covers a wide range of possible languages, and not all of them lower the barrier to entry. A declarative language with a brittle error-handling model is as daunting as an imperative language.

I should try to use a more descriptive word than “declarative” when I’m describing HTML or CSS. Resilient? Robust?

With that in mind, button type="share" is worth pursuing. Yes, it’s a declarative option for using the Web Share API, but more important, it’s a robust option for using the Web Share API.

I invite you to read the explainer document for a share button type and I welcome your feedback …especially if you’re currently using the Web Share API!

Downloading from Google Fonts

If you’re using web fonts, there are good performance (and privacy) reasons for hosting your own font files. And fortunately, Google Fonts gives you that option. There’s a “Download family” button on every specimen page.

But if you go ahead and download a font family from Google Fonts, you’ll notice something a bit odd. The .zip file only contains .ttf files. You can serve those on the web, but it’s far from the best choice. WOFF2 is far leaner in file size.

This means you need to manually convert the downloaded .ttf files into .woff or .woff2 files using something like Font Squirrel’s generator. That’s fine, but I’m curious as to why this step is necessary. Why doesn’t Google Fonts provide .woff or .woff2 files in the downloaded folder? After all, if you choose to use Google Fonts as a third-party hosting service for your fonts, it most definitely serves up the appropriate file formats.
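
Once you’ve done the conversion, self-hosting is a matter of writing some @font-face rules. Here’s a sketch with made-up names and paths:

@font-face {
  font-family: "Example Sans";
  src: url("/fonts/example-sans.woff2") format("woff2"),
       url("/fonts/example-sans.woff") format("woff");
  font-weight: normal;
  font-style: normal;
  font-display: swap; /* show fallback text while the font file loads */
}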

I thought maybe it was something to do with the licensing. Maybe some licenses only allow for unmodified truetype files to be distributed? But I’ve looked at fonts with different licenses—some have Apache 2 licensing, some have Open Font licensing—and they’re all quite permissive and definitely allow for modification.

Maybe the thinking is that, if you’re hosting your own font files, then you know what you’re doing and you should be able to do your own file conversion and subsetting. But I’ve come across more than one website in the wild serving up .ttf files. And who can blame them? They want to host their own font files. They downloaded those files from Google Fonts. Why shouldn’t they assume that they’re good to go?

It’s all a bit strange. If anyone knows why Google Fonts only provides .ttf files for download, please let me know. In a pinch, I will also accept rampant speculation.

Trys also pointed out some weird default behaviour if you do let Google Fonts do the hosting for you. Specifically if it’s a variable font. Let’s say it’s a font with weight as a variable axis. You specify in advance which weights you’ll be using, and then it generates separate font files to serve for each different weight.

Doesn’t that defeat the whole point of using a variable font? I mean, I can see how it could result in smaller file sizes if you’re just using one or two weights, but isn’t half the fun of having a weight axis that you can go crazy with as many weights as you want and it’s all still one font file?

Like I said, it’s all very strange.

Web browsers on iOS

Safari is the only browser on iOS devices.

I don’t mean it’s the only browser that ships with iOS devices. I mean it’s the only browser that can be installed on iOS devices.

You can install something called Chrome. You can install something called Firefox. Those aren’t different web browsers. Under the hood they’re using Safari’s rendering engine. They have to. The app store doesn’t allow other browsers to be listed. The apps called Chrome and Firefox are little more than skinned versions of Safari.

If you’re a web developer, there are two possible reactions to hearing this. One is “Duh! Everyone knows that!”. The other is “What‽ I never knew that!”

If you fall into the first category, I’m guessing you’ve been a web developer for a while. The fact that Safari is the only browser on iOS devices is something you’ve known for years, and something you assume everyone else knows. It’s common knowledge, right?

But if you’re relatively new to web development—heck, if you’ve been doing web development for half a decade—you might fall into the second category. After all, why would anyone tell you that Safari is the only browser on iOS? It’s common knowledge, right?

So that’s the situation. Safari is the only browser that can run on iOS. The obvious follow-on question is: why?

Apple at this point will respond with something about safety and security, which are certainly important priorities. So let me rephrase the question: why on iOS?

Why can I install Chrome or Firefox or Edge on my Macbook running macOS? If there are safety or security reasons for preventing me from installing those browsers on my iOS device, why don’t those same concerns apply to my macOS device?

At one time, the mobile operating system—iOS—was quite different to the desktop operating system—OS X. Over time the gap has narrowed. At this point, the operating systems are converging. That makes sense. An iPhone, an iPad, and a Macbook aren’t all that different apart from the form factor. It makes sense that computing devices from the same company would share an underlying operating system.

As this convergence continues, the browser question is going to have to be decided in one direction or the other. As it is, Apple’s laptops and desktops strongly encourage you to install software from their app store, though it is still possible to install software by other means. Perhaps they’ll decide that their laptops and desktops should only be able to install software from their app store—a decision they could justify with safety and security concerns.

Imagine that situation. You buy a computer. It comes with one web browser pre-installed. You can’t install a different web browser on your computer.

You wouldn’t stand for it! I mean, Microsoft got fined for anti-competitive behaviour when they pre-bundled their web browser with Windows back in the 90s. You could still install other browsers, but just the act of pre-bundling was seen as an abuse of power. Imagine if Windows had never allowed you to install Netscape Navigator.

And yet that’s exactly the situation in 2020.

You buy a computing device from Apple. It might be a Macbook. It might be an iPad. It might be an iPhone. But you can only install your choice of web browser on one of those devices. For now.

It is contradictory. It is hypocritical. It is indefensible.

A polyfill for button type="share"

After writing about a declarative Web Share API here yesterday I thought I’d better share the idea (see what I did there?).

I opened an issue on the Github repo for the spec.

(I hope that’s the right place for this proposal. I know that in the past ideas were kicked around on the Discourse site for the Web Platform Incubator Community Group but I can’t stand Discourse. It literally requires JavaScript to render anything to the screen even though the entire content is text. If it turns out that that is the place I should’ve posted, I guess I’ll hold my nose and do it using the most over-engineered reinvention of the browser I’ve ever seen. But I believe that the plan is for WICG to migrate proposals to Github anyway.)

I also realised that, as the JavaScript Web Share API already exists, I can use it to polyfill my suggestion for:

<button type="share">

The polyfill also demonstrates how feature detection could work. Here’s the code.

This polyfill takes an Inception approach to feature detection. There are three nested levels:

  1. This browser supports button type="share". Great! Don’t do anything. Otherwise proceed to level two.
  2. This browser supports the JavaScript Web Share API. Use that API to share the current page URL and title. Otherwise proceed to level three.
  3. Use a mailto: link to prefill an email with the page title as the subject and the URL in the body. Ya basic!

The idea is that, as long as you include the 20 lines of polyfill code, you could start using button type="share" in your pages today.
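To give a flavour of the approach, here’s a rough sketch of those three nested levels. It isn’t the exact code from the gist, just the same idea:

document.addEventListener('click', function (event) {
  var button = event.target.closest('button[type="share"]');
  if (!button) return;
  // Level one: a supporting browser would reflect the attribute
  // in the type property, so leave it well alone
  if (button.type === 'share') return;
  event.preventDefault();
  var title = document.title;
  var url = document.location.href;
  if (navigator.share) {
    // Level two: hand over to the JavaScript Web Share API
    navigator.share({title: title, url: url});
  } else {
    // Level three: prefill an email with the title and URL
    window.location.href = 'mailto:?subject=' +
      encodeURIComponent(title) + '&body=' + encodeURIComponent(url);
  }
}, false);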

I’ve made a test page on Codepen. I’m just using plain text in the button but you could use a nice image or SVG or combination. You can use the Codepen test page to observe two of the three possible behaviours browsers could exhibit:

  1. A browser supports button type="share". Currently that’s none because I literally made this shit up yesterday.
  2. A browser supports the JavaScript Web Share API. This is Safari on Mac, Edge on Windows, Safari on iOS, and Chrome, Samsung Internet, and Firefox on Android.
  3. A browser supports neither button type="share" nor the existing JavaScript Web Share API. This is Firefox and Chrome on desktop (and Edge if you’re on a Mac).

See the Pen Polyfill for button type="share" by Jeremy Keith (@adactio) on CodePen.

The polyfill doesn’t support Internet Explorer 11 or lower because it uses the DOM closest() method. Feel free to fork and rewrite if you need to support old IE.
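If you do need old IE, the usual first step is a small fallback for closest() (and matches()). Here’s a sketch of that well-worn pattern, rather than a battle-tested implementation:

if (!Element.prototype.matches) {
  // IE9+ ships a prefixed version of matches()
  Element.prototype.matches = Element.prototype.msMatchesSelector;
}
if (!Element.prototype.closest) {
  Element.prototype.closest = function (selector) {
    // Walk up the tree until an element matches the selector
    var el = this;
    while (el && el.nodeType === 1) {
      if (el.matches(selector)) return el;
      el = el.parentNode;
    }
    return null;
  };
}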

A declarative Web Share API

I’ve written about the Web Share API before. It’s a handy little bit of JavaScript that—if supported—brings up a system-level way of sharing a page. Seeing as it probably won’t be long before we won’t be able to see full URLs in browsers anymore, it’s going to fall on us as site owners to provide this kind of fundamental access.

Right now the Web Share API exists entirely in JavaScript. There are quite a few browser APIs like that, and it always feels like a bit of a shame to me. Ideally there should be a JavaScript API and a declarative option, even if the declarative option isn’t as powerful.

Take form validation. To cover the most common use cases, you probably only need to use declarative markup like input type="email" or the required attribute. But if your use case gets a bit more complicated, you can reach for the Constraint Validation API in JavaScript.
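Something along these lines, for instance (the company-domain rule here is made up purely for illustration):

var email = document.querySelector('input[type="email"]');
email.addEventListener('input', function () {
  // Clear any previous custom error before re-checking
  email.setCustomValidity('');
  // Hypothetical extra rule: only example.com addresses allowed
  if (email.validity.valid && !email.value.endsWith('@example.com')) {
    email.setCustomValidity('Please use your example.com address.');
  }
});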

I like that pattern. I wish it were an option for JavaScript-only APIs. Take the Geolocation API, for example. Right now it’s only available through JavaScript. But what if there were an input type="geolocation"? It wouldn’t cover all use cases, but it wouldn’t have to. For the simple case of getting someone’s location (like getting someone’s email or telephone number), it would do. For anything more complex than that, that’s what the JavaScript API is for.

I was reminded of this recently when Ada Rose Cannon tweeted:

It really feels like there should be a semantic version of the share API, like a mailto: link

I think she’s absolutely right. The Web Share API has one primary use case: let the user share the current page. If that use case could be met in a declarative way, then it would have a lower barrier to entry. And for anyone who needs to do something more complicated, that’s what the JavaScript API is for.

But Pete LePage asked:

How would you feature detect for it?

Good question. With the JavaScript API you can do feature detection—if the API isn’t supported you can either bail or provide your own implementation.

There are a few different ways of extending HTML that allow you to provide a fallback for non-supporting browsers.

You could mint a new element with a content model that says “Browsers, if you do support this element, ignore everything inside the opening and closing tags.” That’s the way that the canvas element works. It’s the same for audio and video—they ignore everything inside them that isn’t a source element. So developers can provide a fallback within the opening and closing tags.

But forging a new element would be overkill for something like the Web Share API (or Geolocation). There are more subtle ways of extending HTML that I’ve already alluded to.

Making a new element is a lot of work. Making a new attribute could also be a lot of work. But making a new attribute value might hit the sweet spot. That’s why I suggested something like input type="geolocation" for the declarative version of the Geolocation API. There’s prior art here; this is how we got input types for email, url, tel, color, date, etc. The key piece of behaviour is that non-supporting browsers will treat any value they don’t understand as “text”.
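That fallback behaviour is also what makes the new input types detectable:

var testInput = document.createElement('input');
testInput.setAttribute('type', 'email');
if (testInput.type === 'text') {
  // The browser doesn't support type="email", so fall back or polyfill
}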

I don’t think there should be input type="share". The action of sharing isn’t an input. But I do think we could find an existing HTML element with an attribute that currently accepts a list of possible values. Adding one more value to that list feels like an inexpensive move.

Here’s what I suggested:

<button type="share" value="title,text">

For non-supporting browsers, it’s a regular button and needs polyfilling, no different to the situation with the JavaScript API. But if supported, no JS needed?

The type attribute of the button element currently accepts three possible values: “submit”, “reset”, or “button”. If you give it any other value, it will behave as though you gave it a type of “submit” or “button” (depending on whether it’s in a form or not) …just like an unknown type value on an input element will behave like “text”.

If a browser supports button type="share", then when the user clicks on it, the browser can go “Right, I’m handing over to the operating system now.”

There’s still the question of how to pass data to the operating system about what gets shared. Currently the JavaScript API allows you to share any combination of URL, text, and title.

Initially I was thinking that the value attribute could be used to store this data in some kind of key/value pairing, but the more I think about it, the more I think that this aspect should remain the exclusive domain of the JavaScript API. The declarative version could grab the current URL and the value of the page’s title element and pass those along to the operating system. If you need anything more complex than that, use the JavaScript API.

So what I’m proposing is:

<button type="share">

That’s it.

But how would you test for browser support? The same way as you can currently test for supported input types. Make use of the fact that an element’s attribute value and an element’s property value (which are the same 99% of the time) will be different if the attribute value isn’t supported:

var testButton = document.createElement("button");
testButton.setAttribute("type","share");
if (testButton.type != "share") {
  // polyfill
}

So that’s my modest proposal. Extend the list of possible values for the type attribute on the button element to include “share” (or something like that). In supporting browsers, it triggers a very bare-bones handover to the OS (the current URL and the current page title). In non-supporting browsers, it behaves like a button currently behaves.

T E N Ǝ T

Jessica and I went to the cinema yesterday.

Normally this wouldn’t be a big deal, but in our current circumstances, it was something of a momentous decision that involved a lot of risk assessment and weighing of the odds. We’ve been out and about a few times, but always to outdoor locations: the beach, a park, or a pub’s beer garden. For the first time, we were evaluating whether or not to enter an indoor environment, which given what we now know about the transmission of COVID-19, is certainly riskier than being outdoors.

But this was a cinema, so in theory, nobody should be talking (or singing or shouting), and everyone would be wearing masks and keeping their distance. Time was also on our side. We were considering a Monday afternoon showing—definitely not primetime. Looking at the website for the (wonderful) Duke of York’s cinema, we could see which seats were already taken. Less than an hour before the start time for the film, there were just a handful of seats occupied. A cinema that can seat a triple-digit number of people was going to be seating a single-digit number of viewers.

We got tickets for the front row. Personally, I love sitting in the front row, especially in the Duke of York’s where there’s still plenty of room between the front row and the screen. But I know that it’s generally considered an undesirable spot by most people. Sure enough, the closest people to us were many rows back. Everyone was wearing masks and we kept them on for the duration of the film.

The film was Tenet. We weren’t about to enter an enclosed space for just any ol’ film. It would have to be pretty special—a new Star Wars film, or Denis Villeneuve’s Dune …or a new Christopher Nolan film. We knew it would look good on the big screen. We also knew it was likely to be spoiled for us if we didn’t see it soon enough.

At this point I am sounding the spoiler horn. If you have not seen Tenet yet, abandon ship at this point.

I really enjoyed this film. I understand the criticism that has been levelled at it—too cold, too clinical, too confusing—but I still enjoyed it immensely. I do think you need to be able to enjoy feeling confused if this is going to be a pleasurable experience. The payoff is that there’s an equally enjoyable feeling when things start slotting into place.

The closest film in Christopher Nolan’s back catalogue to Tenet is Inception in terms of twistiness and what it asks of the audience. But in some ways, Tenet is like an inverted version of Inception. In Inception, the ideas and the plot are genuinely complex, but Nolan does a great job in making them understandable—quite a feat! In Tenet, the central conceit and even the overall plot are, in hindsight, relatively straightforward. But Nolan has made it seem more twisty and convoluted than it really is. The ten-minute battle at the end, for example, is filled with hard-to-follow twists and turns, but in actuality, it literally doesn’t matter.

The pitch for the mood of this film is that it’s in the spy genre, in the same way that Inception is in the heist genre. Though there’s an argument to be made that Tenet is more of a heist movie than Inception. But in terms of tone, yeah, it’s going for James Bond.

Even at the very end of the credits, when the title of the film rolled into view, it reminded me of the Bond films that would tease “The end of (this film). But James Bond will return in (next film).” Wouldn’t it have been wonderful if the very end of Tenet’s credits finished with “The end of Tenet. But the protagonist will return in …Tenet.”

The pleasure I got from Tenet was not the same kind of pleasure I get from watching a Bond film, which is a simpler, more basic kind of enjoyment. The pleasure I got from Tenet was more like the kind of enjoyment I get from reading smart sci-fi, the kind that posits a “what if?” scenario and isn’t afraid to push your mind in all kinds of uncomfortable directions to contemplate the ramifications.

Like I said, the central conceit—objects or people travelling backwards through time (from our perspective)—isn’t actually all that complex, but the fun comes from all the compounding knock-on effects that build on that one premise.

In the film, and in interviews about the film, everyone is at pains to point out that this isn’t time travel. But that’s not true. In fact, I would argue that Tenet is one of the few examples of genuine time travel. What I mean is that most so-called time-travel stories are actually more like time teleportation. People jump from one place in time to another instantaneously. There are only a few examples I can think of where people genuinely travel.

The granddaddy of all time travel stories, The Time Machine by H.G. Wells, is one example. There are vivid descriptions of the world outside the machine playing out in fast-forward. But even here, there’s an implication that from outside the machine, the world cannot perceive the time machine (which would, from that perspective, look slowed down to the point of seeming completely still).

The most internally-consistent time-travel story is Primer. I suspect that the Venn diagram of people who didn’t like Tenet and people who wouldn’t like Primer is a circle. Again, it’s a film where the enjoyment comes from feeling confused, but where your attention will be rewarded and your intelligence won’t be insulted.

In Primer, the protagonists literally travel in time. If you want to go five hours into the past, you have to spend five hours in the box (the time machine).

In Tenet, the time machine is a turnstile. If you want to travel five hours into the past, you need only enter the turnstile for a moment, but then you have to spend the next five hours travelling backwards (which, from your perspective, looks like being in a world where cause and effect are reversed). After five hours, you go in and out of a turnstile again, and voila!—you’ve time travelled five hours into the past.

Crucially, if you decide to travel five hours into the past, then you have always done so. And in the five hours prior to your decision, a version of you (apparently moving backwards) would be visible to the world. There is never a version of events where you aren’t travelling backwards in time. There is no “first loop”.

That brings us to the fundamental split in categories of time travel (or time jump) stories: many worlds vs. single timeline.

In a many-worlds story, the past can be changed. Well, technically, you spawn a different universe in which events unfold differently, but from your perspective, the effect would be as though you had altered the past.

The best example of the many-worlds category in recent years is William Gibson’s The Peripheral. It genuinely reinvents the genre of time travel. First of all, no thing travels through time. In The Peripheral only information can time travel. But given telepresence technology, that’s enough. The Peripheral is time travel for the remote worker (once again, William Gibson proves to be eerily prescient). But the moment that any information travels backwards in time, the timeline splits into a new “stub”. So the many-worlds nature of its reality is front and centre. But that doesn’t stop the characters engaging in classic time travel behaviour—using knowledge of the future to exert control over the past.

Time travel stories are always played with a stacked deck of information. The future has power over the past because of the asymmetric nature of information distribution—there’s more information in the future than in the past. Whether it’s through sports results, the stock market or technological expertise, the future can exploit the past.

Information is at the heart of the power games in Tenet too, but there’s a twist. The repeated mantra here is “ignorance is ammunition.” That flies in the face of most time travel stories where knowledge—information from the future—is vital to winning the game.

It turns out that information from the future is vital to winning the game in Tenet too, but the reason why ignorance is ammunition comes down to the fact that Tenet is not a many-worlds story. It is very much a single timeline.

Having a single timeline makes for time travel stories that are like Greek tragedies. You can try travelling into the past to change the present but in doing so you will instead cause the very thing you set out to prevent.

The meat’n’bones of a single timeline time travel story—and this is at the heart of Tenet—is the question of free will.

The most succinct (and disturbing) single-timeline time-travel story that I’ve read is by Ted Chiang in his recent book Exhalation. It’s called What’s Expected Of Us. It was originally published as a single page in Nature magazine. In that single page is a distillation of the metaphysical crisis that even a limited amount of time travel would unleash in a single-timeline world…

There’s a box, the Predictor. It’s very basic, like Claude Shannon’s Ultimate Machine. It has a button and a light. The button activates the light. But this machine, like an inverted object in Tenet, is moving through time differently to us. In this case, it’s very specific and localised. The machine is just a few seconds in the future relative to us. Cause and effect seem to be reversed. With a normal machine, you press the button and then the light flashes. But with the predictor, the light flashes and then you press the button. You can try to fool it but you won’t succeed. If the light flashes, you will press the button no matter how much you tell yourself that you won’t (likewise if you try to press the button before the light flashes, you won’t succeed). That’s it. In one succinct experiment with time, it is demonstrated that free will doesn’t exist.

Tenet has a similarly simple object to explain inversion. It’s a bullet. In an exposition scene we’re shown how it travels backwards in time. The protagonist holds his hand above the bullet, expecting it to jump into his hand as has just been demonstrated to him. He is told “you have to drop it.” He makes the decision to “drop” the bullet …and the bullet flies up into his hand.

This is a brilliant bit of sleight of hand (if you’ll excuse the choice of words) on Nolan’s part. It seems to imply that free will really matters. Only by deciding to “drop” the bullet does the bullet then fly upward. But here’s the thing: the protagonist had no choice but to decide to drop the bullet. We know that he had no choice because the bullet flew up into his hand. The bullet was always going to fly up into his hand. There is no timeline where the bullet doesn’t fly up into his hand, which means there is no timeline where the protagonist doesn’t decide to “drop” the bullet. The decision is real, but it is inevitable.

The lesson in this scene is the exact opposite of what it appears. It appears to show that agency and decision-making matter. The opposite is true. Free will cannot, in any meaningful sense, exist in this world.

This means that there was never really any threat. People from the future cannot change the past (or wipe it out) because it would’ve happened already. At one point, the protagonist voices this conjecture. “Doesn’t the fact that we’re here now mean that they don’t succeed?” Neil deflects the question, not because of uncertainty (we realise later) but because of certainty. It’s absolutely true that the people in the future can’t succeed because they haven’t succeeded. But the protagonist—at this point in the story—isn’t ready to truly internalise this. He needs to still believe that he is acting with free will. As that Ted Chiang story puts it:

It’s essential that you behave as if your decisions matter, even though you know that they don’t.

That’s true for the audience watching the film. If we were to understand too early that everything will work out fine, then there would be no tension in the film.

As ever with Nolan’s films, they are themselves metaphors for films. The first time you watch Tenet, ignorance is your ammunition. You believe there is a threat. By the end of the film you have more information. Now if you re-watch the film, you will experience it differently, armed with your prior knowledge. But the film itself hasn’t changed. It’s the same linear flow of sequential scenes being projected. Everything plays out exactly the same. It’s you who have been changed. The first time you watch the film, you are like the protagonist at the start of the movie. The second time you watch it, you are like the protagonist at the end of the movie. You see the bigger picture. You understand the inevitability.

The character of Neil has had more time to come to terms with a universe without free will. What the protagonist begins to understand at the end of the film is what Neil has known for a while. He has seen this film. He knows how it ends. It ends with his death. He knows that it must end that way. At the end of the film we see him go to meet his death. Does he make the decision to do this? Yes …but he was always going to make the decision to do this. Just as the protagonist was always going to decide to “drop” the bullet, Neil was always going to decide to go to his death. It looks like a choice. But Neil understands at this point that the choice is pre-ordained. He will go to his death because he has gone to his death.

At the end, the protagonist—and the audience—understands. Everything played out exactly as it had to. The people in the future were hoping that reality allowed for many worlds, where the past could be changed. Luckily for us, reality turns out to be a single timeline. But the price we pay is that we come to understand, truly understand, that we have no free will. This is the kind of knowledge we wish we didn’t have. Ignorance was our ammunition and by the end of the film, it is spent.

Nolan has one other piece of misdirection up his sleeve. He implies that the central question at the heart of this time-travel story is the grandfather paradox. Our descendants in the future are literally trying to kill their grandparents (us). But if they succeed, then they can never come into existence.

But that’s not the paradox that plays out in Tenet. The central paradox is the bootstrap paradox, named for the Heinlein short story, By His Bootstraps. Information in this film is transmitted forwards and backwards through time, without ever being created. Take the phrase “Tenet”. In subjective time, the protagonist first hears of this phrase—and this organisation—when he is at the start of his journey. But the people who tell him this received the information via a subjectively older version of the protagonist who has travelled to the past. The protagonist starts the Tenet organisation (and phrase) in the future because the organisation (and phrase) existed in the past. So where did the phrase come from?

This paradox—the bootstrap paradox—remains after the grandfather paradox has been dealt with. The grandfather paradox was a distraction. The bootstrap paradox can’t be resolved, no matter how many times you watch the same film.

So Tenet has three instances of misdirection in its narrative:

  • Inversion isn’t time travel (it absolutely is).
  • Decisions matter (they don’t; there is no free will).
  • The grandfather paradox is the central question (it’s not; the bootstrap paradox is the central question).

I’m looking forward to seeing Tenet again. Though it can never be the same as that first time. Ignorance can never again be my ammunition.

I’m very glad that Jessica and I decided to go to the cinema to see Tenet. But who am I kidding? Did we ever really have a choice?

Submitting a form with datalist

I’m a big fan of HTML5’s datalist element and its elegant design. It’s a way to progressively enhance any input element into a combobox.

You use the list attribute on the input element to point to the ID of the associated datalist element.

<label for="homeworld">Your home planet</label>
<input type="text" name="homeworld" id="homeworld" list="planets">
<datalist id="planets">
 <option value="Mercury">
 <option value="Venus">
 <option value="Earth">
 <option value="Mars">
 <option value="Jupiter">
 <option value="Saturn">
 <option value="Uranus">
 <option value="Neptune">
</datalist>

It even works on input type="color", which is pretty cool!

The most common use case is as an autocomplete widget. That’s how I’m using it over on The Session, where the datalist is updated via Ajax every time the input is updated.
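The pattern is simple enough to sketch. This isn’t the actual code on The Session, and the /autocomplete endpoint here is hypothetical, but it shows the shape of the thing (reusing the markup from above):

var input = document.getElementById('homeworld');
var datalist = document.getElementById('planets');
input.addEventListener('input', function () {
  fetch('/autocomplete?q=' + encodeURIComponent(input.value))
  .then(function (response) {
    return response.json();
  })
  .then(function (suggestions) {
    // Swap the old options out for the new suggestions
    datalist.innerHTML = '';
    suggestions.forEach(function (suggestion) {
      var option = document.createElement('option');
      option.value = suggestion;
      datalist.appendChild(option);
    });
  });
});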

But let’s stick with a simple example, like the list of planets above. Suppose the user types “jup” …the datalist will show “Jupiter” as an option. The user can click on that option to automatically complete their input.

It would be handy if you could automatically submit the form when the user chooses a datalist option like this.

Well, tough luck.

The datalist element emits no events. There’s no way of telling if it has been clicked. This is something I’ve been trying to find a workaround for.

I got my hopes up when I read Amber’s excellent article about document.activeElement. But no, the focus stays on the input when the user clicks on an option in a datalist.

So if I can’t detect whether a datalist has been used, the best I can do is try to infer it. I know it’s not exactly the same thing, and it won’t be as reliable as true detection, but here’s my logic:

  • Keep track of the character count in the input element.
  • Every time the input is updated in any way, check the current character count against the last character count.
  • If the difference is greater than one, something interesting happened! Maybe the user pasted a value in …or maybe they used the datalist.
  • Loop through each of the options in the datalist.
  • If there’s an exact match with the current value of the input element, chances are the user chose that option from the datalist.
  • So submit the form!

Here’s how that translates into DOM scripting code:

document.querySelectorAll('input[list]').forEach( function (formfield) {
  var datalist = document.getElementById(formfield.getAttribute('list'));
  // Keep track of the character count in the input
  var lastlength = formfield.value.length;
  var checkInputValue = function (inputValue) {
    // A jump of more than one character means a paste …or a datalist selection
    if (inputValue.length - lastlength > 1) {
      datalist.querySelectorAll('option').forEach( function (item) {
        // An exact match with an option suggests the datalist was used
        if (item.value === inputValue) {
          formfield.form.submit();
        }
      });
    }
    lastlength = inputValue.length;
  };
  formfield.addEventListener('input', function () {
    checkInputValue(this.value);
  }, false);
});

I’ve made a gist with some added feature detection and mustard-cutting at the start. You should be able to drop it into just about any page that’s using datalist. It works even if the options in the datalist are dynamically updated, like the example on The Session.

It’s not foolproof. The inference relies on the difference between what was previously typed and what’s autocompleted being more than one character. So in the planets example, if someone has typed “Jupite” and then they choose “Jupiter” from the datalist, the form won’t automatically submit.

But still, I reckon it covers most common use cases. And like the datalist element itself, you can consider this functionality a progressive enhancement.