Journal tags: medium


Performance and people

I was helping a client with a bit of a performance audit this week. I really, really enjoy this work. It’s such a nice opportunity to get my hands in the soil of a website, so to speak, and suggest changes that will have a measurable effect on the user’s experience.

Not only is web performance a user experience issue, it may well be the user experience issue. Page speed has a proven, demonstrable, direct effect on user experience (and revenue and customer satisfaction and whatever other metrics you’re using).

It struck me that there’s a continuum of performance challenges. On one end of the continuum, you’ve got technical issues. These can be solved with technical solutions. On the other end of the continuum, you’ve got human issues. These can be solved with discussions, agreement, empathy, and conversations (often dreaded or awkward).

I think that, as developers, we tend to gravitate towards the technical issues. That’s our safe space. But I suspect that bigger gains can be reaped by tackling the uncomfortable human issues.

This week, for example, I uncovered three performance issues. One was definitely technical. One was definitely human. One was halfway between.

The technical issue was with web fonts. It’s a lot of fun to dive into this aspect of web performance because quite often there’s some low-hanging fruit: a relatively simple technical fix that will boost the performance (or perceived performance) of a website. That might be through resource hints (using link rel="preload" in the HTML) or adjusting the font loading (using font-display in the CSS) or even nerdier stuff like subsetting.
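
For instance, a resource hint for a font file looks something like this (a generic sketch with a made-up file name, not the client’s actual markup):

<!-- fetch the font early; fonts need the crossorigin attribute even on the same origin -->
<link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin>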

In this case, the issue was with the file format of the font itself. By switching to woff2, there were significant file size savings. And the great thing is that @font-face rules allow you to specify multiple file formats so you can still support older browsers that can’t handle woff2. A win all ‘round!
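
The pattern looks something like this (the file names are made up):

@font-face {
  font-family: "Body";
  /* browsers pick the first format they support */
  src: url("/fonts/body.woff2") format("woff2"),
       url("/fonts/body.woff") format("woff");
  /* show fallback text while the font loads */
  font-display: swap;
}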

The performance issue that was right in the middle of the technical/human continuum was with images. At first glance it looked like a similar issue to the fonts. Some images were being served in the wrong formats. When I say “wrong”, I guess I mean inappropriate. A photographic image, for example, is probably going to be best served as a JPG rather than a PNG.

But unlike the fonts, the images weren’t in the direct control of the developers. These images were coming from a Content Management System. And while there’s a certain amount of processing you can do on the server, a human still makes the decision about what file format they’re uploading.

I’ve seen this happen at Clearleft. We launched an event site with lean performant code, but then someone uploaded an image that was megabytes in size. The solution in that case wasn’t technical. We realised there was a knowledge gap around image file formats—which, let’s face it, is kind of a techy topic that most normal people shouldn’t be expected to know.

But it was extremely gratifying to see that people were genuinely interested in knowing a bit more about choosing the right format for the right image. I was able to provide a few rules of thumb and point to free software for converting images. It empowered those people to feel more confident using the Content Management System.

It was a similar situation with the client site I was looking at this week. Nobody is uploading oversized images in order to deliberately make the site slower. They probably don’t realise the difference that image formats can make. By having a discussion and giving them some pointers, they’ll have more knowledge and the site will be faster. Another win all ‘round!

At the other end of the continuum was an issue that wasn’t technical. From a technical point of view, there was just one teeny weeny little script. But that little script is Google Tag Manager which then calls many, many other scripts that are not so teeny weeny. Third party scripts …the bane of web performance!

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

Remember when I did a countdown of the top four web performance challenges? At the number one spot is other people’s JavaScript.

Now one technical solution would be to remove the Google Tag Manager script. But that’s probably not very practical—you’ll probably just piss off some other department. That said, if you can’t find out which department was responsible for adding the Google Tag Manager script in the first place, it might well be an option to remove it and then wait and see who complains. If no one notices it’s gone, job done!

More realistically, there’s someone who’s added that Google Tag Manager script for their own valid reasons. You’ll need to talk to them and understand their needs.

Again, as with images uploaded in a Content Management System, they may not be aware of the performance problems caused by third-party scripts. You could try throwing numbers at them, but I think you get better results by telling the story of performance.

Use tools like Request Map Generator to help them visualise the impact that third-party scripts are having. Talk to them. More importantly, listen to them. Find out why those scripts are being requested. What are the outcomes they’re working towards? Can you offer an alternative way of providing the data they need?

I think many of us developers are intimidated or apprehensive about approaching people to have those conversations. But it’s necessary. And in its own way, it can be as rewarding as tinkering with code. If the end result is a faster website, then the work is definitely worth doing—whether it’s technical work or people work.

Personally, I just really enjoy working on anything that will end up improving a website’s performance, and by extension, the user experience. If you fancy working with me on your site, you should get in touch with Clearleft.

Web browsers on iOS

Safari is the only browser on iOS devices.

I don’t mean it’s the only browser that ships with iOS devices. I mean it’s the only browser that can be installed on iOS devices.

You can install something called Chrome. You can install something called Firefox. Those aren’t different web browsers. Under the hood they’re using Safari’s rendering engine. They have to. The app store doesn’t allow other browsers to be listed. The apps called Chrome and Firefox are little more than skinned versions of Safari.

If you’re a web developer, there are two possible reactions to hearing this. One is “Duh! Everyone knows that!”. The other is “What‽ I never knew that!”

If you fall into the first category, I’m guessing you’ve been a web developer for a while. The fact that Safari is the only browser on iOS devices is something you’ve known for years, and something you assume everyone else knows. It’s common knowledge, right?

But if you’re relatively new to web development—heck, if you’ve been doing web development for half a decade—you might fall into the second category. After all, why would anyone tell you that Safari is the only browser on iOS? It’s common knowledge, right?

So that’s the situation. Safari is the only browser that can run on iOS. The obvious follow-on question is: why?

Apple at this point will respond with something about safety and security, which are certainly important priorities. So let me rephrase the question: why on iOS?

Why can I install Chrome or Firefox or Edge on my Macbook running macOS? If there are safety or security reasons for preventing me from installing those browsers on my iOS device, why don’t those same concerns apply to my macOS device?

At one time, the mobile operating system—iOS—was quite different to the desktop operating system—OS X. Over time the gap has narrowed. At this point, the operating systems are converging. That makes sense. An iPhone, an iPad, and a Macbook aren’t all that different apart from the form factor. It makes sense that computing devices from the same company would share an underlying operating system.

As this convergence continues, the browser question is going to have to be decided in one direction or the other. As it is, Apple’s laptops and desktops strongly encourage you to install software from their app store, though it is still possible to install software by other means. Perhaps they’ll decide that their laptops and desktops should only be able to install software from their app store—a decision they could justify with safety and security concerns.

Imagine that situation. You buy a computer. It comes with one web browser pre-installed. You can’t install a different web browser on your computer.

You wouldn’t stand for it! I mean, Microsoft got fined for anti-competitive behaviour when they pre-bundled their web browser with Windows back in the 90s. You could still install other browsers, but just the act of pre-bundling was seen as an abuse of power. Imagine if Windows never allowed you to install Netscape Navigator?

And yet that’s exactly the situation in 2020.

You buy a computing device from Apple. It might be a Macbook. It might be an iPad. It might be an iPhone. But you can only install your choice of web browser on one of those devices. For now.

It is contradictory. It is hypocritical. It is indefensible.

Kinopio

Cennydd asked for recommendations on Twitter a little while back:

Can anyone recommend an outlining app for macOS? I’m falling out with OmniOutliner. Not Notion, please.

This was my response:

The only outlining tool that makes sense for my brain is https://kinopio.club/

It’s more like a virtual crazy wall than a virtual Dewey decimal system.

I’ve written before about how I prepare a conference talk. The first step involves a sheet of A3 paper:

I used to do this mind-mapping step by opening a text file and dumping my thoughts into it. I told myself that they were in no particular order, but because a text file reads left to right and top to bottom, they are in an order, whether I intended it or not. By using a big sheet of paper, I can genuinely get things down in a disconnected way (and later, I can literally start drawing connections).

Kinopio is like a digital version of that A3 sheet of paper. It doesn’t force any kind of hierarchy on your raw ingredients. You can clump things together, join them up, break them apart, or just dump everything down in one go. That very much suits my approach to preparing something like a talk (or a book). The act of organising all the parts into a single narrative timeline is an important challenge, but it’s one that I like to defer to later. The first task is braindumping.

When I was preparing my talk for An Event Apart Online, I used Kinopio.club to get stuff out of my head. Here’s the initial brain dump. Here are the final slides. You can kind of see the general gist of the slidedeck in the initial brain dump, but I really like that I didn’t have to put anything into a sequential outline.

In some ways, Kinopio is like an anti-outlining tool. It’s scrappy and messy—which is exactly why it works so well for the early part of the process. If I use a tool that feels too high-fidelity too early on, I get a kind of impedance mismatch between the state of the project and the polish of the artifact.

I like that Kinopio feels quite personal. Unlike Google Docs or other more polished tools, the documents you make with this aren’t really for sharing. Still, I thought I’d share my scribblings anyway.

A polyfill for button type="share"

After writing about a declarative Web Share API here yesterday I thought I’d better share the idea (see what I did there?).

I opened an issue on the Github repo for the spec.

(I hope that’s the right place for this proposal. I know that in the past ideas were kicked around on the Discourse site for the Web Platform Incubator Community Group but I can’t stand Discourse. It literally requires JavaScript to render anything to the screen even though the entire content is text. If it turns out that that is the place I should’ve posted, I guess I’ll hold my nose and do it using the most over-engineered reinvention of the browser I’ve ever seen. But I believe that the plan is for WICG to migrate proposals to Github anyway.)

I also realised that, as the JavaScript Web Share API already exists, I can use it to polyfill my suggestion for:

<button type="share">

The polyfill also demonstrates how feature detection could work. Here’s the code.

This polyfill takes an Inception approach to feature detection. There are three nested levels:

  1. This browser supports button type="share". Great! Don’t do anything. Otherwise proceed to level two.
  2. This browser supports the JavaScript Web Share API. Use that API to share the current page URL and title. Otherwise proceed to level three.
  3. Use a mailto: link to prefill an email with the page title as the subject and the URL in the body. Ya basic!

The idea is that, as long as you include the 20 lines of polyfill code, you could start using button type="share" in your pages today.
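
The gist is the canonical version, but the general shape is something like this (a simplified sketch, not the gist verbatim):

// level one: if the browser supports button type="share", do nothing
var testButton = document.createElement('button');
testButton.setAttribute('type', 'share');
if (testButton.type != 'share') {
  document.addEventListener('click', function (event) {
    var button = event.target.closest('button[type="share"]');
    if (!button) {
      return;
    }
    event.preventDefault();
    var title = document.title;
    var url = document.location.href;
    if (navigator.share) {
      // level two: hand over to the JavaScript Web Share API
      navigator.share({title: title, url: url});
    } else {
      // level three: prefill an email with the page title and URL
      window.location.href = 'mailto:?subject=' + encodeURIComponent(title) + '&body=' + encodeURIComponent(url);
    }
  });
}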

I’ve made a test page on Codepen. I’m just using plain text in the button but you could use a nice image or SVG or a combination. You can use the Codepen test page to observe two of the three possible behaviours browsers could exhibit:

  1. A browser supports button type="share". Currently that’s none because I literally made this shit up yesterday.
  2. A browser supports the JavaScript Web Share API. This is Safari on Mac, Edge on Windows, Safari on iOS, and Chrome, Samsung Internet, and Firefox on Android.
  3. A browser supports neither button type="share" nor the existing JavaScript Web Share API. This is Firefox and Chrome on desktop (and Edge if you’re on a Mac).

See the Pen Polyfill for button type="share" by Jeremy Keith (@adactio) on CodePen.

The polyfill doesn’t support Internet Explorer 11 or lower because it uses the DOM closest() method. Feel free to fork and rewrite if you need to support old IE.

A declarative Web Share API

I’ve written about the Web Share API before. It’s a handy little bit of JavaScript that—if supported—brings up a system-level way of sharing a page. Seeing as it probably won’t be long before we won’t be able to see full URLs in browsers anymore, it’s going to fall on us as site owners to provide this kind of fundamental access.
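
Using it looks something like this (a minimal sketch; the class name on the button is made up):

var shareButton = document.querySelector('.share');
shareButton.addEventListener('click', function () {
  // feature detection: only invoke the API if it exists
  if (navigator.share) {
    // this has to happen in response to a user action, like this click
    navigator.share({
      title: document.title,
      url: document.location.href
    });
  }
});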

Right now the Web Share API exists entirely in JavaScript. There are quite a few browser APIs like that, and it always feels like a bit of a shame to me. Ideally there should be a JavaScript API and a declarative option, even if the declarative option isn’t as powerful.

Take form validation. To cover the most common use cases, you probably only need to use declarative markup like input type="email" or the required attribute. But if your use case gets a bit more complicated, you can reach for the Constraint Validation API in JavaScript.
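
For example, a custom rule might look like this (the rule itself is made up, just to show the shape of the API):

var field = document.querySelector('input[type="email"]');
field.addEventListener('input', function () {
  // an empty string means "valid"; any other string becomes the error message
  if (field.value.endsWith('@example.com')) {
    field.setCustomValidity('Please use a real email address.');
  } else {
    field.setCustomValidity('');
  }
});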

I like that pattern. I wish it were an option for JavaScript-only APIs. Take the Geolocation API, for example. Right now it’s only available through JavaScript. But what if there were an input type="geolocation"? It wouldn’t cover all use cases, but it wouldn’t have to. For the simple case of getting someone’s location (like getting someone’s email or telephone number), it would do. For anything more complex than that, that’s what the JavaScript API is for.

I was reminded of this recently when Ada Rose Cannon tweeted:

It really feels like there should be a semantic version of the share API, like a mailto: link

I think she’s absolutely right. The Web Share API has one primary use case: let the user share the current page. If that use case could be met in a declarative way, then it would have a lower barrier to entry. And for anyone who needs to do something more complicated, that’s what the JavaScript API is for.

But Pete LePage asked:

How would you feature detect for it?

Good question. With the JavaScript API you can do feature detection—if the API isn’t supported you can either bail or provide your own implementation.

There are a few different ways of extending HTML that allow you to provide a fallback for non-supporting browsers.

You could mint a new element with a content model that says “Browsers, if you do support this element, ignore everything inside the opening and closing tags.” That’s the way that the canvas element works. It’s the same for audio and video—they ignore everything inside them that isn’t a source element. So developers can provide a fallback within the opening and closing tags.

But forging a new element would be overkill for something like the Web Share API (or Geolocation). There are more subtle ways of extending HTML that I’ve already alluded to.

Making a new element is a lot of work. Making a new attribute could also be a lot of work. But making a new attribute value might hit the sweet spot. That’s why I suggested something like input type="geolocation" for the declarative version of the Geolocation API. There’s prior art here; this is how we got input types for email, url, tel, color, date, etc. The key piece of behaviour is that non-supporting browsers will treat any value they don’t understand as “text”.
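
That fallback behaviour is also what makes feature detection possible:

var testInput = document.createElement("input");
testInput.setAttribute("type", "date");
// in browsers without support, the property reverts to "text"
if (testInput.type != "date") {
  // provide a fallback date picker here
}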

I don’t think there should be input type="share". The action of sharing isn’t an input. But I do think we could find an existing HTML element with an attribute that currently accepts a list of possible values. Adding one more value to that list feels like an inexpensive move.

Here’s what I suggested:

<button type="share" value="title,text">

For non-supporting browsers, it’s a regular button and needs polyfilling, no different to the situation with the JavaScript API. But if supported, no JS needed?

The type attribute of the button element currently accepts three possible values: “submit”, “reset”, or “button”. If you give it any other value, it will behave as though you gave it a type of “submit” or “button” (depending on whether it’s in a form or not) …just like an unknown type value on an input element will behave like “text”.

If a browser supports button type="share", then when the user clicks on it, the browser can go “Right, I’m handing over to the operating system now.”

There’s still the question of how to pass data to the operating system on what gets shared. Currently the JavaScript API allows you to share any combination of URL, text, and title.

Initially I was thinking that the value attribute could be used to store this data in some kind of key/value pairing, but the more I think about it, the more I think that this aspect should remain the exclusive domain of the JavaScript API. The declarative version could grab the current URL and the value of the page’s title element and pass those along to the operating system. If you need anything more complex than that, use the JavaScript API.

So what I’m proposing is:

<button type="share">

That’s it.

But how would you test for browser support? The same way as you can currently test for supported input types. Make use of the fact that an element’s attribute value and an element’s property value (which 99% of the time are the same) will be different if the attribute value isn’t supported:

var testButton = document.createElement("button");
testButton.setAttribute("type","share");
if (testButton.type != "share") {
// polyfill
}

So that’s my modest proposal. Extend the list of possible values for the type attribute on the button element to include “share” (or something like that). In supporting browsers, it triggers a very bare-bones handover to the OS (the current URL and the current page title). In non-supporting browsers, it behaves like a button currently behaves.

T E N Ǝ T

Jessica and I went to the cinema yesterday.

Normally this wouldn’t be a big deal, but in our current circumstances, it was something of a momentous decision that involved a lot of risk assessment and weighing of the odds. We’ve been out and about a few times, but always to outdoor locations: the beach, a park, or a pub’s beer garden. For the first time, we were evaluating whether or not to enter an indoor environment, which given what we now know about the transmission of COVID-19, is certainly riskier than being outdoors.

But this was a cinema, so in theory, nobody should be talking (or singing or shouting), and everyone would be wearing masks and keeping their distance. Time was also on our side. We were considering a Monday afternoon showing—definitely not primetime. Looking at the website for the (wonderful) Duke of York’s cinema, we could see which seats were already taken. Less than an hour before the start time for the film, there were just a handful of seats occupied. A cinema that can seat a triple-digit number of people was going to be seating a single-digit number of viewers.

We got tickets for the front row. Personally, I love sitting in the front row, especially in the Duke of York’s where there’s still plenty of room between the front row and the screen. But I know that it’s generally considered an undesirable spot by most people. Sure enough, the closest people to us were many rows back. Everyone was wearing masks and we kept them on for the duration of the film.

The film was Tenet. We weren’t about to enter an enclosed space for just any ol’ film. It would have to be pretty special—a new Star Wars film, or Denis Villeneuve’s Dune …or a new Christopher Nolan film. We knew it would look good on the big screen. We also knew it was likely to be spoiled for us if we didn’t see it soon enough.

At this point I am sounding the spoiler horn. If you have not seen Tenet yet, abandon ship at this point.

I really enjoyed this film. I understand the criticism that has been levelled at it—too cold, too clinical, too confusing—but I still enjoyed it immensely. I do think you need to be able to enjoy feeling confused if this is going to be a pleasurable experience. The payoff is that there’s an equally enjoyable feeling when things start slotting into place.

The closest film in Christopher Nolan’s back catalogue to Tenet is Inception in terms of twistiness and what it asks of the audience. But in some ways, Tenet is like an inverted version of Inception. In Inception, the ideas and the plot are genuinely complex, but Nolan does a great job in making them understandable—quite a feat! In Tenet, the central conceit and even the overall plot is, in hindsight, relatively straightforward. But Nolan has made it seem more twisty and convoluted than it really is. The ten minute battle at the end, for example, is filled with hard-to-follow twists and turns, but in actuality, it literally doesn’t matter.

The pitch for the mood of this film is that it’s in the spy genre, in the same way that Inception is in the heist genre. Though there’s an argument to be made that Tenet is more of a heist movie than Inception. But in terms of tone, yeah, it’s going for James Bond.

Even at the very end of the credits, when the title of the film rolled into view, it reminded me of the Bond films that would tease “The end of (this film). But James Bond will return in (next film).” Wouldn’t it have been wonderful if the very end of Tenet’s credits finished with “The end of Tenet. But the protagonist will return in …Tenet.”

The pleasure I got from Tenet was not the same kind of pleasure I get from watching a Bond film, which is a simpler, more basic kind of enjoyment. The pleasure I got from Tenet was more like the kind of enjoyment I get from reading smart sci-fi, the kind that posits a “what if?” scenario and isn’t afraid to push your mind in all kinds of uncomfortable directions to contemplate the ramifications.

Like I said, the central conceit—objects or people travelling backwards through time (from our perspective)—isn’t actually all that complex, but the fun comes from all the compounding knock-on effects that build on that one premise.

In the film, and in interviews about the film, everyone is at pains to point out that this isn’t time travel. But that’s not true. In fact, I would argue that Tenet is one of the few examples of genuine time travel. What I mean is that most so-called time-travel stories are actually more like time teleportation. People jump from one place in time to another instantaneously. There are only a few examples I can think of where people genuinely travel.

The grandaddy of all time travel stories, The Time Machine by H.G. Wells, is one example. There are vivid descriptions of the world outside the machine playing out in fast-forward. But even here, there’s an implication that from outside the machine, the world cannot perceive the time machine (which would, from that perspective, look slowed down to the point of seeming completely still).

The most internally-consistent time-travel story is Primer. I suspect that the Venn diagram of people who didn’t like Tenet and people who wouldn’t like Primer is a circle. Again, it’s a film where the enjoyment comes from feeling confused, but where your attention will be rewarded and your intelligence won’t be insulted.

In Primer, the protagonists literally travel in time. If you want to go five hours into the past, you have to spend five hours in the box (the time machine).

In Tenet, the time machine is a turnstile. If you want to travel five hours into the past, you need only enter the turnstile for a moment, but then you have to spend the next five hours travelling backwards (which, from your perspective, looks like being in a world where cause and effect are reversed). After five hours, you go in and out of a turnstile again, and voila!—you’ve time travelled five hours into the past.

Crucially, if you decide to travel five hours into the past, then you have always done so. And in the five hours prior to your decision, a version of you (apparently moving backwards) would be visible to the world. There is never a version of events where you aren’t travelling backwards in time. There is no “first loop”.

That brings us to the fundamental split in categories of time travel (or time jump) stories: many worlds vs. single timeline.

In a many-worlds story, the past can be changed. Well, technically, you spawn a different universe in which events unfold differently, but from your perspective, the effect would be as though you had altered the past.

The best example of the many-worlds category in recent years is William Gibson’s The Peripheral. It genuinely reinvents the genre of time travel. First of all, no thing travels through time. In The Peripheral only information can time travel. But given telepresence technology, that’s enough. The Peripheral is time travel for the remote worker (once again, William Gibson proves to be eerily prescient). But the moment that any information travels backwards in time, the timeline splits into a new “stub”. So the many-worlds nature of its reality is front and centre. But that doesn’t stop the characters engaging in classic time travel behaviour—using knowledge of the future to exert control over the past.

Time travel stories are always played with a stacked deck of information. The future has power over the past because of the asymmetric nature of information distribution—there’s more information in the future than in the past. Whether it’s through sports results, the stock market or technological expertise, the future can exploit the past.

Information is at the heart of the power games in Tenet too, but there’s a twist. The repeated mantra here is “ignorance is ammunition.” That flies in the face of most time travel stories where knowledge—information from the future—is vital to winning the game.

It turns out that information from the future is vital to winning the game in Tenet too, but the reason why ignorance is ammunition comes down to the fact that Tenet is not a many-worlds story. It is very much a single timeline.

Having a single timeline makes for time travel stories that are like Greek tragedies. You can try travelling into the past to change the present but in doing so you will instead cause the very thing you set out to prevent.

The meat’n’bones of a single timeline time travel story—and this is at the heart of Tenet—is the question of free will.

The most succinct (and disturbing) single-timeline time-travel story that I’ve read is by Ted Chiang in his recent book Exhalation. It’s called What’s Expected Of Us. It was originally published as a single page in Nature magazine. In that single page is a distillation of the metaphysical crisis that even a limited amount of time travel would unleash in a single-timeline world…

There’s a box, the Predictor. It’s very basic, like Claude Shannon’s Ultimate Machine. It has a button and a light. The button activates the light. But this machine, like an inverted object in Tenet, is moving through time differently to us. In this case, it’s very specific and localised. The machine is just a few seconds in the future relative to us. Cause and effect seem to be reversed. With a normal machine, you press the button and then the light flashes. But with the predictor, the light flashes and then you press the button. You can try to fool it but you won’t succeed. If the light flashes, you will press the button no matter how much you tell yourself that you won’t (likewise if you try to press the button before the light flashes, you won’t succeed). That’s it. In one succinct experiment with time, it is demonstrated that free will doesn’t exist.

Tenet has a similarly simple object to explain inversion. It’s a bullet. In an exposition scene we’re shown how it travels backwards in time. The protagonist holds his hand above the bullet, expecting it to jump into his hand as has just been demonstrated to him. He is told “you have to drop it.” He makes the decision to “drop” the bullet …and the bullet flies up into his hand.

This is a brilliant bit of sleight of hand (if you’ll excuse the choice of words) on Nolan’s part. It seems to imply that free will really matters. Only by deciding to “drop” the bullet does the bullet then fly upward. But here’s the thing: the protagonist had no choice but to decide to drop the bullet. We know that he had no choice because the bullet flew up into his hand. The bullet was always going to fly up into his hand. There is no timeline where the bullet doesn’t fly up into his hand, which means there is no timeline where the protagonist doesn’t decide to “drop” the bullet. The decision is real, but it is inevitable.

The lesson in this scene is the exact opposite of what it appears. It appears to show that agency and decision-making matter. The opposite is true. Free will cannot, in any meaningful sense, exist in this world.

This means that there was never really any threat. People from the future cannot change the past (or wipe it out) because it would’ve happened already. At one point, the protagonist voices this conjecture. “Doesn’t the fact that we’re here now mean that they don’t succeed?” Neil deflects the question, not because of uncertainty (we realise later) but because of certainty. It’s absolutely true that the people in the future can’t succeed because they haven’t succeeded. But the protagonist—at this point in the story—isn’t ready to truly internalise this. He needs to still believe that he is acting with free will. As that Ted Chiang story puts it:

It’s essential that you behave as if your decisions matter, even though you know that they don’t.

That’s true for the audience watching the film. If we were to understand too early that everything will work out fine, then there would be no tension in the film.

As ever with Nolan’s films, they are themselves metaphors for films. The first time you watch Tenet, ignorance is your ammunition. You believe there is a threat. By the end of the film you have more information. Now if you re-watch the film, you will experience it differently, armed with your prior knowledge. But the film itself hasn’t changed. It’s the same linear flow of sequential scenes being projected. Everything plays out exactly the same. It’s you who have been changed. The first time you watch the film, you are like the protagonist at the start of the movie. The second time you watch it, you are like the protagonist at the end of the movie. You see the bigger picture. You understand the inevitability.

The character of Neil has had more time to come to terms with a universe without free will. What the protagonist begins to understand at the end of the film is what Neil has known for a while. He has seen this film. He knows how it ends. It ends with his death. He knows that it must end that way. At the end of the film we see him go to meet his death. Does he make the decision to do this? Yes …but he was always going to make the decision to do this. Just as the protagonist was always going to decide to “drop” the bullet, Neil was always going to decide to go to his death. It looks like a choice. But Neil understands at this point that the choice is pre-ordained. He will go to his death because he has gone to his death.

At the end, the protagonist—and the audience—understands. Everything played out exactly as it had to. The people in the future were hoping that reality allowed for many worlds, where the past could be changed. Luckily for us, reality turns out to be a single timeline. But the price we pay is that we come to understand, truly understand, that we have no free will. This is the kind of knowledge we wish we didn’t have. Ignorance was our ammunition and by the end of the film, it is spent.

Nolan has one other piece of misdirection up his sleeve. He implies that the central question at the heart of this time-travel story is the grandfather paradox. Our descendants in the future are literally trying to kill their grandparents (us). But if they succeed, then they can never come into existence.

But that’s not the paradox that plays out in Tenet. The central paradox is the bootstrap paradox, named for the Heinlein short story, By His Bootstraps. Information in this film is transmitted forwards and backwards through time, without ever being created. Take the phrase “Tenet”. In subjective time, the protagonist first hears of this phrase—and this organisation—when he is at the start of his journey. But the people who tell him this received the information via a subjectively older version of the protagonist who has travelled to the past. The protagonist starts the Tenet organisation (and phrase) in the future because the organisation (and phrase) existed in the past. So where did the phrase come from?

This paradox—the bootstrap paradox—remains after the grandfather paradox has been dealt with. The grandfather paradox was a distraction. The bootstrap paradox can’t be resolved, no matter how many times you watch the same film.

So Tenet has three instances of misdirection in its narrative:

  • Inversion isn’t time travel (it absolutely is).
  • Decisions matter (they don’t; there is no free will).
  • The grandfather paradox is the central question (it’s not; the bootstrap paradox is the central question).

I’m looking forward to seeing Tenet again. Though it can never be the same as that first time. Ignorance can never again be my ammunition.

I’m very glad that Jessica and I decided to go to the cinema to see Tenet. But who am I kidding? Did we ever really have a choice?

Submitting a form with datalist

I’m a big fan of HTML5’s datalist element and its elegant design. It’s a way to progressively enhance any input element into a combobox.

You use the list attribute on the input element to point to the ID of the associated datalist element.

<label for="homeworld">Your home planet</label>
<input type="text" name="homeworld" id="homeworld" list="planets">
<datalist id="planets">
 <option value="Mercury">
 <option value="Venus">
 <option value="Earth">
 <option value="Mars">
 <option value="Jupiter">
 <option value="Saturn">
 <option value="Uranus">
 <option value="Neptune">
</datalist>

It even works on input type="color", which is pretty cool!

The most common use case is as an autocomplete widget. That’s how I’m using it over on The Session, where the datalist is updated via Ajax every time the input is updated.
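
That pattern boils down to listening for input and rebuilding the options. Here’s a rough sketch—the element IDs and the endpoint are made up, not the actual code from The Session:

var input = document.getElementById('q');
var datalist = document.getElementById('suggestions');
input.addEventListener('input', function () {
  // fetch fresh suggestions for the current value
  fetch('/suggest?q=' + encodeURIComponent(input.value))
  .then(function (response) {
    return response.json();
  })
  .then(function (results) {
    // assuming the response is a JSON array of strings,
    // swap out the old options for the new ones
    datalist.innerHTML = '';
    results.forEach(function (result) {
      var option = document.createElement('option');
      option.value = result;
      datalist.appendChild(option);
    });
  });
});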

But let’s stick with a simple example, like the list of planets above. Suppose the user types “jup” …the datalist will show “Jupiter” as an option. The user can click on that option to automatically complete their input.

It would be handy if you could automatically submit the form when the user chooses a datalist option like this.

Well, tough luck.

The datalist element emits no events. There’s no way of telling if it has been clicked. This is something I’ve been trying to find a workaround for.

I got my hopes up when I read Amber’s excellent article about document.activeElement. But no, the focus stays on the input when the user clicks on an option in a datalist.

So if I can’t detect whether a datalist has been used, the best I can do is try to infer it. I know it’s not exactly the same thing, and it won’t be as reliable as true detection, but here’s my logic:

  • Keep track of the character count in the input element.
  • Every time the input is updated in any way, check the current character count against the last character count.
  • If the difference is greater than one, something interesting happened! Maybe the user pasted a value in …or maybe they used the datalist.
  • Loop through each of the options in the datalist.
  • If there’s an exact match with the current value of the input element, chances are the user chose that option from the datalist.
  • So submit the form!

Here’s how that translates into DOM scripting code:

document.querySelectorAll('input[list]').forEach( function (formfield) {
  var datalist = document.getElementById(formfield.getAttribute('list'));
  var lastlength = formfield.value.length;
  var checkInputValue = function (inputValue) {
    if (inputValue.length - lastlength > 1) {
      datalist.querySelectorAll('option').forEach( function (item) {
        if (item.value === inputValue) {
          formfield.form.submit();
        }
      });
    }
    lastlength = inputValue.length;
  };
  formfield.addEventListener('input', function () {
    checkInputValue(this.value);
  }, false);
});

I’ve made a gist with some added feature detection and mustard-cutting at the start. You should be able to drop it into just about any page that’s using datalist. It works even if the options in the datalist are dynamically updated, like the example on The Session.
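
The mustard-cutting is just an opening check so that non-supporting browsers bail out early. Something along these lines—a generic sketch; the actual checks in the gist may differ:

// only enhance in browsers that support everything the script needs
if ('querySelectorAll' in document && 'forEach' in NodeList.prototype) {
  // ...the code above goes here
}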

It’s not foolproof. The inference relies on the difference between what was previously typed and what’s autocompleted being more than one character. So in the planets example, if someone has typed “Jupite” and then they choose “Jupiter” from the datalist, the form won’t automatically submit.

But still, I reckon it covers most common use cases. And like the datalist element itself, you can consider this functionality a progressive enhancement.

Web on the beach

It was very hot here in England last week. By late afternoon, the stuffiness indoors was too much to take.

If you can’t stand the heat, get out of the kitchen. That’s exactly what Jessica and I did. The time had come for us to avail of someone else’s kitchen. For the first time in many months, we ventured out for an evening meal. We could take advantage of the government discount scheme with the very unfortunate slogan, “eat out to help out.” (I can’t believe that no one in that meeting said something.)

Just to be clear, we wanted to dine outdoors. The numbers are looking good in Brighton right now, but we’re both still very cautious about venturing into indoor spaces, given everything we know now about COVID-19 transmission.

Fortunately for us, there’s a new spot on the seafront called Shelter Hall Raw. It’s a collective of multiple local food outlets and it has ample outdoor seating.

We found a nice table for two outside. Then we didn’t flag down a waiter.

Instead, we followed the instructions on the table. I say instructions, but it was a bit simpler than that. It was a URL: shelterhall.co.uk (there was also a QR code next to the URL that I could’ve just pointed my camera at, but I’ve developed such a case of QR code blindness that I blanked that out initially).

Just to be clear, under the current circumstances, this is the only way to place an order at this establishment. The only (brief) interaction you’ll have with another person is when someone brings your order.

It worked a treat.

We had frosty beverages chosen from the excellent selection of local beers. We also had fried chicken sandwiches from Lost Boys chicken, purveyors of the best wings in town.

The whole experience was a testament to what the web can do. You browse the website. You make your choice on the website. You pay on the website (you can create an account but you don’t have to).

Thinking about it, I can see why they chose the web over a native app. Online ordering is the only way to place your order at this place. Telling people “You have to go to this website” …that seems reasonable. But telling people “You have to download this app” …that’s too much friction.

It hasn’t been a great week for the web. Layoffs at Mozilla. Google taking aim at URLs. It felt good to experience an instance of the web really shining.

And it felt really good to have that cold beer.

Checked in at Shelter Hall Raw. Having a beer on the beach — with Jessica

Design Principles For The Web—the links

I’m speaking today at an online edition of An Event Apart called Front-End Focus. I’ll be opening up the show with a talk called Design Principles For The Web, which ironically doesn’t have much of a front-end focus:

Designing and developing on the web can feel like a never-ending crusade against the unknown. Design principles are one way of unifying your team to better fight this battle. But as well as the design principles specific to your product or service, there are core principles underpinning the very fabric of the World Wide Web itself. Together, we’ll dive into applying these design principles to build websites that are resilient, performant, accessible, and beautiful.

That explains why I’ve been writing so much about design principles …well, that and the fact that I’m mildly obsessed with them.

To avoid technical difficulties, I’ve pre-recorded the talk. So while that’s playing, I’ll be spamming the accompanying chat window with related links. Then I’ll do a live Q&A.

Should you be interested in the links that I’ll be bombarding the attendees with, I’ve gathered them here in one place (and they’re also on the website of An Event Apart). The narrative structure of the talk might not be clear from scanning down a list of links, but there’s some good stuff here that you can dive into if you want to know what the inside of my head is like.


Mind the gap

In May 2012, Brian LeRoux, the creator of PhoneGap, wrote a post setting out the beliefs, goals and philosophy of the project.

The beliefs are the assumptions that inform everything else. Brian stated two core tenets:

  1. The web solved cross platform.
  2. All technology deprecates with time.

That second belief then informed one of the goals of the PhoneGap project:

The ultimate purpose of PhoneGap is to cease to exist.

Last week, PhoneGap succeeded in its goal:

Since the project’s beginning in 2008, the market has evolved and Progressive Web Apps (PWAs) now bring the power of native apps to web applications.

Today, we are announcing the end of development for PhoneGap.

I think Brian was spot-on with his belief that all technology deprecates with time. I also think it was very astute of him to tie the goals of PhoneGap to that belief. Heck, it’s even in the project name: PhoneGap!

I recently wrote this about Sass and clamp:

I’ve said it before and I’ll say it again, the goal of any good library should be to get so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.

jQuery is the perfect example of this. jQuery is no longer needed because cross-browser DOM Scripting is now much easier …thanks to jQuery.

Successful libraries and frameworks point the way. They show what developers are yearning for, and that’s where web standards efforts can then focus. When a library or framework is no longer needed, that’s not something to mourn; it’s something to celebrate.

That’s particularly true if the library of code needs to be run by a web browser. The user pays a tax with that extra download so that the developer gets the benefit of the library. When web browsers no longer need the library in order to provide the same functionality, it’s a win for users.

In fact, if you’re providing a front-end library or framework, I believe you should be actively working towards making it obsolete. Think of your project as a polyfill. If it’s solving a genuine need, then you should be looking forward to the day when your code is made redundant by web browsers.

One more thing…

I think it was great that Brian documented PhoneGap’s beliefs, goals and philosophy. This is exactly why design principles can be so useful—to clearly set out the priorities of a project, so that there’s no misunderstanding or mixed signals.

If you’re working on a project, take the time to ask yourself what assumptions and beliefs are underpinning the work. Then figure out how those beliefs influence what you prioritise.

Ultimately, the code you produce is the output generated by your priorities. And your priorities are driven by your purpose.

You can make those priorities tangible in the form of design principles.

You can make those design principles visible by publishing them.

Netlify redirects and downloads

Making the Clearleft podcast is a lot of fun. Making the website for the Clearleft podcast was also fun.

Design wise, it’s a riff on the main Clearleft site in terms of typography and general layout. On the development side, it was an opportunity to try out an exciting tech stack. The workflow goes something like this:

  • Open a text editor and type out HTML and CSS.

Comparing this to other workflows I’ve used in the past, this is definitely the most productive way of working. Some stats:

  • Time spent setting up build tools: 00:00
  • Time spent wrangling the pipeline to do exactly what you want: 00:00
  • Time spent trying to get the damn build tools to work again when you return to the project after leaving it alone for more than a few months: 00:00:00

I have some files. Some images, three font files, a few pages of HTML, one RSS feed, one style sheet, and one minimal service worker script. I don’t need a web server to do anything more than serve up those files. No need for any dynamic server-side processing.

I guess this is JAMstack. Though, given that the J stands for JavaScript, the A stands for APIs, and I’m not using either, technically it’s Mstack.

Netlify suits my hosting needs nicely. It also provides the added benefit that, should I need to update my CSS, I don’t need to add a query string or anything to the link elements in the HTML that point to the style sheet: Netlify does cache invalidation for you!

The mp3 files of the actual podcast episodes are stored on S3. I link to those mp3 files from enclosure elements in the RSS feed, which is what makes it a podcast. I also point to the mp3 files from audio elements on the individual episode pages—just above the transcript of each episode. Here’s the page for the most recent episode.

I also want people to be able to download the mp3 file directly if they want (or if they want to huffduff an episode). So I provide a link to the mp3 file with a good ol’-fashioned a element with an href attribute.

I throw in one more attribute on that link. The download attribute tells the browser that the URL in the href attribute should be downloaded instead of visited. If you give a value for the download attribute, it will override the file name:

<a href="/files/ugly-file-name.xyz" download="nice-file-name.xyz">download</a>

Or you can use it as a Boolean attribute without any value if you’re happy with the file name:

<a href="/files/nice-file-name.xyz" download>download</a>

There’s one catch though. The download attribute only works for files on the same origin. That’s an issue for me. My site is podcast.clearleft.com but my audio files are hosted on clearleft-audio.s3.amazonaws.com—the download attribute will be ignored and the mp3 files will play in the browser instead of downloading.

Trys pointed me to the solution. It turns out that Netlify can do some server-side processing. It can do redirects.

I added a file called _redirects to the root of my project. It contains one line:

/download/*  https://clearleft-audio.s3.amazonaws.com/podcast/:splat  200

That says that any URLs beginning with /download/ should redirect to clearleft-audio.s3.amazonaws.com/podcast/. Everything after the closing slash is captured with that wild card asterisk. That’s then passed along to the redirect URL as :splat. That’s a new one on me. I hadn’t come across that terminology, but as someone who can never remember the syntax of regular expressions, it works for me.

Oh, and the 200 at the end is the status code: okay.

Now I can use this /download/ path in my link:

<a href="/download/season01episode06.mp3" download>Download mp3</a>

Because this URL is on the same origin, the download attribute works just fine.

Season one of the Clearleft podcast

The Clearleft Podcast has finished its inaugural season.

I have to say, I’m pretty darned pleased with the results. It was equal parts fun and hard work.

Episode One

Design Systems. This was a deliberately brief episode that just skims the surface of all that design systems have to offer. It is almost certainly a theme that I’ll revisit in a later episode, or even a whole season.

The main goal of this episode was to get some answers to the questions:

  1. What is a design system exactly? and
  2. What’s a design system good for?

I’m not sure if I got answers or just more questions, but that’s no bad thing.

Episode Two

Service Design. This is the classic topic for this season—an investigation into a phrase that you’ve almost certainly heard of, but might not understand completely. Or maybe that’s just me. In any case, I think that coming at this topic from a viewpoint of relative ignorance is quite a benefit: I have no fear of looking stupid for asking basic questions.

Episode Three

Wildlife Photographer Of The Year. A case study. This one was a lot of fun to put together.

It also really drove home just how talented and hard-working my colleagues at Clearleft are. I just kept thinking, “Damn! This is some great work!”

Episode Four

Design Ops. Again, a classic example of me asking the dumb questions. What is this “design ops” thing I’ve heard of? Where’d it come from?

My favourite bit of feedback was “Thanks to the podcast, I now know what DesignOps is. I now also hate DesignOps.”

I couldn’t resist upping the ante into a bit of a meta-discussion about whether we benefit or not from the introduction of new phrases like this into our work.

Episode Five

Design Maturity. This could’ve been quite a dry topic but I think that Aarron made it really engaging. Maybe the samples from Blade Runner and Thunderbirds helped too.

This episode finished with a call to action …with the wrong URL. Doh! It should’ve been surveymonkey.co.uk/r/designmaturity

Episode Six

Design Sprints. I like how the structure of this one turned out. I felt like we tackled quite a few angles in less than 25 minutes.

That’s a good one to wrap up this season, I reckon.

If you’re interested in the behind-the-scenes work that went into each episode, I’ve been blogging about each one:

  1. Design Systems
  2. Service Design
  3. Wildlife Photographer Of The Year
  4. Design Ops
  5. Design Maturity
  6. Design Sprints

I’m already excited about doing a second season …though I’m going to enjoy a little break from podcasting for a little bit.

As I say at the end of most episodes, if you’ve got any feedback to offer on the podcast, send me an email at jeremy@clearleft.com

And if you’ve enjoyed the Clearleft podcast—or a particular episode—please share it far and wide.

BetrayURL

Back in February, I wrote about an excellent proposal by Jake for how browsers could display URLs in a safer way. Crucially, this involved highlighting the important part of the URL, but didn’t involve hiding any part. It’s a really elegant solution.

Turns out it was a Trojan horse. Chrome are now running an experiment where they will do the exact opposite: they will hide parts of the URL instead of highlighting the important part.

You can change this behaviour if you’re in the less than 1% of people who ever change default settings in browsers.

I’m really disappointed to see that Jake’s proposal isn’t going to be implemented. It was a much, much better solution.

No doubt I will hear rejoinders that the “solution” that Chrome is experimenting with is pretty similar to what Jake proposed. Nothing could be further from the truth. Jake’s solution empowered users with knowledge without taking anything away. What Chrome will be doing is the opposite of that, infantilising users and making decisions for them “for their own good.”

Seeing a complete URL is going to become a power-user feature, like View Source or user style sheets.

I’m really sad about that because, as Jake’s proposal demonstrates, it doesn’t have to be that way.

Design sprints on the Clearleft podcast

The sixth episode of the Clearleft podcast is now live: design sprints!

It comes in at just under 24 minutes, which feels just about right to me. Once again, it’s a dive into one topic that asks “What is this?”, “What does this mean?”, and “Where did this come from?”

I could’ve invited just about any of the practitioners at Clearleft to join me on this one, but I settled on Chris, who’s always erudite and sharp.

I also asked ex-Clearleftie Jerlyn to have a chat. You’ll notice that’s been a bit of a theme on the Clearleft podcast; asking people who used to work at Clearleft to share their thoughts. I’d quite like to do at least an episode—maybe even a whole season—featuring ex-Clearlefties exclusively. So many great people have worked at the agency over the years, Jerlyn being a prime example.

I’d also like to do an episode some time with the regular contractors we’ve worked with at Clearleft. On this episode, I asked the super-smart Tom Prior to join me.

I recorded those three chats over the past couple of weeks. And it was kind of funny how there was, of course, a looming presence over the topic of design sprints: Jake Knapp. I had sent him an email too but I got an auto-responder saying that he was super busy and would take a while to respond. So I kind of mentally wrote it off.

I spent last week assembling and editing the podcast with the excellent contributions from Jerlyn, Chris, and Tom. But it did feel a bit like Waiting For Godot the way that Jake’s book was being constantly referenced.

Then, on the weekend, Godot showed up.

Jake said he’d have time for a chat on Wednesday. Aargh! That’s the release date for the podcast! I don’t suppose Monday would work?

Very graciously, Jake agreed to a Monday chat (at an ungodly early hour in his time zone). I got an excellent half hour of material straight from the horse’s mouth—a very excitable and fast-talking horse, too.

That left me with just a day to work the material into the episode! I felt like a journalist banging on the keyboard at midnight, ready to run into the printing room shouting “Stop the press!” …although I’m sure the truth is that nobody but me would notice if an episode were released a little late.

Anyway, it all got done in the end and I think it turned out pretty great!

Have a listen for yourself and see what you make of it.

This was the final episode of the first season. I’ll now take a little break from podcasting as I plot and plan for the next season. Watch this space! …and, y’know, subscribe to the podcast.

Influence

Hidde gave a great talk recently called On the origin of cascades (by means of natural selectors):

It’s been 25 years since the first people proposed a language to style the web. Since the late nineties, CSS lived through years of platform evolution.

It’s a lovely history lesson that reminded me of that great post by Zach Bloom a while back called The Languages Which Almost Became CSS.

The TL;DR timeline of CSS goes something like this: Håkon and Bert joined forces, and that’s what led to the Cascading Style Sheet language we use today.

Hidde looks at how the concept of the cascade evolved from those early days. But there’s another idea in Håkon’s proposal that fascinates me:

While the author (or publisher) often wants to give the documents a distinct look and feel, the user will set preferences to make all documents appear more similar. Designing a style sheet notation that fill both groups’ needs is a challenge.

The proposed solution is referred to as “influence”.

The user supplies the initial sheet which may request total control of the presentation, but — more likely — hands most of the influence over to the style sheets referenced in the incoming document.

So an author could try demanding that their lovely styles are to be implemented without question by specifying an influence of 100%. The proposed syntax looked like this:

h1.font.size = 24pt 100%

More reasonably, the author could specify, say, 40% influence:

h2.font.size = 20pt 40%

Here, the requested influence is reduced to 40%. If a style sheet later in the cascade also requests influence over h2.font.size, up to 60% can be granted. When the document is rendered, a weighted average of the two requests is calculated, and the final font size is determined.
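To make that concrete, here’s a worked example with numbers of my own invention (they’re not from the proposal). Say the author’s sheet requests 20pt with 40% influence, and the user’s sheet requests 16pt, picking up the remaining 60%:

h2.font.size = 20pt 40%
h2.font.size = 16pt 60%

weighted average = (0.4 × 20pt) + (0.6 × 16pt) = 8pt + 9.6pt = 17.6pt

So the rendered h2 would end up at 17.6pt: neither party gets exactly what they asked for, but both have a say.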

Okay, that sounds pretty convoluted but then again, so is specificity.

This idea of influence in CSS reminds me of Cap’s post about The Sliding Scale of Giving a Fuck:

Hold on a second. I’m like a two-out-of-ten on this. How strongly do you feel?

I’m probably a six-out-of-ten, I replied after a couple moments of consideration.

Cool, then let’s do it your way.

In the end, the concept of influence in CSS died out, but user style sheets survived …for a while. Now they too are as dead as a dodo. Most people today aren’t aware that browsers used to provide a mechanism for applying your own visual preferences for browsing the web (kind of like Neopets or MySpace but for literally every single web page …just think of how empowering that was!).
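If you never had the pleasure, a user style sheet was simply a CSS file that your browser applied to every page you visited. A minimal sketch (the values here are my own) might have looked like this:

/* user.css: applied by the browser on the user’s behalf */
body {
  font-size: 1.2em !important;
  font-family: Georgia, serif !important;
  background-color: #fffff8 !important;
}

And in the cascade, a user’s !important declarations outrank even the author’s, so this was a genuine veto, not a polite request.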

Even if you don’t mourn the death of user style sheets—you can dismiss them as a power-user feature—I think it’s such a shame that the concept of shared influence has fallen by the wayside. Web design today is dictatorial. Designers and developers issue their ultimata in the form of CSS, even though technically every line of CSS you write is a suggestion to a web browser—not a demand.

I wish that web design were more of a two-way street, more of a conversation between designer and end user.

There are occasional glimpses of this mindset. Like I said when I added a dark mode to my website:

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.
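For reference, honouring that preference in CSS looks something like this. It’s a minimal sketch, with custom property names of my own choosing:

:root {
  --background: #fff;
  --text: #333;
}

/* the user’s OS-level preference, surfaced to CSS */
@media (prefers-color-scheme: dark) {
  :root {
    --background: #111;
    --text: #eee;
  }
}

body {
  background-color: var(--background);
  color: var(--text);
}

The user flips a switch in their operating system, and the style sheet defers. A small conversation, but a conversation nonetheless.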

Dream speak

I had a double-whammy of a stress dream during the week.

I dreamt I was at a conference where I was supposed to be speaking, but I wasn’t prepared, and I wasn’t where I was supposed to be when I was supposed to be there. Worse, my band were supposed to be playing a gig on the other side of town at the same time. Not only was I panicking about getting myself and my musical equipment to the venue on time, I was also freaking out because I couldn’t remember any of the songs.

You don’t have to be Sigmund freaking Freud to figure out the meanings behind these kinds of dreams. But usually a stress dream like that is triggered by some upcoming event like, say, oh, I don’t know, speaking at a conference or playing a gig.

I felt really resentful when I woke up from this dream in a panic in the middle of the night. Instead of being a topical nightmare, I basically had the equivalent of one of those dreams where you’re back at school and it’s the day of the exam and you haven’t prepared. But! When, as an adult, you awake from that dream, you have that glorious moment of remembering “Wait! I’m not in school anymore! Hallelujah!” Whereas with my double-booked stress dream, I got all the stress of the nightmare, plus the waking realisation that “Ah, shit. There are no more conferences. Or gigs.”

I miss them.

Mind you, there is talk of re-entering the practice room at some point in the near future. Playing gigs is still a long way off, but at least I could play music with other people.

Actually, I got to play music with other people this weekend. The music wasn’t Salter Cane, it was traditional Irish music. We gathered in a park, and played together while still keeping our distance. Jessica has written about it in her latest journal entry:

It wasn’t quite a session, but it was the next best thing, and it was certainly the best we’re going to get for some time. And next week, weather permitting, we’ll go back and do it again. The cautious return of something vaguely resembling “normality”, buoying us through the hot days of a very strange summer.

No chance of travelling to speak at a conference though. On the plus side, my carbon footprint has never been lighter.

Online conferences continue. They’re not the same, but they can still be really worthwhile in their own way.

I’ll be speaking at An Event Apart: Front-end Focus on Monday, August 17th (and I’m very excited to see Ire’s talk). I’ll be banging on about design principles for the web:

Designing and developing on the web can feel like a never-ending crusade against the unknown. Design principles are one way of unifying your team to better fight this battle. But as well as the design principles specific to your product or service, there are core principles underpinning the very fabric of the World Wide Web itself. Together, we’ll dive into applying these design principles to build websites that are resilient, performant, accessible, and beautiful.

Tickets are $350 but I can get you a discount. Use the code AEAJER to get $50 off.

I wonder if I’ll have online-appropriate stress dreams in the next week? “My internet is down!”, “I got the date and time wrong!”, “I’m not wearing any trousers!”

Actually, that’s pretty much just my waking life these days.

Connections

Fourteen years ago, I gave a talk at the Reboot conference in Copenhagen. It was called In Praise of the Hyperlink. For the most part, it was a gushing love letter to hypertext, but it also included this observation:

For a conspiracy theorist, there can be no better tool than a piece of technology that allows you to arbitrarily connect information. That tool now exists. It’s called the World Wide Web and it was invented by Sir Tim Berners-Lee.

You know those “crazy walls” that are such a common trope in TV shows and movies? The detectives enter the lair of the unhinged villain and discover an overwhelming wall that’s like looking at the inside of that person’s head. It’s not the stuff itself that’s unnerving; it’s the red thread that connects the stuff.

Red thread. Blue hyperlinks.

When I spoke about the World Wide Web, hypertext, apophenia, and conspiracy theorists back in 2006, conspiracy theories could still be considered mostly harmless. It was the domain of Dan Brown potboilers and UFO enthusiasts with posters on their walls declaring “I Want To Believe”. But even back then, 9/11 truthers were demonstrating a darker side to the fun and games.

There’s always been a gamification angle to conspiracy theories. Players are rewarded with dopamine hits for “doing the research” and connecting unrelated topics. Now that’s been weaponised into QAnon.

In his newsletter, Dan Hon wrote “QAnon looks like an alternate reality game”. You remember ARGs? The kind of designed experience where people had to cooperate in order to solve the puzzle.

Being a part of QAnon involves doing a lot of independent research. You can imagine the onboarding experience in terms of being exposed to some new phrases, Googling those phrases (which are specifically coded enough to lead to certain websites, and certain information). Finding something out, doing that independent research will give you a dopamine hit. You’ve discovered something, all by yourself. You’ve achieved something. You get to tell your friends about what you’ve discovered because now you know a secret that other people don’t. You’ve done something smart.

We saw this in the games we designed. Players love to be the first person to do something. They love even more to tell everyone else about it. It’s like Crossfit. 

Dan’s brother Adrian also wrote about this connection: What ARGs Can Teach Us About QAnon:

There is a vast amount of information online, and sometimes it is possible to solve “mysteries”, which makes it hard to criticise people for trying, especially when it comes to stopping perceived injustices. But it’s the sheer volume of information online that makes it so easy and so tempting and so fun to draw spurious connections.

This is something that Molly Sauter has been studying for years now, like in her essay The Apophenic Machine:

Humans are storytellers, pattern-spotters, metaphor-makers. When these instincts run away with us, when we impose patterns or relationships on otherwise unrelated things, we call it apophenia. When we create these connections online, we call it the internet, the web circling back to itself again and again. The internet is an apophenic machine.

I remember interviewing Lauren Beukes back in 2012 about her then-forthcoming book featuring a time-travelling serial killer:

Me: And you’ve written a time-travel book that’s set entirely in the past.

Lauren: Yes. The book ends in 1993 and that’s because I did not want to have to deal with Kirby the heroine getting some access to CCTV cameras and uploading the footage to 4chan and having them solve the mystery in four minutes flat.

By the way, I noticed something interesting about the methodology behind conspiracy theories—particularly the open-ended, never-ending miasma of something like QAnon. It’s no surprise that the methodology is basically an inversion of the scientific method. It’s the Texas sharpshooter fallacy writ large. Well, you know the way that I’m always going on about design principles and the way that good design principles should be reversible? Conspiracy theories take universal principles and invert them. Take Occam’s razor:

Do not multiply entities without necessity.

That’s what they want you to think! Wake up, sheeple! The success of something like QAnon—or a well-designed ARG—depends on a mindset that rigorously inverts Occam’s razor:

Multiply entities without necessity!

That’s always been the logic of conspiracy theories from faked moon landings to crop circles. I remember well when the circlemakers came clean and showed exactly how they had been making their beautiful art. Conspiracy theorists—just like cultists—don’t pack up and go home in the face of evidence. They double down. There was something almost pitiable about the way the crop circle UFO crowd were bending over backwards to reject proof and instead apply the inversion of Occam’s razor to come up with even more outlandish explanations to encompass the circlemakers’ confession.

Anyway, I recommend reading what Dan and Adrian have written about the shared psychology of QAnon and Alternate Reality Games, not least because they also suggest some potential course corrections.

I think the best way to fight QAnon, at its roots, is with a robust social safety net program. This not-a-game is being played out of fear, out of a lack of safety, and it’s meeting peoples’ needs in a collectively, societally destructive way.

I want to add one more red thread to this crazy wall. There’s a book about conspiracy theories that has become more and more relevant over time. It’s also wonderfully entertaining. Here’s my recommendation from that Reboot presentation in 2006:

For a real hot-tub of conspiracy theory pleasure, nothing beats Foucault’s Pendulum by Umberto Eco.

…luck rewarded us, because, wanting connections, we found connections — always, everywhere, and between everything. The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…

Design maturity on the Clearleft podcast

The latest episode of the Clearleft podcast is zipping through the RSS tubes towards your podcast-playing software of choice. This is episode five, the penultimate episode of this first season.

This time the topic is design maturity. Like the episode on design ops, this feels like a hefty topic where the word “scale” will inevitably come up.

I talked to my fellow Clearlefties Maite and Andy about their work on last year’s design effectiveness report. But to get the big-scale picture, I called up Aarron over at InVision.

What a great guest! I already had plans to get Aarron on the podcast to talk about his book, Designing For Emotion—possibly a topic for next season. But for the current episode, we didn’t even mention it. It was design maturity all the way.

I had a lot of fun editing the episode together. I decided to intersperse some samples. If you’re familiar with Blade Runner and Thunderbirds, you’ll recognise the audio.

The whole thing comes out at a nice 24 minutes in length.

Have a listen and see what you make of it.

Design ops on the Clearleft podcast

The latest episode of the Clearleft podcast is out. If you’re a subscriber, it will magically appear in your podcast software of choice using the power of RSS. If you’re not a subscriber, it isn’t too late to change that.

This week’s episode is all about design ops. I began constructing the episode by gathering raw material from talks at Leading Design. There’s good stuff from Kim Fellman, Jacqui Frey, Courtney Kaplan, and Meredith Black.

But as I was putting the snippets together, I felt like the episode was missing something. It needed a bit of oomph. So I harangued Andy for some of his time. I wasn’t just fishing for spicy hot takes—something that Andy is known for. Andy is also the right person to explain design ops. If you search for that term, one of the first results you’ll get is a post he wrote on the Clearleft blog a few years back called Design Ops — A New Discipline.

I remember helping Andy edit that post and I distinctly recall my feedback. The initial post didn’t have a definition of the term, and I felt that a definition was necessary (and Andy added one to the post).

My cluelessness about the meaning of terms like “design ops” or “service design” isn’t some schtick I’m putting on for the benefit of the podcast. I’m genuinely trying to understand these terms better. I don’t like the feeling of hearing a term being used a lot without a clear understanding of what it means. All too often my understanding feels more like “I think I know what it means, but I’m not sure I could describe it.” I’m not comfortable with that.

Making podcast episodes on these topics—which are outside my comfort zone—has been really helpful. I now feel like I have a much better understanding of service design, design ops and other topical terms. I hope that the podcast will be just as helpful for listeners who feel as bamboozled as I do.

Ben Holliday recently said:

The secret of design being useful in many places is not talking about design too much and just getting on with it. I sometimes think we create significant language barriers with job titles, design theory and making people learn a new language for the privilege of working with us.

I think there’s some truth to that. Andy disagrees. Strongly.

In our chat, Andy and I had what politicians would describe as “a robust discussion.” I certainly got some great material for the podcast (though some of the more contentious bits remain on the cutting room floor).

Calling on Andy for this episode was definitely the right call. I got the added oomph I was looking for. In fact, this ended up being one of my favourite episodes.

There’s a lot of snappy editing, all in service of crafting a compelling narrative. First, there’s the origin story of design ops. Then there’s an explanation of what it entails. From around the 13 minute mark, there’s a pivot—via design systems—into asking whether introducing a new term is exclusionary. That’s when the sparks start to fly. Finally, I pull it back to talking about how Clearleft can help in providing design ops as a service.

The whole episode comes out at 21 minutes, which feels just right to me.

I’m really pleased with how this episode turned out, and I hope you’ll like it too. Have a listen and decide for yourself.

Wildlife Photographer Of The Year on the Clearleft podcast

Episode three of the Clearleft podcast is here!

This one is a bit different. Whereas previous episodes focused on specific topics—design systems, service design—this one is a case study. And, wow, what a case study! The whole time I was putting the episode together, I kept thinking “The team really did some excellent work here.”

I’m not sure what makes more sense: listen to the podcast episode first and then visit the site in question …or the other way around? Maybe the other way around. In which case, be sure to visit the website for Wildlife Photographer Of The Year.

That’s right—Clearleft got to work with London’s Natural History Museum! A real treat.

Myself and @dhuntrods really enjoyed our visit to the digitisation department in the Natural History Museum. Thanks, Jen, Josh, Robin, Phaedra, and @scuff_el!

This episode of the podcast ended up being half an hour long. It should probably be shorter but I just couldn’t bring myself to cut any of the insights that Helen, James, Chris, and Trys were sharing. I’m probably too close to the subject matter to be objective about it. I’m hoping that others will find it equally fascinating to hear about the process of the project. Research! Design! Dev! This has got it all.

I had a lot of fun with the opening of the episode. I wanted to create a montage effect like the scene-setting opening of a film that has overlapping news reports. I probably spent far too long doing it but I’m really happy with the final result.

And with this episode, we’re halfway through the first season of the podcast already! I figured a nice short run of six episodes is enough to cover a fair bit of ground and give a taste of what the podcast is aiming for, without it turning into an overwhelming number of episodes in a backlog for you to catch up with. Three down and three to go. Seems manageable, right?

Anyway, enough of the backstory. If you haven’t already subscribed to the Clearleft podcast, you should do that. Then do these three things in whichever order you think works best: