
T E N Ǝ T

Jessica and I went to the cinema yesterday.

Normally this wouldn’t be a big deal, but in our current circumstances, it was something of a momentous decision that involved a lot of risk assessment and weighing of the odds. We’ve been out and about a few times, but always to outdoor locations: the beach, a park, or a pub’s beer garden. For the first time, we were evaluating whether or not to enter an indoor environment, which given what we now know about the transmission of COVID-19, is certainly riskier than being outdoors.

But this was a cinema, so in theory, nobody should be talking (or singing or shouting), and everyone would be wearing masks and keeping their distance. Time was also on our side. We were considering a Monday afternoon showing—definitely not primetime. Looking at the website for the (wonderful) Duke of York’s cinema, we could see which seats were already taken. Less than an hour before the start time for the film, there were just a handful of seats occupied. A cinema that can seat a triple-digit number of people was going to be seating a single-digit number of viewers.

We got tickets for the front row. Personally, I love sitting in the front row, especially in the Duke of York’s where there’s still plenty of room between the front row and the screen. But I know that it’s generally considered an undesirable spot by most people. Sure enough, the closest people to us were many rows back. Everyone was wearing masks and we kept them on for the duration of the film.

The film was Tenet. We weren’t about to enter an enclosed space for just any ol’ film. It would have to be pretty special—a new Star Wars film, or Denis Villeneuve’s Dune …or a new Christopher Nolan film. We knew it would look good on the big screen. We also knew it was likely to be spoiled for us if we didn’t see it soon enough.

At this point I am sounding the spoiler horn. If you have not seen Tenet yet, abandon ship at this point.

I really enjoyed this film. I understand the criticism that has been levelled at it—too cold, too clinical, too confusing—but I still enjoyed it immensely. I do think you need to be able to enjoy feeling confused if this is going to be a pleasurable experience. The payoff is that there’s an equally enjoyable feeling when things start slotting into place.

The closest film in Christopher Nolan’s back catalogue to Tenet is Inception in terms of twistiness and what it asks of the audience. But in some ways, Tenet is like an inverted version of Inception. In Inception, the ideas and the plot are genuinely complex, but Nolan does a great job in making them understandable—quite a feat! In Tenet, the central conceit and even the overall plot is, in hindsight, relatively straightforward. But Nolan has made it seem more twisty and convoluted than it really is. The ten-minute battle at the end, for example, is filled with hard-to-follow twists and turns, but in actuality, it literally doesn’t matter.

The pitch for the mood of this film is that it’s in the spy genre, in the same way that Inception is in the heist genre. Though there’s an argument to be made that Tenet is more of a heist movie than Inception. But in terms of tone, yeah, it’s going for James Bond.

Even at the very end of the credits, when the title of the film rolled into view, it reminded me of the Bond films that would tease “The end of (this film). But James Bond will return in (next film).” Wouldn’t it have been wonderful if the very end of Tenet’s credits finished with “The end of Tenet. But the protagonist will return in …Tenet.”

The pleasure I got from Tenet was not the same kind of pleasure I get from watching a Bond film, which is a simpler, more basic kind of enjoyment. The pleasure I got from Tenet was more like the kind of enjoyment I get from reading smart sci-fi, the kind that posits a “what if?” scenario and isn’t afraid to push your mind in all kinds of uncomfortable directions to contemplate the ramifications.

Like I said, the central conceit—objects or people travelling backwards through time (from our perspective)—isn’t actually all that complex, but the fun comes from all the compounding knock-on effects that build on that one premise.

In the film, and in interviews about the film, everyone is at pains to point out that this isn’t time travel. But that’s not true. In fact, I would argue that Tenet is one of the few examples of genuine time travel. What I mean is that most so-called time-travel stories are actually more like time teleportation. People jump from one place in time to another instantaneously. There are only a few examples I can think of where people genuinely travel.

The grandaddy of all time travel stories, The Time Machine by H.G. Wells, is one example. There are vivid descriptions of the world outside the machine playing out in fast-forward. But even here, there’s an implication that from outside the machine, the world cannot perceive the time machine (which would, from that perspective, look slowed down to the point of seeming completely still).

The most internally-consistent time-travel story is Primer. I suspect that the Venn diagram of people who didn’t like Tenet and people who wouldn’t like Primer is a circle. Again, it’s a film where the enjoyment comes from feeling confused, but where your attention will be rewarded and your intelligence won’t be insulted.

In Primer, the protagonists literally travel in time. If you want to go five hours into the past, you have to spend five hours in the box (the time machine).

In Tenet, the time machine is a turnstile. If you want to travel five hours into the past, you need only enter the turnstile for a moment, but then you have to spend the next five hours travelling backwards (which, from your perspective, looks like being in a world where cause and effect are reversed). After five hours, you go in and out of a turnstile again, and voila!—you’ve time travelled five hours into the past.

Crucially, if you decide to travel five hours into the past, then you have always done so. And in the five hours prior to your decision, a version of you (apparently moving backwards) would be visible to the world. There is never a version of events where you aren’t travelling backwards in time. There is no “first loop”.

That brings us to the fundamental split in categories of time travel (or time jump) stories: many worlds vs. single timeline.

In a many-worlds story, the past can be changed. Well, technically, you spawn a different universe in which events unfold differently, but from your perspective, the effect would be as though you had altered the past.

The best example of the many-worlds category in recent years is William Gibson’s The Peripheral. It genuinely reinvents the genre of time travel. First of all, no thing travels through time. In The Peripheral only information can time travel. But given telepresence technology, that’s enough. The Peripheral is time travel for the remote worker (once again, William Gibson proves to be eerily prescient). But the moment that any information travels backwards in time, the timeline splits into a new “stub”. So the many-worlds nature of its reality is front and centre. But that doesn’t stop the characters engaging in classic time travel behaviour—using knowledge of the future to exert control over the past.

Time travel stories are always played with a stacked deck of information. The future has power over the past because of the asymmetric nature of information distribution—there’s more information in the future than in the past. Whether it’s through sports results, the stock market or technological expertise, the future can exploit the past.

Information is at the heart of the power games in Tenet too, but there’s a twist. The repeated mantra here is “ignorance is ammunition.” That flies in the face of most time travel stories where knowledge—information from the future—is vital to winning the game.

It turns out that information from the future is vital to winning the game in Tenet too, but the reason why ignorance is ammunition comes down to the fact that Tenet is not a many-worlds story. It is very much a single timeline.

Having a single timeline makes for time travel stories that are like Greek tragedies. You can try travelling into the past to change the present but in doing so you will instead cause the very thing you set out to prevent.

The meat’n’bones of a single timeline time travel story—and this is at the heart of Tenet—is the question of free will.

The most succinct (and disturbing) single-timeline time-travel story that I’ve read is by Ted Chiang in his recent book Exhalation. It’s called What’s Expected Of Us. It was originally published as a single page in Nature magazine. In that single page is a distillation of the metaphysical crisis that even a limited amount of time travel would unleash in a single-timeline world…

There’s a box, the Predictor. It’s very basic, like Claude Shannon’s Ultimate Machine. It has a button and a light. The button activates the light. But this machine, like an inverted object in Tenet, is moving through time differently to us. In this case, it’s very specific and localised. The machine is just a few seconds in the future relative to us. Cause and effect seem to be reversed. With a normal machine, you press the button and then the light flashes. But with the predictor, the light flashes and then you press the button. You can try to fool it but you won’t succeed. If the light flashes, you will press the button no matter how much you tell yourself that you won’t (likewise if you try to press the button before the light flashes, you won’t succeed). That’s it. In one succinct experiment with time, it is demonstrated that free will doesn’t exist.

Tenet has a similarly simple object to explain inversion. It’s a bullet. In an exposition scene we’re shown how it travels backwards in time. The protagonist holds his hand above the bullet, expecting it to jump into his hand as has just been demonstrated to him. He is told “you have to drop it.” He makes the decision to “drop” the bullet …and the bullet flies up into his hand.

This is a brilliant bit of sleight of hand (if you’ll excuse the choice of words) on Nolan’s part. It seems to imply that free will really matters. Only by deciding to “drop” the bullet does the bullet then fly upward. But here’s the thing: the protagonist had no choice but to decide to drop the bullet. We know that he had no choice because the bullet flew up into his hand. The bullet was always going to fly up into his hand. There is no timeline where the bullet doesn’t fly up into his hand, which means there is no timeline where the protagonist doesn’t decide to “drop” the bullet. The decision is real, but it is inevitable.

The lesson in this scene is the exact opposite of what it appears. It appears to show that agency and decision-making matter. The opposite is true. Free will cannot, in any meaningful sense, exist in this world.

This means that there was never really any threat. People from the future cannot change the past (or wipe it out) because it would’ve happened already. At one point, the protagonist voices this conjecture. “Doesn’t the fact that we’re here now mean that they don’t succeed?” Neil deflects the question, not because of uncertainty (we realise later) but because of certainty. It’s absolutely true that the people in the future can’t succeed because they haven’t succeeded. But the protagonist—at this point in the story—isn’t ready to truly internalise this. He needs to still believe that he is acting with free will. As that Ted Chiang story puts it:

It’s essential that you behave as if your decisions matter, even though you know that they don’t.

That’s true for the audience watching the film. If we were to understand too early that everything will work out fine, then there would be no tension in the film.

As ever with Nolan’s films, they are themselves metaphors for films. The first time you watch Tenet, ignorance is your ammunition. You believe there is a threat. By the end of the film you have more information. Now if you re-watch the film, you will experience it differently, armed with your prior knowledge. But the film itself hasn’t changed. It’s the same linear flow of sequential scenes being projected. Everything plays out exactly the same. It’s you who have been changed. The first time you watch the film, you are like the protagonist at the start of the movie. The second time you watch it, you are like the protagonist at the end of the movie. You see the bigger picture. You understand the inevitability.

The character of Neil has had more time to come to terms with a universe without free will. What the protagonist begins to understand at the end of the film is what Neil has known for a while. He has seen this film. He knows how it ends. It ends with his death. He knows that it must end that way. At the end of the film we see him go to meet his death. Does he make the decision to do this? Yes …but he was always going to make the decision to do this. Just as the protagonist was always going to decide to “drop” the bullet, Neil was always going to decide to go to his death. It looks like a choice. But Neil understands at this point that the choice is pre-ordained. He will go to his death because he has gone to his death.

At the end, the protagonist—and the audience—understands. Everything played out exactly as it had to. The people in the future were hoping that reality allowed for many worlds, where the past could be changed. Luckily for us, reality turns out to be a single timeline. But the price we pay is that we come to understand, truly understand, that we have no free will. This is the kind of knowledge we wish we didn’t have. Ignorance was our ammunition and by the end of the film, it is spent.

Nolan has one other piece of misdirection up his sleeve. He implies that the central question at the heart of this time-travel story is the grandfather paradox. Our descendants in the future are literally trying to kill their grandparents (us). But if they succeed, then they can never come into existence.

But that’s not the paradox that plays out in Tenet. The central paradox is the bootstrap paradox, named for the Heinlein short story, By His Bootstraps. Information in this film is transmitted forwards and backwards through time, without ever being created. Take the phrase “Tenet”. In subjective time, the protagonist first hears of this phrase—and this organisation—when he is at the start of his journey. But the people who tell him this received the information via a subjectively older version of the protagonist who has travelled to the past. The protagonist starts the Tenet organisation (and phrase) in the future because the organisation (and phrase) existed in the past. So where did the phrase come from?

This paradox—the bootstrap paradox—remains after the grandfather paradox has been dealt with. The grandfather paradox was a distraction. The bootstrap paradox can’t be resolved, no matter how many times you watch the same film.

So Tenet has three instances of misdirection in its narrative:

  • Inversion isn’t time travel (it absolutely is).
  • Decisions matter (they don’t; there is no free will).
  • The grandfather paradox is the central question (it’s not; the bootstrap paradox is the central question).

I’m looking forward to seeing Tenet again. Though it can never be the same as that first time. Ignorance can never again be my ammunition.

I’m very glad that Jessica and I decided to go to the cinema to see Tenet. But who am I kidding? Did we ever really have a choice?

Mind the gap

In May 2012, Brian LeRoux, the creator of PhoneGap, wrote a post setting out the beliefs, goals and philosophy of the project.

The beliefs are the assumptions that inform everything else. Brian stated two core tenets:

  1. The web solved cross platform.
  2. All technology deprecates with time.

That second belief then informed one of the goals of the PhoneGap project:

The ultimate purpose of PhoneGap is to cease to exist.

Last week, PhoneGap succeeded in its goal:

Since the project’s beginning in 2008, the market has evolved and Progressive Web Apps (PWAs) now bring the power of native apps to web applications.

Today, we are announcing the end of development for PhoneGap.

I think Brian was spot-on with his belief that all technology deprecates with time. I also think it was very astute of him to tie the goals of PhoneGap to that belief. Heck, it’s even in the project name: PhoneGap!

I recently wrote this about Sass and clamp:

I’ve said it before and I’ll say it again, the goal of any good library should be to get so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.

jQuery is the perfect example of this. jQuery is no longer needed because cross-browser DOM Scripting is now much easier …thanks to jQuery.

Successful libraries and frameworks point the way. They show what developers are yearning for, and that’s where web standards efforts can then focus. When a library or framework is no longer needed, that’s not something to mourn; it’s something to celebrate.

That’s particularly true if the library of code needs to be run by a web browser. The user pays a tax with that extra download so that the developer gets the benefit of the library. When web browsers no longer need the library in order to provide the same functionality, it’s a win for users.

In fact, if you’re providing a front-end library or framework, I believe you should be actively working towards making it obsolete. Think of your project as a polyfill. If it’s solving a genuine need, then you should be looking forward to the day when your code is made redundant by web browsers.
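To make the polyfill mindset concrete, here’s a minimal sketch (the smoothScrollTo helper is hypothetical, purely for illustration): feature-detect first, and only ship your own implementation when the browser doesn’t already provide the functionality natively.

// Hypothetical helper: only do the work ourselves if the browser
// doesn't already support smooth scrolling natively.
if ('scrollBehavior' in document.documentElement.style) {
  // The browser can do it: lean on the native implementation.
  window.smoothScrollTo = function (top) {
    window.scrollTo({top: top, behavior: 'smooth'});
  };
} else {
  // Fall back to our own (simplified) library code.
  window.smoothScrollTo = function (top) {
    window.scrollTo(0, top);
  };
}

When every browser supports the native behaviour, the fallback branch—and eventually the whole helper—can be deleted without anyone noticing.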

One more thing…

I think it was great that Brian documented PhoneGap’s beliefs, goals and philosophy. This is exactly why design principles can be so useful—to clearly set out the priorities of a project, so that there’s no misunderstanding or mixed signals.

If you’re working on a project, take the time to ask yourself what assumptions and beliefs are underpinning the work. Then figure out how those beliefs influence what you prioritise.

Ultimately, the code you produce is the output generated by your priorities. And your priorities are driven by your purpose.

You can make those priorities tangible in the form of design principles.

You can make those design principles visible by publishing them.

Dream speak

I had a double-whammy of a stress dream during the week.

I dreamt I was at a conference where I was supposed to be speaking, but I wasn’t prepared, and I wasn’t where I was supposed to be when I was supposed to be there. Worse, my band were supposed to be playing a gig on the other side of town at the same time. Not only was I panicking about getting myself and my musical equipment to the venue on time, I was also freaking out because I couldn’t remember any of the songs.

You don’t have to be Sigmund freaking Freud to figure out the meanings behind these kinds of dreams. But usually these kinds of stress dreams are triggered by some upcoming event like, say, oh, I don’t know, speaking at a conference or playing a gig.

I felt really resentful when I woke up from this dream in a panic in the middle of the night. Instead of being a topical nightmare, I basically had the equivalent of one of those dreams where you’re back at school and it’s the day of the exam and you haven’t prepared. But! When, as an adult, you awake from that dream, you have that glorious moment of remembering “Wait! I’m not in school anymore! Hallelujah!” Whereas with my double-booked stress dream, I got all the stress of the nightmare, plus the waking realisation that “Ah, shit. There are no more conferences. Or gigs.”

I miss them.

Mind you, there is talk of re-entering the practice room at some point in the near future. Playing gigs is still a long way off, but at least I could play music with other people.

Actually, I got to play music with other people this weekend. The music wasn’t Salter Cane, it was traditional Irish music. We gathered in a park, and played together while still keeping our distance. Jessica has written about it in her latest journal entry:

It wasn’t quite a session, but it was the next best thing, and it was certainly the best we’re going to get for some time. And next week, weather permitting, we’ll go back and do it again. The cautious return of something vaguely resembling “normality”, buoying us through the hot days of a very strange summer.

No chance of travelling to speak at a conference though. On the plus side, my carbon footprint has never been lighter.

Online conferences continue. They’re not the same, but they can still be really worthwhile in their own way.

I’ll be speaking at An Event Apart: Front-end Focus on Monday, August 17th (and I’m very excited to see Ire’s talk). I’ll be banging on about design principles for the web:

Designing and developing on the web can feel like a never-ending crusade against the unknown. Design principles are one way of unifying your team to better fight this battle. But as well as the design principles specific to your product or service, there are core principles underpinning the very fabric of the World Wide Web itself. Together, we’ll dive into applying these design principles to build websites that are resilient, performant, accessible, and beautiful.

Tickets are $350 but I can get you a discount. Use the code AEAJER to get $50 off.

I wonder if I’ll have online-appropriate stress dreams in the next week? “My internet is down!”, “I got the date and time wrong!”, “I’m not wearing any trousers!”

Actually, that’s pretty much just my waking life these days.

Connections

Fourteen years ago, I gave a talk at the Reboot conference in Copenhagen. It was called In Praise of the Hyperlink. For the most part, it was a gushing love letter to hypertext, but it also included this observation:

For a conspiracy theorist, there can be no better tool than a piece of technology that allows you to arbitrarily connect information. That tool now exists. It’s called the World Wide Web and it was invented by Sir Tim Berners-Lee.

You know those “crazy walls” that are such a common trope in TV shows and movies? The detectives enter the lair of the unhinged villain and discover an overwhelming wall that’s like looking at the inside of that person’s head. It’s not the stuff itself that’s unnerving; it’s the red thread that connects the stuff.

Red thread. Blue hyperlinks.

When I spoke about the World Wide Web, hypertext, apophenia, and conspiracy theorists back in 2006, conspiracy theories could still be considered mostly harmless. It was the domain of Dan Brown potboilers and UFO enthusiasts with posters on their walls declaring “I Want To Believe”. But even back then, 9/11 truthers were demonstrating a darker side to the fun and games.

There’s always been a gamification angle to conspiracy theories. Players are rewarded with the same dopamine hits for “doing the research” and connecting unrelated topics. Now that’s been weaponised into QAnon.

In his newsletter, Dan Hon wrote QAnon looks like an alternate reality game. You remember ARGs? The kind of designed experience where people had to cooperate in order to solve the puzzle.

Being a part of QAnon involves doing a lot of independent research. You can imagine the onboarding experience in terms of being exposed to some new phrases, Googling those phrases (which are specifically coded enough to lead to certain websites, and certain information). Finding something out, doing that independent research will give you a dopamine hit. You’ve discovered something, all by yourself. You’ve achieved something. You get to tell your friends about what you’ve discovered because now you know a secret that other people don’t. You’ve done something smart.

We saw this in the games we designed. Players love to be the first person to do something. They love even more to tell everyone else about it. It’s like Crossfit. 

Dan’s brother Adrian also wrote about this connection: What ARGs Can Teach Us About QAnon:

There is a vast amount of information online, and sometimes it is possible to solve “mysteries”, which makes it hard to criticise people for trying, especially when it comes to stopping perceived injustices. But it’s the sheer volume of information online that makes it so easy and so tempting and so fun to draw spurious connections.

This is something that Molly Sauter has been studying for years now, like in her essay The Apophenic Machine:

Humans are storytellers, pattern-spotters, metaphor-makers. When these instincts run away with us, when we impose patterns or relationships on otherwise unrelated things, we call it apophenia. When we create these connections online, we call it the internet, the web circling back to itself again and again. The internet is an apophenic machine.

I remember interviewing Lauren Beukes back in 2012 about her forthcoming book about a time-travelling serial killer:

Me: And you’ve written a time-travel book that’s set entirely in the past.

Lauren: Yes. The book ends in 1993 and that’s because I did not want to have to deal with Kirby the heroine getting some access to CCTV cameras and uploading the footage to 4chan and having them solve the mystery in four minutes flat.

By the way, I noticed something interesting about the methodology behind conspiracy theories—particularly the open-ended never-ending miasma of something like QAnon. It’s no surprise that the methodology is basically an inversion of the scientific method. It’s the Texas sharpshooter fallacy writ large. Well, you know the way that I’m always going on about design principles and the way that good design principles should be reversible? Conspiracy theories take universal principles and invert them. Take Occam’s razor:

Do not multiply entities without necessity.

That’s what they want you to think! Wake up, sheeple! The success of something like QAnon—or a well-designed ARG—depends on a mindset that rigorously inverts Occam’s razor:

Multiply entities without necessity!

That’s always been the logic of conspiracy theories from faked moon landings to crop circles. I remember well when the circlemakers came clean and showed exactly how they had been making their beautiful art. Conspiracy theorists—just like cultists—don’t pack up and go home in the face of evidence. They double down. There was something almost pitiable about the way the crop circle UFO crowd were bending over backwards to reject proof and instead apply the inversion of Occam’s razor to come up with even more outlandish explanations to encompass the circlemakers’ confession.

Anyway, I recommend reading what Dan and Adrian have written about the shared psychology of QAnon and Alternate Reality Games, not least because they also suggest some potential course corrections.

I think the best way to fight QAnon, at its roots, is with a robust social safety net program. This not-a-game is being played out of fear, out of a lack of safety, and it’s meeting peoples’ needs in a collectively, societally destructive way.

I want to add one more red thread to this crazy wall. There’s a book about conspiracy theories that has become more and more relevant over time. It’s also wonderfully entertaining. Here’s my recommendation from that Reboot presentation in 2006:

For a real hot-tub of conspiracy theory pleasure, nothing beats Foucault’s Pendulum by Umberto Eco.

…luck rewarded us, because, wanting connections, we found connections — always, everywhere, and between everything. The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…

Hey now

Progressive enhancement is at the heart of everything I do on the web. It’s the bedrock of my speaking and writing too. Whether I’m writing about JavaScript, Ajax, HTML, or service workers, it’s always through the lens of progressive enhancement. Sometimes I explicitly bang the drum, like with Resilient Web Design. Other times I don’t mention it by name at all, and instead talk only about its benefits.

I sometimes get asked to name some examples of sites that still offer their core functionality even when JavaScript fails. I usually mention Amazon.com, although that has other issues. But quite often I find that a lot of the examples I might mention are dismissed as not being “web apps” (whatever that means).

The pushback I get usually takes the form of “Well, that approach is fine for websites, but it wouldn’t work for something like Gmail.”

It’s always Gmail. Which is odd. Because if you really wanted to flummox me with a product or service that defies progressive enhancement, I’d have a hard time with something like, say, a game (although it would be pretty cool to build a text adventure that’s progressively enhanced into a first-person shooter). But an email client? That would work.

Identify core functionality.

Read emails. Write emails.

Make that functionality available using the simplest possible technology.

HTML for showing a list of emails, HTML for displaying the contents of an email, HTML for the form you write the response in.

Enhance!

Now add all the enhancements that improve the experience—keyboard shortcuts; Ajax instead of full-page refreshes; local storage, all that stuff.

Can you build something that works just like Gmail without using any JavaScript? No. But that’s not what progressive enhancement is about. It’s about providing the core functionality (reading and writing emails) with the simplest possible technology (HTML) and then enhancing using more powerful technologies (like JavaScript).

Progressive enhancement isn’t about making a choice between using simpler more robust technologies or using more advanced features; it’s about using simpler more robust technologies and then using more advanced features. Have your cake and eat it.
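As a rough sketch of that layering (the URL and field names here are made up, not anything from Gmail or HEY): the reply form is plain HTML that posts to the server and works entirely on its own; a few lines of JavaScript then enhance it to submit via Ajax instead of a full-page refresh.

<form method="post" action="/emails/123/reply">
  <textarea name="body" required></textarea>
  <button type="submit">Send reply</button>
</form>

<script>
// Enhancement: intercept the submission and send it with fetch.
// If this script never runs, the form above still works.
var form = document.querySelector('form');
form.addEventListener('submit', function (event) {
  event.preventDefault();
  fetch(form.action, {
    method: 'POST',
    body: new FormData(form)
  }).then(function () {
    form.reset();
  });
});
</script>

Strip away the script and nothing of the core functionality is lost. That’s the whole point.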

Fortunately I no longer need to run this thought experiment to imagine what it would be like if something like Gmail were built with a progressive enhancement approach. That’s what HEY is.

Sam Stephenson describes the approach they took:

HEY’s UI is 100% HTML over the wire. We render plain-old HTML pages on the server and send them to your browser encoded as text/html. No JSON APIs, no GraphQL, no React—just form submissions and links.

If you think that sounds like the web of 25 years ago, you’re right! Except the HEY front-end stack progressively enhances the “classic web” to work like the “2020 web,” with all the fidelity you’d expect from a well-built SPA.

See? It’s not either resilient or modern—it’s resilient and modern. Have your cake and eat it.

And yet this supremely sensible approach is not considered “modern” web development:

The architecture astronauts who, for the past decade, have been selling us on the necessity of React, Redux, and megabytes of JS, cannot comprehend the possibility of building an email app in 2020 with server-rendered HTML.

HEY isn’t perfect by any means—they’ve got a lot of work to do on their accessibility. But it’s good to have a nice short answer to the question “But what about something like Gmail?”

It reminds me of responsive web design:

When Ethan Marcotte demonstrated the power of responsive design, it was met with resistance. “Sure, a responsive design might work for a simple personal site but there’s no way it could scale to a large complex project.”

Then the Boston Globe launched its responsive site. Microsoft made their homepage responsive. The floodgates opened again.

It’s a similar story today. “Sure, progressive enhancement might work for a simple personal site, but there’s no way it could scale to a large complex project.”

The floodgates are ready to open. We just need you to create the poster child for resilient web design.

It looks like HEY might be that poster child.

I have to wonder if it’s coincidence or connected that this is a service that’s also tackling ethical issues like tracking? Their focus is very much on people above technology. They’ve taken a human-centric approach to their product and a human-centric approach to web development …because ultimately, that’s what progressive enhancement is.

Design systems on the Clearleft podcast

If you’ve already subscribed to the Clearleft podcast, thank you! The first episode is sliding into your podcast player of choice.

This episode is all about …design systems!

I’m pretty happy with how this one turned out, although as it’s the first one, I’m sure I’ll learn how to do this better. I may end up looking back at this first foray with embarrassment. Still, it’s fairly representative of what you can expect from the rest of the season.

This episode is fairly short. Just under eighteen minutes. That doesn’t mean that other episodes will be the same length. Each episode will be as long (or as short) as it needs to be. Form follows function, or in this case, episode length follows content. Other episodes will be longer. Some might be shorter. It all depends on the narrative.

This flies in the face of accepted wisdom when it comes to podcasting. The watchword that’s repeated again and again for aspiring podcasters is consistency. Release on a consistent schedule and have a consistent length for episodes. I kind of want to go against that advice just out of sheer obstinacy. If I end up releasing episodes on a regular schedule, treat it as coincidence rather than consistency.

There’s not much of me in this episode. And there won’t be much of me in most episodes. I’m just there to thread together the smart soundbites coming from other people. In this episode, the talking heads are my colleagues Jon and James, along with my friends and peers Charlotte, Paul, and Amy (although there’s a Clearleft connection with all of them: Charlotte and Paul used to be Clearlefties, and Amy spoke at Patterns Day and Sofa Conf).

I spoke to each of them for about an hour, but like I said, the entire episode is less than eighteen minutes long. The majority of our conversations ended up on the cutting room floor (possibly to be used in future episodes).

Most of my time was spent on editing. It was painstaking, but rewarding. There’s a real pleasure to be had in juxtaposing two snippets of audio, either because they echo one another or because they completely contradict one another. This episode has a few examples of contradictions, and I think those are my favourite moments.

Needless to say, eighteen minutes was not enough time to cover everything about design systems. Quite the opposite. It’s barely an introduction. This is definitely a topic that I’ll be returning to. Maybe there could even be a whole season on design systems. Let me know what you think.

Oh, and you’ll notice that there’s a transcript for the episode. That’s a no-brainer. I’m a big fan of the spoken word, but it really comes alive when it’s combined with searchable, linkable, accessible text.

Anyway, have a listen and if you’re not already subscribed, pop the RSS feed into your podcast player.

Custom properties

I made the website for the Clearleft podcast last week. The design is mostly lifted straight from the rest of the Clearleft website. The main difference is the masthead. If the browser window is wide enough, there’s a background image on the right hand side.

I mostly added that because I felt like the design was a bit imbalanced without something there. On the home page, it’s a picture of me. Kind of cheesy. But the image can be swapped out. On other pages, there are different photos. All it takes is a different class name on that masthead.

I thought about having the image be completely random (and I still might end up doing this). I’d need to use a bit of JavaScript to choose a class name at random from a list of possible values. Something like this:

var names = ['jeremy','katie','rich','helen','trys','chris'];
var name = names[Math.floor(Math.random() * names.length)];
document.querySelector('.masthead').classList.add(name);

(You could paste that into the dev tools console to see it in action on the podcast site.)

Then I read something completely unrelated. Cassie wrote a fantastic article on her site called Making lil’ me - part 1. In it, she describes how she made the mouse-triggered animation of her avatar in the footer of her home page.

It’s such a well-written technical article. She explains the logic of what she’s doing, and translates that logic into code. Then, after walking you through the native code, she shows how you could use the Greensock library to achieve the same effect. That’s the way to do it! Instead of saying, “Here’s a library that will save you time—don’t worry about how it works!”, she’s saying “Here’s how it works without a library; here’s how it works with a library; now you can make an informed choice about what to use.” It’s a very empowering approach.

Anyway, in the article, Cassie demonstrates how you can use custom properties as a bridge between JavaScript and CSS. JavaScript reads the mouse position and updates some custom properties accordingly. Those same custom properties are used in CSS for positioning. Voila! Now you’ve got the position of an element responding to mouse movements.
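Something along these lines—a simplified sketch, not Cassie’s actual code, with placeholder property and class names:

// JavaScript: turn the mouse position into a value between 0 and 1
// and write it into a custom property on the root element.
document.addEventListener('mousemove', function (event) {
  var x = event.clientX / window.innerWidth;
  document.documentElement.style.setProperty('--pointer-x', x.toFixed(2));
});

/* CSS: read that same custom property to position the element. */
.avatar-eyes {
  transform: translateX(calc(var(--pointer-x, 0.5) * 10px));
}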

That’s what made me think of the code snippet I wrote above to update a class name from JavaScript. I automatically thought of updating a class name because, frankly, that’s how I’ve always done it. I’d say about 90% of the DOM scripting I’ve ever done involves toggling the presence of class values: accordions, fly-out menus, tool-tips, and other progressive disclosure patterns.

That’s fine. But really, I should try to avoid touching the DOM at all. It can have performance implications, possibly triggering unnecessary repaints and reflows.

Now with custom properties, there’s a direct line of communication between JavaScript and CSS. No need to use the HTML as a courier.

This made me realise that I need to be aware of automatically reaching for a solution just because that’s the way I’ve done something in the past. I should step back and think about the more efficient solutions that are possible now.

It also made me realise that “CSS variables” is a very limiting way of thinking about custom properties. The fact that they can be updated in real time—in CSS or JavaScript—makes them much more powerful than, say, Sass variables (which are more like constants).

But I too have been guilty of underselling them. I almost always refer to them as “CSS custom properties” …but a lot of their potential comes from the fact that they’re not confined to CSS. From now on, I’m going to try calling them custom properties, without any qualification.

Feeds

A little while back, Marcus Herrmann wrote about making RSS more visible again with a /feeds page. Here’s his feeds page. Here’s Remy’s.

Seems like a good idea to me. I’ve made mine:

adactio.com/feeds

As well as linking to the usual RSS feeds (blog posts, links, notes), it’s also got an explanation of how you can subscribe to a customised RSS feed using tags.

Then, earlier today, I was chatting with Matt on Twitter and he asked:

btw do you share your blogroll anywhere?

So now I’ve added another URL:

adactio.com/feeds/subscriptions

That’s got a link to my OPML file, exported from my feed reader, and a list of the (current) RSS feeds that I’m subscribed to.

I like the idea of blogrolls making a comeback. And webrings.

The Machines Stop

The Situation feels like it’s changing. It’s not over, not by a long shot. But it feels like it’s entering a different, looser phase.

Throughout the lockdown, there’s been a strange symmetry between the outside world and the inside of our home. As the outside world slowed to a halt, so too did half the machinery in our flat. Our dishwasher broke shortly before the official lockdown began. So did our washing machine.

We had made plans for repairs and replacements, but as events in the world outside escalated, those plans had to be put on hold. Plumbers and engineers weren’t making any house calls, and rightly so.

We even had the gas to our stovetop cut off for a while—you can read Jessica’s account of that whole affair. All the breakdowns just added to the entropic Ballardian mood.

But the gas stovetop was fixed. And so too was the dishwasher, eventually. Just last week, we got our new washing machine installed. Piece by piece, the machinery of our interior world revived in lockstep with the resuscitation of the world outside.

As of today, pubs will be open. I won’t be crossing their thresholds just yet. We know so much more about the spread of the virus now, and gatherings of people in indoor spaces are pretty much the worst environments for transmission.

I’m feeling more sanguine about outdoor spaces. Yesterday, Jessica and I went into town for Street Diner. It was the first time since March that we walked in that direction—our other excursions have been in the direction of the countryside.

It was perfectly fine. We wore masks, and while we were certainly in the minority, we were not alone. People were generally behaving responsibly.

Brighton hasn’t done too badly throughout The Situation. But still, like I said, I have no plans to head to the pub on a Saturday night. The British drinking culture is very much concentrated on weekends. Stay in all week and then on the weekend, lassen die Sau raus! (“let the sow out”), as the Germans would say.

After months of lockdown, reopening pubs on a Saturday seems like a terrible idea. Over in Ireland, pubs have been open since Monday—a sensible day to soft-launch. With plenty of precautions in place, things are going well there.

I’ve been watching The Situation in Ireland throughout. It’s where my mother lives, so I was understandably concerned. But they’ve handled everything really well. It’s not New Zealand, but it’s also not the disaster that is the UK.

It really has been like watching an A/B test run at the country level. Two very similar populations confronted with exactly the same crisis. Ireland took action early, cancelling the St. Patrick’s Day parade(!) while the UK was still merrily letting Cheltenham go ahead. Ireland had clear guidance. The UK had dilly-dallying and waffling. And when the shit really hit the fan, the Irish taoiseach rolled up his sleeves and returned to medical work. Meanwhile the UK had Dominic Cummings making a complete mockery of the sacrifices that everyone was told to endure.

What’s strange is that people here in the UK don’t seem to realise how the rest of the world, especially other European countries, have watched the response here with shock and horror. The narrative here seems to be that we all faced this thing together, and with our collective effort, we averted the worst. But the numbers tell a very different story. Comparing the numbers here with the numbers in Ireland—or pretty much any other country in Europe—is sobering.

So even though the timelines for reopenings here converge with Ireland’s, The Situation is far from over.

Even without any trips to pubs, restaurants, or other indoor spaces, I’m looking forward to making some more excursions into town. Not that it’s been bad staying at home. I’ve really quite enjoyed staying put, playing music, reading books, and watching television.

I was furloughed from work for a while in June. Normally, my work at this time of year would involve plenty of speaking at conferences. Seeing as that wasn’t happening, it made sense to take advantage of the government scheme to go into work hibernation for a bit.

I was worried I might feel at a bit of a loose end, but I actually really enjoyed it. The weather was good so I spent quite a bit of time just sitting in the back garden, reading (I am very, very grateful to have even a small garden). I listened to music. I watched movies. I surfed the web. Yes, properly surfed the web, going from link to link, getting lost down rabbit holes. I tell you, this World Wide Web thing is pretty remarkable. Some days I used it to read up on science or philosophy. I spent a week immersed in Napoleonic history. I have no idea how or why. But it was great.

I’m back at work now, and have been for a couple of weeks. But I wouldn’t mind getting furloughed again. It felt kind of like being retired. I’m quite okay with the prospect of retirement now, as long as we have music and sunshine and the World Wide Web.

That’s the future. For now, The Situation continues, albeit in looser form.

I’ve really enjoyed reading other people’s accounts throughout. My RSS reader is getting a good workout. I always look forward to weeknotes from Alice, Nat, and Phil (this piece from Phil has really stuck with me). Jessica has written fifteen installments—and counting—of A Journal of the Plague Week. I know I’m biased, but I think it’s some mighty fine writing. Start here.

Dark mode revisited

I added a dark mode to my website a while back. It was a fun thing to do during Indie Web Camp Amsterdam last year.

I tied the colour scheme to the operating system level. If you choose a dark mode in your OS, my website will adjust automatically thanks to the prefers-color-scheme: dark media query.

But I’ve seen notes from a few friends, not about my site specifically, but about how they like having an explicit toggle for dark mode (as well as the media query). Whenever I read those remarks, I’d think “I’m really not sure I’ve got time to deal with adding that kind of toggle to my site.”

But then I realised, “Jeremy, you absolute muffin! You’ve had a theme switcher on your website for almost two decades now!”

Doh! I had forgotten about that theme switcher. It dates back to the early days of CSS. I wanted my site to be a demonstration of how you could apply different styles to the same underlying markup (this was before the CSS Zen Garden came along). Those themes are very dated now, but if you like you can view my site with a Zeldman theme or a sci-fi theme.

To offer a dark-mode theme for my site, all I had to do was take the default stylesheet, pull out the custom properties from the prefers-color-scheme: dark media query, and done. It took less than five minutes.
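The pattern looks something like this (a simplified sketch—my actual property names and values differ). The default stylesheet follows the operating system preference; the dark theme stylesheet just sets the same custom properties unconditionally.

/* Default stylesheet: follow the operating system preference. */
:root {
  --background: white;
  --text: black;
}
@media (prefers-color-scheme: dark) {
  :root {
    --background: black;
    --text: white;
  }
}

/* Dark theme stylesheet: the same custom properties, applied unconditionally. */
:root {
  --background: black;
  --text: white;
}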

So if you want to view my site in dark mode, it’s one of the options in the “Customise” dropdown on every page of the website.

CSS custom properties and the cascade

When I wrote about programming CSS to perform Sass colour functions I said this about the brilliant Lea Verou:

As so often happens when I’m reading something written by Lea—or seeing her give a talk—light bulbs started popping over my head (my usual response to Lea’s knowledge bombs is either “I didn’t know you could do that!” or “I never thought of doing that!”).

Well, it happened again. This time I was reading her post about hybrid positioning with CSS variables and max(). But the main topic of the post wasn’t the part that made me go “Huh! I never knew that!”. Towards the end of her article she explained something about the way that browsers evaluate CSS custom properties:

The browser doesn’t know if your property value is valid until the variable is resolved, and by then it has already processed the cascade and has thrown away any potential fallbacks.

I’m used to being able to rely on the cascade. Let’s say I’m going to set a background colour on paragraphs:

p {
  background-color: red;
  background-color: color(display-p3 1 0 0);
}

First I’ve set a background colour using a good ol’ fashioned keyword, supported in browsers since day one. Then I declare the background colour using the new-fangled color() function which is supported in very few browsers. That’s okay though. I can confidently rely on the cascade to fall back to the earlier declaration. Paragraphs will still have a red background colour.

But if I store the background colour in a custom property, I can no longer rely on the cascade.

:root {
  --myvariable: color(display-p3 1 0 0);
}
p {
  background-color: red;
  background-color: var(--myvariable);
}

All I’ve done is swapped out the hard-coded color() value for a custom property but now the browser behaves differently. Instead of getting a red background colour, I get the browser default value. As Lea explains:

…it will make the property invalid at computed value time.

The spec says:

When this happens, the computed value of the property is either the property’s inherited value or its initial value depending on whether the property is inherited or not, respectively, as if the property’s value had been specified as the unset keyword.

So if a browser doesn’t understand the color() function, it’s as if I’ve said:

background-color: unset;

This took me by surprise. I’m so used to being able to rely on the cascade in CSS—it’s one of the most powerful and most useful features in this programming language. Could it be, I wondered, that the powers-that-be have violated the principle of least surprise in specifying this behaviour?

But a note in the spec explains further:

Note: The invalid at computed-value time concept exists because variables can’t “fail early” like other syntax errors can, so by the time the user agent realizes a property value is invalid, it’s already thrown away the other cascaded values.

Ah, right! So first of all browsers figure out the cascade and then they evaluate custom properties. If a custom property evaluates to gobbledygook, it’s too late to figure out what the cascade would’ve fallen back to.

Thinking about it, this makes total sense. Remember that CSS custom properties aren’t like Sass variables. They aren’t evaluated once and then set in stone. They’re more like let than const. They can be updated in real time. You can update them from JavaScript too. It’s entirely possible to update CSS custom properties rapidly in response to events like, say, the user scrolling or moving their mouse. If the browser had to recalculate the cascade every time a custom property didn’t evaluate correctly, I imagine it would be an enormous performance bottleneck.

So even though this behaviour surprised me at first, it makes sense on reflection.

I’ve probably done a terrible job explaining the behaviour here, so I’ve made a Codepen. Although that may also do an equally terrible job.

(Thanks to Amber for talking through this with me and encouraging me to blog about it. And thanks to Lea for expanding my mind. Again.)

Programming CSS to perform Sass colour functions

I wrote recently about moving away from Sass to using native CSS features. I had this to say on the topic of mixins in Sass:

These can be very useful, but now there’s a lot that you can do just in CSS with calc(). The built-in darken() and lighten() mixins are handy though when it comes to colours.

I know we will be getting these in the future but we’re not there yet with CSS.

Anyway, I had all this in the back of my mind when I was reading Lea’s excellent feature in this month’s Increment: A user’s guide to CSS variables. She’s written about a really clever technique of combining custom properties with hsl() colour values for creating colour palettes. (See also: Una’s post on dynamic colour theming with pure CSS.)

As so often happens when I’m reading something written by Lea—or seeing her give a talk—light bulbs started popping over my head (my usual response to Lea’s knowledge bombs is either “I didn’t know you could do that!” or “I never thought of doing that!”).

I immediately set about implementing this technique over on The Session. The trick here is to use separate custom properties for the hue, saturation, and lightness parts of hsl() colour values. Then, when you want to lighten or darken the colour—say, on hover—you can update the lightness part.

I’ve made a Codepen to show what I’m doing.

Let’s say I’m styling a button element. I make custom properties for hsl() values:

button {
  --button-colour-hue: 19;
  --button-colour-saturation: 82%;
  --button-colour-lightness: 38%;
  background-color: hsl(
    var(--button-colour-hue),
    var(--button-colour-saturation),
    var(--button-colour-lightness)
  );
}

For my buttons, I want the borders to be slightly darker than the background colour. When I was using Sass, I used the darken() function to do this. Now I use calc(). Here’s how I make the borders 10% darker:

border-color: hsl(
  var(--button-colour-hue),
  var(--button-colour-saturation),
  calc(var(--button-colour-lightness) - 10%)
);

That calc() function is subtracting a percentage from a percentage: 38% minus 10% in this case. The borders will have a lightness of 28%.

I make the bottom border even darker and the top border lighter to give a feeling of depth.
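In other words, something along these lines (the exact numbers on the live site may well differ):

border-top-color: hsl(
  var(--button-colour-hue),
  var(--button-colour-saturation),
  calc(var(--button-colour-lightness) + 10%)
);
border-bottom-color: hsl(
  var(--button-colour-hue),
  var(--button-colour-saturation),
  calc(var(--button-colour-lightness) - 15%)
);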

On The Session there’s a “cancel” button style that’s deep red.

Here’s how I set its colour:

.cancel {
  --button-colour-hue: 0;
  --button-colour-saturation: 100%;
  --button-colour-lightness: 40%;
}

That’s it. The existing button declarations take care of assigning the right shades for the border colours.

Here’s another example. Site admins see buttons for some actions only available to them. I want those buttons to have their own colour:

.admin {
  --button-colour-hue: 45;
  --button-colour-saturation: 100%;
  --button-colour-lightness: 40%;
}

You get the idea. It doesn’t matter how many differently-coloured buttons I create, the effect of darkening or lightening their borders is all taken care of.

So it turns out that the lighten() and darken() functions from Sass are available to us in CSS by using a combination of custom properties, hsl(), and calc().

I’m also using this combination to lighten or darken background and border colours on :hover. You can poke around the Codepen if you want to see that in action.

I love seeing the combinatorial power of these different bits of CSS coming together. It really is a remarkably powerful programming language.

Hard to break

I keep thinking about some feedback that Cassie received recently.

She had delivered the front-end code for a project at Clearleft, and—this being Cassie we’re talking about—the code was rock solid. The client’s Quality Assurance team came back with the verdict that it was “hard to break.”

Hard to break. I love that. That might be the best summation I’ve heard for describing resilience on the web.

If there’s a corollary to resilient web design, it would be brittle web design. In a piece completely unrelated to web development, Jamais Cascio describes brittle systems:

When something is brittle, it’s susceptible to sudden and catastrophic failure.

That sounds like an inarguably bad thing. So why would anyone end up building something in a brittle way? Jamais Cascio continues:

Things that are brittle look strong, may even be strong, until they hit a breaking point, then everything falls apart.

Ah, there’s the rub! It’s not that brittle sites don’t work. They work just fine …until they don’t.

Brittle systems are solid until they’re not. Brittleness is illusory strength. Things that are brittle are non-resilient, sometimes even anti-resilient — they can make resilience more difficult.

Kilian Valkhof makes the same point when it comes to front-end development. For many, accessibility is an unknown unknown:

When you start out it’s you, notepad and a browser against the world. You open up that notepad, and you type

<div onclick="alert('hello world');">Click me!</div>

You fire up your browser, you click your div and …it works! It just works! Awesome. You open up the devtools. No errors. Well done! Clearly you did a good job. On to the next thing.

At the surface level, there’s no discernible difference between a resilient solution and a brittle one:

For all sorts of reasons, both legitimate and, as always, weird browser legacy reasons, a clickable div will mostly work. Well enough to fool someone starting out anyway.

If everything works, how would they know it kinda doesn’t?
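For what it’s worth, the resilient version of that snippet is no harder to write. A sketch using a real button element instead, which gets keyboard support and announces itself to assistive technology without any extra effort:

<button type="button" onclick="alert('hello world');">Click me!</button>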

Kilian goes on to suggest ways to try to make this kind of hidden brittleness more visible.

Furthermore we could envision a browser that is much stricter when developing.

This something I touched on when I was talking about web performance with Gerry on his podcast:

There’s a disconnect in the process we go through when we’re making something, and then how that thing is experienced when it’s actually on the web, which is dependent on network speeds and processing speeds and stuff.

I spend a lot of time wondering why so many websites are badly built. Sure, there’s a lot that can be explained by misaligned priorities. And it could just be an expression of Sturgeon’s Law—90% of websites are crap because 90% of everything is crap. But I’ve also come to realise that even though resilience is the antithesis to brittleness, they both share something in common: they’re invisible.

We have a natural bias towards what’s visible. Being committed to making sure something is beautiful to behold is, in some ways, the easy path to travel. But being committed to making sure something is also hard to break? That takes real dedication.

Photograph

Do you have a favourite non-personal photograph?

By non-personal, I mean one that isn’t directly related to your life; no photographs of family members, friends, or travel (remember travel?).

Even discounting those photographs, there’s still a vast pool of candidates. There are all the amazing pictures taken by photojournalists like Lee Miller. There’s all the awe-inspiring wildlife photography out there. Then there are the kind of posters that end up on bedroom walls, like Robert Doisneau’s The Kiss.

One of my favourite photographs of all time has music as its subject matter. No, not Johnny Cash flipping the bird, although I believe this picture to be just as rock’n’roll.

In the foreground, Séamus Ennis sits with his pipes. In the background, Jean Ritchie is leaning intently over her recording equipment.

This is a photograph of Séamus Ennis and Jean Ritchie. It was probably taken around 1952 or 1953 by Ritchie’s husband, George Pickow, when Jean Ritchie and Alan Lomax were in Ireland to do field recordings.

I love everything about it.

Séamus Ennis looks genuinely larger than life (which, by all accounts, he was). And just look at the length of those fingers! Meanwhile Jean Ritchie is equally indomitable, just as much a part of the story as the musician she’s there to record.

Both of them have expressions that convey how intent they are on their machines—Ennis’s uilleann pipes and Ritchie’s tape recorder. It’s positively steampunk!

What a perfect snapshot of tradition and technology meeting slap bang in the middle of the twentieth century.

Maybe that’s why I love it so much. One single photograph is filled with so much that’s dear to me—traditional Irish music meets long-term archival preservation.

Television

What a time, as they say, to be alive. The Situation is awful in so many ways, and yet…

In this crisis, there is also opportunity—the opportunity to sit on the sofa, binge-watch television and feel good about it! I mean just think about it: when in the history of our culture has there been a time when the choice between running a marathon or going to the gym or staying at home watching TV can be resolved with such certitude? Stay at home and watch TV, of course! It’s the only morally correct choice. Protect the NHS! Save lives! Gorge on box sets!

What you end up watching doesn’t really matter. If you want to binge on Love Island or Tiger King, go for it. At this moment in time, it’s all good.

I had an ancient Apple TV device that served me well for years. At the beginning of The Situation, I decided to finally upgrade to a more modern model so I could get to more streaming services. Once I figured out how to turn off the unbelievably annoying sounds and animations, I got it set up with some subscription services. Should it be of any interest, here’s what I’ve been watching in order to save lives and protect the NHS…

Watchmen, Now TV

Superb! I suspect you’ll want to have read Alan Moore’s classic book to fully enjoy this series set in the parallel present extrapolated from that book’s ‘80s setting. Like that book, what appears to be a story about masked vigilantes packs much, much deeper themes. I have a hunch that if Moore himself were forced to watch it, he might even offer some grudging approval.

Devs, BBC iPlayer

Ex Machina meets The Social Network in Alex Garland’s first TV show. I was reading David Deutsch while I was watching this, which felt like getting an extra bit of world-building. I think this might have worked better in the snappier context of a film, but it makes for an enjoyable saunter as a series. Style outweighs substance, but the style is strong enough to carry it.

Breeders, Now TV

Genuinely hilarious. Watch the first episode and see how many times you laugh guiltily. It gets a bit more sentimental later on, but there’s a wonderfully mean streak throughout that keeps the laughter flowing. If you are a parent of small children though, this may feel like being in a rock band watching Spinal Tap—all too real.

The Mandalorian, Disney Plus

I cannot objectively evaluate this. I absolutely love it, but that’s no surprise. It’s like it was made for me. The execution of each episode is, in my biased opinion, terrific. Read what Nat wrote about it. I agree with everything they said.

Westworld, Now TV

The third series is wrapping up soon. I’m enjoying it immensely. It’s got a real cyberpunk sensibility; not in a stupid Altered Carbon kind of way, but in a properly Gibsonian, noirish kind of fun. Like Devs, it’s not as clever as it thinks it is, but it’s thoroughly entertaining all the same.

Tales From The Loop, Amazon Prime

The languid pacing means this isn’t exactly a series of cliffhangers, but it will reward you for staying with it. It avoids the negativity of Black Mirror and instead maintains a more neutral viewpoint on the unexpected effects of technology. At its best, it feels like an updated take on Ray Bradbury’s stories of smalltown America (like the episode directed by Jodie Foster featuring a cameo by Shane Carruth—the time traveller’s time traveller).

Years and Years, BBC iPlayer

A near-future family and political drama by Russell T Davies. Subtlety has never been his strong point and the polemic aspects of this are far too on-the-nose to take seriously. Characters will monologue for minutes while practically waving a finger at you out of the television set. But it’s worth watching for Emma Thompson’s performance as an all-too-believable populist politician. Apart from a feelgood final episode, it’s not light viewing so maybe not the best quarantine fodder.

For All Mankind, Apple TV+

An ahistorical space race that’s a lot like Mary Robinette Kowal’s Lady Astronaut books. The initial premise—that Alexei Leonov beats Neil Armstrong to a moon landing—is interesting enough, but it really picks up from episode three. Alas, that momentum isn’t maintained for the whole series; it reverts to a more standard kind of drama from about halfway through. Still worth seeing though. It’s probably the best show on Apple TV+, but that says more about the paucity of the selection on there than it does about the quality of this series.

Avenue Five, Now TV

When it’s good, this space-based comedy is chucklesome, but it kind of feels like Armando Iannucci lite.

Picard, Amazon Prime

It’s fine. Michael Chabon takes the world of Star Trek in some interesting directions, but it never feels like it’s allowed to veer too far away from the established order.

The Outsider, Now TV

A tense and creepy Stephen King adaptation. I enjoyed the mystery of the first few episodes more than the later ones. Once the supernatural rules are established, it’s not quite as interesting. There are some good performances here, but the series gives off a vibe of believing it’s more important than it really is.

Better Call Saul, Netflix

The latest series (four? I’ve lost count) just wrapped up. It’s all good stuff, even knowing how some of the pieces need to slot into place for Breaking Bad.

Normal People, BBC iPlayer

I heard this was good so I went to the BBC iPlayer app and hit play. “Pretty good stuff”, I thought after watching that episode. Then I noticed that it said Episode Twelve. I had watched the final episode first. Doh! But, y’know, watching from the start, the foreknowledge of how things turn out isn’t detracting from the pleasure at all. In fact, I think you could probably watch the whole series completely out of order. It’s more of a tone poem than a plot-driven series. The characters themselves matter more than what happens to them.

Hunters, Amazon Prime

A silly 70s-set jewsploitation series with Al Pacino. The enjoyment comes from the wish fulfillment of killing nazis, which would be fine except for the way that the holocaust is used for character development. The comic-book tone of the show clashes very uncomfortably with that subject matter. The Shoah is not a plot device. This series feels like what we would get if Tarantino made television (and not in a good way).

Modified machete

The Rise Of Skywalker arrives on Disney Plus on the fourth of May (a date often referred to as Star Wars Day, even though May 25th is and always will be the real Star Wars Day). Time to begin a Star Wars movie marathon. But in which order?

Back when there were a mere two trilogies, this was already a vexing problem if someone were watching the films for the first time. You could watch the six films in episode order:

  1. The Phantom Menace
  2. Attack Of The Clones
  3. Revenge Of The Sith
  4. A New Hope
  5. The Empire Strikes Back
  6. Return Of The Jedi

But then you’re spoiling the grand reveal in episode five.

Alright then, how about release order?

  1. A New Hope
  2. The Empire Strikes Back
  3. Return Of The Jedi
  4. The Phantom Menace
  5. Attack Of The Clones
  6. Revenge Of The Sith

But then you’re front-loading the big pay-off, and you’re finishing with a big set-up.

This conundrum was solved with the machete order. It suggests omitting The Phantom Menace, not because it’s crap, but because nothing happens in it that isn’t covered in the first five minutes of Attack Of The Clones. The machete order is:

  1. A New Hope
  2. The Empire Strikes Back
  3. Attack Of The Clones
  4. Revenge Of The Sith
  5. Return Of The Jedi

It’s kind of brilliant. You get to keep the big reveal in The Empire Strikes Back, and then through flashback, you see how this came to be. Best of all, the pay-off in Return Of The Jedi has even more resonance because you’ve just seen Anakin’s downfall in Revenge Of The Sith.

With the release of the new sequel trilogy, an adjusted machete order is a pretty straightforward way to see the whole saga:

  1. A New Hope
  2. The Empire Strikes Back
  3. The Phantom Menace (optional)
  4. Attack Of The Clones
  5. Revenge Of The Sith
  6. Return Of The Jedi
  7. The Force Awakens
  8. The Last Jedi
  9. The Rise Of Skywalker

Done. But …what if you want to include the standalone films too?

If you slot them in in release order, they break up the flow:

  1. A New Hope
  2. The Empire Strikes Back
  3. The Phantom Menace (optional)
  4. Attack Of The Clones
  5. Revenge Of The Sith
  6. Return Of The Jedi
  7. The Force Awakens
  8. Rogue One
  9. The Last Jedi
  10. Solo
  11. The Rise Of Skywalker

I’m planning to watch all eleven films. This was my initial plan:

  1. Rogue One
  2. A New Hope
  3. The Empire Strikes Back
  4. The Phantom Menace
  5. Attack Of The Clones
  6. Revenge Of The Sith
  7. Solo
  8. Return Of The Jedi
  9. The Force Awakens
  10. The Last Jedi
  11. The Rise Of Skywalker

I definitely want to have Rogue One lead straight into A New Hope. The problem is where to put Solo. I don’t want to interrupt the Sith/Jedi setup/payoff.

So here’s my current plan, which I have already begun:

  1. Solo
  2. Rogue One
  3. A New Hope
  4. The Empire Strikes Back
  5. The Phantom Menace
  6. Attack Of The Clones
  7. Revenge Of The Sith
  8. Return Of The Jedi
  9. The Force Awakens
  10. The Last Jedi
  11. The Rise Of Skywalker

This way, the two standalone films work as world-building for the saga and don’t interrupt the flow once the main story is underway.

I think this works pretty well. Neither Solo nor Rogue One require any prior knowledge to be enjoyed.

And just in case you’re thinking that perhaps I’m overthinking it a bit and maybe I’ve got too much time on my hands …the world has too much time on its hands right now! Thanks to The Situation, I can not only take the time to plan and execute the viewing order for a Star Wars movie marathon, I can feel good about it. Stay home, they said. Literally saving lives, they said. Happy to oblige!

Principles and priorities

I think about design principles a lot. I’m such a nerd for design principles, I even have a collection. I’m not saying all of the design principles in the collection are good—far from it! I collect them without judgement.

As for what makes a good design principle, I’ve written about that before. One aspect that everyone seems to agree on is that a design principle shouldn’t be an obvious truism. Take this as an example:

Make it usable.

Who’s going to disagree with that? It’s so agreeable that it’s practically worthless as a design principle. But now take this statement:

Usability is more important than profitability.

Ooh, now we’re talking! That’s controversial. That’s bound to surface some disagreement, which is a good thing. It’s now passing the reversibility test—it’s not hard to imagine an endeavour driven by the opposite:

Profitability is more important than usability.

In either formulation, what makes these statements better than the bland, toothless, agreeable ones—“Usability is good!”, “Profitability is good!”—is that they introduce the element of prioritisation.

I like design principles that can be formulated as:

X, even over Y.

It’s not saying that Y is unimportant, just that X is more important:

Usability, even over profitability.

Or:

Profitability, even over usability.

Design principles formulated this way help to crystallise priorities. Chris has written about the importance of establishing—and revisiting—priorities on any project:

Prioritisation isn’t and shouldn’t be a one-off exercise. The changing needs of your customers, the business environment and new opportunities from technology mean prioritisation is best done as a regular activity.

I’ve said it many times, but one of my favourite design principles comes from the HTML design principles. The priority of constituencies (it’s got “priorities” right there in the name!):

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

Or put another way:

  • Users, even over authors.
  • Authors, even over implementors.
  • Implementors, even over specifiers.
  • Specifiers, even over theoretical purity.

When it comes to evaluating technology for the web, I think there are a number of factors at play.

First and foremost, there’s the end user. If a technology choice harms the end user, avoid it. I’m thinking here of the kind of performance tax that a user has to pay when developers choose to use megabytes of JavaScript.

Mind you, some technologies have no direct effect on the end user: build tools, version control, toolchains …all the stuff that sits on your computer and never directly interacts with users. In that situation, the wants and needs of developers can absolutely take priority.

But as a general principle, I think this works:

User experience, even over developer experience.

Sadly, I think the current state of “modern” web development reverses that principle. Developer efficiency is prized above all else. Like I said, that would be absolutely fine if we’re talking about technologies that only developers are exposed to, but as soon as we’re talking about shipping those technologies over the network to end users, it’s negligent to continue to prioritise the developer experience.

I feel like personal websites are an exception here. What you do on your own website is completely up to you. But once you’re taking a paycheck to make websites that will be used by other people, it’s incumbent on you to realise that it’s not about you.

I’ve been talking about developers here, but this is something that applies just as much to designers. But I feel like designers go through that priority shift fairly early in their career. At the outset, they’re eager to make their mark and prove themselves. As they grow and realise that it’s not about them, they understand that the most appropriate solution for the user is what matters, even if that’s a “boring” tried-and-tested pattern that isn’t going to wow any fellow designers.

I’d like to think that developers would follow a similar progression, and I’m sure that some do. But I’ve seen many senior developers who have grown more enamoured with technologies instead of homing in on the most appropriate technology for end users. Maybe that’s because in many organisations, developers are positioned further away from the end users (whereas designers are ideally being confronted with their creations being used by actual people). If a lead developer is focused on the productivity, efficiency, and happiness of the dev team, it’s no wonder that their priorities end up overtaking the user experience.

I realise I’m talking in very binary terms here: developer experience versus user experience. I know it’s not always that simple. Other priorities also come into play, like business needs. Sometimes business needs are in direct conflict with user needs. If an online business makes its money through invasive tracking and surveillance, then there’s no point in having a design principle that claims to prioritise user needs above all else. That would be a hollow claim, and the design principle would become worthless.

Because that’s the point with design principles. They’re there to be used. They’re not a nice fluffy exercise in feeling good about your work. The priority of constituencies begins, “in case of conflict” and that’s exactly when a design principle matters—when it’s tested.

Suppose someone with a lot of clout in your organisation makes a decision, but that decision conflicts with your organisation’s design principles. Instead of having an opinion-based argument about who’s right or wrong, the previously agreed-upon design principles allow you to take ego out of the equation.

Prioritisation isn’t easy, and it gets harder the more factors come into play: user needs, business needs, technical constraints. But it’s worth investing the time to get agreement on the priority of your constituencies. And then formulate that agreement into design principles.

Reading

At the beginning of the year, Remy wrote about extracting Goodreads metadata so he could create his end-of-year reading list. More recently, Mark Llobrera wrote about how he created a visualisation of his reading history. In his case, he’s using JSON to store the information.

This kind of JSON storage is exactly what Tom Critchlow proposes in his post, Library JSON - A Proposal for a Decentralized Goodreads:

Thinking through building some kind of “web of books” I realized that we could use something similar to RSS to build a kind of decentralized GoodReads powered by indie sites and an underlying easy to parse format.

His proposal looks kind of similar to what Mark came up with. There’s a title, an author, an image, and some kind of date for when you started and/or finished reading the book.

Matt then points out that RSS gets close to the data format being suggested and asks how about using RSS?:

Rather than inventing a new format, my suggestion is that this is RSS plus an extension to deal with books. This is analogous to how the podcast feeds are specified: they are RSS plus custom tags.

Like Matt, I’m in favour of re-using existing wheels rather than inventing new ones, mostly to avoid a 927 situation.

But all of these proposals—whether JSON or RSS—involve the creation of a separate file, and yet the information is originally published in HTML. Along the lines of Matt’s idea, I could imagine extending the h-entry collection of class names to allow for books (or films, or other media). It already handles images (with u-photo). I think the missing fields are the date-related ones: when you start and finish reading. Those fields are present in a different microformat, h-event, in the form of dt-start and dt-end. Maybe they could be combined:


<article class="h-entry h-event h-review">
  <h1 class="p-name p-item">Book title</h1>
  <img class="u-photo" src="image.jpg" alt="Book cover.">
  <p class="p-summary h-card">Book author</p>
  <time class="dt-start" datetime="YYYY-MM-DD">Start date</time>
  <time class="dt-end" datetime="YYYY-MM-DD">End date</time>
  <div class="e-content">Remarks</div>
  <data class="p-rating" value="5">★★★★★</data>
  <time class="dt-published" datetime="YYYY-MM-DDThh:mm">Date of this post</time>
</article>

That markup is simultaneously a post (h-entry) and an event (h-event) and you can even throw in h-card for the book author (as well as h-review if you like to rate the books you read). It can be converted to RSS and also converted to .ics for calendars—those parsers are already out there. It’s ready for aggregation and it’s ready for visualisation.

I publish very minimal reading posts here on adactio.com. What little data is there isn’t very structured—I don’t even separate the book title from the author. But maybe I’ll have a little play around with turning these h-entries into combined h-entry/event posts.

Podcasts

I’ve been on a few different podcasts recently.

The tenth episode of the Design Systems podcast is me and Chris having a back-and-forth about design systems: Overcoming Entropy and Turning Chaos Into Order:

Chris and Jeremy Keith discuss imbuing teams with a shared sense of ownership of their design system, creating design systems able to address unforeseen scenarios, design ops as an essential part of an effective design system, and more.

Gerry has started a new podcast to accompany his new book, World Wide Waste. He invited me on for the first episode: ‘We’ve ruined the Web. Here’s how we fix it.’:

Welcome to World Wide Waste, a podcast about how digital is killing the planet, and what to do about it. In this session, I’m chatting with Jeremy Keith. Jeremy is a philosopher of the internet. Every time I see him speak, I’m struck by his calming presence, his brilliant mind and his deep humanity.

We talked about performance, energy consumption, and digital preservation. We agreed on a lot, but there were also points where we fundamentally disagreed. Good stuff!

If you like the sound of some Irishmen chatting on a podcast, then as well as listening to me and Gerry getting into it, you might also enjoy the episode of The Blarney Pilgrims podcast that I was on:

Jeremy Keith is the founder and keeper of thesession.org, probably the greatest irish music resource in the world. And this episode hopefully has something of the generous essence of that archive. We flow, from The North as a different planet to Galway as the centre of the ’90s slacker world. From the one-tune-a-week origin of thesession.org and managing an online community to the richness and value of constancy.

I’ve already written about how much this meant to me.

On the same topic—Irish music on the web—I made a brief appearance in the latest episode of Shannon Heaton’s Irish Music Stories, Irish Tunes in the Key of C-19:

How are traditional musicians and dancers continuing creative careers and group music events during the Covid-19 pandemic? How is social distancing affecting the jigs and reels? In this unexpected open of Season Four of Irish Music Stories, musicians from Ireland, England, Belgium, Sweden, and the U.S. address on and offline strategies… from a safe distance.

A bit of Blarney

I don’t talk that much on here about my life’s work. Contrary to appearances, my life’s work is not banging on about semantic markup, progressive enhancement, and service workers.

No, my life’s work is connected to Irish traditional music. Not as a musician, I hasten to clarify—while I derive enormous pleasure from playing tunes on my mandolin, that’s more of a release than a vocation.

My real legacy, it turns out, is being the creator and caretaker of The Session, an online community and archive dedicated to Irish traditional music. I might occasionally mention it here, but only when it’s related to performance, accessibility, or some other front-end aspect. I’ve never really talked about the history, meaning, and purpose of The Session.

Well, if you’re at all interested in that side of my life, you can now listen to me blather on about it for over an hour, thanks to the Blarney Pilgrims podcast.

I’ve been huffduffing episodes of this podcast for quite a while now. It’s really quite excellent. If you’re at all interested in Irish traditional music, the interviews with the likes of Kevin Burke, John Carty, Liz Carroll and Catherine McEvoy are hard to beat.

So imagine my surprise when they contacted me to ask me to chat and play some tunes! It really was an honour.

I was also a bit of a guinea pig. Normally they’d record these kinds of intimate interviews face to face, but what with The Situation and all, my chat was the first remotely recorded episode.

I’ve been on my fair share of podcasts—most recently the Design Systems Podcast—but this one was quite different. Instead of talking about my work on the web, this focussed on what I was doing before the web came along. So if you don’t want to hear me talking about my childhood, give this a miss.

But if you’re interested in hearing me reminisce and discuss the origin and evolution of The Session, have a listen. The chat is interspersed with some badly-played tunes from me on the mandolin, but don’t let that put you off.