
Thursday, January 21st, 2021

Letters of exclusion

I think my co-workers are getting annoyed with me. Any time they use an acronym or initialism—either in a video call or Slack—I ask them what it stands for. I’m sure they think I’m being contrarian.

The truth is that most of the time I genuinely don’t know what the letters stand for. And I’ve got to that age where I don’t feel any inhibition about asking “stupid” questions.

But it’s also true that I really, really dislike acronyms, initialisms, and other kinds of jargon. They’re manifestations of gatekeeping. They demarcate in-groups from outsiders.

Of course if you’re in a conversation with an in-group that has the same background and context as you, then sure, you can use acronyms and initialisms with the confidence that there’s a shared understanding. But how often can you be that sure? The more likely situation—and this scales exponentially with group size—is that people have differing levels of inside knowledge and experience.

I feel sorry for anyone trying to get into the field of web performance. Not only are there complex browser behaviours to understand, there’s also a veritable alphabet soup of initialisms to memorise. Here’s a really good post on web performance by Harry, but notice how the initialisms multiply like tribbles as the post progresses until we’re talking about using CWV metrics like LCP, FID, and CLS—alongside TTFB and SI—to look at PLPs, PDPs, and SRPs. And fair play to Harry; he expands each initialism the first time he introduces it.

But are we really saving any time by saying FID instead of first input delay? I suspect that the only reason why the word “cumulative” precedes “layout shift” is just to make it into the three-letter initialism CLS.

Still, I get why initialisms run rampant in technical discussions. You can be sure that most discussions of particle physics would be incomprehensible to outsiders, not necessarily because of the concepts, but because of the terminology.

Again, if you’re certain that you’re speaking to peers, that’s fine. But if you’re trying to communicate even a little more widely, then initialisms and abbreviations are obstacles to overcome. And once you’re in the habit of using the short forms, it gets harder and harder to apply context-shifting in your language. So the safest habit to form is to generally avoid using acronyms and initialisms.

Unnecessary initialisms are exclusionary.

Think about on-boarding someone new to your organisation. They’ve already got a lot to wrap their heads around without making them figure out what a TAM is. That’s a real example from Clearleft. We have a regular Thursday afternoon meeting. I call it the Thursday afternoon meeting. Other people …don’t.

I’m trying—as gently as possible—to ensure we’re not being exclusionary in our language. My co-workers indulge me, even if it’s just to shut me up.

But here’s the thing. I remember many years back when a job ad went out on the Clearleft website that included the phrase “culture fit”. I winced and explained why I thought that was a really bad phrase to use—one that is used as code for “more people like us”. At the time my concerns were met with eye-rolls and chuckles. Now, as knowledge about diversity and inclusion has become more widespread, everyone understands that using a phrase like “culture fit” can be exclusionary.

But when I ask people to expand their acronyms and initialisms today, I get the same kind of chuckles. My aversion to abbreviations is an eccentric foible to be tolerated.

But this isn’t about me.

Wednesday, January 20th, 2021

Get safe

The verbs of the web are GET and POST. In theory there’s also PUT, DELETE, and PATCH but in practice POST often does those jobs.

I’m always surprised when front-end developers don’t think about these verbs (or request methods, to use the technical term). Knowing when to use GET and when to use POST is crucial to having a solid foundation for whatever you’re building on the web.

Luckily it’s not hard to know when to use each one. If the user is requesting something, use GET. If the user is changing something, use POST.

That’s why links are GET requests by default. A link “gets” a resource and delivers it to the user.

<a href="/items/id">

Most forms use the POST method because they’re changing something—creating, editing, deleting, updating.

<form method="post" action="/items/id/edit">

But not all forms should use POST. A search form should use GET.

<form method="get" action="/search">
<input type="search" name="term">

When a user performs a search, they’re still requesting a resource (a page of search results). It’s just that they need to provide some specific details for the GET request. Those details get translated into a query string appended to the URL specified in the action attribute.

/search?term=value

I sometimes see the GET method used incorrectly:

  • “Log out” links that should be forms with a “log out” button—you can always style it to look like a link if you want.
  • “Unsubscribe” links in emails that immediately trigger the action of unsubscribing instead of going to a form where the POST method does the unsubscribing. I realise that this turns unsubscribing into a two-step process, which is a bit annoying from a usability point of view, but a destructive action should never be baked into a GET request.
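
For example, here’s how a log-out action could be marked up as a form and styled to look like a link. This is just a sketch: the /logout endpoint and the class name are hypothetical.

<!-- POST, because logging out changes state on the server -->
<form method="post" action="/logout">
<button type="submit" class="link-lookalike">Log out</button>
</form>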

When it was first created, the World Wide Web was stateless by design. If you requested one web page, and then subsequently requested another web page, the server had no way of knowing that the same user was making both requests. After serving up a page in response to a GET request, the server promptly forgot all about it.

That’s how web browsing should still work. In fact, it’s one of the Web Platform Design Principles: It should be safe to visit a web page:

The Web is named for its hyperlinked structure. In order for the web to remain vibrant, users need to be able to expect that merely visiting any given link won’t have implications for the security of their computer, or for any essential aspects of their privacy.

The expectation of safe stateless browsing has been eroded over time. Every time you click on a search result in Google, or you tap on a recommended video in YouTube, or—heaven help us—you actually click on an advertisement, you just know that you’re adding to a dossier of your online profile. That’s not how the web is supposed to work.

Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies? With POST actions—fave, rate, save—the user is in charge. With GET requests, no one is supposed to be in charge—it’s meant to be a neutral transaction. Alas, the reality of today’s web is that many GET requests give more power to the dossier-building servers at the expense of the user’s agency.

The very first of the Web Platform Design Principles is Put user needs first:

If a trade-off needs to be made, always put user needs above all.

The current abuse of GET requests is damage that the web needs to route around.

Browsers are helping to a certain extent. Most browsers have the concept of private browsing, allowing you some level of statelessness, or at least time-limited statefulness. But it’s kind of messed up that private browsing is the exception, while surveillance is the default. It should be the other way around.

Firefox and Safari are taking steps to reduce tracking and fingerprinting. Rejecting third-party cookies by default is a good move. I’d love it if third-party JavaScript were also rejected by default:

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

I imagine if JavaScript were being specced today, it would almost certainly be restricted to the same origin by default.

Chrome has different priorities, which is understandable given that it comes from a company with a business model that is currently tied to tracking and surveillance (though it needn’t remain that way). With anti-trust proceedings rumbling in the background, there’s talk of breaking up Google to avoid monopolistic abuses of power. I honestly think it would be the best thing that could happen to Chrome if it were an independent browser that could fully focus on user needs without having to consider the surveillance needs of an advertising broker.

But we needn’t wait for the browsers to make the web a safer place for users.

Developers write the code that updates those dossiers. Developers add those oh-so-harmless-looking third-party scripts to page templates.

What if we refused?

Front-end developers in particular should be the last line of defence for users. The entire field of front-end development is supposed to be predicated on the prioritisation of user needs.

And if the moral argument isn’t enough, perhaps the technical argument can get through. Tracking users based on their GET requests violates the very bedrock of the web’s architecture. Stop doing that.

Monday, January 18th, 2021

A minimum viable experience makes for a resilient, inclusive website or app - Post - Piccalilli

The whole idea of progressive enhancement is using the power that the web platform gives us for free—specifically, HTML, CSS and JavaScript—to provide a baseline experience for the people who visit our sites and/or apps, and then build on that where appropriate and necessary, depending on the capabilities of the technology that they are using.

React Bias

Dev perception.

The juxtaposition of The HTTP Archive’s analysis and The State of JS 2020 Survey results suggest that a disproportionately small—yet exceedingly vocal minority—of white male developers advocate strongly for React, and by extension, a development experience that favors thick client/thin server architectures which are given to poor performance in adverse conditions. Such conditions are less likely to be experienced by white male developers themselves, therefore reaffirming and reflecting their own biases in their work.

Saturday, January 16th, 2021

Responsible Web Applications

An excellent collection of advice and examples for making websites responsive and accessible (responsive + accessible = responsible).

Nested Media Queries – Bram.us

Huh. I don’t think I ever thought about nesting media queries …and yet I’m pleasantly surprised that it works!
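
For anyone else who hadn’t considered it, here’s roughly what nesting looks like. This is just a sketch: the breakpoints and the selector are made up.

<style>
@media (min-width: 30em) {
  /* everything in here applies from 30em up… */
  @media (orientation: landscape) {
    /* …and this nested block only applies in landscape as well */
    .sidebar { display: block; }
  }
}
</style>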

Enable/unmute WebAudio on iOS, even while mute switch is on

Remember when I wrote about Web Audio weirdness on iOS? Well, this is a nice little library that wraps up the same hacky solution that I ended up using.

It’s always gratifying when something you do—especially something that feels so hacky—turns out to be independently invented elsewhere.

HTML Video Sources Should Be Responsive | Filament Group, Inc.

Removing media support from HTML video was a mistake.

Damn right! It was basically Hixie throwing a strop, trying to sabotage responsive images. Considering how hard it usually is to remove a shipped feature from browsers, it’s bizarre that a good working feature was pulled out of production.
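
For context, this is the kind of markup the article argues for. It’s a sketch: the file names and breakpoints are made up.

<video controls>
<!-- a smaller file for smaller viewports, a bigger file for bigger ones -->
<source src="clip-small.mp4" type="video/mp4" media="(max-width: 599px)">
<source src="clip-large.mp4" type="video/mp4" media="(min-width: 600px)">
</video>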

Wednesday, January 13th, 2021

axe-con Digital Accessibility Conference | Deque

This looks like it’ll be a good event: a keynote from Vint Cerf and talks from Val Head, Rachel Andrew, Sara Soueidan, and others.

Best of all, it’s free!

Friday, January 8th, 2021

Sticky CSS Grid Items | Melanie Richards

This is a useful technique that future me is almost certainly going to need at some point.

Wednesday, January 6th, 2021

Should The Web Expose Hardware Capabilities? — Smashing Magazine

This is a very thoughtful and measured response to Alex’s post Platform Adjacency Theory.

Unlike Alex, the author doesn’t fire off cheap shots.

Also, I’m really intrigued by the idea of certificate authorities for hardware APIs.

Tuesday, January 5th, 2021

new.css

A minimal style sheet that applies some simple rules to HTML elements so you can take a regular web page and drop in this CSS to spruce it up a bit.

Is Progressive Enhancement Dead Yet? (Webbed Briefs)

Heydon’s newest short video is right up my alley.

Monday, January 4th, 2021

Speaking online

I really, really missed speaking at conferences in 2020. I managed to squeeze in just one meatspace presentation before everything shut down. That was in Nottingham, where Remy and I reprised our double-bill talk, How We Built The World Wide Web In Five Days.

That was pretty much all the travelling I did in 2020, apart from a joyous jaunt to Galway to celebrate my birthday shortly before the Nottingham trip. It’s kind of hilarious to look at a map of the entirety of my travel in 2020 compared to previous years.

Mind you, one of my goals for 2020 was to reduce my carbon footprint. Mission well and truly accomplished there.

But even when travel was out of the question, conference speaking wasn’t entirely off the table. I gave a brand new talk at An Event Apart Online Together: Front-End Focus in August. It was called Design Principles For The Web and I’ve just published a transcript of the presentation. I’m really pleased with how it turned out and I think it works okay as an article as well as a talk. Have a read and see what you think (or you can listen to the audio if you prefer).

Giving a talk online is …weird. It’s very different from public speaking. The public is theoretically there but you feel like you’re just talking at your computer screen. If anything, it’s more like recording a podcast than giving a talk.

Luckily for me, I like recording podcasts. So I’m going to be doing a new online talk this year. It will be at An Event Apart’s Spring Summit which runs from April 19th to 21st. Tickets are available now.

I have a pretty good idea what I’m going to talk about. Web stuff, obviously, but maybe a big picture overview this time: the past, present, and future of the web.

Time to prepare a conference talk.

Design Principles For The Web

The opening presentation from An Event Apart Online Together: Front-End Focus held online in August 2020.

I’d like to take you back in time, just over 100 years, to the beginning of World War One. It’s 1914. The United States would take another few years to join, but the European powers were already at war in the trenches, as you can see here.

A drawing of early trench warfare with soldiers wearing winter coats and peaked caps.

What I want to draw your attention to is what they’re wearing, specifically what they’re wearing on their heads. This is the standard issue for soldiers at the beginning of World War One, a very fetching cloth cap. It looks great. Not very effective at stopping shrapnel from ripping through flesh and bone.

It wasn’t long before these cloth caps were replaced with metal helmets; much sturdier, much more efficient at protection. This is the image we really associate with World War One; soldiers wearing metal helmets fighting in the trenches.

A photograph of three allied soldiers sitting together for a meal in a trench wearing camouflage uniforms and metal helmets.

Now, an interesting thing happened after the introduction of these metal helmets. If you were to look at the records from the field hospitals, you would see that there was an increase in the number of patients being admitted with severe head injuries after the introduction of these metal helmets. That seems odd, and the only conclusion we could draw seems to be that cloth caps are actually better than metal helmets at stopping these kinds of injuries. But that wouldn’t be correct.

You can see the same kind of data today. In any state that introduces motorcycle helmet laws making it mandatory to wear a helmet, you will see an increase in the number of emergency room admissions for severe head injuries among motorcyclists.

Now, in both cases, what’s missing is the complete data set because, yes, while in World War One there was an increase in the field hospital admissions for head injuries, there was a decrease in deaths. Just as today, if there’s an increase in emergency room admissions for severe head injuries because of motorcycle helmets, you will see a decrease in the number of people going to the morgue.

Analytics

I kind of like these stories of analytics where there’s a little twist in the tale where the obvious solution turns out not to be the correct answer and our expectations are somewhat subverted. My favorite example of analytics on the web comes from a little company called YouTube. This is from a few years back.

It was documented by an engineer at YouTube called Chris Zacharias. He blogged about this. He was really frustrated with the page weight on a YouTube video page, which at the time was 1.2 megabytes. That’s without the video. That’s the HTML, the CSS, the JavaScript, the images. This just seemed too big (and I would agree: it is too big).

Chris set about working on making a smaller version of a video page. He called this Project Feather. He worked and worked at it, and he managed to get a page down to just 98 kilobytes, so from 1.2 megabytes to 98 kilobytes. That’s an order of magnitude difference.

Then he set about shipping this to different segments of the audience and watching the analytics to see what rolled in. He was hoping to see a huge increase in the number of people engaging with the content. But here’s what he blogged.

The average aggregate page latency under Feather had actually increased. I had decreased the total page weight and number of requests to a tenth of what they were previously and somehow the numbers were showing that it was taking longer for videos to load on Feather, and this could not be possible. Digging through the numbers more (and after browser testing repeatedly), nothing made sense.

I was just about to give up on the project with my world view completely shattered when my colleague discovered the answer: geography. When we plotted the data geographically and compared it to our total numbers (broken out by region), there was a disproportionate increase in traffic from places like Southeast Asia, South America, Africa, and even remote regions of Siberia.

A further investigation revealed that, in those places, the average page load time under Feather was over two minutes. That means that a regular video page (at over a megabyte) was taking over 20 minutes to load.

Again, what was happening here was that there was a whole new set of data. There were people who literally couldn’t access YouTube before, because the page would take 20 minutes to load, and who now, because of Project Feather, were able to access YouTube for the first time. What that looked like, according to the analytics, was that page load time had overall gone up. What was missing was the full data set.

Expectations

I really like these stories that kind of play with our expectations. When the reveal comes, it’s almost like hearing the punchline to a joke, right? Your expectations are set up and then subverted.

Jeff Greenspan is a comedian who talks about this. He talks about expectations in terms of music and comedy. He points out that they both deal with expectations over time.

In music, the pleasure comes from your expectations being met. A song sets up a rhythm. When that rhythm is met, that’s pleasurable. A song is using a particular scale and when those notes on that scale are hit, it’s pleasurable. Music that’s not fun to listen to tends to be arrhythmic and atonal, where you can’t really get a handle on what’s going to come next.

Comedy works the other way where it sets up expectations and then pulls the rug out from under you — the surprise.

Now, you can use music and you can use comedy in your designs. If you were setting up a lovely grid and a vertical rhythm, that’s like music. There’s a lovely, predictable feeling to that. But you can also introduce a bit of comedy: something that peeks out from the grid, something that just occasionally upsets things with a bit of subverted expectations.

You don’t want something that’s all music. Maybe that’s a little boring. You don’t want something that’s all comedy because then it’s just crazy and hard to get a handle on.

You can see music and comedy in how you consume news. You notice that when you read your news sources, all it does is confirm what you already believe. You read something about someone, and you think, “Yes, they’ve done something bad and I always thought they were bad, so that has confirmed my expectations.” It’s like music.

I read something that somebody has done and I always thought they were a good person. This now confirms that they are a good person. That is music to my ears. If your news feels like that, feels like music, then you may be in a bubble.

The comedy approach to news would be more like the clickbait you see at the bottom of the Internet where it’s like, “Click here. You won’t believe what these child stars look like now.” The promise there is that we will subvert your expectations, and that’s where the pleasure will come from.

Survivorship bias

My favorite story from history about analytics is not from World War One but from the sequel, World War Two, where again the United States were a few years late to this world war. But when they did arrive and started their bombing raids on Germany, they were coming from England. The bombers would come back all shot up, and so there was a whole think tank dedicated to figuring out how to reinforce these planes in certain areas.

You can’t reinforce the whole plane. That would make it too heavy, but you could apply some judicious use of metal reinforcement to protect the plane.

A scatterplot diagram of an airplane showing bullet holes concentrated in the middle of the fuselage, the wings, and the tail.

They treated this as a data problem, as an analytics problem. They looked at the planes coming back. They plotted where the bullet holes were, and that led them to conclude where they should put the reinforcements. You can see here that the wings were getting all shot up, the middle of the fuselage, so clearly that’s where the reinforcements should go.

There was a statistician, a mathematician named Abraham Wald. He looked at the exact same data and he said, “No, we need to reinforce the front of the plane where there are no bullet holes. We need to reinforce the back of the fuselage where there are no bullet holes.”

What he realized was that all the data they were seeing was actually a subset of the complete data set. They were only seeing the planes that made it back. What was missing were all the planes that got shot down. If all the planes that made it back didn’t have any bullet holes in the front of the plane, then you could probably conclude that if you get a bullet hole in the front of the plane, you’re not going to make it back. This became the canonical example of what we now call survivorship bias, which is this tendency to look at the subset of data — the winners.

You see survivorship bias all the time. You walk into a bookstore and you look at the business section and its books by successful business people; that’s survivorship bias. Really, the whole section should be ten times as big and feature ten times as many books written by people who had unsuccessful businesses because that would be a much more representative sample.

We see survivorship bias. You go onto Instagram and you look at people’s Instagram photos. Generally, they’re posting their best life, right? It’s the perfect selfie. It’s the perfect shot. It’s not a representative sample of what somebody’s life looks like. That’s survivorship bias.

Design systems

We have a tendency to do it on the web, too, when people publish their design systems. Don’t get me wrong. I love the fact that companies are making their design systems public. It’s something I’ve really lobbied for. I’ve encouraged people to do this. Please, if you have a design system, make it public so we can all learn from it.

I really appreciate that people do that, but they do tend to wait until it’s perfect. They tend to wait until they’ve got the success.

What we’re missing are all the stories of what didn’t work. We’re missing the bigger picture of the things they tried that just failed completely. I feel like we could learn so much from that. I feel like we can learn as much from anti-patterns as we can from patterns, if not more so.

Robin Rendle talked about this in a blog post recently about design systems. He said:

The ugly truth is that design systems work is not easy. What works for one company does not work for another. In most cases, copying the big tech company of the week will not make a design system better at all. Instead, we have to acknowledge how difficult our work is collectively. Then we have to do something that seems impossible today—we must publicly admit to our mistakes. To learn from our community, we must be honest with one another and talk bluntly about how we’ve screwed things up.

I completely agree. I think that would be wonderful if we shared more openly. I do try to encourage people to share their stories, successes, and failures.

I organized a conference a few years back all about design systems called Patterns Day and invited the best and brightest: Alla Kholmatova, Jina Anne, Paul Lloyd, Alice Bartlett – all these wonderful people. It was wonderful to hear people come up and sort of reassure you, “Hey, none of us have got this figured out. We’re all trying to figure out what we’re doing here.” The audience really needed to hear that. They really needed to hear that reassurance that this is hard.

Gaps and overlaps

I did Patterns Day again last year. My favorite talk at Patterns Day last year, I think, was probably from Danielle Huntrods. I’m biased here because I used to work with Danielle. She used to work at Clearleft, and she’s an absolutely brilliant front-end developer.

She had this lens that she used when she was talking about design systems and other things. She talked about gaps and overlaps, which is one of those things that’s lodged in my brain. I kind of see it everywhere.

She said that when you’re categorizing things, you’re putting things into categories, that means some things will fall between those categories. That leaves you with the gaps, the things that aren’t being covered. It’s almost like Donald Rumsfeld, the unknown unknowns and all that.

What can also happen when you put things into categories is you get these overlaps where there’s duplication; two things are responsible for the same task. This duplication of effort, of course, is what we’re trying to avoid with design systems. We’re trying to be efficient. We don’t want multiple versions of the same thing. We want to be able to reuse one component. There’s a danger there.

She’s saying what we do with the design system is we concentrate on cataloging these components. We do our interface inventory, but we miss the connective part. We miss the gaps between the components. Really, what makes something a system is not so much a collection of components but how those components fit together, those gaps between them.

Fluffy edges

Danielle went further. She didn’t just talk about gaps and overlaps in terms of design systems and components. She talked about it in terms of roles and responsibilities. If you have two people who believe they’re responsible for the same thing, that’s going to lead to a clash.

Worse, you’re working on a project and you find out that there was nobody responsible for doing something. It’s a gap. Everyone assumes that the other person was responsible for getting that thing done.

“Oh, you’re not doing that?” “I thought you were doing that.” “Oh, I thought you were doing that.”

This is the source of so much frustration in projects, either these gaps or these overlaps in roles and responsibilities. Whenever we start a project at Clearleft, we spend quite a bit of time getting this role mapping correct, trying to make sure there aren’t any gaps and there aren’t any overlaps. Really, it’s about surfacing those assumptions.

“Oh, I assumed I was responsible for that.” “No, no. I assumed I was the one who would be doing that.”

We clarify this stuff as early as possible in the design process. We even have a game we play called Fluffy Edges. It’s literally like a card game. We’d ask these questions, “Who is responsible for this? Who is going to do this?” It’s kind of good fun, but really it is about surfacing those assumptions and getting clarity on the roles at the beginning of the design process.

The design process

Now, the design process, I’m talking about the design process like it’s this known thing and it really isn’t. It’s a notoriously difficult thing to talk about the design process.

A squiggle that starts big and messy on the left but resolves into a straight line on the right.

Here’s one way of thinking about the design process. This is The Design Squiggle by Damien Newman. He used to be at IDEO. I actually think this is a pretty accurate representation of what the design process feels like for an individual designer. You go into the beginning and it’s chaos, it’s a mess, and it’s entropy. Then, over time, you begin to get a handle on things until you get to this almost inevitable result at the end.

I’m not sure it’s an accurate representation of what the collaborative design process feels like. There’s a different diagram that resonates a lot with us at Clearleft, which is the Double Diamond diagram from Chris Vanstone at the Design Council. The way of thinking about the Double Diamond is almost like it’s two design squiggles back-to-back.

Two back-to-back diamond shapes. The first diamond is labelled with the words discover and define. The second diamond is labelled with the words execute and deliver.

It’s a bit of an oversimplification, but the idea is that the design process is split into these triangles. First, it’s the discovery. Then we define. So we’re going out wide with discovery. Then we narrow it down with the definition. Then it’s time to build a thing and we open up wide again to figure out how we’re going to execute this thing. Once we got that figured out, we narrow down into the delivery phase.

The way of thinking about this is the first diamond (discovery and definition), that’s about building the right thing. Make sure you’re building the right thing first. The second diamond (about execution and delivery), that’s about building the thing right. Building the right thing and building the thing right.

The important thing is they follow this pattern of going wide and going narrow. This divergent phase with discovery and then convergent for definition. There’s a divergent phase for execution and then convergent for delivery.

If you take nothing else from the Double Diamond approach, it’s this way of making explicit when you’re in a divergent or convergent phase. Again, it’s kind of about surfacing that assumption. “Oh, I assumed we were converging.” “No, no, no. We are diverging here.” That’s super, super useful.

I’ll give you an example. If you are in a meeting, at the beginning of the meeting, state whether it’s a divergent meeting or a convergent meeting. If you were in a meeting where the idea is to generate as many ideas as possible during a meeting, make that clear at the beginning because what you don’t want is somebody in the meeting who thinks the point is to converge on a solution.

You’ve got these people generating ideas and then there’s one person going, “No, that will never work. Here’s why. Oh, that’s technically impossible. Here’s why.” No, if you make it clear at the start, “There are no bad ideas. We’re in a divergent meeting,” everyone is on the same page.

Conversely, if it’s a convergent meeting, you need to make that clear and say, “The point of this meeting is that we come to a decision, one decision,” and you need to make that clear because what you don’t want in a convergent meeting is it’s ten minutes to launch time, converging on something, and then somebody in the meeting goes, “Hey, I just had an idea. How about if we…?” You don’t want that. You don’t want that.

If you take nothing else from this, this idea of making divergence and convergence explicit is really, really, really useful. Again, like I say, this pattern of just assumptions being surfaced is so useful.

This initial diamond of the Double Diamond phase, it’s where we spend a lot of our time at Clearleft. I think, early in the years of Clearleft, we spent more time on the second diamond. We were more about execution and delivery. Now, I feel like we deliver a lot more value in the discovery and definition phase of the design process.

There’s so much we do in this initial discovery phase. I mentioned already we have this fluffy edges game we play for role mapping to figure out the roles and responsibilities. We have things like a project canvas we use to collaborate with the clients to figure out the shape of what’s to come.

We sometimes run an exercise called a pre-mortem. I don’t know if you’ve ever done that. It’s like a post-mortem except you do it at the beginning of the project. It’s a kind of scenario planning.

You say, “Okay, it’s so many months after the launch and it’s been a complete disaster. What went wrong?” You map that out. You talk about it. Then once you’ve got that mapped out, you can then take steps to avoid that disaster happening.

Of course, what we do in the discovery phase, almost more than anything else, is research. You can’t go any further without doing the research.

Assumptions

All of these things, all of these exercises, these ways of working are about dealing with assumptions, either surfacing assumptions that we didn’t know were there or turning assumptions into hypotheses that can be tested. If you think about what an assumption is, it kind of goes back to expectations that I was talking about.

Assumptions are expectations plus internal biases. That gives you an assumption. The things that you don’t even realize you believe; they lead to assumptions. This can obviously be very bad. This is like you’ve got blind spots in your assumptions because of your own biases that you didn’t even realize you had.

They’re not necessarily bad things. Assumptions aren’t necessarily bad. If you think about your expectations plus your biases, that’s another way of thinking about your values. What do you hold to be really dear to you? The things that are self-evident to you, those are your values, your internal expectations and biases.

Values

Now, at Clearleft, we have our company values, our core values, the things we believe. I am not going to share the Clearleft values with you. There are two reasons for that.

One is that they’re Clearleft’s values. They are useful for us. That’s for us to know internally.

Secondly, there’s nothing more boring than a company sharing their values with you. I say nothing more boring. Maybe the only thing more boring than a company sharing their values is when a so-called friend tells you about a dream they had and you have to sit there and smile and nod politely while they tell you about something that is only of interest to them.

Purpose

These values are essentially what give you purpose, whether it’s at an individual level, where your personal moral values give you your purpose, or at a company or organization level, or in any endeavor. Think about the founding of a nation-state like the United States of America. You’ve got the Declaration of Independence. That encodes the values. That has the purpose. It’s literally saying, “We hold these truths to be self-evident.” Those are assumptions. Your purpose is something like the Declaration of Independence.

Principles

Then you get the principles, how you’re going to act. The Constitution would be an example of a collection of principles. These principles must be influenced by the purpose. Your values must influence the principles you’re going to use to act in the world.

Patterns

Then those principles have an effect on the final patterns, the outputs that you’ll see. In the case of a nation-state like America, I would say the patterns are the laws that you end up with. Those laws come from the principles encoded in the Constitution, and those principles are, in turn, influenced by and encoded from the purpose in the Declaration of Independence.

The purpose influences the principles. The principles influence the patterns. This would be true in the case of software as well. Think of the patterns as the final interface elements, the user interface. Those are the patterns. Those have been influenced by the principles of that company, how they choose to act, and those principles are influenced by the purpose of that company and what they believe.

Design principles

This is why I find principles, in particular, to be fascinating because they sit in the middle. They are influenced by the purpose and they, in turn, influence the patterns. I’m talking about design principles, something I’m really into. I’m so into design principles, I actually have a website dedicated to design principles at principles.adactio.com.

Now, all I do on this website is collect design principles. I don’t pass judgment. I don’t say whether I think they’re good design principles or bad design principles. I just document them. That’s turned out to be a good thing to do over time because sometimes design principles disappear, go away, or get changed. I’ve got a record of design principles from the past.

For example, Google used to have a set of principles called Ten Things We Know to Be True — we know to be true, right? We hold these truths to be self-evident. That’s no longer available on the Google website, those ten things, those ten principles. One of them was, “You can make money without doing evil.” Like I said, that’s gone now. That’s not available on the Google website.

There was another set of design principles from Google that’s also not available anymore. That was called Ten Principles That Contribute to a Googley User Experience. I think we understand why those are no longer available. The sheer embarrassment of saying the word Googley out loud, I think.

I’ll tell you something I notice when I see design principles. Like I say, I catalog them without judgment, but I do have ideas. I think about what makes for good or bad design principles or sets of design principles.

Whenever I see somebody with a list that’s exactly ten principles, I’m suspicious. Like, “Really? That’s such a convenient round number. You didn’t have nine principles that contribute to a Googley user experience? You didn’t have 11 things that we know to be true? It happened to be exactly ten?” It feels almost like a bad code smell to me that it’s exactly ten principles.

Even some great design principles, like those from Dieter Rams, the brilliant designer. He has a fantastic set of design principles called Ten Principles for Good Design. But even there I have to think, “Hmm. That’s a bit convenient, isn’t it, that it’s exactly ten principles for good design? Isn’t it, Dieter?”

Now, just in case you think I’m being blasphemous by suggesting that Dieter Rams’ Ten Principles for Good Design is not a good set of design principles, I am not being blasphemous. I would be blasphemous if I pointed out that in the Old Testament, God supposedly delivers 10 commandments, not 9, not 11, exactly 10 commandments. Really, Moses, ten?

Anyway, what I’m talking about here is, like I say, almost like these code smells for design principles. Can we evaluate design principles? Are there heuristics for saying whether a design principle is a good design principle or a bad design principle?

Universal principles

To get meta about this, what I’m talking about is, are there design principles for design principles? I kind of think there are. I think you can evaluate design principles and say that’s a good one or that’s a bad one. You can evaluate them by how useful they are.

Let’s take an example. Let’s say you’ve got a design principle like this:

Make it usable.

That’s a design principle. I think this is a bad design principle. It’s not because I don’t agree with it. It’s actually a bad design principle because I agree with it and everyone agrees with it. It’s so agreeable that it’s hard to argue with and that’s not what a design principle is for.

Design principles aren’t these things to go, “Rah-rah! Yes! I feel good about this.” They are there to kind of surface stuff and have discussions, have disagreements – get it out in the open. Let’s say we took this design principle, “make it usable”, and it was rephrased to something more contentious. Let’s say somebody had the design principle like:

Usability is more important than profitability.

Ooh! Now we’re talking.

See, I think this is a good design principle. I’m not saying I agree with it. I’m saying it’s a good design principle because what it has now is priority.

We’re saying something is valued more than something else and that’s what you want from design principles is to figure out what the priorities of this organization are. What do they value? How are they going to behave?

I think this is a great phrasing for design principles. If you can phrase a design principle like this:

___, even over ___

Then that’s really going to make it clear what your values are. You can phrase a design principle as:

Usability, even over profitability.

That’s good.

Now you can have that discussion early on about whether everyone is on board with that. If there’s disagreement, you need to hammer that out and figure it out early on in the process.

Here’s another thing about this phrasing that I really like, “blank, even over blank.” It passes another test of a good design principle, which is reversibility. Rather than being a universal thing, a design principle should be reversible for a different organization.

One organization might have a design principle that says “usability, even over profitability,” and another organization, you can equally imagine having a design principle that says, “profitability, even over usability.” The fact that this principle is reversible like that is a good thing. That shows that it’s an effective design principle because it’s about priorities.

My favorite design principle of all—because I’m such a nerd for design principles, I do have a favorite—is from the HTML design principles. It’s called The Priority of Constituencies. It states:

In case of conflict, consider users over authors over implementors over theoretical purity.

That’s so good.

First of all, it just starts with, “In case of conflict.” Yes! That is exactly what design principles are for. Again, they’re not there to be like, “Rah-rah! Feel-good design principles.” No, they are there to sort out conflict.

Then, “consider users over authors.” That’s like:

Users, even over authors. Authors, even over implementors. Implementors, even over theoretical purity.

Really good stuff.

There are, I think, design principles for design principles, these kind of smell tests that you can run your design principles past and see if they pass or fail.

I talked about how design principles are unique to the organization. The reversibility test kind of helps with that. You can imagine a different organization that has the complete opposite design principles to you.

Eponymous laws

I do wonder: are there some design principles that are truly universal? Well, there’s kind of a whole category of principles that we treat as universal truths. That’s kind of these laws. They tend to be the eponymous laws. They’re usually named after a person and there’s some kind of universal truth. There are a lot of them out there.

Hofstadter’s law

Hofstadter’s law, that’s from Douglas Hofstadter. Hofstadter’s law states:

It always takes longer than you expect, even when you take into account Hofstadter’s law.

That does sound like a universal truth and certainly, my experience matches that. Yeah, I would say Hofstadter’s law feels like a universal design principle.

Sturgeon’s law

90% of everything is crap.

Theodore Sturgeon was a science fiction writer and people would pooh-pooh science fiction and point out that it was crap. He would say, “Yeah, but 90% of science fiction is crap because 90% of everything is crap.” That became Sturgeon’s law.

Yeah, you look at movies, books, and music. It’s hard to argue with Sturgeon’s law. Yeah, 90% of everything is crap. That feels like a universal law.

Murphy’s law

Here’s one we’ve probably all heard of. Murphy’s law:

Anything that can go wrong will go wrong.

It tends to get treated as this funny thing but, actually, it’s a genuinely useful design principle and one we could use on the web a lot more.

Cole’s law

Then there’s Cole’s law. You’ve probably heard of that. That’s:

Shredded raw cabbage with a vinaigrette or mayonnaise dressing.

Cole’s law.

Moving swiftly on, there’s another sort of category of these laws, these universal principles, that has a different phrasing, and it’s this idea of a razor. A razor is explicit about what to do in case of conflict: when you’re trying to choose between two options, it tells you which to choose.

Hanlon’s razor

Hanlon’s razor is a famous example that states:

Never attribute to malice that which can be adequately explained by incompetence.

If you’re trying to find a reason for something, don’t go straight to assuming malice. Incompetence tends to be a greater force in the world than malice.

I think it’s generally true, although, there’s also a law by Arthur C. Clarke, Clarke’s third law, which states that, “Any sufficiently advanced technology is indistinguishable from magic.” If you take Clarke’s third law and you mash it up with Hanlon’s razor, then the result is that any sufficiently advanced incompetence is indistinguishable from malice.

Occam’s razor

Another razor that we hear about a lot is Occam’s razor. This is very old. It goes back to William of Occam. Sometimes it’s misrepresented as saying that the most obvious solution is the correct solution. We know that’s not true, because the stories of the metal helmets in World War One, the motorcycle helmets, the bombers in World War Two, and the YouTube pages all showed that it’s not about the most obvious solution.

What Occam’s razor actually states is:

Entities should not be multiplied without necessity.

In other words, if you’re coming up with an explanation for something and your explanation requires that you now have to explain even more things—you’re multiplying the things that need to be explained—it’s probably not the true thing.

If your explanation for something is “aliens did it,” well, now you’ve got to explain the existence of aliens and explain how they got here and all this. You’re multiplying the entities. Most conspiracy theories fail the test of Occam’s razor because they unnecessarily multiply entities.

World Wide Web

So we’ve got these universal design principles that we can borrow. I also think maybe we can borrow from specific projects and see things that would apply to us. Certainly, when we’re building things on the World Wide Web, we could look at the design principles that informed the World Wide Web when it was being built by Tim Berners-Lee, who created the World Wide Web, and Robert Cailliau, who worked with him.

The World Wide Web started at CERN, and it started life in 1989 as just a proposal. Tim Berners-Lee wrote this really quite boring memo called “Information Management: A Proposal”, with indecipherable diagrams on it. This is March 12, 1989. His supervisor, Mike Sendall, saw this proposal and must have seen the possibility in it, because he scrawled across the top:

Vague but exciting.

Tim Berners-Lee did get the go-ahead to work on this project, this World Wide Web project, and he created the first web browser. He created the first web server. He created HTML.

I was in the neighbourhood so I just had to come by and say hello…

You can see the world’s first web server in the Science Museum in London. It’s this NeXTcube. NeXT was the company that Steve Jobs formed after leaving Apple.

I have a real soft spot for this machine because I was very lucky to be invited to CERN last year to take part in this project where we were trying to recreate the experience of using that first web browser that Tim Berners-Lee created on that NeXT machine. You can go to this website worldwideweb.cern.ch and you can see what it feels like to use this web browser. You can use a modern browser with this emulation inside of it. It’s really good fun.

My colleagues were spending their time actually doing the hard work. I spent most of my time working on the website about the project. I built this timeline because I was fascinated about what was influencing Tim Berners-Lee.

Timeline

It’s kind of easy to look at the 30 years of the web, but I thought it would be more interesting to also look back at the 30 years before the web and see what influenced Tim Berners-Lee when it came to networks, hypertext, and format. Were there design principles that he adhered to?

We don’t have to look far because Tim Berners-Lee himself has published design principles (that he formulated or borrowed from elsewhere) in a document called Axioms of Web Architecture. I think he first published this in 1998. These are really useful things that we can take and we can apply when we’re building on the web.

Particularly, now I’m talking about the second diamond of the Double Diamond. When we are choosing how we’re going to execute something or how we’re going to deliver it, building the thing right, that’s when these design principles come in handy.

He was borrowing; Tim Berners-Lee was borrowing from things that had come before, existing creations that the web is built on top of like the Internet and computing. He said:

Principles such as simplicity and modularity are the stuff of software engineering.

So he borrowed those principles about simplicity and modularity.

He also said:

Decentralization and tolerance are the life and breath of the Internet.

Those principles, tolerance and decentralization, they’d proven themselves to work on the Internet. The web is built on top of the Internet. So, it makes sense to carry those principles forward on the World Wide Web.

Robustness

That principle of tolerance, in particular, is something I think you really see on the web. It comes from the principles underlying the Internet. In particular, Jon Postel, who was responsible for maintaining the Domain Name System (DNS), has an eponymous law. It’s also called the Robustness Principle, or Postel’s law. This law states:

Be conservative in what you send. Be liberal in what you accept.

Now, he was talking about packet switching on the Internet: if you’re going to send a packet over the Internet, try to make it as well-formed as possible. But on the other hand, when you receive a packet and it’s got errors or something, try and deal with it. Be liberal in what you accept.

I see this at work all the time on the web, not just in terms of technical things but in terms of UX and usability. The example I always use is: if you’re going to make a form on the web, be conservative in what you send. Send as few form fields as possible down the wire to the end-user. But then, when the user is filling out that form, be liberal in what you accept. Don’t make them format their telephone number or credit card number in a particular way.
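
As a sketch of that idea (the field name here is arbitrary): send one unconstrained field, and then be liberal in what you accept by normalising whatever the user types on the server rather than forcing a format on them.

<!-- conservative: one field, no rigid pattern imposed on the user -->
<label for="phone">Telephone number</label>
<input type="tel" id="phone" name="phone">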

Be conservative in what you send: this matters when it comes to front-end development. Literally, just in terms of what we’re sending down the wire to the end-user, we should be more conservative. We don’t think about this enough: the weight, the sheer weight of the things we’re sending.

I was doing some consulting with a client and we did a kind of top four of where the weight was coming from. I think this applies to websites in general.

4: Web fonts

Coming in at number four, we had web fonts. They can get quite weighty, but we have ways of dealing with this now. We’ve got font-display in CSS. We can subset our web fonts. Variable fonts can be a way of reducing the size of fonts. So, there are solutions to this. There are ways of handling it.
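
As a rough sketch of one of those solutions (the font name and file path here are made up), font-display lets the fallback text show while the web font loads:

<style>
@font-face {
  font-family: "Example Sans"; /* hypothetical font */
  src: url("/fonts/example-sans-subset.woff2") format("woff2");
  font-display: swap; /* show fallback text immediately, swap in the web font when it arrives */
}
</style>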

3: Images

At number three, images. Images do account for a lot of the sheer weight of the web. But again, we have solutions here. We’ve got responsive images with srcset and picture. Using the right format: not using a PNG if you should be using a JPEG, using WebP, using SVGs where possible. We can deal with this. There are solutions out there, as long as we’re aware of it.
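
A sketch of what that looks like in markup (the file names, widths, and breakpoint are made up):

<!-- the right format and the right size: WebP where supported, JPEG fallback, multiple widths -->
<picture>
<source type="image/webp" srcset="photo-small.webp 600w, photo-large.webp 1200w" sizes="(min-width: 40em) 50vw, 100vw">
<img src="photo-small.jpg" srcset="photo-small.jpg 600w, photo-large.jpg 1200w" sizes="(min-width: 40em) 50vw, 100vw" alt="a description of the image">
</picture>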

2: Your JavaScript

At number two, your JavaScript: the JavaScript that you’ve written and send down the wire to the client. It’s gotten kind of out of hand. Libraries in your code: it’s gotten very, very weighty. This is bad, but not as bad as number one, which is other people’s JavaScript, third-party JavaScript.

1: Other people’s JavaScript

“Oh, the marketing department just wanted to add that one line, that one script that then pulls in another script that pulls in three more scripts.” Before you know it, it’s out of hand. Third-party JavaScript is really tough to deal with because so often it’s out of our hands. It’s like we don’t have control over that.

JavaScript

JavaScript is particularly troublesome because, with all the other things—images, web fonts—yeah, we’re talking about weight. The file size is the issue. But that’s only part of the issue with JavaScript. Yes, we are sending too much JavaScript, but it’s also expensive in that the end-user has to not just download that JavaScript but parse it and execute it. It’s particularly expensive compared to CSS, HTML, images, or fonts.

It is eating the world. We heard that software is eating the world. I’d say JavaScript is eating the world. There’s another eponymous law from Jeff Atwood. Atwood’s law states that:

Any application that can be written in JavaScript will eventually be written in JavaScript.

We’re seeing that now.

Back in my day, we used to joke about, “Well, you could never build a Photoshop in a web browser.” Now, everything is migrating to being written in JavaScript, which is kind of amazing and speaks to the power of JavaScript. It’s fantastic in one way, but it does feel like we’re using JavaScript to do everything, including things that could be done with other languages.

When it comes to choosing a language, there’s a fantastic design principle that Tim Berners-Lee used when he was designing the World Wide Web. It’s the principle of least power. The principle of least power states:

Choose the least powerful language suitable for a given purpose.

That sounds very counterintuitive. Why would you want to choose the least powerful language? Well, in a way, it’s about keeping things simple. There’s another design principle, “Keep it simple, stupid.” KISS.

It’s kind of related to Occam’s razor, not multiplying entities unnecessarily. Choose the simplest language. The simplest language is likely to be more universal and, because it’s simpler, it might not be as powerful but it’ll generally be more robust.

I’ll give you an example. I’ll quote from Derek Featherstone. He said:

In the web front-end stack—HTML, CSS, JavaScript, and ARIA—if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

He’s absolutely right. This is about robustness here. It’s less fragile.

The classic example with ARIA: the best ARIA attribute is no ARIA attribute. Rather than having div role="button", just use a button. If you can do something in CSS rather than JavaScript, do it in CSS. Choose the least powerful language.
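
In other words, something like this (a sketch; the class name is made up):

<!-- over-engineered: needs JavaScript and ARIA to behave like a button -->
<div role="button" tabindex="0" class="button">Save</div>

<!-- least power: focusable, keyboard-operable, and announced correctly for free -->
<button type="button" class="button">Save</button>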

Instead, we’re using JavaScript to send our content down the wire, which could be done in HTML. We’re using CSS-in-JS now, right? We’re using the most powerful language, JavaScript, to do everything, which kind of violates the principle of least power. There’s a set of design principles from the Government Digital Service here in the U.K., and they’re really good design principles. One of them stuck out to me. The design principle itself says:

Do less.

By way of explanation, they say:

Government should only do what only government can do.

Government shouldn’t try to be all things to all people. Government should do the things that private enterprise can’t do, the things that only government can do. The government should only do what only government can do.

I thought that this could be extrapolated out and made into a more universal design principle. You could say:

Any particular technology should only do what only that particular technology can do.

If that’s too abstract, let’s formulate it into this design principle:

JavaScript should only do what only JavaScript can do.

We can call this Keith’s Law or Keith’s Razor or something. I think it’s a good principle.

I remember the early uses of JavaScript for things like image rollovers and form validation. Now, I wouldn’t use JavaScript for image rollovers or hover effects. I’d use CSS. I wouldn’t use JavaScript for form validation if I can just use required attributes or input type="email". Apply the principle of least power.
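For example, a sketch of a sign-up form (the names and URL here are made up) where the browser does the validating:

<form method="post" action="/signup">
  <label for="email">Email</label>
  <input type="email" id="email" name="email" required>
  <button>Sign up</button>
</form>

If that field is left empty or doesn’t look like an email address, the browser refuses to submit the form and shows its own message. No JavaScript needed.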

Components

Let’s see whether we’re applying the principle of least power on the web. Take an example. Let’s say you’ve got a button component. How are you going to go about building this? You could have bare minimum HTML, just a div or a span. You’ve got some CSS to make it look good. Sure. You apply all the JavaScript and ARIA that you need to make it work like a button.

Or, alternatively, you could use a button and you style it however you want using CSS.

Now, in this example, this particular component, I would say it’s a no-brainer. You go with the native button element. Don’t make your own button component with a div and JavaScript and ARIA. Use a native button element.

Okay. That seems pretty straightforward and that is a perfect example of the principle of least power. Choose the least powerful language suitable for the purpose.

But then what if you’ve got a drop-down component, selecting an option from a list of options? Well, you could build this using bare minimum HTML. Again, divs, maybe. You style it however you want it to look and you give it that opening and closing functionality. You give it accessibility using ARIA. Now you’ve got to think about making sure it works with a keyboard — all that stuff, all the edge cases.

Or you just use a select element — job done. You style it with CSS …Ah, well, yes, you can style it to a certain degree with CSS, but if you ever try to style the open state of a select element, you’re going to have a hard time.
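Something like this (the options are made up):

<label for="size">Size</label>
<select id="size" name="size">
  <option value="small">Small</option>
  <option value="medium">Medium</option>
  <option value="large">Large</option>
</select>

Keyboard support, accessibility, and sensible behaviour on every device come for free.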

Now, this is where it gets interesting. What do you care about more? Can you live with that open state not being styled exactly the way you might want it to be styled? If so, yes, choose the least powerful technology. Go with select. But I can kind of start to see why somebody would maybe roll their own in that case.

Or take this example: a date picker component. Again, you could have bare minimum HTML. Style it how you want. Write it all yourself using JavaScript. Make it accessible using ARIA. Or just use the native HTML input type="date" …and then have fun trying to style that in CSS. You won’t be able to do much, to be honest.
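The native version really is that short (the id and name are arbitrary):

<label for="start">Start date</label>
<input type="date" id="start" name="start">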

Do you still pick the least powerful technology here?

This would be kind of the under-engineered approach: just use the native HTML elements, input type="date", select, and button.

The over-engineered approach is to do it all yourself: write the JavaScript to make it work exactly the way you want.

It feels like there’s this pendulum swing between the over-engineered and the under-engineered. Like I say, what it comes down to is: what do you prioritize?

Universality

What you get with the native approach is access. You get that universality by using the least powerful language: there’s more universal support.

What you get by rolling your own is much more control. You’re moving along a spectrum from least power to most power, and that’s also a spectrum from most available (widest access) to least available (but with more control).

You have to decide where your priorities lie. This is where I think, again, we can look at the web and we can take principles from the web.

Eric said something recently that really resonates with me:

The web does not value consistency. The web values ubiquity.

That’s the purpose of the web: universal access. That’s the value encoded into it.

To put this in another way, we could formulate it as:

Ubiquity, even over consistency.

That’s the design principle of the web.

This passes the reversibility test. We can picture other projects that would say:

Consistency, even over ubiquity.

Native apps value consistency, even over ubiquity. iOS apps are very consistent on iOS devices, but just don’t work at all on Android devices. They’re consistent; they’re not ubiquitous.

We saw this in action with Flash and the web. Flash valued consistency, but you had to have the Flash plugin installed, so it was not ubiquitous. It was not universal.

The World Wide Web is about ubiquity, even over consistency. I think we should remember that.

Here, in the world’s first-ever web browser, we’re looking at the world’s first-ever webpage, which is still available at its original URL. That’s incredibly robust.

What’s amazing is that you can not only look at the world’s first webpage in the world’s first web browser; you can also look at it in a modern web browser and it still works. If you took a word processing document from 30 years ago and tried to open it in a modern word processor, good luck. It just doesn’t work that way. But the web values this ubiquity over consistency.

Let’s apply those principles, apply the principle of least power, apply the robustness principle. Value ubiquity even over consistency. Value universal access over control. That way, you can make products and services that aren’t just on the web, but of the web.

Thank you.

Sunday, January 3rd, 2021

My stack will outlive yours

My stack requires no maintenance, has perfect Lighthouse scores, will never have any security vulnerability, is based on open standards, is portable, has an instant dev loop, has no build step and… will outlive any other stack.

Saturday, January 2nd, 2021

Loading and replacing HTML parts with HTML

I like this proposal for a declarative Ajax pattern. It’s relatively straightforward to polyfill, although backward-compatibility is an issue because of existing browser behaviour with the target attribute.

Thursday, December 31st, 2020

Books I read in 2020

I only read twenty books this year. Considering the ample amount of free time I had, that’s not great. But I’m not going to beat myself up about it. Yes, I may have spent more time watching television than reading, but I’m cutting myself some slack. It was 2020, for crying out loud.

Anyway, here’s my annual round-up with reviews. Anything with three stars is good. Four stars is really good. Five stars is practically unheard of. As usual, I tried to get an equal balance of fiction and non-fiction.

Raven Stratagem by Yoon Ha Lee

★★★☆☆

An enjoyable sequel to Ninefox Gambit. There are some convoluted politics but that all seems positively straightforward after the brain-bending calendrical warfare introduced in the first book.

The Human Use Of Human Beings: Cybernetics And Society by Norbert Wiener

★★★☆☆

The ur-text on systems and feedback. Reading it now is like reading a historical artifact but many of the ideas are timeless. It’s a bit dense in parts and it tries to cover life, the universe and everything, but when you remember that it was written in 1950, it’s clearly visionary.

The Word For World Is Forest by Ursula K. Le Guin

★★★☆☆

Simultaneously a ripping yarn and a spiritual meditation. It’s Vietnam and the environmental movement rolled into one (like what Avatar attempted, but this actually works).

Abolish Silicon Valley by Wendy Liu

★★★★☆

Here’s my full review.

A Short History Of Irish Traditional Music by Gearóid Ó hAllmhuráin

★★☆☆☆

A perfectly fine and accurate history of the music, but it’s a bit like reading Wikipedia. Still, it was quite the ego boost to see The Session listed in the appendix.

Machines Like Me by Ian McEwan

★★★☆☆

McEwan’s first foray into science fiction is a good tale but a little clumsily told. It’s like he really wants to show how much research he put into his alternative history. There are moments when characters practically turn to the camera to say, “Imagine how the world would’ve turned out if…” It’s far from McEwan’s best but even when he’s not on top form, his writing is damn good.

The Fabric Of Reality by David Deutsch

★★★☆☆

I’ve attempted to read this before. I may have even read it all before and had everything just leak out of my head. The problem is with me, not David Deutsch who does a fine job of making complex ideas approachable. This is like a unified theory of everything.

Helliconia Winter by Brian Aldiss

★★★☆☆

The third and final part of Aldiss’s epic is just as enjoyable as the previous two. The characters aren’t the main attraction here. It’s all about the planetary ballet.

Uncanny Valley by Anna Wiener

★★★★☆

A terrific memoir. It’s open and honest, and just snarky enough when it needs to be.

A Wizard Of Earthsea, The Tombs Of Atuan, and The Farthest Shore by Ursula K. Le Guin

★★★★☆

There’s a real pleasure in finally reading books that you should’ve read years ago. I can only imagine how wonderful it would’ve been to read these as a teenager. It’s an immersive world but there’s something melancholy about the writing that makes the experience of reading less escapist and more haunting.

Superior: The Return of Race Science by Angela Saini

★★★★★

Absolutely superb! I liked Angela Saini’s previous book, Inferior, but I loved this. It’s a harrowing read at times, but written with incredible clarity and empathy. I can’t recommend this highly enough.

Purple People by Kate Bulpitt

★★★★☆

Full disclosure: Kate is a friend of mine, so I probably can’t evaluate her book in a disinterested way. That said, I enjoyed the heck out of this and I think you will too. It’s very hard to classify and I think that’s what makes it so enjoyable. Technically, it’s sci-fi I suppose—an alternative history tale, probably—but it doesn’t feel like it. It’s all about the characters, and they’re all vividly realised. Honestly, I’m not sure how best to describe it—other than it being like the inside of Kate’s head—but the description of it being “a jolly dystopia” comes close. Take a chance and give it a go.

How to Argue With a Racist: History, Science, Race and Reality by Adam Rutherford

★★★☆☆

Good stuff from Adam Rutherford, though not his best. If I hadn’t already read Angela Saini’s Superior I might’ve rated this higher, but it pales somewhat by comparison. Still, it was interesting to see the same subject matter tackled in two different ways.

Agency by William Gibson

★★☆☆☆

There’s nothing particularly wrong with Agency, but there’s nothing particularly great about it either. It’s just there. Maybe I’m being overly harsh because the first book, The Peripheral, was absolutely brilliant. This reminded me of reading Gibson’s Spook Country, which left me equally unimpressed. That book was sandwiched between the brilliant Pattern Recognition and the equally brilliant Zero History. That bodes well for the forthcoming third book in this series. This second book just feels like filler.

Last Night’s Fun: In And Out Of Time With Irish Music by Ciaran Carson

★★★☆☆

It’s hard to describe this book. Memoir? Meditation? Blog? I kind of like that about it, but I can see how it divides opinion. Some people love it. Some people hate it. I thought it was enjoyable enough. But it doesn’t matter what I think. This book is doing its own thing.

Revenant Gun by Yoon Ha Lee

★★★☆☆

The third book in the Machineries of Empire series has much less befuddlement. It’s even downright humorous in places. If you liked Ninefox Gambit and Raven Stratagem, you’ll enjoy this too.

A Paradise Built in Hell: The Extraordinary Communities That Arise in Disaster by Rebecca Solnit

★★★☆☆

The central thesis of this book is refuting the Hobbesian view of humanity as being one crisis away from breakdown. I feel like that argument was made more strongly in Critical Mass: How One Thing Leads to Another by Philip Ball. But where this book shines is in its vivid description of past catastrophes and their aftermaths: the San Francisco fire; the Halifax explosion; the Mexico City earthquake; and the culmination with Katrina hitting New Orleans. I was less keen on the more blog-like personal musings but overall, this is well worth reading.

Blindsight by Peter Watts

★★☆☆☆

I like a good tale of first contact, and I had heard that this one had a good twist on the Fermi paradox. But it felt a bit like a short story stretched to the length of a novel. It would make for a good Twilight Zone episode but it didn’t sustain my interest.

This is How You Lose the Time War by Amal El-Mohtar and Max Gladstone

I’m still reading this Hugo-winning novella and enjoying it so far.


Alright, time to wrap up this look back at the books I read in 2020 and pick my favourites: one fiction and one non-fiction.

My favourite non-fiction book of the year was easily Superior by Angela Saini. Read it. It’s superb.

What about fiction? Hmm …this is tricky.

You know what? I’m going to go for Purple People by Kate Bulpitt. Yes, she’s a friend (“it’s a fix!”) but it genuinely made an impression on me: it was an enjoyable romp while I was reading it, and it stayed with me afterwards too.

Head on over to Bookshop and pick up a copy.

Wednesday, December 23rd, 2020

HTML Over The Wire | Hotwire

This is great! The folks at Basecamp are releasing the front-end frameworks they use to build Hey. There’s Turbo—the successor to Turbolinks:

It offers a simpler alternative to the prevailing client-side frameworks which put all the logic in the front-end and confine the server side of your app to being little more than a JSON API.

With Turbo, you let the server deliver HTML directly, which means all the logic for checking permissions, interacting directly with your domain model, and everything else that goes into programming an application can happen more or less exclusively within your favorite programming language. You’re no longer mirroring logic on both sides of a JSON divide. All the logic lives on the server, and the browser deals just with the final HTML.

Yes, this is basically Hijax (which is itself simply a name for progressive enhancement applied to Ajax) and I’m totally fine with that. I don’t care what it’s called when the end result is faster, more resilient websites.
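To give a rough idea of the pattern (this frame id and URL are invented), a Turbo Frame is just an HTML element pointing at a URL that returns more HTML:

<turbo-frame id="messages" src="/messages">
  <p>Loading messages…</p>
</turbo-frame>

Turbo fetches /messages, finds the turbo-frame with the matching id in the response, and swaps its contents in. The server never has to speak JSON.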

Compare and contrast the simplicity of the Hotwire/Turbo approach to the knots that React is tying itself up in to try to get the same performance benefits.

Tuesday, December 22nd, 2020

Ignore AMP · Jens Oliver Meiert

It started using the magic spell of prominent results page display to get authors to use it. Nothing is left of the original lure of raising awareness for web performance, and nothing convincing is there to confirm it was, indeed, a usable “web component framework.”