Readability Guidelines
Imagine a collaboratively developed, universal content style guide, based on usability evidence.
A hall of shame for ludicrously convoluted password rules that actually reduce security.
A very astute framing by Ted Chiang—large language models as a form of lossy compression for text.
When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.
A lot of uses have been proposed for large language models. Thinking about them as blurry JPEGs offers a way to evaluate what they might or might not be well suited for.
I posted 1057 times on adactio in 2022.
That’s a bit more than in 2021.
November was the busiest month with 137 posts.
February was the quietest with 65 posts.
That included about 237 notes with photos and 214 replies.
I published one article, the transcript of my talk, In And Out Of Style.
I watched an awful lot of television but managed to read 25 books.
Elsewhere, I huffduffed 130 audio files and added 55 tune settings on The Session in 2022.
I spoke at ten events.
I travelled within Europe and the USA to a total of 18 destinations.
Here’s a highlight reel of some of my blog posts from 2022:
I also published the transcript of my conference talk, In And Out Of Style, a journey through the history of CSS.
This resonates with me.
This piece by Giles is a spot-on description of what I do in my role as content buddy at Clearleft. Especially this bit:
Your editor will explain why things need changing
As a writer, it’s really helpful to understand the why of each edit. It’s easier to re-write if you know precisely what the problem is. And often, it’s less bruising to the ego. It’s not that you’re a bad writer, but just that one particular thing could be expressed more simply, or more clearly, than your first effort.
On Tyranny by Timothy Snyder is a very short book. Most of the time, this is a feature, not a bug.
There are plenty of non-fiction books I’ve read that definitely could’ve been much, much shorter. Books that have a good sensible idea, but one that could’ve been written on the back of a napkin instead of being expanded into an arbitrarily long form.
In the world of fiction, there’s the short story. I guess the equivalent in the non-fiction world is the essay. But On Tyranny isn’t an essay. It’s got chapters. They’re just really, really short.
Sometimes that brevity means that nuance goes out the window. What might’ve been a subtle argument that required paragraphs of pros and cons in another book gets reduced to a single sentence here. Mostly that’s okay.
The premise of the book is that Trump’s America is comparable to Europe in the 1930s:
We are no wiser than the Europeans who saw democracy yield to fascism, Nazism, or communism. Our one advantage is that we might learn from their experience.
But in making the comparison, Snyder goes all in. There’s very little accounting for the differences between the world of the early 20th century and the world of the early 21st century.
This becomes really apparent when it comes to technology. One piece of advice offered is:
Make an effort to separate yourself from the internet. Read books.
Wait. He’s not actually saying that words on screens are in some way inherently worse than words on paper, is he? Surely that’s just the nuance getting lost in the brevity, right?
Alas, no:
Staring at screens is perhaps unavoidable but the two-dimensional world makes little sense unless we can draw upon a mental armory that we have developed somewhere else. … So get screens out of your room and surround yourself with books.
I mean, I’m all for reading books. But books are about what’s in them, not what they’re made of. To value words on a page more than the same words on a screen is like judging a book by its cover; it’s judging a book by its atoms.
For a book that’s about defending liberty and progress, On Tyranny is puzzlingly conservative at times.
Whatever the merit of the scientific aspirations originally encompassed by the term “artificial intelligence,” it’s a phrase that now functions in the vernacular primarily to obfuscate, alienate, and glamorize.
Do “cloud” next!
I posted to adactio.com 968 times in 2021.
That’s considerably fewer than in 2020 or 2019. Not sure why.
March was the busiest month with 118 posts.
I published:
Those notes include 170 photos and 162 replies.
Elsewhere in 2021 I published two seasons of the Clearleft podcast (12 episodes), and I wrote the 15 modules that comprise a course on responsive design on web.dev.
Most of my speaking engagements in 2021 were online though I did manage a little bit of travel in between COVID waves.
My travel map for the year includes one transatlantic trip: Christmas in Arizona, where I’m writing this end-of-year wrap-up before getting back on a plane to England tomorrow, Omicron willing.
The French have a wonderful phrase, l’esprit de l’escalier. It describes that feeling when you’ve stormed out of the room after an argument and you’re already halfway down the stairs when you think of the perfect quip that you wish you had said.
I had a similar feeling last week but instead of wishing I had said something, I was wishing I had kept my mouth shut.
I have an annoying tendency to want to get the last word in. I don’t have a problem coming up with barbed quips. My problem is wishing I could take them back.
This happened while I was hosting the conference portion of UX Fest last week. On the one hand, I don’t want the discussions to be dull so I try to come up with thought-provoking points to bring up. But take that too far and it gets ugly. There’s a fine line between asking probing questions and just being mean (I’m reminded of a headline in The Onion, “Devil’s Advocate Turns Out To Be Just An Asshole”).
Towards the end of the conference, there was a really good robust discussion underway. But I couldn’t resist getting in the last word. In the attempt to make myself look clever I ended up saying something hurtful and clumsy.
Fucking idiot.
I apologised, and it all worked out well in the end, but damn if I haven’t spent the last week on the staircase wishing I could turn back time and say …nothing.
A personal website ain’t got no wrong words.
This old article from Chris is evergreen. There’s been some recent discussion of calling these words “downplayers”, which I kind of like. Whatever they are, try not to use them in documentation.
I work with words. Sometimes they’re my words. Sometimes they’re words that my colleagues have written:
One of my roles at Clearleft is “content buddy.” If anyone is writing a talk, or a blog post, or a proposal and they want an extra pair of eyes on it, I’m there to help.
I also work with web technologies, usually front-of-the-front-end stuff. HTML, CSS, and JavaScript. The technologies that users experience directly in web browsers.
I think a lot about design principles for the web. The two principles I keep coming back to are the robustness principle and the principle of least power.
When it comes to words, the guide that I return to again and again is George Orwell, specifically his short essay, Politics and the English Language.
Towards the end, he offers some rules for writing.
- Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.
- Never use a long word where a short one will do.
- If it is possible to cut a word out, always cut it out.
- Never use the passive where you can use the active.
- Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
- Break any of these rules sooner than say anything outright barbarous.
These look a lot like design principles. Not only that, but some of them look like specific design principles. Take the robustness principle:
Be conservative in what you send, be liberal in what you accept.
That first part applies to Orwell’s third rule:
If it is possible to cut a word out, always cut it out.
Be conservative in what words you send.
Then there’s the principle of least power:
Choose the least powerful language suitable for a given purpose.
Compare that to Orwell’s second rule:
Never use a long word where a short one will do.
That could be rephrased as:
Choose the shortest word suitable for a given purpose.
Or, going in the other direction, the principle of least power could be rephrased in Orwell’s terms as:
Never use a powerful language where a simple language will do.
Oh, I like that! I like that a lot.
A slot machine for speculation. Enter a topic and get a near-future scenario on that topic generated automatically.
One of my roles at Clearleft is “content buddy.” If anyone is writing a talk, or a blog post, or a proposal and they want an extra pair of eyes on it, I’m there to help.
Sometimes a colleague will send a link to a Google Doc where they’ve written an article. I can then go through it and suggest changes. Using the “suggest” mode rather than the “edit” mode in Google Docs means that they can accept or reject each suggestion later.
But what works better—and is far more fun—is if we arrange to have a video call while we both have the Google Doc open in our browsers. That way, instead of just getting the suggestions, we can talk through the reasoning behind each one. It feels more like teaching them to fish instead of giving them a grammatically correct fish.
Some of the suggestions are very minor: punctuation, capitalisation, stuff like that. Where it gets really interesting is trying to figure out and explain why some sentence constructions feel better than others.
A fairly straightforward example is long sentences. Not all long sentences are bad, but the longer a sentence gets, the more it runs the risk of overwhelming the reader. So if there’s an opportunity to split one long sentence into two shorter sentences, I’ll usually recommend that.
Here’s an example from Chris’s post, Delivering training remotely – the same yet different. The original sentence read:
I recently had the privilege of running some training sessions on product design and research techniques with the design team at Duck Duck Go.
There’s nothing wrong with that. But maybe this is a little easier to digest:
I recently had the privilege of running some training sessions with the design team at Duck Duck Go. We covered product design and research techniques.
Perhaps this is kind of like the single responsibility principle in programming. Whereas the initial version was one sentence that conveyed two pieces of information (who the training was with and what the training covered), the final version has a separate sentence for each piece of information.
I wouldn’t take that idea too far though. Otherwise you’d end up with something quite stilted and robotic.
Speaking of sounding robotic, I’ve noticed that people sometimes avoid using contractions when they’re writing online: “there is” instead of “there’s” or “I am” instead of “I’m.” Avoiding contractions might seem more professional, but it actually makes the writing a bit too formal. There’s a danger of sounding like a legal contract. Or a Vulcan.
Sometimes a long sentence can’t be broken down into shorter sentences. In that case, I watch out for how much cognitive load the sentence is doling out to the reader.
Here’s an example from Maite’s post, How to engage the right people when recruiting in house for research. One sentence initially read:
The relevance of the people you invite to participate in a study and the information they provide have a great impact on the quality of the insights that you get.
The verb comes quite late there. As a reader, until I get to “have a great impact”, I have to keep track of everything up to that point. Here’s a rephrased version:
The quality of the insights that you get depends on the relevance of the people you invite to participate in a study and the information they provide.
Okay, there are two changes there. First of all, the verb is now “depends on” instead of “have a great impact on.” I think that’s a bit clearer. Secondly, the verb comes sooner. Now I only have to keep track of the words up until “depends on”. After that, I can flush my memory buffer.
Here’s another changed sentence from the same article. The initial sentence read:
You will have to communicate at different times and for different reasons with your research participants.
I suggested changing that to:
You will have to communicate with your research participants at different times and for different reasons.
To be honest, I find it hard to explain why that second version flows better. I think it’s related to the idea of reducing dependencies. The phrase “your research participants” depends on the verb “to communicate with.” So it makes more sense to keep them together instead of putting a subclause between them. The subclause can go afterwards instead: “at different times and for different reasons.”
Here’s one final example from Katie’s post, Service Designers don’t design services, we all do. One sentence initially read:
Understanding the relationships between these moments, digital and non-digital, and designing across and between these moments is key to creating a compelling user experience.
That sentence could be broken into shorter sentences, but it might lose some impact. Still, it can be rephrased so the reader doesn’t have to do as much work. As it stands, until the reader gets to “is key to creating”, they have to keep track of everything before that. It’s like the feeling of copying and pasting. If you copy something to the clipboard, you want to paste it as soon as possible. The longer you have to hold onto it, the more uncomfortable it feels.
So here’s the reworked version:
The key to creating a compelling user experience is understanding the relationships between these moments, digital and non-digital, and designing across and between these moments.
As a reader, I can digest and discard each of these pieces in turn:
- The key to creating a compelling user experience…
- …is understanding the relationships between these moments…
- …digital and non-digital…
- …and designing across and between these moments.
Maybe I should’ve suggested “between these digital and non-digital moments” instead of “between these moments, digital and non-digital”. But then I worry that I’m intruding on the author’s style too much. With the finished sentence, it still feels like a rousing rallying cry in Katie’s voice, but slightly adjusted to flow a little easier.
I must say, I really, really enjoy being a content buddy. I know the word “editor” would be the usual descriptor, but I like how unintimidating “content buddy” sounds.
I am almost certainly a terrible content buddy to myself. Just as I ignore my own advice about preparing conference talks, I’m sure I go against my own editorial advice every time I blurt out a blog post here. But there’s one piece I’ve given to others that I try to stick to: write like you speak.
This is easily my favourite use of a machine learning algorithm.
Two-factor authentication is generally considered A Good Thing™️ when you’re logging in to some online service.
The word “factor” here basically means “kind” so you’re doing two kinds of authentication. Typical factors are:
- Something you know (like a password).
- Something you have (like a phone).
- Something you are (like a fingerprint or your face).
Asking for a password and an email address isn’t two-factor authentication. They’re two pieces of identification, but they’re the same kind (something you know). Same goes for supplying your fingerprint and your face: two pieces of information, but of the same kind (something you are).
None of these kinds of authentication are foolproof. All of them can change. All of them can be spoofed. But when you combine factors, it gets a lot harder for an attacker to breach both kinds of authentication.
The most common kind of authentication on the web is password-based (something you know). When a second factor is added, it’s often connected to your phone (something you have).
Every security bod I’ve talked to recommends using an authenticator app for this if that option is available. Otherwise there’s SMS—short message service, or text message to most folks—but SMS has a weakness. Because it’s tied to a phone number, technically you’re only proving that you have access to a SIM (subscriber identity module), not a specific phone. In the US in particular, it’s all too easy for an attacker to use social engineering to get a number transferred to a different SIM card.
Still, authenticating with SMS is an option as a second factor of authentication. When you first sign up to a service, as well as providing the first-factor details (a password and a username or email address), you also verify your phone number. Then when you subsequently attempt to log in, you input your password and on the next screen you’re told to input a string that’s been sent by text message to your phone number (I say “string” but it’s usually a string of numbers).
There’s an inevitable friction for the user here. But then, there’s a fundamental tension between security and user experience.
In the world of security, vigilance is the watchword. Users need to be aware of their surroundings. Is this web page being served from the right domain? Is this email coming from the right address? Friction is an ally.
But in the world of user experience, the opposite is true. “Don’t make me think” is the rallying cry. Friction is an enemy.
With SMS authentication, the user has to manually copy the numbers from the text message (received in a messaging app) into a form on a website (in a different app—a web browser). But if the messaging app and the browser are on the same device, it’s possible to improve the user experience without sacrificing security.
If you’re building a form that accepts a passcode sent via SMS, you can use the autocomplete attribute with a value of “one-time-code”. For a six-digit passcode, your input element might look something like this:
<input type="text" maxlength="6" inputmode="numeric" autocomplete="one-time-code">
With one small addition to one HTML element, you’ve saved users some tedious drudgery.
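For context, here’s a minimal sketch of how that input might sit inside the verification form, assuming a hypothetical /verify endpoint, id, and label text (only the attributes on the input element matter here):

<form action="/verify" method="post">
  <label for="otp">Enter the code we texted you</label>
  <input id="otp" type="text" maxlength="6" inputmode="numeric" autocomplete="one-time-code">
  <button>Verify</button>
</form>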
There’s one more thing you can do to improve security, but it’s not something you add to the HTML. It’s something you add to the text message itself.
Let’s say your website is example.com and the text message you send reads:
Your one-time passcode is 123456.
Add this to the end of the text message:
@example.com #123456
So the full message reads:
Your one-time passcode is 123456.
@example.com #123456
The first line is for humans. The second line is for machines. Using the @ symbol, you’re telling the device to only pre-fill the passcode for URLs on the domain example.com. Using the # symbol, you’re telling the device the value of the passcode. Combine this with autocomplete="one-time-code" in your form and the user shouldn’t have to lift a finger.
I’m fascinated by these kinds of emergent conventions in text messages. Remember that the @ symbol and # symbol in Twitter messages weren’t ideas from Twitter—they were conventions that users started and the service then adopted.
It’s a bit different with the one-time code convention as there is a specification brewing from representatives of both Google and Apple.
Tess is leading from the Apple side and she’s got another iron in the fire to make security and user experience play nicely together using the convention of the /.well-known directory on web servers.
You can add a URL for /.well-known/change-password which redirects to the form a user would use to update their password. Browsers and password managers can then use this information if they need to prompt a user to update their password after a breach. I’ve added this to The Session.
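How you set up that redirect depends on your server. As a minimal sketch, assuming an Apache server and a hypothetical /account/password page, a single line of configuration would do it:

Redirect 302 /.well-known/change-password /account/password

Any equivalent server-side redirect (an nginx location block, say) works just as well; the only thing that matters is that a request to /.well-known/change-password ends up at your password-change form.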
Oh, and on that page where users can update their password, the autocomplete attribute is your friend again:
<input type="password" autocomplete="new-password">
If you want them to enter their current password first, use this:
<input type="password" autocomplete="current-password">
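Putting those together, a change-password form might look something like this minimal sketch (the action URL, ids, and labels are hypothetical; the autocomplete values are the important part):

<form action="/account/password" method="post">
  <label for="current">Current password</label>
  <input id="current" type="password" autocomplete="current-password">
  <label for="new">New password</label>
  <input id="new" type="password" autocomplete="new-password">
  <button>Update password</button>
</form>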
All of the things I’ve mentioned—the autocomplete attribute, origin-bound one-time codes in text messages, and a well-known URL for changing passwords—have good browser support. But even if they were only supported in one browser, they’d still be worth adding. These additions do absolutely no harm to browsers that don’t yet support them. That’s progressive enhancement.
A fascinating crowdsourced project. You can read the backstory in this article in Wired magazine.
I think my co-workers are getting annoyed with me. Any time they use an acronym or initialism—either in a video call or Slack—I ask them what it stands for. I’m sure they think I’m being contrarian.
The truth is that most of the time I genuinely don’t know what the letters stand for. And I’ve got to that age where I don’t feel any inhibition about asking “stupid” questions.
But it’s also true that I really, really dislike acronyms, initialisms, and other kinds of jargon. They’re manifestations of gatekeeping. They demarcate in-groups from outsiders.
Of course if you’re in a conversation with an in-group that has the same background and context as you, then sure, you can use acronyms and initialisms with the confidence that there’s a shared understanding. But how often can you be that sure? The more likely situation—and this scales exponentially with group size—is that people have differing levels of inside knowledge and experience.
I feel sorry for anyone trying to get into the field of web performance. Not only are there complex browser behaviours to understand, there’s also a veritable alphabet soup of initialisms to memorise. Here’s a really good post on web performance by Harry, but notice how the initialisms multiply like tribbles as the post progresses until we’re talking about using CWV metrics like LCP, FID, and CLS—alongside TTFB and SI—to look at PLPs, PDPs, and SRPs. And fair play to Harry; he expands each initialism the first time he introduces it.
But are we really saving any time by saying FID instead of first input delay? I suspect that the only reason why the word “cumulative” precedes “layout shift” is just to make it into the three-letter initialism CLS.
Still, I get why initialisms run rampant in technical discussions. You can be sure that most discussions of particle physics would be incomprehensible to outsiders, not necessarily because of the concepts, but because of the terminology.
Again, if you’re certain that you’re speaking to peers, that’s fine. But if you’re trying to communicate even a little more widely, then initialisms and abbreviations are obstacles to overcome. And once you’re in the habit of using the short forms, it gets harder and harder to apply context-shifting in your language. So the safest habit to form is to generally avoid using acronyms and initialisms.
Unnecessary initialisms are exclusionary.
Think about on-boarding someone new to your organisation. They’ve already got a lot to wrap their heads around without making them figure out what a TAM is. That’s a real example from Clearleft. We have a regular Thursday afternoon meeting. I call it the Thursday afternoon meeting. Other people …don’t.
I’m trying—as gently as possible—to ensure we’re not being exclusionary in our language. My co-workers indulge me, even if it’s just to shut me up.
But here’s the thing. I remember many years back when a job ad went out on the Clearleft website that included the phrase “culture fit”. I winced and explained why I thought that was a really bad phrase to use—one that is used as code for “more people like us”. At the time my concerns were met with eye-rolls and chuckles. Now, as knowledge about diversity and inclusion has become more widespread, everyone understands that using a phrase like “culture fit” can be exclusionary.
But when I ask people to expand their acronyms and initialisms today, I get the same kind of chuckles. My aversion to abbreviations is an eccentric foible to be tolerated.
But this isn’t about me.