Readability Guidelines
Imagine a collaboratively developed, universal content style guide, based on usability evidence.
Picture someone tediously going through a spreadsheet that someone else has filled in by hand and finding yet another error.
“I wish to God these calculations had been executed by steam!” they cry.
The year was 1821 and technically the spreadsheet was a book of logarithmic tables. The frustrated cry came from Charles Babbage, who channeled his frustration into a scheme to create the world’s first computer.
His difference engine didn’t work out. Neither did his analytical engine. He’d spend his later years taking his frustrations out on street musicians, which—as a former busker myself—earns him a hairy eyeball from me.
But we’ve all been there, right? Some tedious task that feels soul-destroying in its monotony. Surely this is exactly what machines should be doing?
I have a hunch that this is where machine learning and large language models might turn out to be most useful. Not in creating breathtaking works of creativity, but in menial tasks that nobody enjoys.
Someone was telling me earlier today about how they took a bunch of haphazard notes in a client meeting. When the meeting was done, they needed to organise those notes into a coherent summary. Boring! But ChatGPT handled it just fine.
I don’t think that use-case is going to appear on the cover of Wired magazine anytime soon but it might be a truer glimpse of the future than any of the breathless claims being eagerly bandied about in Silicon Valley.
You know the way we no longer remember phone numbers, because, well, why would we now that we have machines to remember them for us? I’d be quite happy if machines did that for the annoying little repetitive tasks that nobody enjoys.
I’ll give you an example based on my own experience.
Regular expressions are my kryptonite. I’m rubbish at them. Any time I have to figure one out, the knowledge seeps out of my brain before long. I think that’s because I kind of resent having to internalise that knowledge. It doesn’t feel like something a human should have to know. “I wish to God these regular expressions had been calculated by steam!”
Now I can get a chatbot with a large language model to write the regular expression for me. I still need to describe what I want, so I need to write the instructions clearly. But all the gobbledygook that I’m writing for a machine now gets written by a machine. That seems fair.
Mind you, I wouldn’t blindly trust the output. I’d take that regular expression and run it through a chatbot, maybe a different chatbot running on a different large language model. “Explain what this regular expression does,” would be my prompt. If my input into the first chatbot matches the output of the second, I’d have some confidence in using the regular expression.
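To make that concrete, here's a rough sketch of the kind of spot-check I mean. The regular expression and the test strings are entirely made up for illustration, not the output of any particular chatbot:

```typescript
// Suppose I asked for "a regular expression that matches ISO-style dates
// like 2023-03-17" and got this pattern back:
const isoDate = /^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$/;

// Before trusting it, a few quick sanity checks of my own:
const shouldMatch = ["2023-03-17", "1999-12-31"];
const shouldNotMatch = ["2023-13-01", "17-03-2023", "2023-3-7"];

console.log(shouldMatch.every((s) => isoDate.test(s)));   // true
console.log(shouldNotMatch.some((s) => isoDate.test(s))); // false

// Note the limits: this pattern doesn't know that "2023-02-30" isn't a real
// date, which is exactly the kind of thing I'd want the second chatbot's
// explanation to surface.
```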
A friend of mine told me about using a large language model to help write SQL statements. He described his database structure to the chatbot, and then described what he wanted to select.
Again, I wouldn’t use that output without checking it first. But again, I might use another chatbot to do that checking. “Explain what this SQL statement does.”
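Purely as an illustration (the table names, columns and query here are mine, not my friend's actual schema), the exchange might go something like this:

```typescript
// The kind of description you'd give the chatbot:
//   "I have a `customers` table (id, name, city) and an `orders` table
//    (id, customer_id, total, placed_at). Give me each customer's order
//    count and total spend for 2023, biggest spenders first."
//
// ...and the kind of SQL it might hand back, which I'd still want a second
// chatbot to explain before running it against real data:
const query = `
  SELECT c.name,
         COUNT(o.id)  AS order_count,
         SUM(o.total) AS total_spend
  FROM customers AS c
  JOIN orders AS o ON o.customer_id = c.id
  WHERE o.placed_at >= '2023-01-01'
    AND o.placed_at <  '2024-01-01'
  GROUP BY c.id, c.name
  ORDER BY total_spend DESC;
`;

console.log(query);
```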
Playing chatbots off against each other like this is kinda reminiscent of one way machine learning already works under the hood: generative adversarial networks, where one model generates output and a second model tries to catch its mistakes.
Of course, the task of having to validate the output of a chatbot by checking it with another chatbot could get quite tedious. “I wish to God these large language model outputs had been validated by steam!”
Sounds like a job for machines.
This free event is running online from 3pm to 7pm UK time this Friday. The line-up features Emily Bender, Safiya Noble, Timnit Gebru and more.
Since the publication of On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 two years ago, many of the harms the paper warned about, and more, have unfortunately occurred. From exploited workers filtering hateful content, to an engineer claiming that chatbots are sentient, the harms are only accelerating.
Join the co-authors of the paper and various guests to reflect on what has happened in the last two years, what the large language model landscape currently looks like, and where we are headed versus where we should be headed.
A very handy guide to considering privacy at all stages of digital product design:
This guidance is written for technology professionals such as product and UX designers, software engineers, QA testers, and product managers.
Y’know, I started reading this great piece by Claire L. Evans thinking about its connections to systems thinking, but I ended up thinking more about prototyping. And microbes.
Dave rounds up some of the acronymtastic ways of scoping your CSS now that we’ve got a whole new toolkit at our disposal.
If your goal is to reduce specificity, new native CSS tools make reducing specificity a lot easier. You can author your CSS with near-zero specificity and even control the order in which your rules cascade.
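For instance, here's a small sketch of my own (assuming a browser that supports cascade layers, `:where()` and constructable stylesheets), where zero-specificity selectors plus an explicit layer order mean later rules win without any selector arms race:

```typescript
// A constructable stylesheet using @layer for cascade order and :where()
// to keep selector specificity at zero.
const sheet = new CSSStyleSheet();
sheet.replaceSync(`
  @layer base, components;

  @layer base {
    :where(a) { color: inherit; }
  }

  @layer components {
    /* Wins over the base rule by layer order, not by specificity. */
    :where(.card a) { color: rebeccapurple; }
  }
`);

document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];
```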
Deleting your old thoughts may be giving your older self a kick they really don’t deserve. And the beauty of having an archive is that you don’t need to decide whether you were right or not. Your views, with a date attached, can stand as a reflection of a specific moment in time.
Reconciling every past view you’ve ever had with how you feel now isn’t required. It sounds exhausting, frankly.
Following on from my recently lost long bet, this is a timely bit of data spelunking from Brian, analysing the linkrot of 1,400 links over 18 years.
More of the whimsical web!
A collection of truly personal sites.
This site is meant to showcase what a more personal web could look like, and hopefully give you some inspiration to make your own corner of the web a bit weirder.
Of course Cassie’s site is included!
Tiny ruins.
Fellow Brightonians, here’s a handy one-stop shop for all the places doing deliveries right now, generated from this spreadsheet by Chris Boakes.
- Is this really a problem?
- Does the problem need to be solved?
- Does the problem need to be solved now?
- Does the problem need to be solved by me?
- Is there a simpler problem I can solve instead?
Tantek documents the features he wants his posting interface to have.
Hidde gives an in-depth explanation of the Accessibility Object Model, coming soon to browsers near you:
In a way, that’s a bit like what Service Workers do for the network and Houdini for style: give developers control over something that was previously done only by the browser.
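As a small taste of that control, here's a sketch of my own using the ARIA reflection part of the proposal; browser support varies, so treat it as illustrative:

```typescript
// ARIA reflection exposes aria-* attributes as plain DOM properties,
// so accessibility semantics can be set directly from script.
const button = document.createElement("button");
button.textContent = "×";

// Instead of button.setAttribute("aria-label", "Close dialog"):
button.ariaLabel = "Close dialog";
button.ariaExpanded = "false";

document.body.append(button);
```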
Impressively lightweight and smooth!
This well-researched in-depth piece doesn’t paint a pretty picture for archiving online news:
Of the 21 news organizations in our study, 19 were not taking any protective steps at all to archive their web output. The remaining two lacked formal strategies to ensure that their current practices have the kind of longevity to outlast changes in technology.
One facet of this whole CSS debate involves one side saying, “Just learn CSS” and the other side responding, “That’s what I’ve been trying to do!”
I think it’s high time we the teachers of CSS start discussing how exactly we can teach a correct mental model. How do we, in specific and practical ways, help developers get past this point of frustration? Because we have not figured out how to properly teach a mental model of CSS.
These are good challenges to think about. Almost all of them are user-focused, and there’s a refreshing focus away from reaching for a library:
It’s tempting to read about these problems with a particular view library or a data fetching library in mind as a solution. But I encourage you to pretend that these libraries don’t exist, and read again from that perspective. How would you approach solving these issues?
I can never keep these straight—this is going to be a handy reference to keep on hand.