Journal tags: technology

Disclosure

You know how when you’re on hold to any customer service line you hear a message that thanks you for calling and claims your call is important to them. The message always includes a disclaimer about calls possibly being recorded “for training purposes.”

Nobody expects that any training is ever actually going to happen—surely we would see some improvement if that kind of iterative feedback loop were actually in place. But we most certainly want to know that a call might be recorded. Recording a call without disclosure would be unethical and illegal.

Consider chatbots.

If you’re having a text-based (or maybe even voice-based) interaction with a customer service representative that doesn’t disclose its output is the result of large language models, that too would be unethical. But, at the present moment in time, it would be perfectly legal.

That needs to change.

I suspect the necessary legislation will pass in Europe first. We’ll see if the USA follows.

In a way, this goes back to my obsession with seamful design. With something as inherently varied as the output of large language models, it’s vital that people have some way of evaluating what they’re told. I believe we should be able to see as much of the plumbing as possible.

The bare minimum amount of transparency is revealing that a machine is in the loop.

This shouldn’t be a controversial take. But I guarantee we’ll see resistance from tech companies trying to sell their “AI” tools as seamless, indistinguishable drop-in replacements for human workers.

Guessing

The last talk at the last dConstruct was by local clever clogs Anil Seth. It was called Your Brain Hallucinates Your Conscious Reality. It’s well worth a listen.

Anil covers a lot of the same ground in his excellent book, Being You. He describes a model of consciousness that inverts our intuitive understanding.

We tend to think of our day-to-day reality in a fairly mechanical cybernetic manner; we receive inputs through our senses and then make decisions about reality informed by those inputs.

As another former dConstruct speaker, Adam Buxton, puts it in his interview with Anil, it feels like that old Beano cartoon, the Numskulls, with little decision-making homunculi inside our heads.

But Anil posits that it works the other way around. We make a best guess of what the current state of reality is, and then we receive inputs from our senses, and then we adjust our model accordingly. There’s still a feedback loop, but cause and effect are flipped. First we predict or guess what’s happening, then we receive information. Rinse and repeat.

The book goes further and applies this to our very sense of self. We make a best guess of our sense of self and then adjust that model constantly based on our experiences.

There’s a natural tendency for us to balk at this proposition because it doesn’t seem rational. The rational model would be to make informed calculations based on available data …like computers do.

Maybe that’s what sets us apart from computers. Computers can make decisions based on data. But we can make guesses.

Enter machine learning and large language models. Now, for the first time, it appears that computers can make guesses.

The guess-making is not at all like what our brains do—large language models require enormous amounts of inputs before they can make a single guess—but still, this should be the breakthrough to be shouted from the rooftops: we’ve taught machines how to guess!

And yet. Almost every breathless press release touting some revitalised service that uses AI talks instead about accuracy. It would be far more honest to tout the really exceptional new feature: imagination.

Using AI, we will guess who should get a mortgage.

Using AI, we will guess who should get hired.

Using AI, we will guess who should get a strict prison sentence.

Reframed like that, it’s easy to see why technologists want to bury the lede.

Alas, this means that large language models are being put to use for exactly the wrong kind of scenarios.

(This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.)

Take search engines. They’re based entirely on trust and accuracy. Introducing a chatbot that confidently conflates truth and fiction doesn’t bode well for the long-term reputation of that service.

But what if this is an interface problem?

Currently facts and guesses are presented with equal confidence, hence the accurate descriptions of the outputs as bullshit or mansplaining as a service.

What if the more fanciful guesses were marked as such?

As it is, there’s a “temperature” control that can be adjusted when generating these outputs; the more the dial is cranked, the further the outputs will stray from the safest predictions. What if that could be reflected in the output?

I don’t know what that would look like. It could be typographic—some markers to indicate which bits should be taken with pinches of salt. Or it could be through content design—phrases like “Perhaps…”, “Maybe…” or “It’s possible but unlikely that…”
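That kind of content design could even be derived from the temperature setting itself. Here’s a minimal sketch of the idea in JavaScript, assuming the temperature used for a given output gets passed along with it; the thresholds and phrases are invented for illustration, not any real service’s API:

// A rough sketch: prefix a model's output with a hedging phrase based on
// the "temperature" it was generated at. The thresholds and wording are
// arbitrary illustrations.
function hedgeByTemperature(text, temperature) {
  if (temperature <= 0.3) {
    return text; // close to the safest prediction, so present it plainly
  }
  if (temperature <= 0.7) {
    return `Perhaps ${text}`;
  }
  return `It's possible but unlikely that ${text}`;
}

console.log(hedgeByTemperature('the meeting is on Tuesday.', 0.2));
console.log(hedgeByTemperature('the meeting is on Tuesday.', 0.9));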

I’m sure you’ve seen the outputs when people request that ChatGPT write their biography. Perfectly accurate statements are generated side-by-side with complete fabrications. This reinforces our scepticism of these tools. But imagine how differently the fabrications would read if they were preceded by some simple caveats.

A little bit of programmed humility could go a long way.

Right now, these chatbots are attempting to appear seamless. If 80% or 90% of their output is accurate, then blustering through the other 10% or 20% should be fine, right? But I think the experience for the end user would be immensely more empowering if these chatbots were designed seamfully. Expose the wires. Show the workings-out.

Mind you, that only works if there is some way to distinguish between fact and fabrication. If there’s no way to tell how much guessing is happening, then that’s a major problem. If you can’t tell me whether something is 50% true or 75% true or 25% true, then the only rational response is to treat the entire output as suspect.

I think there’s a fundamental misunderstanding behind the design of these chatbots that goes all the way back to the Turing test. There’s this idea that the way to make a chatbot believable and trustworthy is to make it appear human, attempting to hide the gears of the machine. But the real way to gain trust is through honesty.

I want a machine to tell me when it’s guessing. That won’t make me trust it less. Quite the opposite.

After all, to guess is human.

Like

We use metaphors all the time. To quote George Lakoff, we live by them.

We use analogies some of the time. They’re particularly useful when we’re wrapping our heads around something new. By comparing something novel to something familiar, we can make a shortcut to comprehension, or at least, categorisation.

But we need a certain amount of vigilance when it comes to analogies. Just because something is like something else doesn’t mean it’s the same.

With that in mind, here are some ways that people are describing generative machine learning tools. Large language models are like…

Brandolini’s blockchain

I’ve already written about how much I enjoyed hosting Leading Design San Francisco last week.

All the speakers were terrific. Lola’s talk was particularly …um, interesting:

In this talk, Lola will share her adventures in the world of blockchain, the hostility she experienced in her first go-round in 2018, and why she’s chosen to head back to a technology that is going through its largest reputational and social crisis to date.

Wait …I was supposed to stand on stage and introduce a talk that was (at least partly) about blockchain? I have opinions.

As it turned out, Lola warned me that I’d be making an appearance in her talk. She was going to quote that blog post. Before the talk, I asked her how obnoxious I could be about blockchain in her intro. She told me to bring it.

So in the introduction, I deployed all the sarcasm I had in me and said:

Listen, we designers have a tendency to be over-critical of things sometimes. There are all these ideas that we dismiss: phrenology, homeopathy, flat-earthism …blockchain. Haters gonna hate.

I remember somebody asking online a while back, “Why the hate for web3?” And someone I know responded by saying “We hate it because we understand it.” I think there’s a lot of truth to that.

But look, just because blockchains are powering crypto ponzi schemes and N F fucking Ts, it’s worth remembering that it’s also simply a technology. It’s a technological solution in search of a problem.

To be fair, it’s still early days. After all, it’s only been over a decade now.

It’s like the law of the instrument says: when all you have is a hammer, everything looks like a nail. Blockchain is like that. Except the hammer is also made of glass.

Anyway, Lola is going to defend the indefensible and talk about blockchain. One thing to keep in mind is this: remember when everyone was talking about “The Cloud”? And then it turned out that you could substitute the phrase “someone else’s server” for “The Cloud?” Well, every time you hear Lola say the word “blockchain”, I’d like you to mentally substitute the phrase “multiple copies of a spreadsheet.”

Please give an open mind and a warm welcome to Lola Oyelayo Pearson!

I got some laughs. I also got lots of gasps and pearl-clutching, as though I were saying something taboo. Welcome to San Francisco.

Lola gave as good as she got. I got a roasting in her talk.

And just to clarify, Lola and I are friends—this was a consensual smackdown.

There was a very serious point to Lola’s talk. Cryptobollocks and other blockchain-powered schemes have historically been very bro-y, and exploitative of non-bro communities. Lola wants to fight that trend.

I get it. But it reminds me a bit of the justifications you hear from people who go to work at Facebook claiming that they can do more good from the inside. Whatever helps you sleep at night.

The crux of Lola’s belief is this: blockchain technology is inevitable, therefore it is incumbent on us as ethical designers to ensure that the technology is deployed in a way that empowers people instead of exploiting them.

But I take issue with the premise. Blockchain technology is not inevitable. That’s the worst kind of technological determinism. It’s defeatist. It’s a depressing view of “progress” driven not by people, but by technological forces beyond our control.

I refuse to accept that anti-humanist deterministic view.

In any case, for technological determinism to have any validity, there needs to be something to it. At least virtual reality and machine learning are based on some actual technologies. In the case of cryptobollocks, there is no there there. There is nothing except the hype, which is why you’ll see blockchain enthusiasts trying to ride the coattails of trending technologies in a logical fallacy that goes something like this:

  1. There are technologies that will be really big in the future,
  2. blockchain is a technology, therefore
  3. blockchain will be really big in the future.

Blockchain is bullshit. It isn’t even very clever bullshit. And it certainly isn’t inevitable.

You can call me AI

I’ve mentioned before that I’m not a fan of initialisms and acronyms. They can be exclusionary.

It bothers me doubly when everyone is talking about AI.

First of all, the term is so vague as to be meaningless. Sometimes—though rarely—AI refers to general artificial intelligence. Sometimes AI refers to machine learning. Sometimes AI refers to large language models. Sometimes AI refers to a series of if/else statements. That’s quite a spectrum of meaning.

Secondly, there’s the assumption that everyone understands the abbreviation. I guess that’s generally a safe assumption, but sometimes AI could refer to something other than artificial intelligence.

In countries with plenty of pastoral agriculture, if someone works in AI, it usually means they’re going from farm to farm either extracting or injecting animal semen. AI stands for artificial insemination.

I think that abbreviation might work better for the kind of things currently described as using AI.

We were discussing this hot topic at work recently. Is AI coming for our jobs? The consensus was maybe, but only the parts of our jobs that we’re more than happy to have automated. Like summarising some findings. Or perhaps as a kind of lorem ipsum generator. Or for just getting the ball rolling with a design direction. As Terence puts it:

Midjourney is great for a first draft. If, like me, you struggle to give shape to your ideas then it is nothing short of magic. It gets you through the first 90% of the hard work. It’s then up to you to refine things.

That’s pretty much the conclusion we came to in our discussion at Clearleft. There’s no way that we’d use this technology to generate outputs for clients, but we certainly might use it to generate inputs. It’s like how we’d do a quick round of sketching to get a bunch of different ideas out into the open. Terence is spot on when he says:

Midjourney lets me quickly be wrong in an interesting direction.

To put it another way, using a large language model could be a way of artificially injecting some seeds of ideas. Artificial insemination.

So now when I hear people talk about using AI to create images or articles, I don’t get frustrated. Instead I think, “Using artificial insemination to create images or articles? Yes, that sounds about right.”

In between

I was chatting with my new colleague Alex yesterday about a link she had shared in Slack. It was the Nielsen Norman Group’s annual State of Mobile User Experience report.

There’s nothing too surprising in there, other than the mention of Apple’s app clips and Google’s instant apps.

Remember those?

Me neither.

Perhaps I lead a sheltered existence, but as an iPhone user, I don’t think I’ve come across a single app clip in the wild.

I remember when they were announced. I was quite worried about them.

See, the one thing that the web can (theoretically) offer that native can’t is instant access to a resource. Go to this URL—that’s it. Whereas for a native app, the flow is: go to this app store, find the app, download the app.

(I say that the benefit is theoretical because the website found at the URL should download quickly—the reality is that the bloat of “modern” web development imperils that advantage.)

App clips—and instant apps—looked like a way to route around the convoluted install process of native apps. That’s why I was nervous when they were announced. They sounded like a threat to the web.

In reality, the potential was never fulfilled (if my own experience is anything to go by). I wonder why people didn’t jump on app clips and instant apps?

Perhaps it’s because what they promise isn’t desirable from a business perspective: “here’s a way for users to accomplish their tasks without downloading your app.” Even though app clips can in theory be a stepping stone to installing the full app, from a user’s perspective, their appeal is the exact opposite.

Or maybe they’re just too confusing to understand. I think there’s another technology that suffers from the same problem: progressive web apps.

Hear me out. Progressive web apps are—if done well—absolutely amazing. You get all of the benefits of native apps in terms of UX—they even work offline!—but you retain the web’s frictionless access model: go to a URL; that’s it.
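That offline capability comes from service workers. Here’s a minimal cache-first sketch, just to show the shape of it (the file names and caching strategy are illustrative, not a recommendation):

// In the page: register the service worker.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}

// In serviceworker.js: serve from the cache when possible, falling back to
// the network and caching the response for next time.
const cacheName = 'pages-v1';

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      return cached || fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open(cacheName).then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});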

So what are they? Are they websites? Yes, sorta. Are they apps? Yes, sorta.

That’s confusing, right? I can see how app clips and instant apps sound equally confusing: “you can use them straight away, like going to a web page, but they’re not web pages; they’re little bits of apps.”

I’m mostly glad that app clips never took off. But I’m sad that progressive web apps haven’t taken off more. I suspect that their fates are intertwined. Neither suffers from technical limitations. The problem they both face is inertia:

The technologies are the easy bit. Getting people to re-evaluate their opinions about technologies? That’s the hard part.

True of progressive web apps. Equally true of app clips.

But when I was chatting to Alex, she made me look at app clips in a different way. She described a situation where somebody might need to interact with some kind of NFC beacon from their phone. Web NFC isn’t supported in many browsers yet, so you can’t rely on that. But you don’t want to make people download a native app just to have a quick interaction. In theory, an app clip—or instant app—could do the job.
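In code, that polyfill-ish thinking might look something like this sketch: feature-detect Web NFC and only fall back to the native route when it’s missing (the fallback URL is made up):

// Web NFC is only available in some Chromium-based browsers at the moment,
// so feature-detect before relying on it.
async function readExhibitTag() {
  if ('NDEFReader' in window) {
    const reader = new NDEFReader();
    await reader.scan(); // requires a user gesture and permission
    reader.addEventListener('reading', ({ message }) => {
      for (const record of message.records) {
        console.log('NFC record of type', record.recordType);
      }
    });
  } else {
    // No Web NFC: hand over to something else, like an app clip or an
    // instant app. This URL is purely illustrative.
    location.href = 'https://example.com/exhibit/fallback';
  }
}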

In that situation, app clips aren’t a danger to the web—they’re polyfills for hardware APIs that the web doesn’t yet support!

I love having my perspective shifted like that.

The specific situations that Alex and I were discussing were in the context of museums. Museums offer such interesting opportunities for the physical and the digital to intersect.

Remember the pen from Cooper Hewitt? Aaron spoke about it at dConstruct 2014—a terrific presentation that’s well worth revisiting and absorbing.

The other dConstruct talk that’s very relevant to this liminal space between the web and native apps is the 2012 talk from Scott Jenson. I always thought the physical web initiative had a lot of promise, but it may have been ahead of its time.

I loved the thinking behind the physical web beacons. They were deliberately dumb, much like the internet itself. All they did was broadcast a URL. That’s it. All the smarts were to be found at the URL itself. That meant a service could get smarter over time. It’s a lot easier to update a website than swap out a piece of hardware.

But any kind of technology that uses Bluetooth, NFC, or other wireless technology has to get over the discovery problem. They’re invisible technologies, so by default, people don’t know they’re even there. But if you make them too discoverable—intrusively announcing themselves like one of the commercials in Minority Report—then they’re indistinguishable from spam. There’s a sweet spot of discoverability right in the middle that’s hard to get right.

Over the past couple of years—accelerated by the physical distancing necessitated by The Situation—QR codes stepped up to the plate.

They still suffer from some discoverability issues. They’re not human-readable, so you can’t be entirely sure that the URL you’re going to go to isn’t going to be a Rick Astley video. But they are visible, which gives them an advantage over hidden wireless technologies.

They’re cheaper too. Printing a QR code sticker costs less than getting a plastic beacon shipped from China.

QR codes turned out to be just good enough to bridge the gap between the physical and digital for those one-off interactions like dining outdoors during a pandemic:

I can see why they chose the web over a native app. Online ordering is the only way to place your order at this place. Telling people “You have to go to this website” …that seems reasonable. But telling people “You have to download this app” …that’s too much friction.

Ironically, the nail in the coffin for app clips and instant apps might’ve been hammered in by Apple and Google when they built QR-code recognition into their camera software.

One morning in the future

I had a video call this morning with someone who was in India. The call went great, except for a few moments when the video stalled.

“Sorry about that”, said the person I was talking to. “It’s the monkeys. They like messing with the cable.”

There’s something charming about an intercontinental internet-enabled meeting being slightly disrupted by some fellow primates being unruly.

It also made me stop and think about how amazing it was that we were having the call in the first place. I remembered Arthur C. Clarke’s predictions from 1964:

I’m thinking of the incredible breakthrough which has been possible by developments in communications, particularly the transistor and, above all, the communications satellite.

These things will make possible a world in which we can be in instant contact with each other wherever we may be, where we can contact our friends anywhere on Earth even if we don’t know their actual physical location.

It will be possible in that age—perhaps only 50 years from now—for a man to conduct his business from Tahiti or Bali just as well as he could from London.

The casual sexism of assuming that it would be a “man” conducting business hasn’t aged well. And it’s not the communications satellite that enabled my video call, but old-fashioned undersea cables, many in the same locations as their telegraphic antecedents. But still; not bad, Arthur.

After my call, I caught up on some email. There was a new newsletter from Ariel who’s currently in Antarctica.

Just thinking about the fact that I know someone who’s in Antarctica—who sent me a postcard from Antarctica—gave me another rush of feeling like I was living in the future. As I started to read the contents of the latest newsletter, that feeling became even more specific. Doesn’t this sound exactly like something straight out of a late ’80s/early ’90s cyberpunk novel?

Four of my teammates head off hiking towards the mountains to dig holes in the soil in hopes of finding microscopic animals contained within them. I hang back near the survival bags with the remaining teammate and begin unfolding my drone to get a closer look at the glaciers. After filming the textures of the land and ice from multiple angles for 90 minutes, my batteries are spent, my hands are cold and my stomach is growling. I land the drone, fold it up into my bright yellow Pelican case, and pull out an expired granola bar to keep my hunger pangs at bay.

Mars distracts

A few years ago, I wrote about how much I enjoyed the book Aurora by Kim Stanley Robinson.

Not everyone liked that book. A lot of people were put off by its structure, in which the dream of interstellar colonisation meets the harsh truth of reality and the book follows where that leads. It pours cold water over the very idea of humanity becoming interplanetary.

But our own solar system is doable, right? I mean, Kim Stanley Robinson is the guy who wrote the Mars trilogy and 2312, both of which depict solar system colonisation in just a few centuries.

I wonder if the author might regret the way that some have taken his Mars trilogy as a sort of manual, Torment Nexus style. Kim Stanley Robinson is very much concerned with this planet in this time period, but others use his work to do the opposite.

But the backlash to Mars has begun.

Maciej wrote Why Not Mars:

The goal of this essay is to persuade you that we shouldn’t send human beings to Mars, at least not anytime soon. Landing on Mars with existing technology would be a destructive, wasteful stunt whose only legacy would be to ruin the greatest natural history experiment in the Solar System. It would no more open a new era of spaceflight than a Phoenician sailor crossing the Atlantic in 500 B.C. would have opened up the New World. And it wouldn’t even be that much fun.

Manu Saadia is writing a book about humanity in space, and he has a corresponding newsletter called Against Mars: Space Colonization and its Discontents:

What if space colonization was merely science-fiction, a narrative, or rather a meta-narrative, a myth, an ideology like any other? And therefore, how and why did it catch on? What is so special and so urgent about space colonization that countless scientists, engineers, government officials, billionaire oligarchs and indeed, entire nations, have committed work, ingenuity and treasure to make it a reality.

What if, and hear me out, space colonization was all bullshit?

I mean that quite literally. No hyperbole. Once you peer under the hood, or the nose, of the rocket ship, you encounter a seemingly inexhaustible supply of ghoulish garbage.

Two years ago, Shannon Stirone went into the details of why Mars Is a Hellhole:

The central thing about Mars is that it is not Earth, not even close. In fact, the only things our planet and Mars really have in common is that both are rocky planets with some water ice and both have robots (and Mars doesn’t even have that many).

Perhaps the most damning indictment of the case for Mars colonisation is that its most ardent advocate turns out to be an idiotic small-minded eugenicist who can’t even run a social media company, much less a crewed expedition to another planet.

But let’s be clear: we’re talking here about the proposition of sending humans to Mars—ugly bags of mostly water that probably wouldn’t survive. Robots and other uncrewed missions in our solar system …more of that, please!

JavaScript

A recurring theme in my writing and talks is “lay off the JavaScript, people!” But I have to make a conscious effort to specify that I mean client-side JavaScript.

I thought it would be obvious from the context that I was talking about the copious amounts of JavaScript being shipped to end users to download, parse, and execute. But nothing’s ever really obvious. If I don’t explicitly say JavaScript in the browser, then someone inevitably thinks I’m having a go at JavaScript, the language.

I have absolutely nothing against JavaScript the language. Just like I have nothing against Python or Ruby or any other language that you might write with on your machine or your web server. But as soon as you deliver bytes over the wire, I start having opinions. It just so happens that JavaScript is the universal language for client-side coding so that’s why I call for restraint with JavaScript specifically.

There was a time when JavaScript only existed in web browsers. That changed with Node. Now it’s possible to write code for your web server and code for web browsers using the same language. Very handy!

But just because it’s the same language doesn’t mean you should treat it the same in both circumstances. As Remy puts it:

There are two JavaScripts.

One for the server - where you can go wild.

One for the client - that should be thoughtful and careful.

I was reading something recently that referred to Eleventy as a JavaScript library. It really brought me up short. I mean, on the one hand, yes, it’s a library of code and it’s written in JavaScript. It is absolutely technically correct to call it a JavaScript library.

But in my mind, a JavaScript library is something you ship to web browsers—jQuery, React, Vue, and so on. Whereas Eleventy executes its code in order to generate HTML and that’s what gets sent to end users. Conceptually, it’s like the opposite of a JavaScript library. Eleventy does its work before any user requests a URL—JavaScript libraries do their work after a user requests a URL.
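A minimal Eleventy configuration makes the distinction concrete. Something like this (the paths and filter are just for illustration) runs once at build time and spits out static HTML; none of it is shipped to the browser:

// .eleventy.js: runs at build time only.
module.exports = function (eleventyConfig) {
  // A build-time filter, usable in templates as {{ title | shout }}.
  eleventyConfig.addFilter('shout', (value) => String(value).toUpperCase());

  return {
    dir: {
      input: 'src',    // templates and content live here
      output: '_site', // static HTML ends up here, ready to serve
    },
  };
};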

To me it seems obvious that there should be an entirely different mindset for writing code intended for a web browser. But nothing’s ever really obvious.

I remember when Node was getting really popular and npm came along as a way to manage all the bundles of code that people were assembling in their Node programmes. Makes total sense. But then I thought I heard about people using npm to do the same thing for client-side code. “That can’t be right!” I thought. I must’ve misunderstood. So I talked to someone from npm and explained how I must be misunderstanding something.

But it turned out that people really were treating client-side JavaScript no differently than server-side JavaScript. People really were pulling in megabytes of other people’s code to ship to end users so that they could, I dunno, left pad numbers or something.

Listen, I don’t care what you get up to in the privacy of your own codebase. But don’t poison the well of the web with profligate client-side JavaScript.

Directory enquiries

I was talking to someone recently about a forgotten battle in the history of the early web. It was a battle between search engines and directories.

These days, when the history of the web is told, a whole bunch of services get lumped into the category of “competitors who lost to Google search”: Altavista, Lycos, Ask Jeeves, Yahoo.

But Yahoo wasn’t a search engine, at least not in the same way that Google was. Yahoo was a directory with a search interface on top. You could find what you were looking for by typing or you could zero in on what you were looking for by drilling down through a directory structure.

Yahoo wasn’t the only directory. DMOZ was an open-source competitor. You can still experience it at DMOZlive.com:

The official DMOZ.com site was closed by AOL on February 17th 2017. DMOZ Live is committed to continuing to make the DMOZ Internet Directory available on the Internet.

Search engines put their money on computation, or to use today’s parlance, algorithms (or if you’re really shameless, AI). Directories put their money on humans. Good ol’ information architecture.

It turned out that computation scaled faster than humans. Search won out over directories.

Now an entire generation has been raised in the aftermath of this battle. Monica Chin wrote about how this generation views the world of information:

Catherine Garland, an astrophysicist, started seeing the problem in 2017. She was teaching an engineering course, and her students were using simulation software to model turbines for jet engines. She’d laid out the assignment clearly, but student after student was calling her over for help. They were all getting the same error message: The program couldn’t find their files.

Garland thought it would be an easy fix. She asked each student where they’d saved their project. Could they be on the desktop? Perhaps in the shared drive? But over and over, she was met with confusion. “What are you talking about?” multiple students inquired. Not only did they not know where their files were saved — they didn’t understand the question.

Gradually, Garland came to the same realization that many of her fellow educators have reached in the past four years: the concept of file folders and directories, essential to previous generations’ understanding of computers, is gibberish to many modern students.

Dr. Saavik Ford confirms:

We are finding a persistent issue with getting (undergrad, new to research) students to understand that a file/directory structure exists, and how it works. After a debrief meeting today we realized it’s at least partly generational.

We live in a world ordered only by search:

While some are quite adept at using labels, tags, and folders to manage their emails, others will claim that there’s no need to do so because you can easily search for whatever you happen to need. Save it all and search for what you want to find. This is, roughly speaking, the hot mess approach to information management. And it appears to arise both because search makes it a good-enough approach to take and because the scale of information we’re trying to manage makes it feel impossible to do otherwise. Who’s got the time or patience?

There are still hold-outs. You can prise files from Scott Jenson’s cold dead hands.

More recently, Linus Lee points out what we’ve lost by giving up on directory structures:

Humans are much better at choosing between a few options than conjuring an answer from scratch. We’re also much better at incrementally approaching the right answer by pointing towards the right direction than nailing the right search term from the beginning. When it’s possible to take a “type in a query” kind of interface and make it more incrementally explorable, I think it’s almost always going to produce a more intuitive and powerful interface.

Directory structures still make sense to me (because I’m old) but I don’t have a problem with search. I do have a problem with systems that try to force me to search when I want to drill down into folders.

I have no idea what Google Drive and Dropbox are doing but I don’t like it. They make me feel like the opposite of a power user. Trying to find a file using their interfaces makes me feel like I’m trying to get a printer to work. Randomly press things until something happens.

Anyway. Enough fist-shaking from me. I’m going to ponder Linus’s closing words. Maybe defaulting to a search interface is a cop-out:

Text search boxes are easy to design and easy to add to apps. But I think their ease on developers may be leading us to ignore potential interface ideas that could let us discover better ideas, faster.

On reading

On Tyranny by Timothy Snyder is a very short book. Most of the time, this is a feature, not a bug.

There are plenty of non-fiction books I’ve read that definitely could’ve been much, much shorter. Books that have a good sensible idea, but one that could’ve been written on the back of a napkin instead of being expanded into an arbitrarily long form.

In the world of fiction, there’s the short story. I guess the equivalent in the non-fiction world is the essay. But On Tyranny isn’t an essay. It’s got chapters. They’re just really, really short.

Sometimes that brevity means that nuance goes out the window. What might’ve been a subtle argument that required paragraphs of pros and cons in another book gets reduced to a single sentence here. Mostly that’s okay.

The premise of the book is that Trump’s America is comparable to Europe in the 1930s:

We are no wiser than the Europeans who saw democracy yield to fascism, Nazism, or communism. Our one advantage is that we might learn from their experience.

But in making the comparison, Snyder goes all in. There’s very little accounting for the differences between the world of the early 20th century and the world of the early 21st century.

This becomes really apparent when it comes to technology. One piece of advice offered is:

Make an effort to separate yourself from the internet. Read books.

Wait. He’s not actually saying that words on screens are in some way inherently worse than words on paper, is he? Surely that’s just the nuance getting lost in the brevity, right?

Alas, no:

Staring at screens is perhaps unavoidable but the two-dimensional world makes little sense unless we can draw upon a mental armory that we have developed somewhere else. … So get screens out of your room and surround yourself with books.

I mean, I’m all for reading books. But books are about what’s in them, not what they’re made of. To value words on a page more than the same words on a screen is like judging a book by its cover; it’s judging a book by its atoms.

For a book that’s about defending liberty and progress, On Tyranny is puzzlingly conservative at times.

Re-evaluating technology

There’s a lot of emphasis put on decision-making: making sure you’re making the right decision; evaluating all the right factors before making a decision. But we rarely talk about revisiting decisions.

I think perhaps there’s a human tendency to treat past decisions as fixed. That’s certainly true when it comes to evaluating technology.

I’ve been guilty of this. I remember once chatting with Mark about something written in PHP—probably something I had written—and I made some remark to the effect of “I know PHP isn’t a great language…” Mark rightly called me on that. The language wasn’t great in the past but it has come on in leaps and bounds. My perception of the language, however, had not updated accordingly.

I try to keep that lesson in mind whenever I’m thinking about languages, tools and frameworks that I’ve investigated in the past but haven’t revisited in a while.

Andy talks about this as the tech tool carousel:

The carousel is like one of those on a game show that shows the prizes that can be won. The tool will sit on there until I think it’s gone through enough maturing to actually be a viable tool for me, the team I’m working with and the clients I’m working for.

Crucially a carousel is circular: tools and technologies come back around for re-evaluation. It’s all too easy to treat technologies as being on a one-way conveyor belt—once they’ve passed in front of your eyes and you’ve weighed them up, that’s it; you never return to re-evaluate your decision.

This doesn’t need to be a never-ending process. At some point it becomes clear that some technologies really aren’t worth returning to:

It’s a really useful strategy because some tools stay on the carousel and then I take them off because they did in fact, turn out to be useless after all.

See, for example, anything related to cryptobollocks. It’s been well over a decade and blockchains remain a solution in search of problems. As Molly White put it, it’s not still the early days:

How long can it possibly be “early days”? How long do we need to wait before someone comes up with an actual application of blockchain technologies that isn’t a transparent attempt to retroactively justify a technology that is inefficient in every sense of the word? How much pollution must we justify pumping into our atmosphere while we wait to get out of the “early days” of proof-of-work blockchains?

Back to the web (the actual un-numbered World Wide Web)…

Nolan Lawson wrote an insightful article recently about how he senses that the balance has shifted away from single page apps. I’ve been sensing the same shift in the zeitgeist. That said, both Nolan and I keep an eye on how browsers are evolving and getting better all the time. If you weren’t aware of changes over the past few years, it would be easy to still think that single page apps offer some unique advantages that in fact no longer hold true. As Nolan wrote in a follow-up post:

My main point was: if the only reason you’re using an SPA is because “it makes navigations faster,” then maybe it’s time to re-evaluate that.

For another example, see this recent XKCD cartoon:

“You look around one day and realize the things you assumed were immutable constants of the universe have changed. The foundations of our reality are shifting beneath our feet. We live in a house built on sand.”

The day I discovered that Apple Maps is kind of good now

Perhaps the best example of a technology that warrants regular re-evaluation is the World Wide Web itself. Over the course of its existence it has been seemingly bettered by other more proprietary technologies.

Flash was better than the web. It had vector graphics, smooth animations, and streaming video when the web had nothing like it. But over time, the web caught up. Flash was the hare. The World Wide Web was the tortoise.

In more recent memory, the role of the hare has been played by native apps.

I remember talking to someone on the Twitter design team who was designing and building for multiple platforms. They were frustrated by the web. It just didn’t feel as fully-featured as iOS or Android. Their frustration was entirely justified …at the time. I wonder if they’ve revisited their judgement since then though.

In recent years in particular it feels like the web has come on in leaps and bounds: service workers, native JavaScript APIs, and an astonishing boost in what you can do with CSS. Most important of all, the interoperability between browsers is getting better and better. Universal support for new web standards arrives at a faster rate than ever before.

But developers remain suspicious, still preferring to trust third-party libraries over native browser features. They made a decision about those libraries in the past. They evaluated the state of browser support in the past. I wish they would re-evaluate those decisions.
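To pick a few small examples, tasks that once justified reaching for a library are now built into every modern browser (the selectors and URL below are invented for illustration):

// DOM selection, no library needed:
const cards = document.querySelectorAll('.card');

// Ajax, no library needed:
fetch('/api/items.json')
  .then((response) => response.json())
  .then((items) => console.log(items));

// Doing work when something scrolls into view, no scroll-spy plugin needed:
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      entry.target.classList.add('is-visible');
    }
  }
});
cards.forEach((card) => observer.observe(card));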

Alas, inertia is a very powerful force. Sticking with a past decision—even if it’s no longer the best choice—is easier than putting in the effort to re-evaluate everything again.

What’s the phrase? “Strong opinions, weakly held.” We’re very good at the first part and pretty bad at the second.

Just the other day I was chatting with one of my colleagues about an online service that’s available on the web and also as a native app. He was showing me the native app on his phone and said it’s not a great app.

“Why don’t you add the website to your phone?” I asked.

“You know,” he said. “The website’s going to be slow.”

He hadn’t tested this. But years of dealing with crappy websites on his phone in the past had trained him to think of the web as being inherently worse than native apps (even though there was nothing this particular service was doing that required any native functionality).

It has become a truism now. Native apps are better than the web.

And you know what? Once upon a time, that would’ve been true. But it hasn’t been true for quite some time …at least from a technical perspective.

But even if the technologies in browsers have reached parity with native apps, that won’t matter unless we can convince people to revisit their previously-formed beliefs.

The technologies are the easy bit. Getting people to re-evaluate their opinions about technologies? That’s the hard part.

TEDxBrighton 2022

I went to TEDxBrighton on Friday. I didn’t actually realise it was happening until just a couple of days beforehand, but once I knew, I figured I should take advantage of it being right here in my own town.

All in all, it was a terrific day. The MCing by Adam Pearson was great—just the right mix of enthusiasm and tongue-in-cheek humour. The curation of the line-up worked well too. The day was broken up into four loosely-themed sections. As I’m currently in the process of curating an event myself, I can appreciate how challenging it is.

Each section opened with a musical act. Again, having been involved behind the scenes with many events myself, I was impressed by the audaciousness, just from a logistical perspective. It all went relatively smoothly.

The talks at a TED or TEDx event can be a mixed bag. You can have a scientist on stage distilling years of research into a succinct message followed by someone talking nonsense about some pseudo-psychological self-help scheme. But at TEDxBrighton, we lucked out.

A highlight for me was Dr James Mannion talking about implementation science—something that felt directly applicable to design work. Victoria Jenkins was also terrific, and again, her points about inclusive design felt very relevant. And of course I really enjoyed the space-based talks by Melissa Thorpe and Bianca Cefalo. Now that I think about it, just about everyone was great: Katie Vincent, Lewis Wedlock, Dina Nayeri—they all wowed me.

With one exception. There was a talk that was supposed to be about the future of democracy. In reality it quickly veered into DAOs before descending into a pitch for crypto and NFTs. The call to action was literally for everyone in the audience to go out and get a crypto wallet and buy an NFT …using ethereum no less! We were exhorted to use an unbelievably wasteful and energy-intensive proof-of-work technology to get our hands on a receipt for a JPG …from the same stage that would later highlight the work of climate activists like Tommie Eaton. It was really quite disgusting. The fear-based message of the talk was literally about getting in on the scheme before it’s too late. At one point we were told to “do the research.” I’m surprised we weren’t all told that we’re “not going to make it.”

A disgraceful shill for a ponzi scheme would’ve ruined any other event. Fortunately the line-up at TEDxBrighton was so strong that one scam artist couldn’t torpedo the day. Just like crypto itself—and associated bollocks like NFTs and web3—it was infuriating to have to sit through it in the short term, but then it just faded away into insignificance. One desperate peddler of snake oil couldn’t make a dent in an otherwise great day.

Declarative design

I feel like in the past few years there’s been a number of web design approaches that share a similar mindset. Intrinsic web design by Jen; Every Layout by Andy and Heydon; Utopia by Trys and James.

To some extent, their strengths lie in technological advances in CSS: flexbox, grid, calc, and so on. But more importantly, they share an approach. They all focus on creating the right inputs rather than trying to control every possible output. Leave the final calculations for those outputs to the browser—that’s what computers are good at.

As Andy puts it:

Be the browser’s mentor, not its micromanager.

Reflecting on Utopia’s approach, Jim Nielsen wrote:

We say CSS is “declarative”, but the more and more I write breakpoints to accommodate all the different ways a design can change across the viewport spectrum, the more I feel like I’m writing imperative code. At what quantity does a set of declarative rules begin to look like imperative instructions?

In contrast, one of the principles of Utopia is to be declarative and “describe what is to be done rather than command how to do it”. This approach declares a set of rules such that you could pick any viewport width and, using a formula, derive what the type size and spacing would be at that size.

Declarative! Maybe that’s the word I’ve been looking for to describe the commonalities between Utopia, Every Layout, and intrinsic web design.
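The formula in question is nothing exotic: linear interpolation between a minimum and a maximum size across a range of viewport widths. Here’s a rough sketch of that calculation, with made-up numbers rather than Utopia’s actual defaults:

// Fluid type, roughly as Utopia-style tools calculate it: interpolate a
// font size between two extremes across a range of viewport widths.
function fluidFontSize(viewportWidth, minWidth = 320, maxWidth = 1240, minSize = 16, maxSize = 24) {
  const clamped = Math.min(Math.max(viewportWidth, minWidth), maxWidth);
  const progress = (clamped - minWidth) / (maxWidth - minWidth);
  return minSize + (maxSize - minSize) * progress;
}

console.log(fluidFontSize(320));  // 16: the minimum size
console.log(fluidFontSize(780));  // 20: halfway between the extremes
console.log(fluidFontSize(1240)); // 24: the maximum size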

So if declarative design is a thing, does that mean imperative design is also a thing? And what might the tools and technologies for imperative design look like?

I think that Tailwind might be a good example of an imperative design tool. It’s only about the specific outputs. Systematic thinking is actively discouraged; instead you say exactly what you want the final pixels on the screen to be.

I’m not saying that declarative tools—like Utopia—are right and that imperative tools—like Tailwind—are wrong. As always, it depends. In this case, it depends on the mindset you have.

If you agree with this statement, you should probably use an imperative design tool:

CSS is broken and I want my tools to work around the way CSS has been designed.

But if you agree with this statement, you should probably use a declarative design tool:

CSS is awesome and I want my tools to amplify the way that CSS has been designed.

If you agree with the first statement but you then try using a declarative tool like Utopia or Every Layout, you will probably have a bad time. You’ll probably hate it. You may declare the tool to be “bad”.

Likewise if you agree with the second statement but you then try using an imperative tool like Tailwind, you will probably have a bad time. You’ll probably hate it. You may declare the tool to be “bad”.

It all depends on whether the philosophy behind the tool matches your own philosophy. If those philosophies match up, then using the tool will be productive and that tool will act as an amplifier—a bicycle for the mind. But if the philosophy of the tool doesn’t match your own philosophy, then you will be fighting the tool at every step—it will slow you down.

Knowing that this spectrum exists between declarative tools and imperative tools can help you when you’re evaluating technology. You can assess whether a web design tool is being marketed on the premise that CSS is broken or on the premise that CSS is awesome.

I wonder whether your path into web design and development might also factor into which end of the spectrum you’d identify with. Like, if your background is in declarative languages like HTML and CSS, maybe intrinsic web design really resonates. But if your background is in imperative languages like JavaScript, perhaps Tailwind makes more sense to you.

Again, there’s no right or wrong here. This is about matching the right tool to the right mindset.

Personally, the declarative design approach fits me like a glove. It feels like it’s in the tradition of John’s A Dao Of Web Design or Ethan’s Responsive Web Design—ways of working with the grain of the web.

Today, the distant future

It’s a bit of a cliché to talk about living in the future. It’s also a bit pointless. After all, any moment after the big bang is a future when viewed from any point in time before it.

Still, it’s kind of fun when a sci-fi date rolls around. Like in 2015 when we reached the time depicted in Back To The Future 2, or in 2019 when we reached the time of Blade Runner.

In 2022 we are living in the future of web standards. Again, technically, we’re always living in the future of any past discussion of web standards, but this year is significant …in a very insignificant way.

It all goes back to 2008 and an interview with Hixie, editor of the HTML5 spec at the WHATWG at the time. In it, he mentioned the date 2022 as the milestone for having two completely interoperable implementations.

The far more important—and ambitious—date was 2012, when HTML5 was supposed to become a Candidate Recommendation, which is standards-speak for done’n’dusted.

But the mere mention of the year 2022 back in the year 2008 was too much for some people. Jeff Croft, for example, completely lost his shit (Jeff had a habit of posting angry rants and then denying that he was angry or ranty, but merely having a bit of fun).

The whole thing was a big misunderstanding and soon irrelevant: talk of 2022 was dropped from HTML5 discussions. But for a while there, it was fascinating to see web designers and developers contemplate a year that seemed ludicrously futuristic. Jeff wrote:

God knows where I’ll be in 13 years. Quite frankly, I’ll be pretty fucking disappointed in myself (and our entire industry) if I’m writing HTML in 13 years.

That always struck me as odd. If I thought like that, I’d wonder what the point would be in making anything on the web to begin with (bear in mind that both my own personal website and The Session are now entering their third decade of life).

I had a different reaction to Jeff, as I wrote in 2010:

Many web developers were disgusted that such a seemingly far-off date was even being mentioned. My reaction was the opposite. I began to pay attention to HTML5.

But Jeff was far from alone. Scott Gilbertson wrote an angry article on Webmonkey:

If you’re thinking that planning how the web will look and work 13 years from now is a little bit ridiculous, you’re not alone.

Even if your 2022 ronc-o-matic web-enabled toaster (It slices! It dices! It browses! It arouses!) does ship with Firefox v22.3, will HTML still be the dominant language of web? Given that no one can really answer that question, does it make sense to propose a standard so far in the future?

(I’m re-reading that article in the current version of Firefox: 95.0.2.)

Brian Veloso wrote on his site:

Two-thousand-twenty-two. That’s 14 years from now. Can any of us think that far? Wouldn’t our robot overlords, whether you welcome them or not, have taken over by then? Will the internet even matter then?

From the comments on Jeff’s post, there’s Corey Dutson:

2022: God knows what the Internet will look like at that point. Will we even have websites?

Dan Rubin, who has indeed successfully moved from web work to photography, wrote:

I certainly don’t intend to be doing “web work” by that time. I’m very curious to see where the web actually is in 14 years, though I can’t imagine that HTML5 will even get that far; it’ll all be obsolete before 2022.

Joshua Works made a prediction that’s worryingly close to reality:

I’ll be surprised if website-as-HTML is still the preferred method for moving around the tons of data we create, especially in the manner that could have been predicted in 2003 or even today. Hell, iPods will be over 20 years old by then and if everything’s not run as an iPhone App, then something went wrong.

Someone with the moniker Grand Caveman wrote:

In 2022 I’ll be 34, and hopefully the internet will be obsolete by then.

Perhaps the most level-headed observation came from Jonny Axelsson:

The world in 2022 will be pretty much like the world in 2009.

The world in 2009 is pretty much like 1996 which was pretty much like the world in 1983 which was pretty much like the world in 1970. Some changes are fairly sudden, others are slow, some are dramatic, others subtle, but as a whole “pretty much the same” covers it.

The Web in 2022 will not be dramatically different from the Web in 2009. It will be less hot and it will be less cool. The Web is a project, and as it succeeds it will fade out of our attention and into the background. We don’t care about things when they work.

Now that’s a sensible perspective!

So who else is looking forward to seeing what the World Wide Web is like in 2036?

I must remember to write a blog post then and link back to this one. I have no intention of trying to predict the future, but I’m willing to bet that hyperlinks will still be around in 14 years.

Speaking of long bets…

Inertia

When I’ve spoken in the past about evaluating technology, I’ve mentioned two categories of tools for web development. I still don’t know quite what to call these categories. Internal and external? Developer-facing and user-facing?

The first category covers things like build tools, version control, transpilers, pre-processors, and linters. These are tools that live on your machine—or on the server—taking what you’ve written and transforming it into the raw materials of the web: HTML, CSS, and JavaScript.

The second category of tools are those that are made of the raw materials of the web: CSS frameworks and JavaScript libraries.

I think the criteria for evaluating these different kinds of tools should be very different.

For the first category, developer-facing tools, use whatever you want. Use whatever makes sense to you and your team. Use whatever’s effective for you.

But for the second category, user-facing tools, that attitude is harmful. If you make users download a CSS or JavaScript framework in order to benefit your workflow, then you’re making users pay a tax for your developer convenience. Instead, I firmly believe that user-facing tools should provide some direct benefit to end users.

When I’ve asked developers in the past why they’ve chosen to use a particular JavaScript framework, they’ve been able to give me plenty of good answers. But all of those answers involved the benefit to their developer workflow—efficiency, consistency, and so on. That would be absolutely fine if we were talking about the first category of tools, developer-facing tools. But those answers don’t hold up for the second category of tools, user-facing tools.

If a user-facing tool is only providing a developer benefit, is there any way to turn it into a developer-facing tool?

That’s very much the philosophy of Svelte. You can compare Svelte to other JavaScript frameworks like React and Vue but you’d be missing the most important aspect of Svelte: it is, by design, in that first category of tools—developer-facing tools:

Svelte takes a different approach from other frontend frameworks by doing as much as it can at the build step—when the code is initially compiled—rather than running client-side. In fact, if you want to get technical, Svelte isn’t really a JavaScript framework at all, as much as it is a compiler.

You install it on your machine, you write your code in Svelte, but what it spits out at the other end is HTML, CSS, and JavaScript. Unlike Vue or React, you don’t ship the library to end users.

In my opinion, this is an excellent design decision.

I know there are ways of getting React to behave more like a category one tool, but it is most definitely not the default behaviour. And default behaviour really, really matters. For React, the default behaviour is to assume all the code you write—and the tool you use to write it—will be sent over the wire to end users. For Svelte, the default behaviour is the exact opposite.

I’m sure you can find a way to get Svelte to send too much JavaScript to end users, but you’d be fighting against the grain of the tool. With React, you have to fight against the grain of the tool in order to not send too much JavaScript to end users.

But much as I love Svelte’s approach, I think it’s got its work cut out for it. It faces a formidable foe: inertia.

If you’re starting a greenfield project and you’re choosing a JavaScript framework, then Svelte is very appealing indeed. But how often do you get to start a greenfield project?

React has become so ubiquitous in the front-end development community that it’s often an unquestioned default choice for every project. It feels like enterprise software at this point. No one ever got fired for choosing React. Whether it’s appropriate or not becomes almost irrelevant. In much the same way that everyone is on Facebook because everyone is on Facebook, everyone uses React because everyone uses React.

That’s one of its biggest selling points to managers. If you’ve settled on React as your framework of choice, then hiring gets a lot easier: “If you want to work here, you need to know React.”

The same logic applies from the other side. If you’re starting out in web development, and you see that so many companies have settled on using React as their framework of choice, then it’s an absolute no-brainer: “if I want to work anywhere, I need to know React.”

This then creates a positive feedback loop. Everyone knows React because everyone is hiring React developers because everyone knows React because everyone is hiring React developers because…

At no point is there time to stop and consider if there’s a tool—like Svelte, for example—that would be less harmful for end users.

This is where I think Astro might have the edge over Svelte.

Astro has the same philosophy as Svelte. It’s a developer-facing tool by default. Have a listen to Drew’s interview with Matthew Phillips:

Astro does not add any JavaScript by default. You can add your own script tags obviously and you can do anything you can do in HTML, but by default, unlike a lot of the other component-based frameworks, we don’t actually add any JavaScript for you unless you specifically tell us to. And I think that’s one thing that we really got right early.

But crucially, unlike Svelte, Astro allows you to use the same syntax as the incumbent, React. So if you’ve learned React—because that’s what you needed to learn to get a job—you don’t have to learn a new syntax in order to use Astro.
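Here’s a rough sketch of how that plays out—assuming Astro’s React integration is set up, and with the component name and paths invented for illustration. By default a component is rendered to plain HTML at build time; JavaScript only gets sent to the browser if you explicitly ask for it with a client directive:

```astro
---
// index.astro — a hypothetical page, for illustration only
import LikeButton from '../components/LikeButton.jsx';
---
<html>
  <body>
    <h1>My page</h1>
    <!-- Rendered to static HTML at build time: no JavaScript shipped -->
    <LikeButton />
    <!-- Hydrated in the browser only because we opted in -->
    <LikeButton client:load />
  </body>
</html>
```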

I know you probably can’t take an existing React site and convert it to Astro with the flip of a switch, but at least there’s a clear upgrade path.

Astro reminds me of Sass. Specifically, it reminds me of the .scss syntax. You could take any CSS file, rename its file extension from .css to .scss and it was automatically a valid Sass file. You could start using Sass features incrementally. You didn’t have to rewrite all your style sheets.

Sass also has a .sass syntax. If you take a CSS file and rename it with a .sass file extension, it is not going to work. You need to rewrite all your CSS to use the .sass syntax. Some people used the .sass syntax, but the overwhelming majority of people used .scss.
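To make the difference concrete, here’s the same made-up rule in both syntaxes:

```scss
/* styles.scss — any valid CSS is already valid SCSS; nesting is opt-in */
.card {
  color: #333;

  a {
    text-decoration: underline;
  }
}
```

```sass
// styles.sass — the indented syntax: no braces, no semicolons
.card
  color: #333

  a
    text-decoration: underline
```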

I remember talking with Hampton about this and he confirmed the proportions. It was also the reason why one of his creations, Sass, was so popular, but another of his creations, Haml, was not, comparatively speaking—Sass is a superset of CSS but Haml is not a superset of HTML; it’s a completely different syntax.

I’m not saying that Svelte is like Haml and Astro is like Sass. But I do think that Astro has inertia on its side.

Memories of Ajax

I just finished watching The Billion Dollar Code, a German-language miniseries on Netflix. It’s no Halt and Catch Fire, but it combines ’90s nostalgia, early web tech, and an opportunity for me to exercise my German comprehension.

It’s based on a true story, but with amalgamated characters. The plot, which centres around the preparation for a court case, inevitably invites comparison to The Social Network, although this time the viewpoint is that of the underdogs trying to take on the incumbent. The incumbent is Google. The underdogs are ART+COM, artist-hackers who created the technology later used by Google Earth.

Early on, one of the characters says something about creating a one-to-one model of the whole world. That phrase struck me as familiar…

I remember being at the inaugural Future Of Web Apps conference in London back in 2006. Discussing the talks with friends afterwards, we all got a kick out of the speaker from Google, who happened to be German. His content and delivery were like those of a wonderfully Strangelovesque mad scientist. I’m sure I remember him saying something like “vee made a vun-to-vun model of the vurld.”

His name was Steffen Meschkat. I liveblogged the talk at the time. Turns out he was indeed on the team at ART+COM when they created Terravision, the technology later appropriated by Google (he ended up working at Google, which doesn’t make for as exciting a story as the TV show).

His 2006 talk was all about Ajax, something he was very familiar with, being on the Google Maps team. The Internet Archive even has a copy of the original audio!

It’s easy to forget now just how much hype there was around Ajax back then. It prompted me to write a book about combining Ajax and progressive enhancement.

These days, no one talks about Ajax. But that’s not because the technology went away. Quite the opposite. The technology became so ubiquitous that it no longer even needs a label.
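To put that in perspective: what once required carefully wrapping XMLHttpRequest—and a book’s worth of explanation—is now a throwaway few lines of fetch(). The endpoint and element here are invented, purely for illustration:

```javascript
// “Ajax” without the label: fetch some data and update the page in place
const output = document.querySelector('#results'); // assumes this element exists
fetch('/api/search?q=dconstruct')
  .then(response => response.json())
  .then(data => {
    output.textContent = `${data.length} matches found`;
  });
```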

A web developer today might ask “what’s Ajax?” in the same way that a fish might ask “what’s water?”

The cage

I subscribe to Peter Gasston’s newsletter, The Tech Landscape. It’s good. Peter’s a smart guy with his finger on the pulse of many technologies that are beyond my ken. I recommend subscribing.

But I was very taken aback by what he wrote in issue 202. It was to do with algorithmic recommendation engines.

This week I want to take a little dump on a tweet I read. I’m not going to link to it (I’m not that person), but it basically said something like: “I’m afraid to Google something because I don’t want the algorithm to think I like it, and I’m afraid to click a link because I don’t want the algorithm to show me more like it… what a cage.”

I saw the same tweet. It resonated with me. I had responded with a link to a post I wrote a while back called Get safe. That post made two points:

  1. GET requests shouldn’t have side effects. Adding to a dossier on someone’s browsing habits definitely counts as a side effect.
  2. It is literally a fundamental principle of the web platform that it should be safe to visit a web page.

But Peter describes ubiquitous surveillance as a feature, not a bug:

It’s observing what someone likes or does, then trying to make recommendations for more things like it—whether that’s books, TV shows, clothes, advertising, or whatever. It works on probability, so it’s going to make better guesses the more it knows you; if you like ten things of type A, then liking one thing of type B shouldn’t be enough to completely change its recommendations. The problem is, we don’t like “the algorithm” if it doesn’t work, and we don’t like it if it works too well (“creepy!”). But it’s not sinister, and it’s not a cage.

He would be correct if the balance of power were tipped towards the person actively looking for recommendations. As I said in my earlier post:

Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies?
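As a concrete sketch—with the URLs and field names invented for illustration—reading should be a safe GET, while an update the user is actively asking for should be a POST:

```html
<!-- Safe: a GET request with no side effects — you’re just reading -->
<form action="/search" method="get">
  <input type="search" name="q">
  <button>Search</button>
</form>

<!-- An update the user is actively requesting — so it’s a POST -->
<form action="/bookmarks" method="post">
  <input type="hidden" name="url" value="https://example.com/">
  <button>Bookmark this</button>
</form>
```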

When Peter says “it’s not sinister, and it’s not a cage” that may be true for him, but that is not a shared feeling, as the original tweet demonstrates. I don’t think it’s fair to dismiss someone else’s psychological pain because you don’t think they “get it”. I’m pretty sure everyone “gets” how recommendation engines are supposed to work. That’s not the issue. Trying to provide relevant content isn’t the problem. It’s the unbelievably heavy-handed methods that make it feel like a cage.

Peter uses the metaphor of a record shop:

“The algorithm” is the best way to navigate a world of infinite choice; imagine you went to a record shop (remember them?) which had every recording ever released; how would you find new music? You’d either buy music by bands you know you already liked, or you’d take a pure gamble on something—which most of the time would be a miss. So you’d ask a store worker, and they’d recommend the music they liked—but that’s no guarantee you’d like it. A good worker would ask what type of music you like, and recommend music based on that—you might not like all the recommendations, but there’s more of a chance you’d like some. That’s just what “the algorithm” does.

But that’s not true. You don’t ask “the algorithm” for a recommendation—it foists them on you whether you want them or not. A more apt metaphor would be that you walked by a record shop once and the store worker came out and followed you down the street, into your home, and watched your every move for the rest of your life.

What Peter describes sounds great—a helpful knowledgable software agent that you ask for recommendations. But that’s not what “the algorithm” is. And that’s why it feels like a cage. That’s why it is a cage.

The original tweet was an open, honest, and vulnerable insight into what online recommendation engines feel like. That’s a valuable insight that should be taken on board, not dismissed.

And what a lack of imagination to look at an existing broken system—that doesn’t even provide good recommendations while making people afraid to click on links—and shrug and say that this is the best we can do. If this really is “the best way to navigate a world of infinite choice” then it’s no wonder that people feel like they need to go on a digital detox and get away from their devices in order to feel normal. It’s like saying that decapitation is the best way of solving headaches.

Imagine living in a surveillance state like East Germany, and saying “Well, how else is the government supposed to make informed decisions without constantly monitoring its citizens?” I think it’s more likely that you’d feel like you’re in a cage.

Apples to oranges? Kind of. But whether it’s surveillance communism or surveillance capitalism, there’s a shared methodology at work. They’re both systems that disempower people for the supposedly greater good of amassing data. Both are built on the false premise that problems can be solved by getting more and more data. If that results in collateral damage to people’s privacy and mental health, well …it’s all for the greater good, right?

It’s fucking bullshit. I don’t want to live in that cage and I don’t want anyone else to have to live in it either. I’m going to do everything I can to tear it down.

Principles and the English language

I work with words. Sometimes they’re my words. Sometimes they’re words that my colleagues have written:

One of my roles at Clearleft is “content buddy.” If anyone is writing a talk, or a blog post, or a proposal and they want an extra pair of eyes on it, I’m there to help.

I also work with web technologies, usually front-of-the-front-end stuff. HTML, CSS, and JavaScript. The technologies that users experience directly in web browsers.

I think a lot about design principles for the web. The two principles I keep coming back to are the robustness principle and the principle of least power.

When it comes to words, the guide that I return to again and again is George Orwell’s short essay, Politics and the English Language.

Towards the end, he offers some rules for writing.

  1. Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.
  2. Never use a long word where a short one will do.
  3. If it is possible to cut a word out, always cut it out.
  4. Never use the passive where you can use the active.
  5. Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
  6. Break any of these rules sooner than say anything outright barbarous.

These look a lot like design principles. Not only that, but some of them look like specific design principles. Take the robustness principle:

Be conservative in what you send, be liberal in what you accept.

That first part applies to Orwell’s third rule:

If it is possible to cut a word out, always cut it out.

Be conservative in what words you send.

Then there’s the principle of least power:

Choose the least powerful language suitable for a given purpose.

Compare that to Orwell’s second rule:

Never use a long word where a short one will do.

That could be rephrased as:

Choose the shortest word suitable for a given purpose.

Or, going in the other direction, the principle of least power could be rephrased in Orwell’s terms as:

Never use a powerful language where a simple language will do.

Oh, I like that! I like that a lot.

The principle of most availability

I’ve been thinking some more about the technical experience of booking a vaccination appointment and how much joy it brought me.

I’ve written before about how I’ve got a blind spot for the web so it’s no surprise that I was praising the use of a well marked-up form, styled clearly, and unencumbered by unnecessary JavaScript. But other technologies were in play too: Short Message Service (SMS) and email.

All of those technologies are platform-agnostic.

No matter what operating system I’m using, or what email software I’ve chosen, email works. It gets more complicated when you introduce HTML email. My response to that is the same as the old joke; you know the one: “Doctor, it hurts when I do this.” (“Well, don’t do that.”)

No matter what operating system my phone is using, SMS works. It gets more complicated when you introduce read receipts, memoji, or other additions. See my response to HTML email.

Then there’s the web. No matter what operating system I’m using on a device that could be a phone or a tablet or a laptop or desktop tower, and no matter what browser I’ve chosen to use, the World Wide Web works.

I originally said:

It feels like the principle of least power in action.

But another way of rephrasing “least power” is “most availability.” Technologies that are old, simple, and boring tend to be more widely available.

I remember when software used to come packaged in boxes and displayed on shelves. The packaging always had a list on the side. It looked like the nutritional information on a food product, but this was a list of “system requirements”: operating system, graphics card, sound card, CPU. I never liked the idea of system requirements. It felt so …exclusionary. And for me, the promise of technology was liberation and freedom to act on my own terms.

Hence my soft spot for the boring and basic technologies like email, SMS, and yes, web pages. The difference with web pages is that you can choose to layer added extras on top. As long as the fundamental functionality is using universally-supported technology, you’re free to enhance with all the latest CSS and JavaScript. If any of it fails, that’s okay: it falls back to a nice solid base.

Alas, many developers don’t build with this mindset. I mean, I understand why: it means thinking about users with the most boring, least powerful technology. It’s simpler and more exciting to assume that everyone’s got a shared baseline of newer technology. But by doing that, you’re missing out on one of the web’s superpowers: that something served up at the same URL with the same underlying code can simultaneously serve people with older technology and also provide a whizz-bang experience to people with the latest and greatest technology.
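Here’s a small sketch of that layering—with the endpoint and markup invented for illustration. The HTML works everywhere on its own; the script is purely an added extra:

```html
<!-- The solid base: a form that works in any browser, no JavaScript required -->
<form action="/signup" method="post">
  <label for="email">Email</label>
  <input type="email" id="email" name="email" required>
  <button>Sign up</button>
</form>

<script>
  // The added extra: where fetch() is available, submit in place.
  // If this script never runs, the form above still works as plain HTML.
  const form = document.querySelector('form');
  if (form && 'fetch' in window) {
    form.addEventListener('submit', async (event) => {
      event.preventDefault();
      try {
        await fetch(form.action, { method: form.method, body: new FormData(form) });
        form.insertAdjacentHTML('afterend', '<p>Thanks for signing up!</p>');
      } catch (error) {
        form.submit(); // network hiccup? fall back to a normal submission
      }
    });
  }
</script>
```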

Anyway, I’ve been thinking about the kind of communication technologies that are as universal as email, SMS, and the web.

QR codes are kind of heading in that direction, although I still have qualms because of their proprietary history. But there’s something nice and lo-fi about them. They’re like print stylesheets in reverse (and I love print stylesheets). A funky little bridge between the physical and the digital. I just wish they weren’t so opaque: you never know if scanning that QR code will actually take you to the promised resource, or if you’re about to rickroll yourself.

Telephone numbers kind of fall into the same category as SMS, but with the added option of voice. I’ve always found the prospect of doing something with, say, Twilio’s API more interesting than building something inside a walled garden like Facebook Messenger or Alexa.

I know very little about chat apps or voice apps, but I don’t think there’s a cross-platform format that works with different products, right? I imagine it’s like the situation with native apps which require a different codebase for each app store and operating system. And so there’s a constant stream of technologies that try to fulfil the dream of writing once and running everywhere: React Native, Flutter.

They’re trying to solve a very clear and obvious problem: writing the same app more than once is really wasteful. But that’s the nature of the game when it comes to runtime-specific apps. The only alternative is to either deliberately limit your audience …or apply the principle of least power/most availability.

The wastefulness of having to write the same app for multiple platforms isn’t the only thing that puts me off making native apps. The exclusivity works in two directions. There’s the exclusive nature of the runtime that requires a bespoke codebase. There’s also the exclusive nature of the app store. It feels like a return to shelves of packaged software with strict system requirements. You can’t just walk in and put your software on the shelf. That’s the shopkeeper’s job.

There is no shopkeeper for the World Wide Web.