Back in 2014 Vitaly asked me if I’d be the host for Smashing Conference in Freiburg. I jumped at the chance. I thought it would be an easy gig. All of the advantages of speaking at a conference without the troublesome need to actually give a talk.
It wasn’t just a matter of introducing each speaker—there was also a little chat with each speaker after their talk, so I had to make sure I was paying close attention to each and every talk, thinking of potential questions and conversation points. After two days of that, I was a bit knackered.
Last month, I hosted another event, but this time it was online: UX Fest. Doing the post-talk interviews was definitely a little weirder online. It’s not quite the same as literally sitting down with someone. But the online nature of the event did provide one big advantage…
To minimise technical hitches on the day, and to ensure that the talks were properly captioned, all the speakers recorded their talks ahead of time. That meant I had an opportunity to get a sneak peek at the talks and prepare questions accordingly.
UX Fest had a day of talks every Thursday in June. There were four talks per Thursday. I started prepping on the Monday.
First of all, I just watched all the talks and let them wash over me. At this point, I’d often think “I’m not sure if I can come up with any questions for this one!” but I’d let the talks sit there in my subconscious for a while. This was also a time to let connections between talks bubble up.
Then on the Tuesday and Wednesday, I went through the talks more methodically, pausing the video every time I thought of a possible question. After a few rounds of this, I inevitably ended up with plenty of questions, some better than others. So I then re-ordered them in descending levels of quality. That way if I didn’t get to the questions at the bottom of the list, it was no great loss.
In theory, I might not get to any of my questions. That’s because attendees could also ask questions on the day via a chat window. I prioritised those questions over my own. Because it’s not about me.
On some days there was a good mix of audience questions and my own pre-prepared questions. On other days it was mostly my own questions.
Either way, it was important that I didn’t treat the interview like a laundry list of questions to get through. It was meant to be a conversation. So the answer to one question might touch on something that I had made a note of further down the list, in which case I’d run with that. Or the conversation might go in a really interesting direction completely unrelated to the questions or indeed the talk.
Above all, these segments needed to be engaging and entertaining in a personable way, more like a chat show than a post-game press conference. So even though I had done lots of prep for interviewing each speaker, I didn’t want to show my homework. I wanted each interview to feel like a natural flow.
To quote the old saw, this kind of spontaneity takes years of practice.
There was an added complication when two speakers shared an interview slot for a joint Q&A. Not only did I have to think of questions for each speaker, I also had to think of questions that would work for both speakers. And I had to keep track of how much time each person was speaking so that the chat wasn’t dominated by one person more than the other. This was very much like moderating a panel, something that I enjoy very much.
Y’know, there are not many things I’m really good at. I’m a mediocre developer, and an even worse designer. I’m okay at writing. But I’m really good at public speaking. And I think I’m pretty darn good at this hosting lark too.
This looks interesting: a free one-day Barcamp-like event online all about design systems for the public sector, organised by the Gov.uk design system team:
If you work on public sector services and work with design systems, you’re welcome to attend. We even have some tickets for people who do not work in the public sector. If you love design systems, we’re happy to have you!
Steph and I had already colluded ahead of time on how we were going to split up the talks. She would go narrow and dive into one specific subgenre, solarpunk. I would go broad and give a big picture overview of science fiction literature.
Obviously I couldn’t possibly squeeze the entire subject of sci-fi into one short talk, so all I could really do was give my own personal subjective account. Hence, the talk is called Sci-fi and Me. I’ve published the transcript, uploaded the slides and the audio, and Marc has published the video on YouTube and Vimeo. Kudos to Tina Pham for going above and beyond to deliver a supremely accurate transcript with a super-fast turnaround.
I divided the talk into three sections. The first was my own personal story of growing up in small-town Ireland and reading every sci-fi book I could get my hands on from the local library. The second part was a quick history of sci-fi publishing covering the last two hundred years. The third and final part was a run-down of ten topics that sci-fi deals with. For each topic, I gave a brief explanation, mentioned a few books and then chose one that best represents that particular topic. That was hard.
Planetary romance. I mentioned the John Carter books of Edgar Rice Burroughs, the Helliconia trilogy by Brian Aldiss, and the Riverworld saga by Philip José Farmer. I chose Dune by Frank Herbert.
Space opera. I mentioned the Skylark and Lensman books by E.E. ‘Doc’ Smith, the Revelation Space series by Alastair Reynolds, and the Machineries of Empire books by Yoon Ha Lee. I chose Ancillary Justice by Ann Leckie.
Dystopia. I mentioned The Handmaid’s Tale by Margaret Atwood and Fahrenheit 451 by Ray Bradbury. I chose 1984 by George Orwell.
Post-apocalypse. I mentioned The Drought and The Drowned World by J.G. Ballard, Day Of The Triffids by John Wyndham, The Road by Cormac McCarthy, and Oryx and Crake by Margaret Atwood. I chose Station Eleven by Emily St. John Mandel.
Artificial intelligence. I mentioned Machines Like Me by Ian McEwan and Klara And The Sun by Kazuo Ishiguro. I chose I, Robot by Isaac Asimov.
First contact. I mentioned The War Of The Worlds by H.G. Wells, Childhood’s End and Rendezvous With Rama by Arthur C. Clarke, Solaris by Stanislaw Lem, and Contact by Carl Sagan. I chose Stories Of Your Life And Others by Ted Chiang.
Time travel. I mentioned The Time Machine by H.G. Wells, The Shining Girls by Lauren Beukes, and The Peripheral by William Gibson. I chose Kindred by Octavia Butler.
Okay, that’s eleven, not ten, but that last one is a bit of a cheat—it’s a subgenre rather than a topic. But it allowed me to segue nicely into Steph’s talk.
Here’s a list of those eleven books. I can recommend each and every one of them. Still, the problem with going with this topic-based approach was that some of my favourite sci-fi books of all time fall outside of any kind of classification system. Where would I put The Demolished Man by Alfred Bester, one of my all-time favourites? How could I classify Philip K. Dick books like Ubik, The Three Stigmata Of Palmer Eldritch, or A Scanner Darkly? And where would I even begin to describe the books of Christopher Priest?
But despite the inevitable gaps, I’m really pleased with how the overall talk turned out. I had a lot of fun preparing it and even more fun presenting it. It made a nice change from the usual topics I talk about. Incidentally, if you’ve got a conference or a podcast and you ever want me to talk about something other than the web, I’m always happy to blather on about sci-fi.
I’m going to talk about sci-fi, in general. Of course, there isn’t enough time to cover everything, so I’ve got to restrict myself.
First of all, I’m just going to talk about science fiction literature. I’m not going to go into film, television, games, or anything like that. But of course, in the discussion, I’m more than happy to talk about sci-fi films, television, and all that stuff. But for brevity’s sake, I thought I’ll just stick to books here.
Also, I can’t possibly give an authoritative account of all of science fiction literature, so it’s going to be very subjective. I thought what I can talk about is myself. In fact, it’s one of my favourite subjects.
So, that’s what I’m going to do. I’m going to talk about sci-fi and me.
So, let me tell you about my childhood. I grew up in a small town on the south coast of Ireland called Cobh. Here it is. It’s very picturesque when you’re looking at it from a distance. But I have to say, growing up there (in the 1970s and 1980s), there really wasn’t a whole lot to do.
There was no World Wide Web at this point. It was, frankly, a bit boring.
But there was one building in town that saved me, and that was this building here in the town square. This is the library. It was inside the library (amongst the shelves of books) that I was able to pass the time and find an escape.
It was here that I started reading the work, for example, of Isaac Asimov, a science fiction writer. He’s also a science writer. He wrote a lot of books. I think it might have even been a science book that got me into Isaac Asimov.
I was a nerdy kid into science, and I remember there was a book in the library that was essays and short stories. There’d be an essay about science followed by a short story that was science fiction, and it would keep going like that. It was by Isaac Asimov. I enjoyed those science fiction stories as much as the science, so I started reading more of his books, books about galactic empires, books about intelligent robots, detective stories but set on other planets.
There was a real underpinning of science to these books, hard science, in Isaac Asimov’s work. I enjoyed it, so I started reading other science fiction books in the library. I found these books by Arthur C. Clarke, which were very similar in some ways to Isaac Asimov in the sense that they’re very grounded in science, in the hard science.
In fact, the two authors used to get mistaken for one another in terms of their work. They formed an agreement. Isaac Asimov would graciously accept a compliment about 2001: A Space Odyssey and Arthur C. Clarke would graciously accept a compliment about the Foundation series.
Anyway, so these books, hard science fiction books, I loved them. I was really getting into them. There were plenty of them in the local library.
The other author that seemed to have plenty of books in the local library was Ray Bradbury. This tended to be more short stories than full-length novels. It was also different to Isaac Asimov and Arthur C. Clarke in the sense that it wasn’t so much grounded in the science. You got the impression he didn’t really care that much about how the science worked. It was more about atmosphere, stories, and characters.
These were kind of three big names in my formative years of reading sci-fi. I kind of went through the library reading all of the books by Isaac Asimov, Arthur C. Clarke, and Ray Bradbury.
Once I had done that, I started to investigate other books that were science fiction (in the library). I distinctly remember these books by Ursula K. Le Guin being in the library: The Left Hand of Darkness and The Dispossessed. I read them and I really enjoyed them. They are terrific books.
These, again, are different to the hard science fiction of something like Isaac Asimov and Arthur C. Clarke. There were questions of politics and gender starting to enter into the stories.
Also, I remember there were these two books by Alfred Bester: The Demolished Man and Tiger! Tiger! (also called The Stars My Destination). These were just wild. These were almost psychedelic.
I mean they were action-packed, but also, the writing style was action-packed. It was kind of like reading the Hunter S. Thompson of science fiction. It was fear and loathing in outer space.
These were opening my mind to other kinds of science fiction, and I also had my mind opened (and maybe warped) by reading the Philip K. Dick books that were in the library. Again, you got the impression he didn’t really care that much about the technology or the science. It was all about the stuff happening inside people’s heads, questioning what reality is.
At this point in my life, I hadn’t yet done any drugs. But reading Philip K. Dick kind of gave me a taste, I think, of what it would be like to do drugs.
These were also names that loomed large in my early science fiction readings: Ursula K. Le Guin, Alfred Bester, and Philip K. Dick.
Then there were the one-offs in the library. I remember coming across this book by Frank Herbert called Dune, reading it, and really enjoying it. It was spaceships and sandworms, but also kind of mysticism and environmentalism, even.
I remember having my tiny little mind blown by reading this book of short stories by Fredric Brown. They’re kind of like typical Twilight Zone short stories with a twist in the tale. I just love that.
I think short stories can almost be the natural home for science fiction because there is one idea explored fairly quickly. Short stories are really good for that.
I remember reading stories about the future. What would the world be like in the year 1999? Like in Harry Harrison’s Make Room! Make Room!, a tale of overpopulation that we all had to look forward to.
I remember this book by Walter M. Miller, A Canticle for Leibowitz, which was kind of a book about the long now (civilisations rising and falling). Again, it blew my little mind as a youngster and maybe started an interest I have to this day in thinking long-term.
So, this is kind of the spread of the science fiction books I read as a youngster, and I kept reading books after this. Throughout my life, I’ve read science fiction.
I don’t think it’s that unusual to read science fiction. In fact, I think just about anybody who reads has probably read science fiction because everyone has probably read one of these books. Maybe they’ve read Brave New World or 1984, some Kurt Vonnegut like Slaughterhouse-Five or The Sirens of Titan, Margaret Atwood books like The Handmaid’s Tale, or Kazuo Ishiguro books.
Now, a lot of the time, the mainstream authors of these books maybe wouldn’t be happy about having their works classified as sci-fi or science fiction. The term maybe sounds a little downmarket, so sometimes people will try to argue that these books are not science fiction even though clearly the premise of every one of these books is science fictional. It’s almost like these books are too good to be science fiction. There’s a little bit of snobbishness.
Brian Aldiss has a wonderful little poem, a little couplet to describe this attitude. He said:
“SF is no good,” they cry until we’re deaf.
“But this is good.”
“Well, then it’s not SF!”
Recently, I found out that there’s a term for these books by mainstream authors that cross over into science fiction, and these are called slipstream books. I think everyone at some point has read a slipstream science fiction book that maybe has got them interested in diving further into science fiction.
What is sci-fi?
Now, the question I’m really skirting around here is, what is sci-fi? I’m not sure I can answer that question.
Isaac Asimov had a definition. He said it’s that branch of literature which deals with the reaction of human beings to changes in science and technology. I think that’s a pretty good description of his books and the hard science fiction books of Arthur C. Clarke. But I don’t think that that necessarily describes some of the other authors I’ve mentioned, so it feels a little narrow to me.
Pamela Sargent famously said that science fiction is the literature of ideas. There is something to that, like when I was talking about how short stories feel like a natural home for sci-fi because you’ve got one idea, you explore it in a short story, and you’re done.
But I also feel like that way of phrasing science fiction as the literature of ideas almost leaves something unsaid, like, it’s the literature of ideas as opposed to plot, characterisation, and all this other kind of stuff that happens in literature. I always think, why not both? You know. Why can’t we have ideas, plot, characters, and all the other good stuff?
Also, ideas aren’t unique to sci-fi. Every form of literature has to have some idea or there’s no point writing the book. Every crime novel has to have an idea behind it. So, I’m not sure if that’s a great definition either.
Maybe the best definition came from Damon Knight who said sci-fi is what we point to when we say it. It’s kind of, “I know it when I see it,” kind of thing. I think there’s something to that.
Any time you come up with a definition of sci-fi, it’s always hard to draw hard lines between sci-fi and other adjacent genres like fantasy. They’re often spoken about together, sci-fi and fantasy. I think I can tell the difference between sci-fi and fantasy, but I can’t describe the difference. I don’t think there is a hard line.
Science fiction feels like it’s looking towards the future, even when it isn’t. Maybe the sci-fi story isn’t actually set in the future. But it feels like it’s looking to the future and asking, “What if?” whereas fantasy feels like it’s looking to the past and asking, “What if?” But again, fantasy isn’t necessarily set in the past, and science fiction isn’t necessarily set in the future.
You could say, “Oh, well, science fiction is based on science, and fantasy is based on magic,” but any sci-fi book that features faster than light travel is effectively talking about magic, not science. So, again, I don’t think you can draw those hard lines.
There are other genres that are very adjacent and cross over with sci-fi and fantasy, like horror. You get sci-fi horror, fantasy horror. What about any mainstream book that has magical realism to it? You could say that’s a form of fantasy or science fiction.
Ultimately, I think this question, “What is sci-fi?” is a really interesting question if you’re a publisher. It’s probably important for you to answer this question if you are a publisher. But if you are a reader, honestly, I don’t think it’s that important a question.
What is sci-fi for?
There’s another question that comes on from this, which is, “What is sci-fi for? What’s its purpose?” Is it propaganda for science, almost like the way Isaac Asimov is describing it?
Sometimes, it has been used that way. In the 1950s and ’60s, it was almost like a way of getting people into science. Reading science fiction certainly influenced future careers in science, but that feels like a very limiting way to describe a whole field of literature.
Is sci-fi for predicting the future? Most sci-fi authors would say, “No, no, no.” Ray Bradbury said, “I write science fiction not to predict the future, but to prevent it.” But there is always this element of trying to ask what if and play out the variables into the future.
Frederik Pohl said, “A good science fiction story should be able to predict not the automobile but the traffic jam,” which is kind of a nice way of looking at how it’s not just prediction.
Maybe thinking about sci-fi as literature of the future would obscure the fact that actually, most science fiction tends to really be about today or the time it’s published. It might be set in the future but, often, it’s dealing with issues of the day.
Ultimately, it’s about the human condition. Really, so is every form of literature. So, I don’t think there’s a good answer for this either. I don’t think there’s an answer for the question, “What is sci-fi for?” that you could put all science fiction into.
Okay, so we’re going to avoid the philosophical questions. Let’s get down to something a bit more straightforward. Let’s have a history of science fiction and science fiction literature.
Caveats again: this is going to be very subjective; it’s just my history. It’s also going to be a very Western view because I grew up in Ireland, a Western country.
Where would I begin the history of science fiction? I could start with the myths and legends and religions of most cultures, which have some kind of science fiction or fantasy element to them. You know, the Bible, a work of fantasy.
But if I wanted to start with what I would think is the modern birth of the sci-fi novel, I think Mary Wollstonecraft Shelley’s Frankenstein; or, The Modern Prometheus could be said to be the first sci-fi novel. It invents a whole bunch of tropes that we still use to this day: the mad scientist meddling with powers beyond their control.
It’s dealing with electricity. I talked about how sci-fi is often about topics of the day, and this is when electricity is just coming on the scene. There are all sorts of questions about the impact of electricity, and science fiction is a way of exploring them.
Talking about reanimating the dead, also kind of talking about artificial intelligence. It set the scene for a lot of what was to come.
Later, in the 19th Century, in the 1860s, and then the 1890s, we have these two giants of early science fiction. In France, we have Jules Verne, and he’s writing books like 20,000 Leagues Under the Sea, From Earth to the Moon, and Journey to the Centre of the Earth, these adventure stories with technology often at the centre of them.
Then in England, we have H.G. Wells, and he’s creating entire genres from scratch. He writes The Time Machine, War of the Worlds, The Invisible Man, The Island of Doctor Moreau.
Over in America, you’ve got Edgar Allan Poe mostly doing horror, but there’s definitely sci-fi or fantasy aspects to what he’s doing.
Now, as we get into the 20th Century, where sci-fi really starts to boom – even though the term doesn’t exist yet – is with the pulp fiction of the 1920s and 1930s. This is literally the pulp paper that cheap books were printed on. They were cheap to print. They were cheap for the authors, too. As in, the authors did not get paid much. People were just churning out these stories. There were pulp paperbacks and also magazines.
Hugo Gernsback, here in the 1920s, was the editor of Amazing Stories, and he talked about “scientifiction” stories. That was kind of his agenda.
Then later, in the 1930s, John W. Campbell became the editor of Astounding Stories. In 1937, he changed the name of it from Astounding Stories to Astounding Science Fiction. This is when the term really comes to prominence.
He does have an agenda. He wants stories grounded in plausible science. He wants that hard kind of science.
What you have here, effectively, is that, yes, the genre is getting this huge boost, but you’ve also got gatekeepers. You’ve got two old, white dude gatekeepers kind of deciding what gets published and what doesn’t. It’s setting the direction.
What happens next, though, is that a lot of science fiction does get published. A lot of good science fiction gets published in what’s known as the Golden Age of Science Fiction in the 1940s and 1950s. This, it turns out, is when authors like Isaac Asimov, Ray Bradbury, and Heinlein are publishing those early books I was reading in the library. I didn’t realise it at the time, but they were books from the Golden Age of Science Fiction.
This tended to be the hard science fiction. It’s grounded in technology. It’s grounded in science. There tend to be scientific explanations for everything in the books.
It’s all good stuff. It’s all enjoyable. But there’s an interesting swing of the pendulum in the 1960s and ’70s. This swing kind of comes from Europe, from the UK. This is known as the New Wave, a term coined by Michael Moorcock in New Worlds magazine, which he edited.
It’s led by authors like Brian Aldiss and J.G. Ballard, who are less concerned with outer space and more concerned with inner space: the mind, language, drugs, the inner world. It’s exciting stuff, quite different to the hard science that came before.
Like I say, it started in Europe, but then there was also this wave of it in America, broadening the scope of what sci-fi could be. You got less gatekeeping and you got more new voices. You got Ursula K. Le Guin and Samuel R. Delany expanding what sci-fi could be.
That trend continued into the 1980s when you began to see the rise of authors like Octavia Butler who, to this day, has a huge influence on Afrofuturism. You’re getting more and more voices. You’re getting a wider scope of what science fiction could be.
I think the last big widening of sci-fi happened in the 1980s with William Gibson. He practically invented (from scratch) the genre of cyberpunk. If Mary Shelley was concerned with electricity then, by the 1980s, we were all concerned with computers, digital networks, and technology.
The difference with cyberpunk is that where an Asimov or Clarke story might be talking about someone in a position of power (a captain or an astronaut) and how technology impacts them, cyberpunk is looking at technology at the street level, when the street finds its own uses for things. That was expanded into other things as well.
After the 1980s, we start to get the new weird. We get people like Jeff Noon, China Miéville, and Jeff VanderMeer writing stuff. Is it sci-fi? Is it fantasy? Who knows?
Which brings us up to today. Today, we have, I think, a fantastic range of writers writing a fantastic range of science fiction, like Ann Leckie with her Imperial Radch stories, N.K. Jemisin with the fantastic Broken Earth trilogy, Yoon Ha Lee writing Machineries of Empire, and Ted Chiang with terrific short stories and his collections like Exhalation. I wouldn’t be surprised if, in the future, we look back on now as a true Golden Age of Science Fiction where it is wider, there are more voices and, frankly, more interesting stories.
Okay, so on the home stretch, I want to talk about the subjects of science fiction, the topics that sci-fi tends to cover. I’m going to go through ten topics of science fiction, list off what the topic is, name a few books, and then choose one book to represent that topic. It’s going to be a little tricky, but here we go.
Okay, so planetary romance is a sci-fi story that’s basically set on a single planet where the planet is almost like a character: the environment of the planet, the ecosystem of the planet. This goes back a long way. The Edgar Rice Burroughs stories of John Carter of Mars were kind of early planetary romance and even spawned a little sub-genre of Sword and Planet.
Brian Aldiss did a terrific trilogy called Helliconia, a series where the orbits of a star system are kind of the driving force behind the stories that take place over generations.
Philip José Farmer did this fantastic series, the Riverworld series, where everyone in history is reincarnated on one planet with a giant river spanning it.
If I had to pick one planetary romance to represent the genre, I am going to go with a classic. I’m going to go with Dune by Frank Herbert. It really is a terrific piece of work.
Space opera: the term was intended to denigrate it but, actually, it’s quite fitting. Space opera is what you think of when you think of sci-fi. It’s intergalactic empires, space battles, and good rip-roaring yarns. You can trace it back to these early works by E.E. ’Doc’ Smith. It’s the good ol’ stuff.
Space opera kind of fell out of favour for a while there, but it started coming back in the last few decades. We got some really great, hard sci-fi space opera from Alastair Reynolds and, more recently, Yoon Ha Lee with Ninefox Gambit – all good stuff.
But if I had to pick one space opera book to represent the genre, I’m going to go with Ancillary Justice by Ann Leckie. It is terrific. It’s like taking Asimov, Clarke, Ursula K. Le Guin, and the best of all of them, and putting them all into one series – great stuff.
Now, in space opera, generally, they come up with some way of being able to travel around the galaxy faster than light – warp speed or something like that – which makes it kind of a fantasy, really.
If you accept that you can’t travel faster than light, then maybe you’re going to write about generation starships. This is where you accept that you can’t zip around the galaxy, so you have to take your time getting from star system to star system, which means the journey takes multiple generations.
Brian Aldiss’s first book was a generation starship book called Non-Stop. But there’s one book that I think has the last word on generation starships, and it’s by Kim Stanley Robinson. It is Aurora. I love this book, a really great book. Definitely the best generation starship book there is.
All right. What about writing about utopias? Funnily enough, there aren’t as many utopias as there are of their counterpart. Maybe the most famous utopia in recent sci-fi is from Iain M. Banks with his Culture series. The Culture is a post-scarcity socialist utopia in space. It’s great galaxy-spanning space opera stuff.
What’s interesting, though, is that most of the stories are not about living in a utopia, because living in a post-scarcity utopia is, frankly, super boring. All the stories are about the edge cases. All the stories are literally about Special Circumstances.
All good fun, but the last word on utopian science fiction must go to Ursula Le Guin with The Dispossessed. It’s an anarcho-syndicalist utopia – or is it? It depends on how you read it.
I definitely have some friends who read this like it was a manual and other friends who read it like it was a warning. I think, inside every utopia, there’s a touch of dystopia, and dystopias are definitely the more common topic for science fiction. Maybe it’s easier to ask, “What’s the worst that can happen?” than to ask, “What’s the best that can happen?”
A lot of the slipstream books would be based on dystopias, like Margaret Atwood’s terrific The Handmaid’s Tale. I remember being young and reading (in that library) Fahrenheit 451 by Ray Bradbury, a book about burning books – terrific stuff.
But I’m going to choose one. If I’m going to choose one dystopia, I think I have to go with a classic. It’s never been beaten: George Orwell’s 1984, the last word on dystopias. It’s a fantastic work, a fantastic piece of literature.
I think George Orwell’s 1984 is what got a lot of people into reading sci-fi. With me, it almost went the opposite way. I was already reading sci-fi, but after reading 1984, I ended up going on to read everything ever written by George Orwell, which I can highly recommend. There’s no more sci-fi in there, but he’s a terrific writer.
All right. Here’s another topic: a post-apocalypse story. You also get pre-apocalypse stories like, you know, there’s a big asteroid coming or there’s a black hole in the centre of the Earth or something, and how we live out our last days. But, generally, authors tend to prefer post-apocalyptic settings, whether that’s post-nuclear war, post environmental catastrophe, post-plague. Choose your disaster and then have a story set afterward.
J.G. Ballard writes stories about not enough water and too much water. I think, basically, he wants to find a reason to put his characters in large, empty spaces because that’s what he enjoys writing about.
Very different to that, you’d have the post-apocalyptic stories of someone like John Wyndham, somewhat derided by Brian Aldiss as cosy catastrophes. Yes, the world is ending, but we’ll make it back home in time for tea.
At the complete other extreme from that, you would have something like Cormac McCarthy’s The Road, which is a relentlessly grim tale of the post-apocalypse.
I almost picked Margaret Atwood’s Oryx and Crake trilogy as the ultimate post-apocalyptic story. It’s really great stuff: post-plague, a genetically engineered plague – very timely.
But actually, even more timely – and a book that’s really stayed with me – is Station Eleven by Emily St. John Mandel. Not just because the writing is terrific and it is a plague book, so, yes, timely, but it also tackles questions like: What is art for? What is the human condition all about?
All right. Another topic that’s very popular amongst the techies: artificial intelligence – actual artificial intelligence, not what we in the tech world call artificial intelligence, which is a bunch of if/else statements.
Stories of artificial intelligence are also very popular in slipstream books from mainstream authors: recently we had a book from Ian McEwan tackling this topic, and a new book from Kazuo Ishiguro.
But again, I’m going to go back to the classic, right back to my childhood, and I’ll pick I, Robot, a collection of short stories by Isaac Asimov, where he first raises the idea of the three laws of robotics – robotics being a word he coined, by the way, derived from “robot”, which comes from the Czech.
These three laws are almost like design principles for artificial intelligence. All the subsequent works in this genre kind of push at those design principles. It’s good stuff. Not to be confused with the movie with the same name.
Here’s another topic: first contact with an alien species. Well, sometimes the first contact doesn’t go well, and the original book on this is H.G. Wells’s The War of the Worlds. Every alien invasion book since then has kind of just been a reworking of The War of the Worlds. It’s terrific stuff.
For more positive views on first contact, Arthur C. Clarke dives into it in books like Childhood’s End. In Rendezvous with Rama, what’s interesting is that we don’t actually contact the alien civilisation, but we have an artifact that we must decode and get information from. It’s good stuff.
More realistically, though, Solaris by Stanislaw Lem is frustrating because it’s realistic in the sense that we couldn’t possibly understand an alien intelligence. In the book – spoiler alert – we don’t.
For realism set in the world of today, Carl Sagan’s book Contact is terrific. Well worth a read. It really tries to answer what a first contact situation would look like today.
But I’ve got to pick one first contact story, and I’m actually going to go with a short story: Story of Your Life by Ted Chiang. I recommend getting the whole collection, Stories of Your Life and Others, and reading every short story in it because it’s terrific.
This is the short story that the film Arrival was based on, which is an amazing piece of work because I remember reading this fantastic short story and distinctly thinking, “This is unfilmable. This could only exist in literature.” Yet, they did a great job with the movie, which bodes well for the movie of Dune, which is also being directed by Denis Villeneuve.
All right. Time travel as a topic. I have to say, I think time travel is sometimes better handled in media like TV and movies than it is in literature. That said, you’ve got the original time travel story, The Time Machine. Again, H.G. Wells just made this stuff up from scratch, and it really holds up. It’s a good book. I mean, it’s really more about class warfare than it is about time travel, but it’s solid.
Actually, I highly recommend reading a nonfiction book called Time Travel by James Gleick where he looks at the history of time travel as a concept in both fiction and in physics.
You’ve got some interesting concepts like Lauren Beukes’s The Shining Girls, whose premise is a time-travelling serial killer – a really interesting mashup of genres. You’ve got evidence showing up out of chronological sequence.
By the way, this is being turned into a TV show as we speak, as is The Peripheral by William Gibson, a recent book by him. It’s terrific.
What I love about this is that it’s a time travel story where the only thing that travels in time is information. But that’s enough with today’s technology, so it’s like time travel for remote workers. Again, very timely, as all of William Gibson’s stuff tends to be.
But if I’ve got to choose one, I’m going to choose Kindred by Octavia Butler because it’s just such a terrific book. To be honest, the time travel aspect isn’t the centre of the story, but it’s absolutely worth reading as just a terrific, terrific piece of literature.
Now, in time travel, you’ve generally got two kinds of time travel. You’ve got closed-loop time travel, which is kind of like a Greek tragedy: you try to change the past but, in trying to change it, you probably bring about the very thing you were trying to prevent. The Shining Girls was something like that.
Or you have the multiverse version of time travel where going back in time forks the universe, and that’s what The Peripheral is about. That multiverse idea is explored in another subgenre, which is alternative history, which kind of asks, “What if something different had happened in history?” and then plays out the what-if from there. Counterfactuals, they’re also known as.
I remember growing up and going through the shelves of that library in Cobh and coming across this book, A Transatlantic Tunnel, Hurrah! by Harry Harrison. It’s set in a world where the American War of Independence failed and now it’s the modern day. The disgraced descendant of George Washington is in charge of building a transatlantic tunnel for the British Empire.
That tends to be the kind of premise that gets explored in alternative history: what if the other side had won the war? There’s a whole series of books set in a world where the South won the Civil War in the United States.
For my recommendation, though, I’m going to go with The Man in the High Castle, which is asking what if the other side won the war. In this case, it’s WWII. It’s by Philip K. Dick. I mean it’s not my favourite Philip K. Dick book, but my favourite Philip K. Dick books are so unclassifiable, I wouldn’t be able to put them under any one topic, and I have to get at least one Philip K. Dick book in here.
A final topic and, ooh, this is a bit of a cheat because it’s not really a topic – it’s a subgenre – cyberpunk. But as I said, cyberpunk deals with the topic of computers or networked computers more specifically, and there’s some good stuff like Neal Stephenson’s Snow Crash. Really ahead of its time. It definitely influenced a lot of people in tech.
Everyone I know that used to work at Linden Lab – the people who were making Second Life – says that when you joined, you were basically handed Snow Crash on your first day and told, “This is what we’re trying to build here.”
But if I’ve got to pick one cyberpunk book, you can’t beat the original Neuromancer by William Gibson. Just terrific stuff.
What’s interesting about cyberpunk is, yes, it’s dealing with the technology of computers and networks, but it’s also got this atmosphere, a kind of noir atmosphere that William Gibson basically created from scratch. Then a whole bunch of other subgenres spun off from that, asking, “Well, what if we could have a different atmosphere?” Steampunk, for example, is kind of like, “Well, what if the Victorians had computers and technology? What would that be like?”
Basically, if there’s a time in history that you like the aesthetic of, there’s probably a subgenre ending in the word “punk” that describes that aesthetic. You can go to conventions, and you can have your anime and your manga and your books and your games set in these kind of subgenres. They are generally, like I say, about aesthetics with the possible exception of solarpunk, which is what Steph is going to talk about.
Living in the future
I’m going to finish with these books as my recommendations, covering a broad range of science fiction topics from 50 years of reading science fiction. I think about what it would be like if I could go back and talk to my younger self in that town on the south coast of Ireland about the world of today. I’m sure it would sound like a science fictional world.
By the way, I wouldn’t go back in time to talk to my younger self because I’ve read enough time travel stories to know that that never ends well. But still, here we are living in the future. I mean this past year with a global pandemic, that is literally straight out of a bunch of science fiction books.
But also, just the discoveries and advancements we’ve made are science fictional. Like when I was growing up and reading science books in that library, we didn’t know if there were any planets outside our own solar system. We didn’t know if exoplanets even existed.
Now, we know that most solar systems have their own planets. We’re discovering them every day. It’s become commonplace.
We have sequenced the human genome, which is a remarkable achievement for a species.
And we have the World Wide Web, this world-spanning network of information that you can access with computers in your pockets. Amazing stuff.
But of all of these advancements by our species, if I had to pick the one that I think is in some ways the most science-fictional, the most far-fetched idea, I would pick the library. If libraries didn’t exist and you tried to make them today, I don’t think you could succeed. You’d be laughed out of the venture capital room, like, “How is that supposed to work?” It sounds absolutely ridiculous, a place where people can go and read books and take those books home with them without paying for them. It sounds almost too altruistic to exist.
But Ray Bradbury, for example – I know he grew up in the library. He said, “I discovered me in the library. I went to find me in the library.” He was a big fan of libraries. He said, “Reading is at the centre of our lives. The library is our brain. Without the library, you have no civilisation.” He said, “Without libraries what have we? We have no past and no future.”
So, to end this, I’m not going to end with a call to read lots of sci-fi. I’m just going to end with a call to read – full stop. Read fiction, not just non-fiction. Read fiction. It’s a way of expanding your empathy.
And defend your local library. Use your local library. Don’t let your local library get closed down.
We are living in the future by having libraries. Libraries are science fictional.
Here’s a great write-up (with sketch notes) of last week’s conference portion of UX Fest:
There was a through-line of ethics through the whole conference that I enjoyed. The “design is the underdog” trope is tired and no longer true. I think that asking ourselves “now that we are here, how do we avoid causing harm?” is a much more mature conversation.
I have the great pleasure of hosting the event so not only do I get to see a whole lot of great talks, I also get to quiz the speakers afterwards.
Right from day one, a theme emerged that continued throughout the conference and I suspect will continue for the rest of the festival too. That topic was metrics. Kind of.
See, metrics come up when we’re talking about A/B testing, growth design, and all of the practices that help designers get their seat at the table (to use the well-worn cliché). But while metrics are very useful for measuring design’s benefit to the business, they’re not really cut out for measuring user experience.
People have tried to quantify user experience benefits using measurements like Net Promoter Score, which is about as useful as reading tea leaves or chicken entrails.
So we tend to equate user experience gains with business gains. That makes sense. Happy users should be good for business. That’s a reasonable hypothesis. But it gets tricky when you need to make the case for improving the user experience if you can’t tie it directly to some business metric. That’s when we run into the McNamara fallacy:
Making a decision based solely on quantitative observations (or metrics) and ignoring all others.
The way out of this quantitative blind spot is to use qualitative research. But another theme of UX Fest was just how woefully under-represented researchers are in most organisations. And even when you’ve gone and talked to users and you’ve got their stories, you still need to play that back in a way that makes sense to the business folks. These are stories. They don’t lend themselves to being converted into charts’n’graphs.
And so we tend to fall back on more traditional metrics, based on that assumption that what’s good for user experience is good for business. But it’s a short step from making that equivalency to flipping the equation: what’s good for the business must, by definition, be good user experience. That’s where things get dicey.
Broadly speaking, the talks at UX Fest could be put into two categories. You’ve got talks covering practical subjects like product design, content design, research, growth design, and so on. Then you’ve got the higher-level, almost philosophical talks looking at the big picture and questioning the industry’s direction of travel.
The tension between these two categories was the highlight of the conference for me. It worked particularly well when there were back-to-back talks (and joint Q&A) featuring a hands-on case study that successfully pushed the needle on business metrics followed by a more cautionary talk asking whether our priorities are out of whack.
Using A/B tests alone is like using a loaded weapon without supervision. They only tell you what people do. And again, the solution is to make sure you’re also doing qualitative research—that’s how you find out why people are doing what they do.
But as I’ve pondered the lessons from last week’s conference, I’ve come to realise that there’s also a danger of focusing purely on the user experience. Hear me out…
At one point, the question came up as to whether deceptive dark patterns were ever justified. What if it’s for a good cause? What if the deceptive dark pattern is being used by an organisation actively campaigning to do good in the world?
In my mind, there was no question. A deceptive dark pattern is wrong, no matter who’s doing it.
(There’s also the problem of organisations that think they’re doing good in the world: I’m sure that every talented engineer that worked on Google AMP honestly believed they were acting in the best interests of the open web even as they worked to destroy it.)
Where it gets interesting is when you flip the question around.
Suppose you’re a designer working at an organisation that is decidedly not a force for good in the world. Say you’re working at Facebook, a company that prioritises data-gathering and engagement so much that they’ll tolerate insurrectionists and even genocidal movements. Now let’s say there’s talk in your department of implementing a deceptive dark pattern that will drive user engagement. But you, being a good designer who fights for the user, take a stand against this and you successfully find a way to ensure that Facebook doesn’t deploy that deceptive dark pattern.
Does that count as being a good user experience designer? Yes, you’ve done good work at the coalface. But the overall business goal is like a deceptive dark pattern that’s so big you can’t take it in. Is it even possible to do “good” design when you’re inside the belly of that beast?
Facebook is a relatively straightforward case. Anyone who’s still working at Facebook can’t claim ignorance. They know full well where that company’s priorities lie. No doubt they sleep at night by convincing themselves they can accomplish more from the inside than without. But what about companies that exist in the grey area of being imperfect? Frankly, what about any company that relies on surveillance capitalism for its success? Is it still possible to do “good” design there?
There are no easy answers and that’s why it so often comes down to individual choice. I know many designers who wouldn’t work at certain companies …but they also wouldn’t judge anyone else who chooses to work at those companies.
At Clearleft, every staff member has two levels of veto on client work. You can say “I’m not comfortable working on this”, in which case, the work may still happen but we’ll make sure the resourcing works out so you don’t have anything to do with that project. Or you can say “I’m not comfortable with Clearleft working on this”, in which case the work won’t go ahead (this usually happens before we even get to the pitching stage although there have been one or two examples over the years where we’ve pulled out of the running for certain projects).
Going back to the question of whether it’s ever okay to use a deceptive dark pattern, here’s what I think…
It makes no difference whether it’s implemented by ProPublica or Breitbart; using a deceptive dark pattern is wrong.
But there is a world of difference in being a designer who works at ProPublica and being a designer who works at Breitbart.
That’s what I’m getting at when I say there’s a danger to focusing purely on user experience. That focus can be used as a way of avoiding responsibility for the larger business goals. Then designers are like the soldiers on the eve of battle in Henry V:
For we know enough, if we know we are the king’s subjects: if his cause be wrong, our obedience to the king wipes the crime of it out of us.
The topic for the evening is science fiction. There’ll be a talk from me, a talk from Steph, and then a discussion, which I’m really looking forward to.
I got together with Steph last week, which was really fun—we could’ve talked for hours! We compared notes and figured out a way to divvy up the speaking slots. Steph is going to do a deep dive into one specific subgenre of sci-fi. So to set the scene, I’m going to give a broad but shallow overview of the history of sci-fi. To keep things manageable, I’m only going to be talking about sci-fi literature (although we can get into films, TV, and anything else in the discussion afterwards).
But I don’t want to just regurgitate facts like a Wikipedia article. I’ve decided that the only honest thing to do is give my own personal history with sci-fi. Instead of trying to give an objective history, I’m going to tell a personal story …even if that means being more open and vulnerable.
I think I’ve got the arc of the story I want to tell. I’ve been putting slides together and I’m quite excited now. I’ve realised I’ve got quite a lot to say. But I don’t want the presentation to get too long. I want to keep it short and snappy so that there’s plenty of time for the discussion afterwards. That’s going to be the best part!
That’s where you come in. The discussion will be driven by the questions and chat from the attendees. Tickets are available on a pay-what-you-want basis, with a minimum price of just €10. It’ll be an evening event, starting at 6:30pm UK time, 7:30pm in central Europe. So if you’re in the States, that’ll be your morning or afternoon.
Come along if you have any interest in sci-fi. If you have no interest in sci-fi, then please come along—we can have a good discussion about it.
I enjoyed each and every one. I also had the pleasure of interviewing the speakers at every Responsive Day Out. Hosting events like that is a blast, but what with The Situation and all, there hasn’t been much opportunity for hosting conferences.
Well, I’m going to be hosting an event next month: UX Fest. It’s this year’s online version of UX London.
An online celebration of digital design, taking place throughout June 2021.
I am simultaneously excited and nervous. I’m excited because I’ll have the chance to interview a whole bunch of really smart people. I’m nervous because it’s all happening online and that might feel quite different to an in-person discussion.
But I have an advantage. While the interviews will be live, the preceding talks will be pre-recorded. That means I have time to watch and rewatch each talk, spot connections between them, and think about thought-provoking questions for each speaker.
So that’s what I’m doing between now and the beginning of June. If you’d like to bear witness to the final results, I encourage you to get a ticket for UX Fest. You can come to the three-day conference in the first week of June, or you can get a ticket for the festival spread out over the following three Thursdays in June, or you can get a combo ticket for both and save some money.
There’ll also be a whole bunch of hands-on masterclasses throughout June that you can book individually. I won’t be hosting those though. I’ll have plenty to keep me occupied hosting the conference and the festival.
I’ve been continuing my audio narration of Jay Hoffman’s excellent Web History series over on CSS-Tricks. We’re eight chapters in already! That’s a good few hours of audio—each chapter is over half an hour long.
The latest chapter was a joy to narrate. It’s all about the history of CSS so I remember many of the events that are mentioned, like when Tantek saved the web by implementing doctype switching (seriously, I honestly believe that if that hadn’t happened, CSS wouldn’t have “won”). Eric is in there. And Molly. And Elika. And Chris. And Dave.
I wrote about preparing this talk and you can see the outline on Kinopio. I thought it turned out well, but I never actually know until people see it. So I’m very gratified and relieved that it went down very well indeed. Phew!
Eric and the gang at An Event Apart asked for a round-up of links related to this talk and I was more than happy to oblige. I’ve separated them into some of the same categories that the talk covers.
I know that these look like a completely disconnected grab-bag of concepts—you’d have to see the talk to get the connections. But even without context, these are some rabbit holes you can dive down…
If you want to see the finished results, come along to An Event Apart Spring Summit on April 19th. To sweeten the deal, I’ve got a discount code you can use when you buy any multi-day pass: AEAJEREMY.
Recording the talk took longer than I thought it would. I think it was because I said this:
It feels a bit different to prepare a talk for pre-recording rather than live delivery on stage. In fact, it feels less like preparing a conference talk and more like making a documentary.
Once I got that idea in my head, I think I became a lot fussier about the quality of the recording. “Would David Attenborough allow his documentaries to have the sound of a keyboard audibly being pressed? No! Start again!”
I’m pleased with the final results. And I’m really looking forward to the post-presentation discussion with questions from the audience. The talk gets provocative—and maybe a bit ranty—towards the end so it’ll be interesting to see how people react to that.
It feels good to have the presentation finished, but it also feels …weird. It’s like the feeling that conference organisers get once the conference is over. You spend all this time working towards something and then, one day, it’s in the past instead of looming in the future. It can make you feel kind of empty and listless. Maybe it’s the same for big product launches.
The two big projects I’ve been working on for the past few months were this talk and season two of the Clearleft podcast. The talk is in the can and so is the final episode of the podcast season, which drops tomorrow.
On the one hand, it’s nice to have my decks cleared. Nothing work-related to keep me up at night. But I also recognise the growing feeling of doubt and moodiness, just like the post-conference blues.
The obvious solution is to start another big project, something on the scale of making a brand new talk, or organising a conference, or recording another podcast season, or even writing a book.
The other option is to take a break for a while. Seeing as the UK government has extended its furlough scheme, maybe I should take full advantage of it. I went on furlough for a while last year and found it to be a nice change of pace.
As it happens, I’m preparing a conference talk right now for delivery online. Am I taking my advice about how to put a talk together? I am on me arse.
Perhaps the most important part of the process I shared with Hana is that you don’t get too polished too soon. Instead you get everything out of your head as quickly as possible (probably onto disposable bits of paper) and only start refining once you’re happy with the rough structure you’ve figured out by shuffling those bits around.
But the way I’ve been preparing this talk has been more like watching a progress bar. I started at the start and even went straight into slides as the medium for putting the talk together.
It was all going relatively well until I hit a wall somewhere between the 50% and 75% mark. I was blocked and I didn’t have any rough sketches to fall back on. Everything was a jumbled mess in my brain.
It all came to a head at the start of last week when that jumbled mess in my brain resulted in a very restless night spent tossing and turning while I imagined how I might complete the talk.
This is a terrible way of working and I don’t recommend it to anyone.
The problem was I couldn’t even return to the proverbial drawing board because I hadn’t given myself a drawing board to return to (other than this crazy wall of connections on Kinopio).
My sleepless night was a wake-up call (huh?). The next day I forced myself to knuckle down and pump out anything even if it was shit—I could refine it later. Well, it turns out that just pumping out any old shit was exactly what I needed to do. The act of moving those fingers up and down on the keyboard resulted in something that wasn’t completely terrible. In fact, it turned out pretty darn good.
The idea here is to get everything out of my head.
I should’ve listened to that guy.
At this point, I think I’ve got the talk done. The progress bar has reached 100%. I even think that it’s pretty good. A giveaway for whether a talk is any good is when I find myself thinking “Yes, this has good points well made!” and then five minutes later I’m thinking “Wait, is this complete rubbish that’s totally obvious and doesn’t make much sense?” (see, for example, every talk I’ve ever prepared ever).
Now I just have to record it. The way that An Event Apart are running their online editions is that the talks are pre-recorded but followed with live Q&A. That’s how the Clearleft events team have been running the conference part of the Leading Design Festival too. Last week there were three days of this format and it worked out really, really well. This week there’ll be masterclasses which are delivered in a more synchronous way.
It feels a bit different to prepare a talk for pre-recording rather than live delivery on stage. In fact, it feels less like preparing a conference talk and more like making a documentary. I guess this is what life is like for YouTubers.
I think the last time I was in a cinema before The Situation was at the wonderful Duke of York’s cinema here in Brighton for an afternoon showing of The Proposition followed by a nice informal chat with the screenwriter, one Nick Cave, local to this parish. It was really enjoyable, and that’s kind of what Leading Design Festival felt like last week.
I wonder if maybe we’ve been thinking about online events with the wrong metaphor. Perhaps they’re not like conferences that have moved online. Maybe they’re more like film festivals where everyone has the shared experience of watching a new film for the first time together, followed by questions to the makers about what they’ve just seen.
Hana recounts the preparation she did for an online presentation, including some advice from me. I’m right in the middle of preparing my own online presentation right now, and I should really heed that advice. But I fear what I told Hana was “do as I say, not as I do.”
I really, really missed speaking at conferences in 2020. I managed to squeeze in just one meatspace presentation before everything shut down. That was in Nottingham, where myself and Remy reprised our double-bill talk, How We Built The World Wide Web In Five Days.
Giving a talk online is …weird. It’s very different from public speaking. The public is theoretically there but you feel like you’re just talking at your computer screen. If anything, it’s more like recording a podcast than giving a talk.
I’d like to take you back in time, just over 100 years ago, to the beginning of World War One. It’s 1914. The United States would take another few years to join, but the European powers were already at war in the trenches, as you can see here.
What I want to draw your attention to is what they’re wearing, specifically what they’re wearing on their heads. This is the standard issue for soldiers at the beginning of World War One, a very fetching cloth cap. It looks great. Not very effective at stopping shrapnel from ripping through flesh and bone.
It wasn’t long before these cloth caps were replaced with metal helmets; much sturdier, much more efficient at protection. This is the image we really associate with World War One; soldiers wearing metal helmets fighting in the trenches.
Now, an interesting thing happened after the introduction of these metal helmets. If you were to look at the records from the field hospitals, you would see an increase in the number of patients being admitted with severe head injuries after the introduction of these metal helmets. That seems odd, and the only conclusion we could draw would seem to be that the cloth caps were actually better than the metal helmets at preventing these kinds of injuries. But that wouldn’t be correct.
You can see the same kind of data today. Any state where they introduce motorcycle helmet laws saying it’s mandatory to wear motorcycle helmets, you will see an increase in the number of emergency room admissions for severe head injuries for motorcyclists.
Now, in both cases, what’s missing is the complete data set because, yes, while in World War One there was an increase in the field hospital admissions for head injuries, there was a decrease in deaths. Just as today, if there’s an increase in emergency room admissions for severe head injuries because of motorcycle helmets, you will see a decrease in the number of people going to the morgue.
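The helmet story can be sketched with a few lines of Python. The numbers here are entirely made up for illustration – the point is only the shape of the data: if metal helmets turn fatal wounds into survivable head injuries, admissions go up even as total casualties go down.

```python
# Hypothetical casualty figures, purely illustrative (not historical data).
# With cloth caps, many head wounds are fatal; with metal helmets,
# many of those same wounds become survivable injuries instead.
cloth = {"head_injury_admissions": 100, "deaths": 300}
metal = {"head_injury_admissions": 250, "deaths": 100}

# Looking only at field-hospital admissions, metal helmets seem worse:
assert metal["head_injury_admissions"] > cloth["head_injury_admissions"]

# The complete data set includes the morgue as well as the hospital:
def total_casualties(record):
    return record["head_injury_admissions"] + record["deaths"]

# With the full picture, metal helmets are clearly the improvement.
assert total_casualties(metal) < total_casualties(cloth)
```

The misleading metric isn’t wrong – admissions really did rise – it’s just incomplete without the deaths column.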
I kind of like these stories of analytics where there’s a little twist in the tale, where the obvious solution turns out not to be the correct answer and our expectations are somewhat subverted. My favourite example of analytics on the web comes from a little company called YouTube. This is from a few years back.
Chris set about working on making a smaller version of a video page. He called this Project Feather. He worked and worked at it, and he managed to get a page down to just 98 kilobytes, so from 1.2 megabytes to 98 kilobytes. That’s an order of magnitude difference.
Then he set about shipping this to different segments of the audience and watching the analytics to see what rolled in. He was hoping to see a huge increase in the number of people engaging with the content. But here’s what he blogged.
The average aggregate page latency under Feather had actually increased. I had decreased the total page weight and number of requests to a tenth of what they were previously and somehow the numbers were showing that it was taking longer for videos to load on Feather, and this could not be possible. Digging through the numbers more (and after browser testing repeatedly), nothing made sense.
I was just about to give up on the project with my world view completely shattered when my colleague discovered the answer: geography. When we plotted the data geographically and compared it to our total numbers (broken out by region), there was a disproportionate increase in traffic from places like Southeast Asia, South America, Africa, and even remote regions of Siberia.
A further investigation revealed that, in those places, the average page load time under Feather was over two minutes. That means that a regular video page (at over a megabyte) was taking over 20 minutes to load.
Again, what was happening here was that there was a whole new set of data. There were people who literally couldn’t load the page before – because it would take 20 minutes – people who couldn’t access YouTube at all, who now, because of Project Feather, were able to access YouTube for the first time. What that looked like, according to the analytics, was that page load time had overall gone up. What was missing was the full data set.
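The Project Feather effect can be sketched the same way, with invented latency numbers: the average goes up not because anyone’s pages got slower, but because a new, much slower cohort of users could suddenly load the page at all.

```python
from statistics import mean

# Invented page-load times in seconds, purely illustrative.
# Before Feather: only users on fast connections could load the page.
before = [2, 3, 4]

# After Feather: existing users get faster pages, and users who
# previously gave up (a 20-minute load) can now load it in ~2 minutes.
after = [1, 2, 2, 120, 130]

# The aggregate metric "got worse"...
assert mean(after) > mean(before)
# ...even though every individual user's experience improved.
```

Splitting the numbers by cohort (as YouTube eventually did by geography) is what reveals that every group actually got faster.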
I really like these stories that kind of play with our expectations. When the reveal comes, it’s almost like hearing the punchline to a joke, right? Your expectations are set up and then subverted.
Jeff Greenspan is a comedian who talks about this. He talks about expectations in terms of music and comedy. He points out that they both deal with expectations over time.
In music, the pleasure comes from your expectations being met. A song sets up a rhythm; when that rhythm is met, that’s pleasurable. A song is using a particular scale, and when the notes on that scale are hit, it’s pleasurable. Music that’s not fun to listen to tends to be arrhythmic and atonal, where you can’t really get a handle on what’s going to come next.
Comedy works the other way where it sets up expectations and then pulls the rug out from under you — the surprise.
Now, you can use music and you can use comedy in your designs. If you’re setting up a lovely grid and a vertical rhythm, that’s like music: there’s a lovely, predictable feeling to it. But you can also introduce a bit of comedy, something that peeks out from the grid, something that, just occasionally, subverts expectations.
You don’t want something that’s all music. Maybe that’s a little boring. You don’t want something that’s all comedy because then it’s just crazy and hard to get a handle on.
You can see music and comedy in how you consume news. You notice that when you read your news sources, all it does is confirm what you already believe. You read something about someone, and you think, “Yes, they’ve done something bad and I always thought they were bad, so that has confirmed my expectations.” It’s like music.
I read something that somebody has done and I always thought they were a good person. This now confirms that they are a good person. That is music to my ears. If your news feels like that, feels like music, then you may be in a bubble.
The comedy approach to news would be more like the clickbait you see at the bottom of the Internet where it’s like, “Click here. You won’t believe what these child stars look like now.” The promise there is that we will subvert your expectations, and that’s where the pleasure will come from.
My favorite story from history about analytics is not from World War One but from the sequel, World War Two, where, again, the United States was a few years late to the world war. But when they did arrive and started their bombing raids on Germany, they were flying from England. The bombers would come back all shot up, so there was a whole think tank dedicated to figuring out how to reinforce these planes in certain areas.
You can’t reinforce the whole plane. That would make it too heavy, but you could apply some judicious use of metal reinforcement to protect the plane.
They treated this as a data problem, as an analytics problem. They looked at the planes coming back. They plotted where the bullet holes were, and that led them to conclude where they should put the reinforcements. You can see here that the wings were getting all shot up, the middle of the fuselage, so clearly that’s where the reinforcements should go.
There was a statistician, a mathematician named Abraham Wald. He looked at the exact same data and he said, “No, we need to reinforce the front of the plane where there are no bullet holes. We need to reinforce the back of the fuselage where there are no bullet holes.”
What he realized was that all the data they were seeing was actually a subset of the complete data set. They were only seeing the planes that made it back. What was missing were all the planes that got shot down. If all the planes that made it back didn’t have any bullet holes in the front of the plane, then you could probably conclude that if you get a bullet hole in the front of the plane, you’re not going to make it back.
This became the canonical example of what we now call survivorship bias, which is this tendency to look at the subset of data — the winners.
You see survivorship bias all the time. You walk into a bookstore, you look at the business section, and it’s full of books by successful business people; that’s survivorship bias. Really, the whole section should be ten times as big and feature ten times as many books, written by people who had unsuccessful businesses, because that would be a much more representative sample.
We see survivorship bias. You go onto Instagram and you look at people’s Instagram photos. Generally, they’re posting their best life, right? It’s the perfect selfie. It’s the perfect shot. It’s not a representative sample of what somebody’s life looks like. That’s survivorship bias.
We have a tendency to do it on the web, too, when people publish their design systems. Don’t get me wrong. I love the fact that companies are making their design systems public. It’s something I’ve really lobbied for. I’ve encouraged people to do this. Please, if you have a design system, make it public so we can all learn from it.
I really appreciate that people do that, but they do tend to wait until it’s perfect. They tend to wait until they’ve got the success.
What we’re missing are all the stories of what didn’t work. We’re missing the bigger picture of the things they tried that just failed completely. I feel like we could learn so much from that. I feel like we can learn as much from anti-patterns as we can from patterns, if not more so.
Robin Rendle talked about this in a blog post recently about design systems. He said:
The ugly truth is that design systems work is not easy. What works for one company does not work for another. In most cases, copying the big tech company of the week will not make a design system better at all. Instead, we have to acknowledge how difficult our work is collectively. Then we have to do something that seems impossible today—we must publicly admit to our mistakes. To learn from our community, we must be honest with one another and talk bluntly about how we’ve screwed things up.
I completely agree. I think that would be wonderful if we shared more openly. I do try to encourage people to share their stories, successes, and failures.
I organized a conference a few years back all about design systems called Patterns Day and invited the best and brightest: Alla Kholmatova, Jina Anne, Paul Lloyd, Alice Bartlett – all these wonderful people. It was wonderful to hear people come up and sort of reassure you, “Hey, none of us have got this figured out. We’re all trying to figure out what we’re doing here.” The audience really needed to hear that. They really needed to hear that reassurance that this is hard.
Gaps and overlaps
I did Patterns Day again last year. My favorite talk at Patterns Day last year, I think, was probably from Danielle Huntrods. I’m biased here because I used to work with Danielle. She used to work at Clearleft, and she’s an absolutely brilliant front-end developer.
She had this lens that she used when she was talking about design systems and other things. She talked about gaps and overlaps, which is one of those things that’s lodged in my brain. I kind of see it everywhere.
She said that when you’re putting things into categories, some things will fall between those categories. That leaves you with the gaps, the things that aren’t being covered. It’s almost like Donald Rumsfeld and the unknown unknowns.
What can also happen when you put things into categories is you get these overlaps where there’s duplication; two things are responsible for the same task. This duplication of effort, of course, is what we’re trying to avoid with design systems. We’re trying to be efficient. We don’t want multiple versions of the same thing. We want to be able to reuse one component. There’s a danger there.
She was saying that what we do with a design system is concentrate on cataloging the components. We do our interface inventory, but we miss the connective part; we miss the gaps between the components. Really, what makes something a system is not so much a collection of components but how those components fit together, those gaps between them.
Danielle went further. She didn’t just talk about gaps and overlaps in terms of design systems and components. She talked about it in terms of roles and responsibilities. If you have two people who believe they’re responsible for the same thing, that’s going to lead to a clash.
Worse, you’re working on a project and you find out that there was nobody responsible for doing something. It’s a gap. Everyone assumes that the other person was responsible for getting that thing done.
“Oh, you’re not doing that?”
“I thought you were doing that.”
“Oh, I thought you were doing that.”
This is the source of so much frustration in projects, either these gaps or these overlaps in roles and responsibilities. Whenever we start a project at Clearleft, we spend quite a bit of time getting this role mapping correct, trying to make sure there aren’t any gaps and there aren’t any overlaps. Really, it’s about surfacing those assumptions.
“Oh, I assumed I was responsible for that.”
“No, no. I assumed I was the one who would be doing that.”
We clarify this stuff as early as possible in the design process. We even have a game we play called Fluffy Edges. It’s literally like a card game. We’d ask these questions, “Who is responsible for this? Who is going to do this?” It’s kind of good fun, but really it is about surfacing those assumptions and getting clarity on the roles at the beginning of the design process.
The design process
Now, I’m talking about the design process like it’s this known thing, and it really isn’t. The design process is a notoriously difficult thing to talk about.
Here’s one way of thinking about the design process. This is The Design Squiggle by Damien Newman. He used to be at IDEO. I actually think this is a pretty accurate representation of what the design process feels like for an individual designer. You go into the beginning and it’s chaos, it’s a mess, and it’s entropy. Then, over time, you begin to get a handle on things until you get to this almost inevitable result at the end.
I’m not sure it’s an accurate representation of what the collaborative design process feels like. There’s a different diagram that resonates a lot with us at Clearleft, which is the Double Diamond diagram from Chris Vanstone at the Design Council. The way of thinking about the Double Diamond is almost like it’s two design squiggles back-to-back.
It’s a bit of an oversimplification, but the idea is that the design process is split into these phases. First, there’s discovery; then we define. We go out wide with discovery, then we narrow it down with definition. Then it’s time to build the thing, and we open up wide again to figure out how we’re going to execute it. Once we’ve got that figured out, we narrow down into the delivery phase.
The way of thinking about this is the first diamond (discovery and definition), that’s about building the right thing. Make sure you’re building the right thing first. The second diamond (about execution and delivery), that’s about building the thing right. Building the right thing and building the thing right.
The important thing is they follow this pattern of going wide and going narrow. This divergent phase with discovery and then convergent for definition. There’s a divergent phase for execution and then convergent for delivery.
If you take nothing else from the Double Diamond approach, take this way of making explicit when you’re in a divergent or convergent phase. Again, it’s about surfacing that assumption.
“Oh, I assumed we were converging.”
“No, no, no. We are diverging here.”
That’s super, super useful.
I’ll give you an example. If you are in a meeting, at the beginning of the meeting, state whether it’s a divergent meeting or a convergent meeting. If you were in a meeting where the idea is to generate as many ideas as possible during a meeting, make that clear at the beginning because what you don’t want is somebody in the meeting who thinks the point is to converge on a solution.
You’ve got these people generating ideas and then there’s one person going, “No, that will never work. Here’s why. Oh, that’s technically impossible. Here’s why.” No, if you make it clear at the start, “There are no bad ideas. We’re in a divergent meeting,” everyone is on the same page.
Conversely, if it’s a convergent meeting, you need to make that clear and say, “The point of this meeting is that we come to a decision, one decision,” and you need to make that clear because what you don’t want in a convergent meeting is it’s ten minutes to launch time, converging on something, and then somebody in the meeting goes, “Hey, I just had an idea. How about if we…?” You don’t want that. You don’t want that.
If you take nothing else from this, this idea of making divergence and convergence explicit is really, really, really useful. Again, like I say, this pattern of just assumptions being surfaced is so useful.
This initial diamond of the Double Diamond is where we spend a lot of our time at Clearleft. I think, in the early years of Clearleft, we spent more time on the second diamond. We were more about execution and delivery. Now, I feel like we deliver a lot more value in the discovery and definition phase of the design process.
There’s so much we do in this initial discovery phase. I mentioned already we have this fluffy edges game we play for role mapping to figure out the roles and responsibilities. We have things like a project canvas we use to collaborate with the clients to figure out the shape of what’s to come.
We sometimes run an exercise called a pre-mortem. I don’t know if you’ve ever done one. It’s like a post-mortem, except you do it at the beginning of the project. It’s a kind of scenario planning.
You say, “Okay, it’s so many months after the launch and it’s been a complete disaster. What went wrong?” You map that out. You talk about it. Then once you’ve got that mapped out, you can then take steps to avoid that disaster happening.
Of course, what we do in the discovery phase, almost more than anything else, is research. You can’t go any further without doing the research.
All of these things, all of these exercises, these ways of working are about dealing with assumptions, either surfacing assumptions that we didn’t know were there or turning assumptions into hypotheses that can be tested. If you think about what an assumption is, it kind of goes back to expectations that I was talking about.
Assumptions are expectations plus internal biases. The things that you don’t even realize you believe lead to assumptions. This can obviously be very bad: you’ve got blind spots in your assumptions because of biases you didn’t even realize you had.
They’re not necessarily bad things. Assumptions aren’t necessarily bad. If you think about your expectations plus your biases, that’s another way of thinking about your values. What do you hold to be really dear to you? The things that are self-evident to you, those are your values, your internal expectations and biases.
Now, at Clearleft, we have our company values, our core values, the things we believe. I am not going to share the Clearleft values with you. There are two reasons for that.
One is that they’re Clearleft’s values. They are useful for us. That’s for us to know internally.
Secondly, there’s nothing more boring than a company sharing their values with you. I say nothing more boring. Maybe the only thing more boring than a company sharing their values is when a so-called friend tells you about a dream they had and you have to sit there and smile and nod politely while they tell you about something that is only of interest to them.
These values are essentially what give you purpose, whether at an individual level, where your personal moral values give you your purpose, or at the level of a company, an organization, or any endeavor. Think about the founding of a nation-state like the United States of America. You’ve got the Declaration of Independence. That encodes the values; that holds the purpose. It literally says, “We hold these truths to be self-evident.” Those are assumptions. The purpose is something like the Declaration of Independence.
Then you get the principles, how you’re going to act. The Constitution would be an example of a collection of principles. These principles must be influenced by the purpose. Your values must influence the principles you’re going to use to act in the world.
Then those principles have an effect on the final patterns, the outputs that you’ll see. In the case of a nation-state like America, I would say the patterns are the laws that you end up with. Those laws come from the principles encoded in the Constitution. The Constitution, those principles in the Constitution are influenced and encoded from the purpose in the Declaration of Independence.
The purpose influences the principles. The principles influence the pattern. This would be true in the case of software as well. You think about the patterns are the final interface elements, the user interface. Those are the patterns. Those have been influenced by the principles of that company, how they choose to act, and those principles are influenced by the purpose of that company and what they believe.
This is why I find principles, in particular, to be fascinating because they sit in the middle. They are influenced by the purpose and they, in turn, influence the patterns. I’m talking about design principles, something I’m really into. I’m so into design principles, I actually have a website dedicated to design principles at principles.adactio.com.
Now, all I do on this website is collect design principles. I don’t pass judgment. I don’t say whether I think they’re good design principles or bad design principles. I just document them. That’s turned out to be a good thing to do over time because sometimes design principles disappear, go away, or get changed. I’ve got a record of design principles from the past.
For example, Google used to have a set of principles called Ten Things We Know to Be True — we know to be true, right? We hold these truths to be self-evident. That’s no longer available on the Google website, those ten things, those ten principles. One of them was, “You can make money without doing evil.” Like I said, that’s gone now. That’s not available on the Google website.
There was another set of design principles from Google that’s also not available anymore. That was called Ten Principles That Contribute to a Googley User Experience. I think we understand why those are no longer available. The sheer embarrassment of saying the word Googley out loud, I think.
I’ll tell you something I notice when I see design principles. Like I say, I catalog them without judgment, but I do have ideas. I think about what makes for good or bad design principles or sets of design principles.
Whenever I see somebody with a list that’s exactly ten principles, I’m suspicious. Like, “Really? That’s such a convenient round number. You didn’t have nine principles that contribute to a Googley user experience? You didn’t have 11 things that we know to be true? It happened to be exactly ten?” It feels almost like a bad code smell to me that it’s exactly ten principles.
Even some great design principles invite this suspicion. Dieter Rams, the brilliant designer, has a fantastic set of design principles called Ten Principles for Good Design. But even there, I have to think, “Hmm. That’s a bit convenient, isn’t it, that it’s exactly ten principles for good design? Isn’t it, Dieter?”
Now, just in case you think I’m being blasphemous by suggesting that Dieter Rams’ Ten Principles for Good Design is not a good set of design principles, I am not being blasphemous. I would be blasphemous if I pointed out that in the Old Testament, God supposedly delivers 10 commandments, not 9, not 11, exactly 10 commandments. Really, Moses, ten?
Anyway, what I’m talking about here is, like I say, almost like these code smells for design principles. Can we evaluate design principles? Are there heuristics for saying whether a design principle is a good design principle or a bad design principle?
To get meta about this, what I’m talking about is, are there design principles for design principles? I kind of think there are. I think you can evaluate design principles and say that’s a good one or that’s a bad one. You can evaluate them by how useful they are.
Let’s take an example. Let’s say you’ve got a design principle like this:
Make it usable.
That’s a design principle. I think this is a bad design principle. It’s not because I don’t agree with it. It’s actually a bad design principle because I agree with it and everyone agrees with it. It’s so agreeable that it’s hard to argue with and that’s not what a design principle is for.
Design principles aren’t these things to go, “Rah-rah! Yes! I feel good about this.” They are there to kind of surface stuff and have discussions, have disagreements – get it out in the open.
Let’s say we took this design principle, “make it usable”, and it was rephrased to something more contentious. Let’s say somebody had the design principle like:
Usability is more important than profitability.
Ooh! Now we’re talking.
See, I think this is a good design principle. I’m not saying I agree with it. I’m saying it’s a good design principle because what it has now is priority.
We’re saying something is valued more than something else, and that’s what you want from design principles: to figure out what the priorities of this organization are. What do they value? How are they going to behave?
I think this is a great phrasing for design principles. If you can phrase a design principle like this:
___, even over ___
Then that’s really going to make it clear what your values are. You can phrase a design principle as:
Usability, even over profitability.
Now you can have that discussion early on about whether everyone is on board with that. If there’s disagreement, you need to hammer that out and figure it out early on in the process.
Here’s another thing about this phrasing that I really like, “blank, even over blank.” It passes another test of a good design principle, which is reversibility. Rather than being a universal thing, a design principle should be reversible for a different organization.
One organization might have a design principle that says “usability, even over profitability,” and another organization, you can equally imagine having a design principle that says, “profitability, even over usability.” The fact that this principle is reversible like that is a good thing. That shows that it’s an effective design principle because it’s about priorities.
There’s a design principle from the W3C’s HTML Design Principles known as the priority of constituencies. It states:

In case of conflict, consider users over authors over implementors over theoretical purity.
That’s so good.
First of all, it just starts with, “In case of conflict.” Yes! That is exactly what design principles are for. Again, they’re not there to be like, “Rah-rah! Feel-good design principles.” No, they are there to sort out conflict.
Then, “consider users over authors.” That’s like:
Users, even over authors. Authors, even over implementors. Implementors, even over theoretical purity.
Really good stuff.
There are, I think, design principles for design principles, these kind of smell tests that you can run your design principles past and see if they pass or fail.
I talked about how design principles are unique to the organization. The reversibility test kind of helps with that. You can imagine a different organization that has the complete opposite design principles to you.
I do wonder: are there some design principles that are truly universal? Well, there’s a whole category of principles that we treat as universal truths: the eponymous laws. They’re usually named after a person, and each expresses some kind of universal truth. There are a lot of them out there.
Hofstadter’s law, that’s from Douglas Hofstadter. Hofstadter’s law states:
It always takes longer than you expect, even when you take into account Hofstadter’s law.
That does sound like a universal truth and certainly, my experience matches that. Yeah, I would say Hofstadter’s law feels like a universal design principle.
90% of everything is crap.
Theodore Sturgeon was a science fiction writer. People would pooh-pooh science fiction and point out that it was crap, and he would say, “Yeah, but 90% of science fiction is crap because 90% of everything is crap.” That became Sturgeon’s law.
Yeah, you look at movies, books, and music. It’s hard to argue with Sturgeon’s law. Yeah, 90% of everything is crap. That feels like a universal law.
Here’s one we’ve probably all heard of. Murphy’s law:
Anything that can go wrong will go wrong.
It tends to get treated as this funny thing but, actually, it’s a genuinely useful design principle and one we could use on the web a lot more.
There’s also Cole’s law. You’ve probably heard of that. That’s:
Shredded raw cabbage with a vinaigrette or mayonnaise dressing.
Moving swiftly on, there’s another category of these laws, these universal principles, with a different phrasing: the idea of a razor. A razor is explicit about the case of conflict; it says, when you’re trying to choose between two options, which one to choose.
Hanlon’s razor is a famous example that states:
Never attribute to malice that which can be adequately explained by incompetence.
If you’re trying to find a reason for something, don’t go straight to assuming malice. Incompetence tends to be a greater force in the world than malice.
I think it’s generally true, although, there’s also a law by Arthur C. Clarke, Clarke’s third law, which states that, “Any sufficiently advanced technology is indistinguishable from magic.” If you take Clarke’s third law and you mash it up with Hanlon’s razor, then the result is that any sufficiently advanced incompetence is indistinguishable from malice.
Another razor that we hear about a lot is Occam’s razor. This is very old; it goes back to William of Occam. Sometimes it’s misrepresented as saying that the most obvious solution is the correct solution. We know that’s not true, because we saw in the stories of the metal helmets in World War One, the motorcycle helmets, the bombers in World War Two, and the YouTube videos that it’s not about the most obvious solution.
What Occam’s razor actually states is:
Entities should not be multiplied without necessity.
In other words, if you’re coming up with an explanation for something and your explanation requires that you now have to explain even more things—you’re multiplying the things that need to be explained—it’s probably not the true thing.
If your explanation for something is “aliens did it,” well, now you’ve got to explain the existence of aliens and explain how they got here and all this. You’re multiplying the entities. Most conspiracy theories fail the test of Occam’s razor because they unnecessarily multiply entities.
World Wide Web
So we’ve got these universal design principles we can borrow, but I also think we can borrow from specific projects and see what would apply to us. Certainly, when we’re building things on the World Wide Web, we can look at the design principles that informed the web when it was being built by Tim Berners-Lee, who created the World Wide Web, and Robert Cailliau, who worked with him.
The World Wide Web started at CERN, beginning life in 1989 as just a proposal. Tim Berners-Lee wrote this really quite boring memo called “Information Management: A Proposal”, with indecipherable diagrams in it, on March 12, 1989. His supervisor, Mike Sendall, saw this proposal and must have seen the possibility in it, because he scrawled across the top:
Vague but exciting.
Tim Berners-Lee did get the go-ahead to work on this project, this World Wide Web project, and he created the first web browser. He created the first web server. He created HTML.
You can see the world’s first web server in the Science Museum in London. It’s this NeXTcube. NeXT was the company that Steve Jobs formed after leaving Apple.
I have a real soft spot for this machine because I was very lucky to be invited to CERN last year to take part in this project where we were trying to recreate the experience of using that first web browser that Tim Berners-Lee created on that NeXT machine. You can go to this website worldwideweb.cern.ch and you can see what it feels like to use this web browser. You can use a modern browser with this emulation inside of it. It’s really good fun.
My colleagues were spending their time actually doing the hard work. I spent most of my time working on the website about the project. I built this timeline because I was fascinated about what was influencing Tim Berners-Lee.
It’s kind of easy to look at the 30 years of the web, but I thought it would be more interesting to also look back at the 30 years before the web and see what influenced Tim Berners-Lee when it came to networks, hypertext, and format. Were there design principles that he adhered to?
We don’t have to look far because Tim Berners-Lee himself has published design principles (that he formulated or borrowed from elsewhere) in a document called Axioms of Web Architecture. I think he first published this in 1998. These are really useful things that we can take and apply when we’re building on the web.
Particularly, now I’m talking about the second diamond of the Double Diamond. When we are choosing how we’re going to execute something or how we’re going to deliver it, building the thing right, that’s when these design principles come in handy.
He was borrowing; Tim Berners-Lee was borrowing from things that had come before, existing creations that the web is built on top of like the Internet and computing. He said:
Principles such as simplicity and modularity are the stuff of software engineering.
So he borrowed those principles about simplicity and modularity.
He also said:
Decentralization and tolerance are the life and breath of the Internet.
Those principles, tolerance and decentralization, they’d proven themselves to work on the Internet. The web is built on top of the Internet. So, it makes sense to carry those principles forward on the World Wide Web.
That principle of tolerance, in particular, is something you really see on the web. It comes from the principles underlying the Internet, and in particular from Jon Postel, who was responsible for maintaining the Domain Name System, DNS. He has an eponymous law named after him, also called the Robustness Principle or Postel’s law. This law states:
Be conservative in what you send. Be liberal in what you accept.
Now, he was talking about packet switching on the Internet: if you’re going to send a packet over the Internet, try to make it as well-formed as possible. But on the other hand, when you receive a packet and it’s got errors or something, try to deal with it anyway. Be liberal in what you accept.
I see this at work all the time on the web, not just in technical terms but in terms of UX and usability. The example I always use is a form on the web. Be conservative in what you send: send as few form fields as possible down the wire to the end user. But then, when the user is filling out that form, be liberal in what you accept. Don’t make them format their telephone number or credit card number in a certain way. Be liberal in what you accept.
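To make that concrete, here’s a minimal sketch of being liberal in what you accept on a form field. The helper name and the set of accepted punctuation characters are my own assumptions, not something from the talk:

```javascript
// Be liberal in what you accept: instead of rejecting a phone number
// because of its punctuation, strip the spaces, dashes, dots, and
// parentheses the user may reasonably have typed.
// (Hypothetical helper; widen or narrow the character set as needed.)
function normalisePhoneNumber(input) {
  return input.replace(/[\s\-().]/g, "");
}

console.log(normalisePhoneNumber("(01273) 470 770")); // "01273470770"
console.log(normalisePhoneNumber("01273-470-770"));   // "01273470770"
```

The same idea applies to credit card numbers: accept the digits however they arrive and normalise them yourself, rather than making the user do it.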
Being conservative in what you send matters when it comes to front-end development, too. Literally, in terms of what we’re sending down the wire to the end user, we should be more conservative. We don’t think about this enough: the sheer weight of the things we’re sending.
I was doing some consulting with a client and we did a kind of top four of where the weight was coming from. I think this applies to websites in general.
4: Web fonts
Coming in at number four, we had web fonts. They can get quite weighty, but we have ways of dealing with this now. We've got font-display in CSS. We can subset our web fonts. Variable fonts can be a way of reducing the overall size. So, there are solutions to this. There are ways of handling it.
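A quick sketch of those fixes (the font name and file path here are placeholders): `font-display: swap` shows fallback text immediately rather than blocking on the download, and `unicode-range` lets the browser skip files a page never needs.

```css
/* Placeholder font name and URL: illustrative only. */
@font-face {
  font-family: "Example Sans";
  src: url("/fonts/example-sans-subset.woff2") format("woff2");
  /* Show fallback text immediately; swap in the web font when it loads. */
  font-display: swap;
  /* Subset: only fetch this file when basic-Latin characters appear. */
  unicode-range: U+0000-00FF;
}
```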
3: Images

At number three, images. Images account for a lot of the sheer weight of the web. But again, we have solutions here. We've got responsive images with srcset and picture. Using the right format, right? Not using a PNG if you should be using a JPEG, using WebP, using SVGs where possible. We can deal with this. There are solutions out there, as long as we're aware of it.
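Here's what that looks like in practice (filenames and sizes are placeholders): the browser picks the smallest adequate file, and falls back gracefully if it doesn't support WebP.

```html
<!-- Responsive images with srcset and picture. Illustrative filenames. -->
<picture>
  <source type="image/webp"
          srcset="photo-small.webp 480w, photo-large.webp 1200w">
  <img src="photo-small.jpg"
       srcset="photo-small.jpg 480w, photo-large.jpg 1200w"
       sizes="(min-width: 40em) 50vw, 100vw"
       alt="A description of the photo">
</picture>
```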
We’re seeing that now.
When it comes to choosing a language, there’s a fantastic design principle that Tim Berners-Lee used when he was designing the World Wide Web. It’s the principle of least power. The principle of least power states:
Choose the least powerful language suitable for a given purpose.
That sounds very counterintuitive. Why would you want to choose the least powerful language? Well, in a way, it’s about keeping things simple. There’s another design principle, “Keep it simple, stupid.” KISS.
It’s kind of related to Occam’s razor, not multiplying entities unnecessarily. Choose the simplest language. The simplest language is likely to be more universal and, because it’s simpler, it might not be as powerful but it’ll generally be more robust.
I’ll give you an example. I’ll quote from Derek Featherstone. He said:
He’s absolutely right. This is about robustness here. It’s less fragile.
There's a set of design principles from the Government Digital Service here in the U.K., and they're really good design principles. One of them stuck out to me. By way of explanation, they say:
Government should only do what only government can do.
Government shouldn't try to be all things to all people. It should do the things that private enterprise can't do, the things that it has to do. The government should only do what only government can do.
I thought that this could be extrapolated out and made into a more universal design principle. You could say:
Any particular technology should only do what only that particular technology can do.
If that's too abstract, let's apply it to a specific technology: JavaScript should only do what only JavaScript can do. Say you want a button. You could build one out of divs, style it to look like a button, and script all the behaviour yourself. Or, alternatively, you could use a button element and style it however you want using CSS.
Okay. That seems pretty straightforward and that is a perfect example of the principle of least power. Choose the least powerful language suitable for the purpose.
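To make the contrast concrete, here are the two approaches side by side (a minimal sketch):

```html
<!-- Least power: a real button element. Focusable, keyboard-operable
     and announced correctly by screen readers, with no JavaScript. -->
<button type="button">Save</button>

<!-- Most power: a div needs a role, a tabindex and scripted key handling
     bolted on just to approximate what button does natively. -->
<div role="button" tabindex="0">Save</div>
```

Everything the div version has to re-implement by hand, the button element gives you for free.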
But then what if you’ve got a drop-down component, selecting an option from a list of options? Well, you could build this using bare minimum HTML. Again, divs, maybe. You style it however you want it to look and you give it that opening and closing functionality. You give it accessibility using ARIA. Now you’ve got to think about making sure it works with a keyboard — all that stuff, all the edge cases.
Or you just use a select element — job done. You style it with CSS… Ah, well, yes, you can style it to a certain degree with CSS, but if you ever try to style the open state of a select element, you're going to have a hard time.
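The least-power version of that drop-down is just a few lines (a minimal sketch; the field names are placeholders), and it comes with keyboard support, accessibility and native mobile UI built in:

```html
<!-- One element gives you open/close behaviour, keyboard navigation
     and screen reader support for free (open-state styling aside). -->
<label for="fruit">Choose a fruit</label>
<select id="fruit" name="fruit">
  <option value="apple">Apple</option>
  <option value="pear">Pear</option>
</select>
```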
Now, this is where it gets interesting. What do you care about more? Can you live with that open state not being styled exactly the way you might want it to be styled? If so, yes, choose the least powerful technology. Go with select. But I can kind of start to see why somebody would maybe roll their own in that case.
Do you still pick the least powerful technology here?
This would be kind of the under-engineered approach: just use native HTML elements such as input type="date", select, and button.
What you get with the native approach is access: the universality that comes from using the least powerful language. There's wider support.

What you get by rolling your own is much more control. There's a spectrum from least power to most power, and it's also a spectrum from most available (widest access) to least available (most control).
You have to decide where your priorities lie. This is where I think, again, we can look at the web and we can take principles from the web.
The web does not value consistency. The web values ubiquity.
That’s the purpose of the web. It’s the universal access. That’s the value encoded into it.
To put this in another way, we could formulate it as:
Ubiquity, even over consistency.
That’s the design principle of the web.
This passes the reversibility test. We can picture other projects that would say:
Consistency, even over ubiquity.
Native apps value consistency, even over ubiquity. iOS apps are very consistent on iOS devices, but just don’t work at all on Android devices. They’re consistent; they’re not ubiquitous.
We saw this in action with Flash and the web. Flash valued consistency, but you had to have the Flash plugin installed, so it was not ubiquitous. It was not universal.
The World Wide Web is about ubiquity, even over consistency. I think we should remember that.
When we look here at the world's first-ever web browser, we're looking at the world's first-ever webpage, which is still available at its original URL. That's incredibly robust.
What's amazing is that you can not only look at the world's first webpage in the world's first web browser; you can look at the world's first webpage in a modern web browser and it still works, which is kind of amazing. If you took a word processing document from 30 years ago and tried to open it in a modern word processor, good luck. It just doesn't work that way. But the web values this ubiquity over consistency.
Let’s apply those principles, apply the principle of least power, apply the robustness principle. Value ubiquity even over consistency. Value universal access over control. That way, you can make products and services that aren’t just on the web, but of the web.