I think about design principles a lot. I’m such a nerd for design principles, I even have a collection. I’m not saying all of the design principles in the collection are good—far from it! I collect them without judgement.
As for what makes a good design principle, I’ve written about that before. One aspect that everyone seems to agree on is that a design principle shouldn’t be an obvious truism. Take this as an example:
Make it usable.
Who’s going to disagree with that? It’s so agreeable that it’s practically worthless as a design principle. But now take this statement:
Usability is more important than profitability.
Ooh, now we’re talking! That’s controversial. That’s bound to surface some disagreement, which is a good thing. It’s now passing the reversibility test—it’s not hard to imagine an endeavour driven by the opposite:
Profitability is more important than usability.
In either formulation, what makes these statements better than the bland toothless agreeable statements—“Usability is good!”, “Profitability is good!”—is that they introduce the element of prioritisation.
I like design principles that can be formulated as:
X, even over Y.
It’s not saying that Y is unimportant, just that X is more important:
Prioritisation isn’t and shouldn’t be a one-off exercise. The changing needs of your customers, the business environment and new opportunities from technology mean prioritisation is best done as a regular activity.
Mind you, some technologies have no direct effect on the end user: build tools, version control, toolchains …all the stuff that sits on your computer and never directly interacts with users. In that situation, the wants and needs of developers can absolutely take priority.
But as a general principle, I think this works:
User experience, even over developer experience.
Sadly, I think the current state of “modern” web development reverses that principle. Developer efficiency is prized above all else. Like I said, that would be absolutely fine if we’re talking about technologies that only developers are exposed to, but as soon as we’re talking about shipping those technologies over the network to end users, it’s negligent to continue to prioritise the developer experience.
I feel like personal websites are an exception here. What you do on your own website is completely up to you. But once you’re taking a paycheck to make websites that will be used by other people, it’s incumbent on you to realise that it’s not about you.
I’ve been talking about developers here, but this is something that applies just as much to designers. But I feel like designers go through that priority shift fairly early in their career. At the outset, they’re eager to make their mark and prove themselves. As they grow and realise that it’s not about them, they understand that the most appropriate solution for the user is what matters, even if that’s a “boring” tried-and-tested pattern that isn’t going to wow any fellow designers.
I’d like to think that developers would follow a similar progression, and I’m sure that some do. But I’ve seen many senior developers who have grown more enamoured with technologies instead of homing in on the most appropriate technology for end users. Maybe that’s because in many organisations, developers are positioned further away from the end users (whereas designers are ideally being confronted with their creations being used by actual people). If a lead developer is focused on the productivity, efficiency, and happiness of the dev team, it’s no wonder that those priorities end up outweighing the user experience.
I realise I’m talking in very binary terms here: developer experience versus user experience. I know it’s not always that simple. Other priorities also come into play, like business needs. Sometimes business needs are in direct conflict with user needs. If an online business makes its money through invasive tracking and surveillance, then there’s no point in having a design principle that claims to prioritise user needs above all else. That would be a hollow claim, and the design principle would become worthless.
Because that’s the point with design principles. They’re there to be used. They’re not a nice fluffy exercise in feeling good about your work. The priority of constituencies begins with the words “in case of conflict”, and that’s exactly when a design principle matters—when it’s tested.
Suppose someone with a lot of clout in your organisation makes a decision, but that decision conflicts with your organisation’s design principles. Instead of having an opinion-based argument about who’s right or wrong, the previously agreed-upon design principles allow you to take ego out of the equation.
Prioritisation isn’t easy, and it gets harder the more factors come into play: user needs, business needs, technical constraints. But it’s worth investing the time to get agreement on the priority of your constituencies. And then formulate that agreement into design principles.
“Serverless” is a buzzword. We can’t seem to agree on what it actually means, so it ends up meaning nothing at all. Much like “cloud” or “dynamic” or “synergy”. You just wait for the right time in a meeting to drop it, walk to the board and draw a Venn diagram, and then sit back and wait for your well-deserved promotion.
That’s very true, and I do not like the term “serverless” for the rather obvious reason that it’s all about servers (someone else’s servers, that is). But these principles are handy for figuring out if you’re building in a serverlessy kind of way:
You have no knowledge of the underlying system where your code runs.
Scaling is an intrinsic attribute of the technology; so much so that it just happens automatically.
Don’t build more JS than you can maintain over the long term. If you’re going to be building something for a long time, make sure what you are building will grow with you. Make sure you don’t depend on other people’s work too much, unless you want to keep refactoring your code when the framework you picked goes out of style.
I like those. They’re like design principles for design principles.
One set of design principles that I’ve included in my collection is from gov.uk: the government design principles. I think they’re very well thought-through (although I’m always suspicious when I see a nice even number like 10 for the number of items in a list). There’s a great line in design principle number two—Do less:
Government should only do what only government can do.
This wasn’t a theoretical issue. The multiple departmental websites that preceded gov.uk were notorious for having too much irrelevant content—content that was readily available elsewhere. It was downright wasteful to duplicate that content on a government site. It wasn’t appropriate.
I think that the design principle from GDS could be abstracted into a general technology principle:
Any particular technology should only do what only that particular technology can do.
Vasilis has published his magnificent thesis online. It’s quite lovely:
You can read this thesis in a logical order, which is the way that I wrote it. It starts with a few articles that explore the context of my research. It then continues with four chapters in which I describe the things I did. I end the thesis with four posts with findings, conclusions and recommendations.
And they all have.
And they are all different.
Read this talk transcript, and even if you don’t agree with everything in it today, you may end up coming back to it in the future. He’s playing the long game:
The web is the way now that we distribute information. We will need the web pages we create now to be readable in 100 years time, just as we can still read 100-year-old books.
I’ve come to believe that accessibility is not something you do for a small group of people. Accessibility is about promoting inclusion. When the product you use daily is accessible, it means that we all get to work with a greater number and a greater variety of colleagues. Accessibility benefits everyone.
Taking the idea of the Clock of the Long Now and applying it to a twitterbot:
Software may not be as well suited as a finely engineered clock to operate on these sorts of geological scales, but that doesn’t mean we can’t try to put some of the 10,000 year clock’s design principles to work.
The bot will almost certainly fall foul of Twitter’s API changes long before the next tweet-chime is due, but it’s still fascinating to see the clock’s principles applied to software: longevity, maintainability, transparency, evolvability, and scalability.
Software tends to stay in operation longer than we think it will when we first write it, and the wearing effects of entropy within it and its ecosystem often take their toll more quickly and more destructively than we can imagine. You don’t need to be thinking on a scale of 10,000 years to make applying these principles a good idea.
Be conservative in what you send, be liberal in what you accept.
Even though the robustness principle was formulated for packet-switching, I see it at work in all sorts of disciplines, including design. A good example is in best practices for designing forms:
Every field you ask users to fill out requires some effort. The more effort is needed to fill out a form, the less likely users will complete the form. That’s why the foundational rule of form design is shorter is better — get rid of all inessential fields.
In other words, be conservative in the number of form fields you send to users. But then, when it comes to users filling in those fields:
It’s very common for a few variations of an answer to a question to be possible; for example, when a form asks users to provide information about their state, and a user responds by typing their state’s abbreviation instead of the full name (for example, CA instead of California). The form should accept both formats, and it’s the developer’s job to convert the data into a consistent format.
In other words, be liberal in what you accept from users.
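Here’s a minimal sketch of both halves of that advice. The field names and the short state list are purely illustrative:

```html
<label for="state">State</label>
<input id="state" name="state" list="states">
<datalist id="states">
  <option value="California"></option>
  <option value="Oregon"></option>
</datalist>
<script>
  // Be liberal in what you accept: expand common abbreviations
  // into the canonical format instead of rejecting them.
  const abbreviations = { CA: 'California', OR: 'Oregon' };
  const field = document.getElementById('state');
  field.addEventListener('change', () => {
    const value = field.value.trim();
    field.value = abbreviations[value.toUpperCase()] || value;
  });
</script>
```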
I find the robustness principle to be an immensely powerful way of figuring out how to approach many design problems. When it comes to figuring out which specific tools or technologies to use, there’s an equally useful principle, the rule of least power:
Choose the least powerful language suitable for a given purpose.
On the face of it, this sounds counter-intuitive; why forego a powerful technology in favour of something less powerful?
Well, power comes with a price. Powerful technologies tend to be more complex, which means they can be trickier to use and trickier to swap out later.
In the web front-end stack — HTML, CSS, JS, and ARIA — if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.
Instead of using ARIA to give a certain role value to a div or span, try to use a more suitable HTML element instead.
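To make that concrete, here are two ways of making a “button”. The save() handler is a stand-in for whatever the button actually does:

```html
<!-- More power, more to get wrong: this div needs a role, a tabindex,
     and (not shown here) scripted keyboard handling before it behaves
     like a button for everyone. -->
<div role="button" tabindex="0" onclick="save()">Save</div>

<!-- Less power, fewer failure modes: focusable, keyboard-operable,
     and announced as a button by default. -->
<button type="button" onclick="save()">Save</button>
```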
It sounds a lot like the KISS principle: Keep It Simple, Stupid. But whereas the KISS principle can be applied within a specific technology—like keeping your CSS manageable—the rule of least power is all about evaluating technology; choosing the most appropriate technology for the task at hand.
Web standards don’t exist. At least, they don’t physically exist. They are intangible.
They’re in good company.
Feelings are intangible, but real. Hope. Despair.
Ideas are intangible: liberty, justice, socialism, capitalism.
The economy. Currency. All intangible. I’m sure we’ve all had those “college thoughts”:
Money isn’t real, man! They’re just bits of metal and pieces of paper! Wake up, sheeple!
Nations are intangible. Geographically, France is a tangible, physical place. But France, the Republic, is an idea. Geographically, North America is a real, tangible, physical land mass. But ideas like “Canada” and “The United States” only exist in our minds.
Faith—the feeling—is intangible.
God—the idea—is intangible.
Art—the concept—is intangible.
A piece of art is an instantiation of the intangible concept of what art is.
Incidentally, I quite like Brian Eno’s working definition of what art is. Art is anything we don’t have to do. We don’t have to make paintings, or sculptures, or films, or music. We have to clothe ourselves for practical reasons, but we don’t have to make clothes beautiful. We have to prepare food to eat it, but don’t have to make it a joyous event.
By this definition, sports are also art. We don’t have to play football. Sports are also intangible.
A game of football is an instantiation of the intangible idea of what football is.
Football, chess, rugby, quidditch and rollerball are equally (in)tangible.
But football, chess and rugby have more consensus.
(Christianity, Islam, Judaism, and The Force are equally intangible, but Christianity, Islam, and Judaism have a bit more consensus than The Force).
HTML is intangible.
A web page is an instantiation of the intangible idea of what HTML is.
But we can document our shared consensus.
A rule book for football is like a web standard specification. A documentation of consensus.
By the way, economics, religions, sports and laws are all examples of intangibles that can’t be proven, because they all rely on their own internal logic—there is no outside data that can prove football or Hinduism or capitalism to be “true”. That’s very different to ideas like gravity, evolution, relativity, or germ theory—they are all intangible but provable. They are discovered, rather than created. They are part of objective reality.
Consensus reality is the collection of intangibles that we collectively agree to be true: economy, religion, law, web standards.
We treat consensus reality much the same as we treat objective reality: in our minds, football, capitalism, and Christianity are just as real as buildings, trees, and stars.
Sometimes consensus reality and objective reality get into fights.
Some people have tried to make a consensus reality around the accuracy of astrology or the efficacy of homeopathy, or ideas like the Earth being flat, 9-11 being an inside job, the moon landings being faked, the holocaust never having happened, or vaccines causing autism. These people are unfazed by objective reality, which disproves each one of these ideas.
For a long time, the consensus reality was that the sun revolved around the Earth. Copernicus and Galileo demonstrated that the objective reality was that the Earth (and all the other planets in our solar system) revolve around the sun. After the dust settled on that particular punch-up, we switched up our consensus reality. We changed the story.
That’s another way of thinking about consensus reality: our currencies, our religions, our sports and our laws are stories that we collectively choose to believe.
Web standards are a collection of intangibles that we collectively agree to be true. They’re our stories. They’re our collective consensus reality. They are what web browsers agree to implement, and what we agree to use.
The web is agreement.
For human beings to collaborate, they need a shared purpose. They must have a shared consensus reality—a shared story.
Once a group of people share a purpose, they can work together to establish principles.
Design principles are points of agreement. There are design principles underlying every human endeavour. Sometimes they are tacit. Sometimes they are written down.
Patterns emerge from principles.
Here’s an example of a human endeavour: the creation of a nation state, like the United States of America.
The purpose is agreed in the declaration of independence.
The principles are documented in the constitution.
The patterns emerge in the form of laws.
Here’s one of the design principles behind HTML5. It’s my personal favourite—the priority of constituencies:
In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.
“In case of conflict”—that’s exactly what a good design principle does! It establishes the boundaries of agreement. If you disagree with the design principles of a project, there probably isn’t much point contributing to that project.
Also, it’s reversible. You could imagine a different project that favoured theoretical purity above all else. In fact, that’s pretty much what XHTML 2 was all about.
XHTML 1 was simply HTML reformulated with the syntax of XML: lowercase tags, lowercase attributes, always quoting attribute values.
Remember, HTML doesn’t care whether tags and attributes are uppercase or lowercase, or whether you put quotes around your attribute values. You can even leave out some closing tags.
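For example, each of these lines parses to exactly the same paragraph:

```html
<P CLASS=intro>HTML is forgiving.
<p class="intro">HTML is forgiving.</p>
<P Class='intro'>HTML is forgiving.</P>
```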
So XHTML 1 was actually kind of a nice bit of agreement: professional web developers agreed on using lowercase tags and attributes, and we agreed to quote our attributes. Browsers didn’t care one way or the other.
But XHTML 2 was going to take the error-handling model of XML and apply it to HTML. This is the error handling model of XML: if the parser encounters a single error, don’t render the document.
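A single character is enough to trigger it. Here’s a sketch of the failure mode:

```html
<!-- In HTML, the parser shrugs off a bare ampersand and renders the page: -->
<p>Fish & chips</p>

<!-- Served as XML (application/xhtml+xml), that same line is a
     well-formedness error: the parser halts and the reader gets an
     error message instead of a document. -->
```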
Of course nobody agreed to this. Browsers didn’t agree to implement XHTML 2. Developers didn’t agree to use it. It ceased to exist.
It turns out that creating a format is relatively straightforward. But how do you turn something into a standard? The really hard part is getting agreement.
Sturgeon’s Law states:
90% of everything is crap.
Coincidentally, 90% is also the percentage of the world’s crap that gets transported by ocean. Your clothes, your food, your furniture, your electronics …chances are that at some point they were transported within an intermodal container.
These shipping containers are probably the most visible—and certainly one of the most important—standards in the physical world. Before the use of intermodal containers, loading and unloading cargo from ships was a long, laborious, and dangerous task.
Along came Malcom McLean who realised that the whole process could be made an order of magnitude more efficient if the cargo were stored in containers that could be moved from ship to truck to train.
But he wasn’t the only one. The movement towards containerisation was already happening independently around the world. But everyone was using different sized containers with different kinds of fittings. If this continued, the result would be a tower of Babel instead of smoothly running global logistics.
Malcom McLean and his engineer Keith Tantlinger designed two crate sizes—20ft and 40ft—that would work for ships, trucks, and trains. Their design also incorporated an ingenious twistlock mechanism to secure containers together. But the extra step that would ensure that their design would win out was this: Tantlinger convinced McLean to give up the patent rights.
This wasn’t done out of any hippy-dippy ideology. These were hard-nosed businessmen. But they understood that a rising tide raises all boats, and they wanted all boats to be carrying the same kind of containers.
Without the threat of a patent lurking beneath the surface, ready to torpedo the potential benefits, the intermodal container went on to change the world economy. (The world economy is very large and intangible.)
The World Wide Web also ended up changing the world economy, and much more besides. And like the intermodal container, the World Wide Web is patent-free.
Again, this was a pragmatic choice to help foster adoption. When Tim Berners-Lee and his colleague Robert Cailliau were trying to get people to use their World Wide Web project, they faced some stiff competition. Lots of people were already using Gopher. Anyone remember Gopher?
The seemingly unstoppable growth of the Gopher protocol was somewhat hobbled in the early ’90s when the University of Minnesota announced that it was going to start charging fees for using it. This was a cautionary lesson for Berners-Lee and Cailliau. They wanted to make sure that CERN didn’t make the same mistake.
On April 30th, 1993, the code for the World Wide Web project was made freely available.
This is for everyone.
If you’re trying to get people to adopt a standard or use a new hypertext system, the biggest obstacle you’re going to face is inertia. As the brilliant computer scientist Grace Hopper used to say:
The most dangerous phrase in the English language is “We’ve always done it this way.”
Rear Admiral Grace Hopper waged war on business as usual. She was well aware how arbitrary business as usual is. Business as usual is simply the current state of our consensus reality. She said:
Humans are allergic to change.
I try to fight that.
That’s why I have a clock on my wall that runs counter-clockwise.
Our clocks are a perfect example of a ubiquitous but arbitrary convention. Why should clocks run clockwise rather than counter-clockwise?
One neat explanation is that clocks are mimicking the movement of a shadow across the face of a sundial …in the Northern hemisphere. Had clocks been invented in the Southern hemisphere, they would indeed run counter-clockwise.
But on the clock face itself, why do we carve up time into 24 hours? Why are there 60 minutes in an hour? Why are there 60 seconds in a minute?
It probably all goes back to Babylonian accountants. Early cuneiform tablets show that they used a sexagesimal system for counting—that’s because 60 is the lowest number that can be divided evenly by 6, 5, 4, 3, 2, and 1.
But we don’t count in base 60; we count in base 10. That in itself is arbitrary—we just happen to have a total of ten digits on our hands.
So if the sexagesimal system of telling time is an accident of accounting, and base ten is more widespread, why don’t we switch to a decimal timekeeping system?
It has been tried. The French Revolution introduced not just a new decimal calendar—much neater than our base 12 calendar—but also decimal time. Each day had ten hours. Each hour had 100 minutes. Each minute had 100 seconds. So much better!
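Doing the arithmetic, a decimal second was slightly shorter than the second we’re used to:

$$10 \times 100 \times 100 = 100\,000 \text{ decimal seconds per day}$$

$$24 \times 60 \times 60 = 86\,400 \text{ standard seconds per day}$$

$$1 \text{ decimal second} = \frac{86\,400}{100\,000} = 0.864 \text{ standard seconds}$$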
It didn’t take. Humans are allergic to change. Sexagesimal time may be arbitrary and messy but …we’ve always done it this way.
Incidentally, this is also why I’m not holding my breath in anticipation of the USA ever switching to the metric system.
Instead of trying to completely change people’s behaviour, you’re likely to have more success by incrementally and subtly altering what people are used to.
That was certainly the case with the World Wide Web.
The Hypertext Transfer Protocol sits on top of the existing TCP/IP stack.
The key building block of the web is the URL. But instead of creating an entirely new addressing scheme, the web uses the existing Domain Name System.
Then there’s the lingua franca of the World Wide Web. These elements probably look familiar to you:
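(The elements appeared on a slide in the original talk; this is a representative sketch rather than the exact markup.)

```html
<title>A sample document</title>
<h1>A heading</h1>
<p>A paragraph of text.
<ol>
<li>An item in a list
</ol>
```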
You recognise this language, right? That’s right—it’s SGML. Standard Generalised Markup Language.
Specifically, it’s CERN SGML—a flavour of SGML that was already popular at CERN when Tim Berners-Lee was working on the World Wide Web project. He used this vocabulary as the basis for the HyperText Markup Language.
Because this vocabulary was already familiar to people at CERN, convincing them to use HTML wasn’t too much of a hard sell. They could take an existing SGML document, change the file extension to .htm, and it would work in one of those newfangled web browsers.
In fact, HTML worked better than expected. The initial idea was that HTML pages would be little more than indices that pointed to other files containing the real meat and potatoes of content—spreadsheets, word processing documents, whatever. But to everyone’s surprise, people started writing and publishing content in HTML.
Was HTML the best format? Far from it. But it was just good enough and easy enough to get the job done.
It has since changed, but that change has happened according to another design principle:
Evolution, not revolution
From its humble beginnings with the handful of elements borrowed from CERN SGML, HTML has grown to encompass an additional 100 elements over its lifespan. And yet, it’s still technically the same format!
This is a classic example of the paradox called the Ship of Theseus, also known as Trigger’s Broom.
You can take an HTML document written over two decades ago, and open it in a browser today.
Even more astonishing, you can take an HTML document written today and open it in a browser from two decades ago. That’s because the error-handling model of HTML has always been to simply ignore any tags it doesn’t recognise and render the content inside them.
That pattern of behaviour is a direct result of the design principle:
…document conformance requirements should be designed so that Web content can degrade gracefully in older or less capable user agents, even when making use of new elements, attributes, APIs and content models.
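You can see that error-handling model at work with any element a browser doesn’t know yet. Here’s a sketch using a made-up element:

```html
<!-- <fancy-quote> is invented for this example. A browser that has never
     heard of it ignores the unknown tags and still renders the text
     inside them, so nothing is lost. -->
<fancy-quote cite="https://example.com">The web is agreement.</fancy-quote>
```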
Here’s a picture from 2006.
That’s me in the cowboy hat—the picture was taken in Austin, Texas. This is an impromptu gathering of people involved in the microformats community.
Microformats, like any other standards, are sets of agreements. In this case, they’re agreements on which class values to use to mark up some of the missing elements from HTML—people, places, and events. That’s pretty much it.
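For example, here’s a minimal sketch of a person marked up with the microformats2 h-card vocabulary (the name, URL, and locality are made up):

```html
<div class="h-card">
  <a class="p-name u-url" href="https://example.com">Jane Doe</a>,
  <span class="p-locality">Austin</span>
</div>
```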
And yes, they do have design principles—some very good ones—but that’s not why I’m showing this picture.
Some of the people in this picture—Tantek Çelik, Ryan King, and Chris Messina—were involved in the creation of BarCamp, a series of grassroots geek gatherings.
BarCamps sound like they shouldn’t work, but they do. The schedule for the event is arrived at collectively at the beginning of the gathering. It’s kind of amazing how the agreement emerges—rough consensus and running events.
In the run-up to a BarCamp in 2007, Chris Messina posted this message to the fledgling social networking site, twitter.com:
how do you feel about using # (pound) for groups. As in #barcamp [msg]?
This was when tagging was all the rage. We were all about folksonomies back then. Chris proposed that we would call this a “hashtag”.
I wasn’t a fan:
Thinking that hashtags disrupt the reading flow of natural language. Sorry @factoryjoe
But it didn’t matter what I thought. People agreed to this convention, and after a while Twitter began turning the hashtagged words into links.
In doing so, they were following another HTML design principle:
Pave the cowpaths
It sounds like advice for agrarian architects, but its meaning is clarified:
When a practice is already widespread among authors, consider adopting it rather than forbidding it or inventing something new.
Twitter had previously paved a cowpath when people started prefacing usernames with the @ symbol. That convention didn’t come from Twitter, but they didn’t try to stop it. They rolled with it, and turned any username prefaced with an @ symbol into a link.
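As a toy sketch (nothing like Twitter’s actual implementation, and with invented URL patterns), paving those cowpaths amounts to something like this:

```html
<script>
  // Turn existing community conventions into links.
  function linkify(text) {
    return text
      .replace(/@(\w+)/g, '<a href="/users/$1">@$1</a>')
      .replace(/#(\w+)/g, '<a href="/tags/$1">#$1</a>');
  }
  console.log(linkify('Heading to #barcamp with @factoryjoe'));
  // → Heading to <a href="/tags/barcamp">#barcamp</a> with
  //   <a href="/users/factoryjoe">@factoryjoe</a>
</script>
```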
The @ symbol made sense because people were used to using it from email. The choice to use that symbol in email addresses was made by Ray Tomlinson. He needed a symbol to separate the person and the domain, looked down at his keyboard, saw the @ symbol, and thought “that’ll do.”
Perhaps Chris followed a similar process when he proposed the symbol for the hashtag.
It could have just as easily been called a “number tag” or “octothorpe tag” or “pound tag”.
This symbol started life as a shortcut for “pound”, or more specifically “libra pondo”, meaning a pound in weight. Libra pondo was abbreviated to lb when written. That got turned into a ligature ℔ when written hastily. That shape was the common ancestor of two symbols we use today: £ and #.
The eight-pointed symbol was (perhaps jokingly) renamed the octothorpe in the 1960s when it was added to telephone keypads. It’s still there on the digital keypad of your mobile phone. If you were to ask someone born in this millennium what that key is called, they would probably tell you it’s the hashtag key. And if they’re learning to read sheet music, I’ve heard tell that they refer to the sharp notes as hashtag notes.
If this upsets you, you might be the kind of person who rages at the word “literally” being used to mean “figuratively” or supermarkets with aisles for “10 items or less” instead of “10 items or fewer”.
Tough luck. The English language is agreement. That’s why English dictionaries exist not to dictate usage of the language, but to document usage.
It’s much the same with web standards bodies. They don’t carve the standards into tablets of stone and then come down the mountain to distribute them amongst the browsers. No, it’s what the browsers implement that gets carved in stone. That’s why it’s so important that browsers are in agreement. In the bad old days of the browser wars of the late 90s, we saw what happened when browsers implemented their own proprietary features.
Standards require interoperability.
Interoperability requires agreement.
So what can we learn from the history of standardisation?
Well, there are some direct lessons from the HTML design principles.
The priority of constituencies
Consider users over authors…
Listen, I want developer convenience as much as the next developer. But never at the expense of user needs.
I’ve often said that if I have the choice between making something my problem, and making it the user’s problem, I’ll make it my problem every time. That’s the job.
I worry that these days developer convenience is sometimes prized more highly than user needs. I think we could all use a priority of constituencies on every project we work on, and I would hope that we would prioritise users over authors.
Web content can degrade gracefully in older or less capable user agents…
I know that I go on about progressive enhancement a lot. Sometimes I make it sound like a silver bullet. Well, it kinda is.
I mean, you can’t just buy a bullet made of silver—you have to make it yourself. If you’re not used to crafting bullets from silver, it will take some getting used to.
Again, if developer convenience is your priority, silver bullets are hard to justify. But if you’re prioritising users over authors, progressive enhancement is the logical methodology to use.
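Here’s a minimal sketch of what that looks like in practice. The /search endpoint is hypothetical:

```html
<!-- The form works everywhere as a plain HTML submission. -->
<form action="/search" method="get">
  <label for="q">Search</label>
  <input type="search" id="q" name="q">
  <button type="submit">Go</button>
</form>
<script>
  // Where the browser is capable, enhance the form in place;
  // anywhere it isn't, the plain HTML submission still works.
  const form = document.querySelector('form');
  if (form && 'fetch' in window) {
    form.addEventListener('submit', async (event) => {
      event.preventDefault();
      const query = new URLSearchParams(new FormData(form));
      const response = await fetch(form.action + '?' + query);
      // …use the response to render results in place; on any
      // failure, the unenhanced form is still there as a fallback.
    });
  }
</script>
```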
Evolution, not revolution
It’s a testament to the power and flexibility of the web that we don’t have to build with progressive enhancement. We don’t have to build with a separation of concerns like structure, presentation, and behaviour.
But why do that? Is it because those native buttons and dropdowns might be inconsistent from browser to browser?
Consistency is not the purpose of the world wide web.
Universality is the key principle underlying the web.
Our patterns should reflect the intent of the medium.
Use what the browser gives you—build on top of those agreements. Because that’s the bigger lesson to be learned from the history of web standards, clocks, containers, and hashtags.
Our world is made up of incremental improvements to what has come before. And that’s how we will push forward to a better tomorrow: By building on top of what we already have instead of trying to create something entirely from scratch. And by working together to get agreement instead of going it alone.
The future can be a frightening prospect, and I often get people asking me for advice on how they should prepare for the web’s future. Usually they’re thinking about which programming language or framework or library they should be investing their time in. But these specific patterns matter much less than the broader principles of working together, collaborating and coming to agreement. It’s kind of insulting that we refer to these as “soft skills”—they couldn’t be more important.
Working on the web, it’s easy to get downhearted by the seemingly ephemeral nature of what we build. None of it is “real”; none of it is tangible. And yet, looking at the history of civilisation, it’s the intangibles that survive: ideas, philosophies, culture and concepts.
The future can be frightening because it is intangible and unknown. But like all the intangible pieces of our consensus reality, the future is something we construct …through agreement.
Now let’s agree to go forward together to build the future web!
This is a great piece by Alla, ostensibly about Bulb’s design principles, but it’s really about what makes for effective design principles in general. It’s packed full of great advice, like these design principles for design principles: