Journal

Tuesday, February 18th, 2020

Utopia

Trys and James recently unveiled their Utopia project. They’ve been tinkering away at it behind the scenes for quite a while now.

You can check out the website and read the blog to get the details of how it accomplishes its goal:

Elegantly scale type and space without breakpoints.

I may well be biased, but I really like this project. I’ve been asking myself why I find it so appealing. Here are a few of the attributes of Utopia that strike a chord with me…

It’s collaborative

Collaboration is at the heart of Clearleft’s work. I know everyone says that, but we’ve definitely seen a direct correlation: projects with high levels of collaboration are invariably more successful than projects where people are siloed.

The genesis for Utopia came about after Trys and James worked together on a few different projects. It’s all too easy to let design and development splinter off into their own caves, but on these projects, Trys and James were working (literally) side by side. This meant that they could easily articulate frustrations to one another, and more important, they could easily share their excitement.

The end result of their collaboration is some very clever code. There’s an irony here. This code could be used to discourage collaboration! After all, why would designers and developers sit down together if they can just pass these numbers back and forth?

But I don’t think that Utopia will appeal to designers and developers who work in that way. Born in the spirit of collaboration, I suspect that it will mostly benefit people who value collaboration.

It’s intrinsic

If you’re a control freak, you may not like Utopia. The idea is that you specify the boundaries of what you’re trying to accomplish—minimum/maximum font sizes, minimum/maximum screen sizes, and some modular scales. Then you let the code—and the browser—do all the work.

On the one hand, this feels like surrendering control. But on the other hand, because the underlying system is so robust, it’s a way of guaranteeing quality, even in situations you haven’t accounted for.

If someone asks you, “What size will the body copy be when the viewport is 850 pixels wide?”, your answer would have to be “I don’t know …but I do know that it will be appropriate.”

This feels like a very declarative way of designing. It reminds me of the ethos behind Andy and Heydon’s site, Every Layout. They call it algorithmic layout design:

Employing algorithmic layout design means doing away with @media breakpoints, “magic numbers”, and other hacks, to create context-independent layout components. Your future design systems will be more consistent, terser in code, and more malleable in the hands of your users and their devices.

See how breakpoints are mentioned as being a very top-down approach to layout? Remember the tagline for Utopia, which aims for fluid responsive design?

Elegantly scale type and space without breakpoints.

Unsurprisingly, Andy really likes Utopia:

As the co-author of Every Layout, my head nearly fell off from all of the nodding when reading this because this is the exact sort of approach that we preach: setting some rules and letting the browser do the rest.

Heydon describes this mindset as automating intent. I really like that. I think that’s what Utopia does too.

As Heydon said at Patterns Day:

Be your browser’s mentor, not its micromanager.

The idea is that you give it rules, you give it axioms or principles to work on, and you let it do the calculation. You work with the in-built algorithms of the browser and of CSS itself.

This is all possible thanks to improvements to CSS like calc, flexbox and grid. Jen calls this approach intrinsic web design. Last year, I liveblogged her excellent talk at An Event Apart called Designing Intrinsic Layouts.
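
To get a feel for that mindset, here’s a small sketch using grid. The 15rem minimum is an arbitrary number of my choosing, not something from Every Layout or Utopia:

    /* Give the browser a rule ("fit as many 15rem columns as there's
       room for") and let its layout algorithm handle every screen size.
       No breakpoints required. */
    .grid {
      display: grid;
      grid-template-columns: repeat(auto-fit, minmax(15rem, 1fr));
      gap: 1rem;
    }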

Utopia feels like it has the same mindset as algorithmic layout design and intrinsic web design. Trys and James are building on the great work already out there, which brings me to the final property of Utopia that appeals to me…

It’s iterative

There isn’t actually much that’s new in Utopia. It’s a combination of existing techniques. I like that. As I said recently:

I’m a great believer in the HTML design principle, Evolution Not Revolution:

It is better to evolve an existing design rather than throwing it away.

First of all, Utopia uses the idea of modular scales in typography. Tim Brown has been championing this idea for years.

Then there’s the idea of typography being fluid and responsive—just like Jason Pamental has been speaking and writing about.

On the code side, Utopia wouldn’t be possible without the work of Mike Riethmuller and his breakthroughs on responsive and fluid typography, which led to Tim’s work on CSS locks.

Utopia takes these building blocks and combines them. So if you’re wondering if it would be a good tool for one of your projects, you can take an equally iterative approach by asking some questions…

Are you using fluid type?

Do your font-sizes increase in proportion to the width of the viewport? I don’t mean in sudden jumps with @media breakpoints—I mean some kind of relationship between font size and the vw (viewport width) unit. If so, you’re probably using some kind of mechanism to cap the minimum and maximum font sizes—CSS locks.
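
For illustration, here’s a minimal CSS lock in the style of Mike Riethmuller’s technique. The numbers (16px at a 320px viewport, growing to 24px at 1200px) are made up for the example:

    /* Below 320px: hold at the minimum size. */
    html {
      font-size: 16px;
    }

    /* Between 320px and 1200px: interpolate from 16px up to 24px. */
    @media (min-width: 320px) {
      html {
        font-size: calc(16px + (24 - 16) * ((100vw - 320px) / (1200 - 320)));
      }
    }

    /* Above 1200px: cap at the maximum size. */
    @media (min-width: 1200px) {
      html {
        font-size: 24px;
      }
    }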

I’m using that technique on Resilient Web Design. But I’m not changing the relative difference between different sized elements—body copy, headings, etc.—as the screen size changes.

Are you using modular scales?

Does your type system have some kind of ratio that describes the increase in type sizes? You probably have more than one ratio (unlike Resilient Web Design). The ratio for small screens should probably be smaller than the ratio for big screens. But rather than jump from one ratio to another at an arbitrary breakpoint, Utopia allows the ratio to be fluid.

So it’s not just that font sizes are increasing as the screen gets larger; the comparative difference is also subtly changing. That means there’s never a sudden jump in font size at any time.
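
As a rough sketch, a modular scale in CSS might look like this. The 1.2 ratio is an arbitrary example; the point of Utopia is that this ratio itself becomes fluid rather than fixed:

    /* Each step on the scale is the previous step multiplied by the ratio. */
    :root {
      --ratio: 1.2;
      --step-0: 1rem;
      --step-1: calc(var(--step-0) * var(--ratio));
      --step-2: calc(var(--step-1) * var(--ratio));
    }

    h2 { font-size: var(--step-1); }
    h1 { font-size: var(--step-2); }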

Are you using custom properties?

A technical detail this, but the magic of Utopia relies on two powerful CSS features: calc() and custom properties. These two workhorses are used by Utopia to generate some CSS that you can stick at the start of your stylesheet. If you ever need to make changes, all the parameters are defined at the top of the code block. Tweak those numbers and watch everything cascade.
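
The generated code follows a pattern something like this sketch (the property names and numbers here are illustrative, not necessarily Utopia’s exact output):

    /* All the parameters, defined once at the top. */
    :root {
      --fluid-min-width: 320;  /* smallest screen, as a unitless number */
      --fluid-max-width: 1200; /* largest screen, as a unitless number */
      --fluid-screen: 100vw;
      /* How far the current screen is between the min and max widths. */
      --fluid-bp: calc(
        (var(--fluid-screen) - var(--fluid-min-width) / 16 * 1rem) /
        (var(--fluid-max-width) - var(--fluid-min-width))
      );
    }

    /* One fluid font size, interpolating from 16 up to 20 (px equivalents). */
    :root {
      --f-0-min: 16;
      --f-0-max: 20;
      --step-0: calc(
        ((var(--f-0-min) / 16) * 1rem) +
        (var(--f-0-max) - var(--f-0-min)) * var(--fluid-bp)
      );
    }

    body {
      font-size: var(--step-0);
    }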

You’ll see that there’s one—and only one—media query in there. This is quite clever. Usually with CSS locks, you’d need to have a media query for every different font size in order to cap its growth at the maximum screen size. With Utopia, the maximum screen size—100vw—is abstracted into a variable (a custom property). The media query then changes its value to be the upper end of your CSS lock. So it doesn’t matter how many different font sizes you’re setting: because they all use that custom property, one single media query takes care of capping the growth of every font size declaration.
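
In the sketch above, that single media query would look something like this:

    /* Once the screen reaches the maximum width, pin --fluid-screen to that
       upper bound. Every font size calculated from it stops growing at once. */
    @media screen and (min-width: 1200px) {
      :root {
        --fluid-screen: calc(var(--fluid-max-width) * 1px);
      }
    }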

If you’re already using CSS locks, modular scales, and custom properties, Utopia is almost certainly going to be a good fit for you.

If you’re not yet using those techniques, but you’d like to, I highly recommend using Utopia on your next project.

Friday, February 7th, 2020

IncrementURL

Last month I wrote some musings on default browser behaviours. When it comes to all the tasks that browsers do for us, the most fundamental is taking a URL, fetching its contents and giving us the results. As part of that process, browsers also show us the URL of the page currently loaded in a tab or window.

But even at this fundamental level, there are some differences from browser to browser.

Safari only shows you the domain name—and any subdomain names—by default. It looks nice and tidy, but it obfuscates what page you’re on (until you click on the domain name). This is bad.

Chrome shows you the full URL, nice and straightforward. This is neutral.

Firefox, like Chrome, shows you the full URL, but with a subtle difference. The important part of the URL—usually the domain name—is subtly highlighted in a darker shade of grey. This is good.

The reason I say that what it highlights is usually the domain name is because what it actually highlights is the eTLD+1.

The what now?

Well, if you’re looking at a page on adactio.com, that’s the important bit. But what if you’re looking at a page on adactio.github.io? The domain name is important, but so is the subdomain.

It turns out there’s a list out there of which sites and top level domains allow registrations like this. This is the list that Firefox is using for its shading behaviour in displaying URLs.

Safari, by the way, does not use this list. These URLs are displayed identically in Safari, the phisherman’s friend:

  • example.com
  • example.github.io
  • github.example.com

Whereas Firefox displays them with the eTLD+1 shaded darker:

  • example.com (highlighted in full)
  • example.github.io (highlighted in full)
  • github.example.com (only the example.com part highlighted)

I learned all this from Jake on a recent edition of HTTP 203. Nicolas Hoizey has written a nice little summary.

Jake acknowledges that what Apple is doing is suboptimal, what Firefox is doing is good, and then puts forward an idea for what Chrome could do. (But please note that this is Jake’s personal opinion; not an official proposal from the Chrome team.)

There’s some prior art here. It used to be that, if your SSL certificate included extended validation, the name would be shown in green next to the padlock symbol. So while my website—which uses regular SSL from Let’s Encrypt—would just have a padlock, Medium—which uses EV SSL—would have a padlock and the text “A Medium Corporation”.

Extended validation wasn’t quite the bulletproof verification it was cracked up to be. So browsers don’t use that interface pattern any more.

Jake suggests repurposing this pattern for all URLs. Pull out the important bit—eTLD+1—and show it next to the padlock.

Screenshots of @JaffaTheCake’s idea for separating out the eTLD+1 part of a URL in a browser’s address bar.

I like this. The full URL is still displayed. This proposal is more of an incremental change. An enhancement that is applied progressively, if you will.

I also like that it builds on existing interface patterns—Firefox’s URL treatment and the deprecated treatment of EV certs. In fact, I think the first step for Chrome should be to match Firefox’s current behaviour, and then go further with something like Jake’s proposal.

This kind of gradual change was exactly what Chrome did with displaying https and http domains.

Chrome treatment for HTTPS pages.

Jake mentions this in the video:

We’ve already seen that you have to take small steps here, like we did with the https change.

There’s a fascinating episode of the Freakonomics podcast called In Praise of Incrementalism. I’ve huffduffed it.

I’m a great believer in the HTML design principle, Evolution Not Revolution:

It is better to evolve an existing design rather than throwing it away.

I’d love to see Chrome take the first steps to Jake’s proposal by following Firefox’s lead.

Then again, I’d love it if Chrome followed Firefox’s lead in implementing subgrid.

Thursday, February 6th, 2020

Hydration

As you may have noticed, I’m a fan of progressive enhancement.

It’s not cool. It’s often at odds with “modern” web development, so I end up looking like an old man yelling at a cloud to get off my lawn. Or something.

At its heart though, progressive enhancement seems fairly uncontroversial and inoffensive to me. It’s an approach. A mindset. Here’s how I describe it in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Progressive enhancement makes use of the principle of least power:

Choose the least powerful language suitable for a given purpose.

That’s step two of the three-step process. But the third step is vital.

I think a lot of the hostility towards progressive enhancement comes from a misunderstanding of that three-step process, perhaps thinking that it stops at step two. I’m sure that some have interpreted progressive enhancement as preventing developers from using the latest and greatest technology. Nothing could be further from the truth!

Taking a layered approach to building on the web gives you permission to try cutting-edge JavaScript APIs, regardless of how many or how few browsers currently implement them.

The most common misunderstanding of progressive enhancement is that it’s inherently about JavaScript. That’s not true. You can apply progressive enhancement at every step of front-end development: HTML, CSS, and JavaScript.

But because of JavaScript’s strict error-handling model (at least compared to HTML and CSS), it’s in the JavaScript layer that the lack of a progressive enhancement mindset is most often felt.

That’s why I was saddened by the rise of frameworks and mindsets that assume the availability of JavaScript. Single page apps generally follow this assumption. Everything is delivered via JavaScript: content, markup, styles, and behaviour.

This leads to a terrible situation for performance. The user is left staring at a blank screen, waiting for something—anything!—to appear. Browsers are optimised to stream HTML as soon as they can. Delivering your content via JavaScript rather than HTML means you’re not taking advantage of that optimisation. Your users suffer.

But I was very heartened when I saw the pendulum start to swing back the other way a bit…

Let’s say you’re using a JavaScript framework like React. But the reason you’re using it isn’t because you’re doing anything particularly complex in the browser involving state management. You might be using React because you really like the way it encourages modularity and componentisation.

A few years ago, making a single page app was pretty much the only way you could use React. For you as a developer to experience the benefits of modularity and componentisation, users had to pay the price in the payload (and fragility) of client-side JavaScript.

That’s no longer the case. Now that we can run JavaScript on the server, it’s possible to build in a modular, componentised way and still use progressive enhancement.

When I first heard about Gatsby and Next.js, I thought that was the selling point. Run React on the server; send pre-generated HTML down the wire to the user; then enhance with client-side JavaScript.

But that’s not exactly how it works. The pre-generated HTML isn’t functional. It still needs a bucketload of JavaScript before it can do anything. The actual process is: Run React on the server; send pre-generated HTML down the wire to the user; then send everything again but this time in JavaScript, bundled with the entire React library.

This leads to a situation for users that’s almost worse than before. Instead of staring at a blank screen, now they get HTML lickety-split—excellent! But if they try to interact with what’s on screen, they’ll find that nothing is working yet. Even worse, once the JavaScript is delivered, and is being parsed, they probably can’t even scroll—their device is too busy interpreting all that JavaScript. Your users suffer.

All your content is sent twice. First HTML is sent from the server. These days this is called “server-side rendering”, even though for decades the technical term was “serving a web page” (I’m pretty sure the rendering part happens in a browser). Then a JavaScript library—plus all your bespoke JavaScript—is loaded. Then all your content is loaded again as JSON.

So you’ve got a facade of an interface that you can’t actually interact with until a deluge of JavaScript has been loaded, parsed and executed. The term used for this stage of the process is “hydration”, which makes it sound more like a relaxing treatment from Gwyneth Paltrow than the horrible user experience it is.

The idea is that subsequent navigations—which will happen with Ajax—should be snappy. But the price has already been paid by then. The initial loading experience is jagged and frustrating.

Don’t get me wrong: server-side rendering is great …if what you’re sending from the server is functional. It’s the combination of hollow HTML sent from the server, followed by a huge browser-freezing dump of JavaScript that is an anti-pattern.

This use of server-side rendering followed by hydration feels like progressive enhancement, because it separates out the delivery of markup and scripts. But it’s missing the mindset.

The layered approach of progressive enhancement echoes the separation of concerns in the front-end stack: HTML, CSS, and JavaScript—each layer expressing more power. But while these concepts are related, they’re not interchangeable. Separating out the layers of your tech stack isn’t necessarily progressive enhancement. If you have some HTML that relies on JavaScript to be useful, then there’s no benefit in separating that HTML into a separate payload. The HTML that you initially send down the wire needs to be functional (at least at a basic level) before the JavaScript arrives.

I was a little disappointed to see Kyle Simpson—who I admire greatly—conflate separation of concerns with progressive enhancement in his talk from JSCamp 2019:

This content is here. I can see it, and it’s even styled. But I can’t click on the damn button because nothing has loaded in the JavaScript layer yet.

Anybody experienced that where you’ve been on a web page and it’s not really fully functional yet? I can see something but I can’t actually make any usage of it yet.

These are all things that cropped out of our thought process that said: “Let’s build the web in layers. Let’s deliver it progressively in layers. Because that’s morally right. We call this progressive enhancement. And let’s not worry too much about all these potential user experience flaws that may happen.”

That’s a spot-on description of server-side rendering and hydration, but it’s a gross mischaracterisation of progressive enhancement.

That button that requires JavaScript to work? That should’ve been generated with JavaScript. (For example, if you’re building a complex web app, consider sending a read-only view down the wire in HTML—then add any interactive interface elements with JavaScript in the browser.)

If people are equating progressive enhancement with thoughtless server-side rendering and hydration, then I can see why they’d be hostile towards it.

Users would be better served with unprogressive non-enhancement:

You take some structured content, which follows the vertical flow of the document in a way that everyone understands.

Which people traverse easily by either dragging their scroll bar with their mouse, or operating the keyboard using the up and down keys, or using the spacebar.

Or if they’re using a touch device, simply flicking backwards and forwards in that easy way that we’ve all become used to. What you do is you take that, and you fucking well leave it alone.

Alas, that’s not what tools like Gatsby offer. The latest post on their blog is called Why Gatsby is better with JavaScript:

But what about sites or pages where there is no client-side interactivity? Even for those pages, Gatsby offers performance benefits by including JavaScript.

I beg to differ.

(By the way, that same blog post also initially tried to equate the performance hit of client-side JavaScript with the performance hit of images. Andy explains why that’s disingenuous.)

Hope is on the horizon for React in the form of partial hydration. I sincerely hope that it will become the default way of balancing server-side rendering with just-in-time client-side interaction.

The situation we have now is the worst of both worlds: server-side rendering followed by a tsunami of hydration. It has a whiff of progressive enhancement to it (because there’s a cosmetic separation of concerns) but it has none of the user benefits.

Wednesday, February 5th, 2020

Design systems roundup

When I started writing a post about architects, gardeners, and design systems, it was going to be a quick follow-up to my post about web standards, dictionaries, and design systems. I had spotted an interesting metaphor in one of Frank’s posts, and I thought it was worth jotting down.

But after making that connection, I kept writing. I wanted to point out the fetishism we have for creation over curation; building over maintenance.

Then the post took a bit of a dark turn. I wrote about how the most commonly cited reasons for creating a design system—efficiency and consistency—are the same processes that have led to automation and dehumanisation in the past.

That’s where I left things. Others have picked up the baton.

Dave wrote a post called The Web is Industrialized and I helped industrialize it. What I said resonated with him:

This kills me, but it’s true. We’ve industrialized design and are relegated to squeezing efficiencies out of it through our design systems. All CSS changes must now have a business value and user story ticket attached to it. We operate more like Taylor and his stopwatch and Gantt and his charts, maximizing effort and impact rather than focusing on the human aspects of product development.

But he also points out the many benefits of systematising:

At the same time, I have seen first hand how design systems can yield improvements in accessibility, performance, and shared knowledge across a willing team. I’ve seen them illuminate problems in design and code. I’ve seen them speed up design and development allowing teams to build, share, and validate prototypes or A/B tests before undergoing costly guesswork in production. There’s value in these tools, these processes.

Emphasis mine. I think that’s a key phrase: “a willing team.”

Ethan tackles this in his post The design systems we swim in:

A design system that optimizes for consistency relies on compliance: specifically, the people using the system have to comply with the system’s rules, in order to deliver on that promised consistency. And this is why that, as a way of doing something, a design system can be pretty dehumanizing.

But a design system need not be a constraining straitjacket—a means of enforcing consistency by keeping creators from colouring outside the lines. Used well, a design system can be a tool to give creators more freedom:

Does the system you work with allow you to control the process of your work, to make situational decisions? Or is it simply a set of rules you have to follow?

This is key. A design system is the product of an organisation’s culture. That’s something that Brad digs into in his post, Design Systems, Agile, and Industrialization:

I definitely share Jeremy’s concern, but also think it’s important to stress that this isn’t an intrinsic issue with design systems, but rather the organizational culture that exists or gets built up around the design system. There’s a big difference between having smart, reusable patterns at your disposal and creating a dictatorial culture designed to enforce conformity and swat down anyone coloring outside the lines.

Brad makes a very apt comparison with Agile:

Not Agile the idea, but the actual Agile reality so many have to suffer through.

Agile can be a liberating empowering process, when done well. But all too often it’s a quagmire of requirements, burn rates, and story points. We need to make sure that design systems don’t suffer the same fate.

Jeremy’s thoughts on industrialization definitely struck a nerve. Sure, design systems have the ability to dehumanize and that’s something to actively watch out for. But I’d also say to pay close attention to the processes and organizational culture we take part in and contribute to.

Matthew Ström weighed in with a beautifully-written piece called Breaking looms. He provides historical context to the question of automation by relaying the story of the Luddite uprising. Automation may indeed be inevitable, according to his post, but he also provides advice on how to approach design systems today:

We can create ethical systems based in detailed user research. We can insist on environmental impact statements, diversity and inclusion initiatives, and human rights reports. We can write design principles, document dark patterns, and educate our colleagues about accessibility.

Finally, the ouroboros was complete when Frank wrote down his thoughts in a post called Who cares?. For him, the issue of maintenance and care is crucial:

Care applies to the built environment, and especially to digital technology, as social media becomes the weather and the tools we create determine the expectations of work to be done and the economic value of the people who use those tools. A well-made design system created for the right reasons is reparative. One created for the wrong reasons becomes a weapon for displacement. Tools are always beholden to values. This is well-trodden territory.

Well-trodden territory indeed. Back in 2015, Travis Gertz wrote about Design Machines:

Designing better systems and treating our content with respect are two wonderful ideals to strive for, but they can’t happen without institutional change. If we want to design with more expression and variation, we need to change how we work together, build design teams, and forge our tools.

Also on the topic of automation, in 2018 Cameron wrote about Design systems and technological disruption:

Design systems are certainly a new way of thinking about product development, and introduce a different set of tools to the design process, but design systems are not going to lessen the need for designers. They will instead increase the number of products that can be created, and hence increase the demand for designers.

And in 2019, Kaelig wrote:

In order to be fulfilled at work, Marx wrote that workers need “to see themselves in the objects they have created”.

When “improving productivity”, design systems tooling must be mindful of not turning their users’ craft into commodities, alienating them, like cogs in a machine.

All of this is reminding me of Kranzberg’s first law:

Technology is neither good nor bad; nor is it neutral.

I worry that sometimes the messaging around design systems paints them as an inherently positive thing. But design systems won’t fix your problems:

Just stay away from folks who try to convince you that having a design system alone will solve something.

It won’t.

It’s just the beginning.

At the same time, a design system need not be the gateway drug to some kind of post-singularity future where our jobs have been automated away.

As always, it depends.

Remember what Frank said:

A well-made design system created for the right reasons is reparative. One created for the wrong reasons becomes a weapon for displacement.

The reasons for creating a design system matter. Those reasons will probably reflect the values of the company creating the system. At the level of reasons and values, we’ve gone beyond the bounds of the hyperobject of design systems. We’re dealing in the area of design ops—the whys of systemising design.

This is why I’m so wary of selling the benefits of design systems in terms of consistency and efficiency. Those are obviously tempting money-saving benefits, but followed to their conclusion, they lead down the dark path of enforced compliance and eventually, automation.

But if the reason you create a design system is to empower people to be more creative, then say that loud and proud! I know that creativity, autonomy and empowerment is a tougher package to sell than consistency and efficiency, but I think it’s a battle worth fighting.

Design systems are neither good nor bad (nor are they neutral).

Addendum: I’d just like to say how invigorating it’s been to read the responses from Dave, Ethan, Brad, Matthew, and Frank …all of them writing on their own websites. Rumours of the demise of blogging may have been greatly exaggerated.

Monday, February 3rd, 2020

Three books

Lurking: How a Person Became a User by Joanne McNeil will be published on February 25th:

In Lurking, Joanne McNeil digs deep and identifies the primary (if sometimes contradictory) concerns of people online: searching, safety, privacy, identity, community, anonymity, and visibility. She charts what it is that brought people online and what keeps us here even as the social equations of digital life—what we’re made to trade, knowingly or otherwise, for the benefits of the internet—have shifted radically beneath us. It is a story we are accustomed to hearing as tales of entrepreneurs and visionaries and dynamic and powerful corporations, but there is a more profound, intimate story that hasn’t yet been told.

Enemy of All Mankind: A True Story of Piracy, Power, and History’s First Global Manhunt by Steven Johnson will be published on May 12th:

Henry Every was the seventeenth century’s most notorious pirate. The press published wildly popular—and wildly inaccurate—reports of his nefarious adventures. The British government offered enormous bounties for his capture, alive or (preferably) dead. But Steven Johnson argues that Every’s most lasting legacy was his inadvertent triggering of a major shift in the global economy. Enemy of All Mankind focuses on one key event—the attack on an Indian treasure ship by Every and his crew—and its surprising repercussions across time and space. It’s the gripping tale of one of the most lucrative crimes in history, the first international manhunt, and the trial of the seventeenth century.

How To Future: Leading and Sense-Making in an Age of Hyperchange by Scott Smith with Madeline Ashby will be published on July 3rd:

Successfully designing for a future requires a picture of that future—a useful map of the horizons ahead that can be used for wayfinding, identifying emerging opportunities or risks. Accurately developing this map means investing in better awareness of signals about the future, understanding trends in context, developing rich insights about what those signals indicate—relative to companies, people, citizens or stakeholders. It also means cultivating ways to share these future insights through tangible yet provocative scenarios or stories, turn these into prototypes, or connect them to strategies.

Friday, January 31st, 2020

Union

The nation I live in has decided to impose sanctions on itself. The government has yet to figure out the exact details. It won’t be good.

Today marks the day that the ironically-named United Kingdom of Great Britain and Northern Ireland officially leaves the European Union. Nothing will change on a day to day basis (until the end of this year, when the shit really hits the fan).

Looking back on 2019, I had the pleasure and privilege of visiting places that will remain in the European Union: Hamburg, Düsseldorf, Utrecht, Miltown Malbay, Kinsale, Madrid, Amsterdam, Paris, Frankfurt, Antwerp, Berlin, Vienna, Cobh.

Maybe I should do a farewell tour in 2020.

Grüße aus Hamburg!

Auf Wiedersehen, Düsseldorf!

Going for a stroll in Utrecht at dusk.

The road to Miltown.

Checked in at Kinsale Harbour. with Jessica

Checked in at La Casa del Bacalao. Tapas! — with Jessica

Hello Amsterdam!

Indoor aviation.

Guten Tag, Frankfurt.

Catch you later, Antwerp.

The Ballardian exterior of Tempelhof.

Losing my religion.

Boats in Cobh.

Wednesday, January 29th, 2020

Architects, gardeners, and design systems

I compared design systems to dictionaries. My point was that design systems—like language—can be approached in a prescriptivist or descriptivist manner. And I favour descriptivism.

A prescriptive approach might give you a beautiful design system, but if it doesn’t reflect the actual product, it’s fiction. A descriptive approach might give a design system with imperfections and annoying flaws, but at least it will be accurate.

I think it’s more important for a design system to be accurate than beautiful.

Meanwhile, over on Frank’s website, he’s been documenting the process of its (re)design. He made an interesting comparison in his post Redesign: Gardening vs. Architecture. He talks about two styles of writing:

In interviews, Martin has compared himself to a gardener—forgoing detailed outlines and overly planned plot points to favor ideas and opportunities that spring up in the writing process. You see what grows as you write, then tend to it, nurture it. Each tendrilly digression may turn into the next big branch of your story. This feels right: good things grow, and an important quality of growth is that the significant moments are often unanticipated.

On the other side of writing is who I’ll call “the architect”—one who writes detailed outlines for plots and believes in the necessity of overt structure. It puts stock in planning and foresight. Architectural writing favors divisions and subdivisions, then subdivisions of the subdivisions. It depends on people’s ability to move forward by breaking big things down into smaller things with increasing detail.

It’s not just me, right? It all sounds very design systemsy, doesn’t it?

This is a false dichotomy, of course, but everyone favors one mode of working over the other. It’s a matter of personality, from what I can tell.

Replace “personality” with “company culture” and I think you’ve got an interesting analysis of the two different approaches to design systems. Descriptivist gardening and prescriptivist architecture.

Frank also says something that I think resonates with the evergreen debate about whether design systems stifle creativity:

It can be hard to stay interested if it feels like you’re painting by numbers, even if they are your own numbers.

I think Frank’s comparison—gardeners and architects—also speaks to something bigger than design systems…

I gave a talk last year called Building. You can watch it, listen to it, or read the transcript if you like. The talk is about language (sort of). There’s nothing about prescriptivism or descriptivism in there, but there’s lots about metaphors. I dive into the metaphors we use to describe our work and ourselves: builders, engineers, and architects.

It’s rare to find job titles like software gardener, or information librarian (even though they would be just as valid as other terms we’ve made up like software engineer or information architect). Outside of the context of open source projects, we don’t talk much about maintenance. We’re much more likely to talk about making.

Back in 2015, Debbie Chachra wrote a piece in the Atlantic Monthly called Why I Am Not a Maker:

When tech culture only celebrates creation, it risks ignoring those who teach, criticize, and take care of others.

Anyone who’s spent any time working on design systems can tell you there’s no shortage of enthusiasm for architecture and making—“let’s build a library of components!”

There’s less enthusiasm for gardening, care, communication and maintenance. But that’s where the really important work happens.

In her article, Debbie cites Ethan’s touchstone:

In her book The Real World of Technology, the metallurgist Ursula Franklin contrasts prescriptive technologies, where many individuals produce components of the whole (think about Adam Smith’s pin factory), with holistic technologies, where the creator controls and understands the process from start to finish.

(Emphasis mine.)

In that light, design systems take their place in a long history of dehumanising approaches to manufacturing like Taylorism. The priorities of “scientific management” are the same as those of design systems—increasing efficiency and enforcing consistency.

Humans aren’t always great at efficiency and consistency, but machines are. Automation increases efficiency and consistency, sacrificing messy humanity along the way:

Machine with the strength of a hundred men
Can’t feed and clothe my children.

Historically, we’ve seen automation in terms of physical labour—dock workers, factory workers, truck drivers. As far as I know, none of those workers participated in the creation of their mechanical successors. But when it comes to our work on the web, we’re positively eager to create the systems to make us redundant.

The usual response to this is the one given to other examples of automation: you’ll be free to spend your time in a more meaningful way. With a design system in place, you’ll be freed from the drudgery of manual labour. Instead, you can spend your time doing more important work …like maintaining the design system.

You’ve heard the joke about the factory of the future, right? The factory of the future will have just two living things in it: one worker and one dog. The worker is there to feed the dog. The dog is there to bite the worker if he touches anything.

Good joke.

Everybody laugh.

Roll on snare drum.

Curtains.

Thursday, January 23rd, 2020

Web standards, dictionaries, and design systems

Years ago, the world of web standards was split. Two groups—the W3C and the WHATWG—were working on the next iteration of HTML. They had different ideas about the nature of standardisation.

Broadly speaking, the W3C followed a specification-first approach. Figure out what should be implemented first and foremost. From this perspective, specs can be seen as blueprints for browsers to work from.

The WHATWG, by contrast, were implementation led. The way they saw it, there was no point specifying something if browsers weren’t going to implement it. Instead, specs are there to document existing behaviour in browsers.

I’m over-generalising somewhat in my descriptions there, but the point is that there was an ideological difference of opinion around what standards bodies should do.

This always reminded me of a similar ideological conflict when it comes to language usage.

Language prescriptivists attempt to define rules about what’s right or wrong in a language. Rules like “never end a sentence with a preposition.” Prescriptivists are generally fighting a losing battle and spend most of their time bemoaning the decline of their language because people aren’t following the rules.

Language descriptivists work the exact opposite way. They see their job as documenting existing language usage instead of defining it. Lexicographers—like Merriam-Webster or the Oxford English Dictionary—receive complaints from angry prescriptivists when dictionaries document usage like “literally” meaning “figuratively”.

Dictionaries are descriptive, not prescriptive.

I’ve seen the prescriptive/descriptive divide somewhere else too. I’ve seen it in the world of design systems.

Jordan Moore talks about intentional and emergent design systems:

There appears to be two competing approaches in designing design systems.

An intentional design system. The flavour and framework may vary, but the approach generally consists of: design system first → design/build solutions.

An emergent design system. This approach is much closer to the user needs end of the scale by beginning with creative solutions before deriving patterns and systems (i.e the system emerges from real, coded scenarios).

An intentional design system is prescriptive. An emergent design system is descriptive.

I think we can learn from the worlds of web standards and dictionaries here. A prescriptive approach might give you a beautiful design system, but if it doesn’t reflect the actual product, it’s fiction. A descriptive approach might give a design system with imperfections and annoying flaws, but at least it will be accurate.

I think it’s more important for a design system to be accurate than beautiful.

As Matthew Ström says, you should start with the design system you already have:

Instead of drawing a whole new set of components, start with the components you already have in production. Document them meticulously. Create a single source of truth for design, warts and all.

Monday, January 20th, 2020

Unity

It’s official. Microsoft’s Edge browser is running on the Blink rendering engine and it’s available now.

Just over a year ago, I wrote about my feelings on this decision:

I’m sure the decision makes sound business sense for Microsoft, but it’s not good for the health of the web.

The importance of browser engine diversity is beautifully illustrated (literally) in Rachel’s The Ecological Impact of Browser Diversity.

But I was chatting to Amber the other day, and I mentioned how I can see the theoretical justification for Microsoft’s decision …even if I don’t quite buy it myself.

Picture, if you will, something I’ll call the bar of unity. It’s a measurement of how much collaboration is happening between browser makers.

In the early days of the web, the bar of unity was very low indeed. The two main browser vendors—Microsoft and Netscape—not only weren’t collaborating, they were actively splintering the languages of the web. One of them would invent a new HTML element, and the other would invent a completely different element to do the same thing (remember abbr and acronym?). One of them would come up with one model for interacting with a document through JavaScript, and the other would come up with a completely different model to do the same thing (remember document.all and document.layers?).

There wasn’t enough collaboration. Our collective anger at this situation led directly to the creation of The Web Standards Project.

Eventually, those companies did start collaborating on standards at the W3C. The bar of unity was raised.

This has been the situation for most of the web’s history. Different browser makers agreed on standards, but went their own separate ways on implementation. That’s where they drew the line.

Now that line is being redrawn. The bar of unity is being raised. Now, a number of separate browser makers—Google, Samsung, Microsoft—not only collaborate on standards but also on implementation, sharing a codebase.

The bar of unity isn’t right at the top. Browsers can still differentiate in their user interfaces. Edge, for example, can—and does—offer very sensible defaults for blocking trackers. That’s much harder for Chrome to do, given that Google are amongst the worst offenders.

So these browsers are still competing, but the competition is no longer happening at the level of the rendering engine.

I can see how this looks like a positive development. In fact, from this point of view, Mozilla are getting in the way of progress by having a separate codebase (yes, this is a genuinely-held opinion by some people).

On the face of it, more unity sounds good. It sounds like more collaboration. More cooperation.

But then I think of situations where complete unity isn’t necessarily a good thing. Take political systems, for example. If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!

There’s a sweet spot somewhere in between where there’s a base level of agreement and cooperation, but there’s also plenty of room for disagreement and opposition. Right now, the browser landscape is just about still in that sweet spot. It’s like a two-party system where one party has a crushing majority. Checks and balances exist, but they’re in peril.

Firefox is one of the last remaining representatives offering an alternative. The least we can do is support it.

Thursday, January 16th, 2020

Indie Web Camp London 2020

Do you have plans for the weekend of March 14th and 15th?

If you live anywhere near London, might I suggest that you sign up for Indie Web Camp.

Cheuk and Ana are putting it together with assistance from Calum. As always, there will be one day of Barcamp-style discussions, followed by a fun hands-on day of making.

If you’re wondering whether this is for you, ask yourself if any of these situations apply:

  • You don’t have your own website yet, but you want one.
  • You have your own website, but you need some help with it.
  • You have some ideas about the independent web.
  • You have your own website but you never seem to find the time to update it.
  • You’d like to help other people with their websites.

If you recognise yourself in any one of those scenarios, then you should definitely come along to Indie Web Camp London 2020!