Journal tags: tools

Declarative design systems

When I wrote about the idea of declarative design it really resonated with a lot of people.

I think that there’s a general feeling of frustration with the imperative approach to designing and developing—attempting to specify everything exactly up front. It just doesn’t scale. As Jason put it, the traditional web design process is fundamentally broken:

This is the worst of all worlds—a waterfall process creating dozens of artifacts, none of which accurately capture how the design will look and behave in the browser.

In theory, design systems could help overcome this problem; spend a lot of time up front getting a component to be correct and then it can be deployed quickly in all sorts of situations. But the word “correct” is doing a lot of work there.

If you’re approaching a design system with an imperative mindset then “correct” means “exact.” With this approach, precision is seen as valuable: precise spacing, precise numbers, precise pixels.

But if you’re approaching a design system with a declarative mindset, then “correct” means “resilient.” With this approach, flexibility is seen as valuable: flexible spacing, flexible ranges, flexible outputs.
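To make the difference concrete, here’s a sketch (the selector and values are invented for illustration). The first rule insists on one exact output; the second declares an acceptable range and lets the browser resolve the final value:

/* imperative: one precise output */
.card {
    padding: 24px;
}

/* declarative: a flexible range; the browser picks the final value */
.card {
    padding: clamp(1rem, 3vw, 2rem);
}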

These are two fundamentally different design approaches and yet the results of both would be described as a design system. The term “design system” is tricky enough to define as it is. This is one more layer of potential misunderstanding: one person says “design system” and means a collection of very precise, controlled, and exact components; another person says “design system” and means a predefined set of boundary conditions that can be used to generate components.

Personally, I think the word “system” is the important part of a design system. But all too often design systems are really collections rather than systems: a collection of pre-generated components rather than a system for generating components.

The systematic approach is at the heart of declarative design; setting up the rules and ratios in advance but leaving the detail of the final implementation to the browser at runtime.

Let me give an example of what I think is a declarative approach to a component. I’ll use the “hello world” of design system components—the humble button.

Two years ago I wrote about programming CSS to perform Sass colour functions. I described how CSS features like custom properties and calc() can be used to recreate mixins like darken() and lighten().

I showed some CSS for declaring the different colour elements of a button using hue, saturation and lightness encoded as custom properties. Here’s a CodePen with some examples of different buttons.

See the Pen Button colours by Jeremy Keith (@adactio) on CodePen.

If these buttons were in an imperative design system, then the output would be the important part. The design system would supply the code needed to make those buttons exactly. If you need a different button, it would have to be added to the design system as a variation.

But in a declarative design system, the output isn’t as important as the underlying ruleset. In this case, there are rules like:

For the hover state of a button, the lightness of its background colour should dip by 5%.

That ends up encoded in CSS like this:

button:hover {
    background-color: hsl(
        var(--button-colour-hue),
        var(--button-colour-saturation),
        calc(var(--button-colour-lightness) - 5%)
    );
}
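For that rule to do anything, the button’s colour channels need to be declared as custom properties somewhere. Here’s a minimal sketch (the values are illustrative, not canonical; the property names match the hover rule above):

button {
    /* illustrative starting values for the colour channels */
    --button-colour-hue: 210;
    --button-colour-saturation: 60%;
    --button-colour-lightness: 50%;
    background-color: hsl(
        var(--button-colour-hue),
        var(--button-colour-saturation),
        var(--button-colour-lightness)
    );
}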

In this kind of design system you can look at some examples to see the results of this rule in action. But those outputs are illustrative. They’re not the final word. If you don’t see the exact button you want, that’s okay; you’ve got the information you need to generate what you need and still stay within the pre-defined rules about, say, the hover state of buttons.

This seems like a more scalable approach to me. It also seems more empowering.

One of the hardest parts of embedding a design system within an organisation is getting people to adopt it. In my experience, nobody likes adopting something that’s being delivered from on high as a pre-made set of components. It’s meant to be helpful: “here, use these pre-made components to save time not reinventing the wheel”, but it can come across as overly controlling: “we don’t trust you to exercise good judgement so stick to these pre-made components.”

The declarative approach is less controlling: “here are pre-defined rules and guidelines to help you make components.” But this lack of precision comes at a cost. The people using the design system need to have the mindset—and the ability—to create the components they need from the systematic rules they’ve been provided.

My gut feeling is that the imperative mindset is a good match for most of today’s graphic design tools like Figma or Sketch. Those tools deal with precise numbers rather than ranges and rules.

The declarative mindset, on the other hand, increasingly feels like a good match for CSS. The language has evolved to allow rules to be set up through custom properties, calc(), clamp(), minmax(), and so on.
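For instance, here’s a sketch of the kind of rule-based layout those features enable (the class name is invented). The stylesheet declares the constraints and the browser works out how many columns fit:

/* each column is at least 12rem wide; the browser decides the count */
.grid {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(12rem, 1fr));
    gap: 1rem;
}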

So, as always, there isn’t a right or wrong approach here. It all comes down to what’s most suitable for your organisation.

If your designers and developers have an imperative mindset and Figma files are considered the source of truth, then they would be better served by an imperative design system.

But if you’re lucky enough to have a team of design engineers that think in terms of HTML and CSS, then a declarative design system will be a force multiplier. A bicycle for the design engineering mind.

Re-evaluating technology

There’s a lot of emphasis put on decision-making: making sure you’re making the right decision; evaluating all the right factors before making a decision. But we rarely talk about revisiting decisions.

I think perhaps there’s a human tendency to treat past decisions as fixed. That’s certainly true when it comes to evaluating technology.

I’ve been guilty of this. I remember once chatting with Mark about something written in PHP—probably something I had written—and I made some remark to the effect of “I know PHP isn’t a great language…” Mark rightly called me on that. The language wasn’t great in the past but it has come on in leaps and bounds. My perception of the language, however, had not updated accordingly.

I try to keep that lesson in mind whenever I’m thinking about languages, tools and frameworks that I’ve investigated in the past but haven’t revisited in a while.

Andy talks about this as the tech tool carousel:

The carousel is like one of those on a game show that shows the prizes that can be won. The tool will sit on there until I think it’s gone through enough maturing to actually be a viable tool for me, the team I’m working with and the clients I’m working for.

Crucially a carousel is circular: tools and technologies come back around for re-evaluation. It’s all too easy to treat technologies as being on a one-way conveyor belt—once they’ve passed in front of your eyes and you’ve weighed them up, that’s it; you never return to re-evaluate your decision.

This doesn’t need to be a never-ending process. At some point it becomes clear that some technologies really aren’t worth returning to:

It’s a really useful strategy because some tools stay on the carousel and then I take them off because they did in fact, turn out to be useless after all.

See, for example, anything related to cryptobollocks. It’s been well over a decade and blockchains remain a solution in search of problems. As Molly White put it, it’s not still the early days:

How long can it possibly be “early days”? How long do we need to wait before someone comes up with an actual application of blockchain technologies that isn’t a transparent attempt to retroactively justify a technology that is inefficient in every sense of the word? How much pollution must we justify pumping into our atmosphere while we wait to get out of the “early days” of proof-of-work blockchains?

Back to the web (the actual un-numbered World Wide Web)…

Nolan Lawson wrote an insightful article recently about how he senses that the balance has shifted away from single page apps. I’ve been sensing the same shift in the zeitgeist. That said, both Nolan and I keep an eye on how browsers are evolving and getting better all the time. If you weren’t aware of changes over the past few years, it would be easy to still think that single page apps offer some unique advantages that in fact no longer hold true. As Nolan wrote in a follow-up post:

My main point was: if the only reason you’re using an SPA is because “it makes navigations faster,” then maybe it’s time to re-evaluate that.

For another example, see this recent XKCD cartoon:

“You look around one day and realize the things you assumed were immutable constants of the universe have changed. The foundations of our reality are shifting beneath our feet. We live in a house built on sand.”

The day I discovered that Apple Maps is kind of good now

Perhaps the best example of a technology that warrants regular re-evaluation is the World Wide Web itself. Over the course of its existence it has seemingly been bettered by other, more proprietary technologies.

Flash was better than the web. It had vector graphics, smooth animations, and streaming video when the web had nothing like it. But over time, the web caught up. Flash was the hare. The World Wide Web was the tortoise.

In more recent memory, the role of the hare has been played by native apps.

I remember talking to someone on the Twitter design team who was designing and building for multiple platforms. They were frustrated by the web. It just didn’t feel as fully-featured as iOS or Android. Their frustration was entirely justified …at the time. I wonder if they’ve revisited their judgement since then though.

In recent years in particular it feels like the web has come on in leaps and bounds: service workers, native JavaScript APIs, and an astonishing boost in what you can do with CSS. Most important of all, the interoperability between browsers is getting better and better. Universal support for new web standards arrives at a faster rate than ever before.

But developers remain suspicious, still preferring to trust third-party libraries over native browser features. They made a decision about those libraries in the past. They evaluated the state of browser support in the past. I wish they would re-evaluate those decisions.

Alas, inertia is a very powerful force. Sticking with a past decision—even if it’s no longer the best choice—is easier than putting in the effort to re-evaluate everything again.

What’s the phrase? “Strong opinions, weakly held.” We’re very good at the first part and pretty bad at the second.

Just the other day I was chatting with one of my colleagues about an online service that’s available on the web and also as a native app. He was showing me the native app on his phone and said it’s not a great app.

“Why don’t you add the website to your phone?” I asked.

“You know,” he said. “The website’s going to be slow.”

He hadn’t tested this. But years of dealing with crappy websites on his phone had trained him to think of the web as being inherently worse than native apps (even though there was nothing this particular service was doing that required any native functionality).

It has become a truism now. Native apps are better than the web.

And you know what? Once upon a time, that would’ve been true. But it hasn’t been true for quite some time …at least from a technical perspective.

But even if the technologies in browsers have reached parity with native apps, that won’t matter unless we can convince people to revisit their previously-formed beliefs.

The technologies are the easy bit. Getting people to re-evaluate their opinions about technologies? That’s the hard part.

Declarative design

I feel like in the past few years there’s been a number of web design approaches that share a similar mindset. Intrinsic web design by Jen; Every Layout by Andy and Heydon; Utopia by Trys and James.

To some extent, their strengths lie in technological advances in CSS: flexbox, grid, calc, and so on. But more importantly, they share an approach. They all focus on creating the right inputs rather than trying to control every possible output. Leave the final calculations for those outputs to the browser—that’s what computers are good at.

As Andy puts it:

Be the browser’s mentor, not its micromanager.

Reflecting on Utopia’s approach, Jim Nielsen wrote:

We say CSS is “declarative”, but the more and more I write breakpoints to accommodate all the different ways a design can change across the viewport spectrum, the more I feel like I’m writing imperative code. At what quantity does a set of declarative rules begin to look like imperative instructions?

In contrast, one of the principles of Utopia is to be declarative and “describe what is to be done rather than command how to do it”. This approach declares a set of rules such that you could pick any viewport width and, using a formula, derive what the type size and spacing would be at that size.
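Here’s a sketch of the kind of rule that approach produces (the numbers are invented rather than Utopia’s actual output). One declaration, and the browser derives the type size at any viewport width between the two bounds:

/* fluid type: a single declarative rule instead of breakpoints */
:root {
    font-size: clamp(1rem, 0.818rem + 0.909vw, 1.5rem);
}

That particular formula scales the type from 1rem at a 320 pixel wide viewport up to 1.5rem at 1200 pixels.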

Declarative! Maybe that’s the word I’ve been looking for to describe the commonalities between Utopia, Every Layout, and intrinsic web design.

So if declarative design is a thing, does that also mean imperative design is also a thing? And what might the tools and technologies for imperative design look like?

I think that Tailwind might be a good example of an imperative design tool. It’s only about the specific outputs. Systematic thinking is actively discouraged; instead you say exactly what you want the final pixels on the screen to be.

I’m not saying that declarative tools—like Utopia—are right and that imperative tools—like Tailwind—are wrong. As always, it depends. In this case, it depends on the mindset you have.

If you agree with this statement, you should probably use an imperative design tool:

CSS is broken and I want my tools to work around the way CSS has been designed.

But if you agree with this statement, you should probably use a declarative design tool:

CSS is awesome and I want my tools to amplify the way that CSS has been designed.

If you agree with the first statement but you then try using a declarative tool like Utopia or Every Layout, you will probably have a bad time. You’ll probably hate it. You may declare the tool to be “bad”.

Likewise if you agree with the second statement but you then try using an imperative tool like Tailwind, you will probably have a bad time. You’ll probably hate it. You may declare the tool to be “bad”.

It all depends on whether the philosophy behind the tool matches your own philosophy. If those philosophies match up, then using the tool will be productive and that tool will act as an amplifier—a bicycle for the mind. But if the philosophy of the tool doesn’t match your own philosophy, then you will be fighting the tool at every step—it will slow you down.

Knowing that this spectrum exists between declarative tools and imperative tools can help you when you’re evaluating technology. You can assess whether a web design tool is being marketed on the premise that CSS is broken or on the premise that CSS is awesome.

I wonder whether your path into web design and development might also factor into which end of the spectrum you’d identify with. Like, if your background is in declarative languages like HTML and CSS, maybe intrinsic web design really resonates. But if your background is in imperative languages like JavaScript, perhaps Tailwind makes more sense to you.

Again, there’s no right or wrong here. This is about matching the right tool to the right mindset.

Personally, the declarative design approach fits me like a glove. It feels like it’s in the tradition of John’s A Dao Of Web Design or Ethan’s Responsive Web Design—ways of working with the grain of the web.

Inertia

When I’ve spoken in the past about evaluating technology, I’ve mentioned two categories of tools for web development. I still don’t know quite what to call these categories. Internal and external? Developer-facing and user-facing?

The first category covers things like build tools, version control, transpilers, pre-processors, and linters. These are tools that live on your machine—or on the server—taking what you’ve written and transforming it into the raw materials of the web: HTML, CSS, and JavaScript.

The second category of tools are those that are made of the raw materials of the web: CSS frameworks and JavaScript libraries.

I think the criteria for evaluating these different kinds of tools should be very different.

For the first category, developer-facing tools, use whatever you want. Use whatever makes sense to you and your team. Use whatever’s effective for you.

But for the second category, user-facing tools, that attitude is harmful. If you make users download a CSS or JavaScript framework in order to benefit your workflow, then you’re making users pay a tax for your developer convenience. Instead, I firmly believe that user-facing tools should provide some direct benefit to end users.

When I’ve asked developers in the past why they’ve chosen to use a particular JavaScript framework, they’ve been able to give me plenty of good answers. But all of those answers involved the benefit to their developer workflow—efficiency, consistency, and so on. That would be absolutely fine if we were talking about the first category of tools, developer-facing tools. But those answers don’t hold up for the second category of tools, user-facing tools.

If a user-facing tool is only providing a developer benefit, is there any way to turn it into a developer-facing tool?

That’s very much the philosophy of Svelte. You can compare Svelte to other JavaScript frameworks like React and Vue but you’d be missing the most important aspect of Svelte: it is, by design, in that first category of tools—developer-facing tools:

Svelte takes a different approach from other frontend frameworks by doing as much as it can at the build step—when the code is initially compiled—rather than running client-side. In fact, if you want to get technical, Svelte isn’t really a JavaScript framework at all, as much as it is a compiler.

You install it on your machine, you write your code in Svelte, but what it spits out at the other end is HTML, CSS, and JavaScript. Unlike Vue or React, you don’t ship the library to end users.

In my opinion, this is an excellent design decision.

I know there are ways of getting React to behave more like a category one tool, but it is most definitely not the default behaviour. And default behaviour really, really matters. For React, the default behaviour is to assume all the code you write—and the tool you use to write it—will be sent over the wire to end users. For Svelte, the default behaviour is the exact opposite.

I’m sure you can find a way to get Svelte to send too much JavaScript to end users, but you’d be fighting against the grain of the tool. With React, you have to fight against the grain of the tool in order to not send too much JavaScript to end users.

But much as I love Svelte’s approach, I think it’s got its work cut out for it. It faces a formidable foe: inertia.

If you’re starting a greenfield project and you’re choosing a JavaScript framework, then Svelte is very appealing indeed. But how often do you get to start a greenfield project?

React has become so ubiquitous in the front-end development community that it’s often an unquestioned default choice for every project. It feels like enterprise software at this point. No one ever got fired for choosing React. Whether it’s appropriate or not becomes almost irrelevant. In much the same way that everyone is on Facebook because everyone is on Facebook, everyone uses React because everyone uses React.

That’s one of its biggest selling points to managers. If you’ve settled on React as your framework of choice, then hiring gets a lot easier: “If you want to work here, you need to know React.”

The same logic applies from the other side. If you’re starting out in web development, and you see that so many companies have settled on using React as their framework of choice, then it’s an absolute no-brainer: “if I want to work anywhere, I need to know React.”

This then creates a positive feedback loop. Everyone knows React because everyone is hiring React developers because everyone knows React because everyone is hiring React developers because…

At no point is there time to stop and consider if there’s a tool—like Svelte, for example—that would be less harmful for end users.

This is where I think Astro might have the edge over Svelte.

Astro has the same philosophy as Svelte. It’s a developer-facing tool by default. Have a listen to Drew’s interview with Matthew Phillips:

Astro does not add any JavaScript by default. You can add your own script tags obviously and you can do anything you can do in HTML, but by default, unlike a lot of the other component-based frameworks, we don’t actually add any JavaScript for you unless you specifically tell us to. And I think that’s one thing that we really got right early.

But crucially, unlike Svelte, Astro allows you to use the same syntax as the incumbent, React. So if you’ve learned React—because that’s what you needed to learn to get a job—you don’t have to learn a new syntax in order to use Astro.

I know you probably can’t take an existing React site and convert it to Astro with the flip of a switch, but at least there’s a clear upgrade path.

Astro reminds me of Sass. Specifically, it reminds me of the .scss syntax. You could take any CSS file, rename its file extension from .css to .scss and it was automatically a valid Sass file. You could start using Sass features incrementally. You didn’t have to rewrite all your style sheets.

Sass also has a .sass syntax. If you take a CSS file and rename it with a .sass file extension, it is not going to work. You need to rewrite all your CSS to use the .sass syntax. Some people used the .sass syntax but the overwhelming majority of people used .scss.
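To illustrate (the selector is invented), this is a perfectly ordinary CSS rule, and renaming the file from .css to .scss changes nothing:

/* valid CSS and, untouched, a valid .scss file */
.site-nav a {
    color: hsl(210, 60%, 40%);
}

Whereas the .sass version of the same rule has to be rewritten, swapping braces and semicolons for indentation:

.site-nav a
    color: hsl(210, 60%, 40%)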

I remember talking with Hampton about this and he confirmed the proportions. It was also the reason why one of his creations, Sass, was so popular, but another of his creations, Haml, was not, comparatively speaking—Sass is a superset of CSS but Haml is not a superset of HTML; it’s a completely different syntax.

I’m not saying that Svelte is like Haml and Astro is like Sass. But I do think that Astro has inertia on its side.

Employee experience design on the Clearleft podcast

The second episode of the second season of the Clearleft podcast is out. It’s all about employee experience design.

This topic came out of conversations with Katie. She really enjoys getting stuck into the design challenges of the “backstage” tools that are often neglected. This is an area that Chris has been working in recently too, so I quizzed him on this topic.

They’re both super smart people which makes for a thoroughly enjoyable podcast episode. I usually have more guests on a single episode but it was fun to do a two-hander for once.

The whole thing comes in at just under seventeen minutes and there are some great stories and ideas in there. Have a listen.

And if you’re enjoying listening to the Clearleft podcast as much as I’m enjoying making it, be sure to spread the word wherever you share your recommendations: Twitter, LinkedIn, Slack, your own website, the rooftop.

Kinopio

Cennydd asked for recommendations on Twitter a little while back:

Can anyone recommend an outlining app for macOS? I’m falling out with OmniOutliner. Not Notion, please.

This was my response:

The only outlining tool that makes sense for my brain is https://kinopio.club/

It’s more like a virtual crazy wall than a virtual Dewey decimal system.

I’ve written before about how I prepare a conference talk. The first step involves a sheet of A3 paper:

I used to do this mind-mapping step by opening a text file and dumping my thoughts into it. I told myself that they were in no particular order, but because a text file reads left to right and top to bottom, they are in an order, whether I intended it or not. By using a big sheet of paper, I can genuinely get things down in a disconnected way (and later, I can literally start drawing connections).

Kinopio is like a digital version of that A3 sheet of paper. It doesn’t force any kind of hierarchy on your raw ingredients. You can clump things together, join them up, break them apart, or just dump everything down in one go. That very much suits my approach to preparing something like a talk (or a book). The act of organising all the parts into a single narrative timeline is an important challenge, but it’s one that I like to defer to later. The first task is braindumping.

When I was preparing my talk for An Event Apart Online, I used Kinopio.club to get stuff out of my head. Here’s the initial brain dump. Here are the final slides. You can kind of see the general gist of the slidedeck in the initial brain dump, but I really like that I didn’t have to put anything into a sequential outline.

In some ways, Kinopio is like an anti-outlining tool. It’s scrappy and messy—which is exactly why it works so well for the early part of the process. If I use a tool that feels too high-fidelity too early on, I get a kind of impedance mismatch between the state of the project and the polish of the artifact.

I like that Kinopio feels quite personal. Unlike Google Docs or other more polished tools, the documents you make with this aren’t really for sharing. Still, I thought I’d share my scribblings anyway.

Sass and clamp

CSS got some pretty nifty features recently. There’s the min() and max() functions. If you use them for, say, width you can use one rule where previously you would’ve needed to use two (a width declaration followed by either min-width or max-width). But they can also be applied to font-size! That’s very nifty—we’ve never had min-font-size or max-font-size properties.
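For example (the class name is invented), where you previously needed two declarations:

.sidebar {
    width: 100%;
    max-width: 30em;
}

…you can now write one, and the browser uses whichever value is smaller:

.sidebar {
    width: min(100%, 30em);
}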

There’s also the clamp() function. That allows you to set a minimum size, a default size, and a maximum size. Again, it can be used for lengths, like width, or for font-size.

Over on thesession.org, I’ve had some media queries in place for a while now that would increase the font-size for larger screens. It’s nothing crucial, just a nice-to-have so that on wide screens, the font is bumped up accordingly. I realised I could replace all those media queries with one clamp() statement, thanks to the vw (viewport width) unit:

font-size: clamp(1rem, 1.333vw, 1.5rem);

By default, the font-size is 1.333vw (1.333% of the viewport width), but it will never get smaller than 1rem and it will never get larger than 1.5rem.

That works, but there’s a bit of an issue with using raw vw units like that. If someone is on a wide screen and they try to adjust the font size, nothing will happen. The viewport width doesn’t change when you bump the font size up or down.

The solution is to mix in some kind of unit that does respond to the font size being bumped up or down (like, say, the rem unit). Handily, clamp() allows you to combine units, just like calc(). So I can do this:

font-size: clamp(1rem, 0.5rem + 0.666vw, 1.5rem);

The result is much the same as my previous rule, but now—thanks to the presence of that 0.5rem value—the font size responds to being adjusted by the user.

You could use a full 1rem in that default value:

font-size: clamp(1rem, 1rem + 0.333vw, 1.5rem);

…but if you do that, the minimum size (1rem) will never be reached—the default value will always be larger. So in effect it’s no different than saying:

font-size: min(1rem + 0.333vw, 1.5rem);

I mentioned this to Chris just the other day.

Anyway, I got the result I wanted. I wanted the font size to stay at the browser default size (usually 16 pixels) until the screen was larger than around 1200 pixels. From there, the font size gets gradually bigger, until it hits one and a half times the browser default (which would be 24 pixels if the default size started at 16). I decided to apply it to the :root element (which is html) using percentages:

:root {
  font-size: clamp(100%, 50% + 0.666vw, 150%);
}

(My thinking goes like this: if we take a screen width of 1200 pixels, then 1vw would be 12 pixels: 1200 divided by 100. So for a font size of 16 pixels, that would be 1.333vw. But because I’m combining it with half of the default font size—50% of 16 pixels = 8 pixels—I need to cut the vw value in half as well: 50% of 1.333vw = 0.666vw.)

So I’ve got the CSS rule I want. I dropped it in to the top of my file and…

I got an error.

There was nothing wrong with my CSS. The problem was that I was dropping it into a Sass file (.scss).

Perhaps I am showing my age. Do people even use Sass any more? I hear that post-processors usurped Sass’s dominance (although no-one’s ever been able to explain to me why they’re different to pre-processors like Sass; they both process something you’ve written into something else). Or maybe everyone’s just writing their CSS in JS now. I hear that’s a thing.

The Session is a looooong-term project so I’m very hesitant to use any technology that won’t stand the test of time. When I added Sass into the mix, back in—I think—2012 or so, I wasn’t sure whether it was the right thing to do, from a long-term perspective. But it did offer some useful functionality so I went ahead and used it.

Now, eight years later, it was having a hard time dealing with the new clamp() function. Specifically, it didn’t like the values being calculated through the addition of multiple units. I think it was clashing with Sass’s in-built ability to add units together.

I started to ask myself whether I should still be using Sass. I looked at which features I was using…

Variables. Well, now we’ve got CSS custom properties, which are even more powerful than Sass variables because they can be updated in real time. Sass variables are like const. CSS custom properties are like let.
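Here’s a quick sketch of what that means in practice (the names and values are invented). A Sass variable is baked in at compile time, but a custom property can be reassigned while the page is running:

:root {
    --accent-lightness: 40%;
}

/* reassigned at runtime; a compiled-in Sass variable couldn't do this */
@media (prefers-color-scheme: dark) {
    :root {
        --accent-lightness: 70%;
    }
}

a {
    color: hsl(210, 80%, var(--accent-lightness));
}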

Mixins. These can be very useful, but now there’s a lot that you can do just in CSS with calc(). The built-in darken() and lighten() mixins are handy though when it comes to colours.

Nesting. I’ve never been a fan. I know it can make the source files look tidier but I find it can sometimes obfuscate what your final selectors are going to look like. So this wasn’t something I was using much anyway.

Multiple files. Ah! This is the thing I would miss most. Having separate .scss files for separate interface elements is very handy!

But globbing a bunch of separate .scss files into one .css file isn’t really a Sass task. That’s what build tools are for. In fact, that’s what I was already doing with my JavaScript files; I write them as individual .js files that then get concatenated into one .js file using Grunt.

(Yes, this project uses Grunt. I told you I was showing my age. But, you know what? It works. Though seeing as I’m mostly using it for concatenation, I could probably replace it with a makefile. If I’m going to use old technology, I might as well go all the way.)

I swapped out Sass variables for CSS custom properties, mixins for calc(), and removed what little nesting I was doing. Then I stripped the Sass parts out of my Grunt file and replaced them with some concatenation and minification tasks. All of this makes no difference to the actual website, but it means I’ve got one less dependency …and I can use clamp()!

Remember a little while back when I was making a dark mode for my site? I made this observation:

Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop.

It feels like something similar has happened with tools like Sass. Sass was the hare. CSS is the tortoise. Sass blazed the trail, but now native CSS can achieve much the same result.

It’s like when we used to need something like jQuery to do DOM Scripting succinctly using CSS selectors. Then we got things like querySelector() in JavaScript so we no longer needed the trailblazer.

I’ve said it before and I’ll say it again, the goal of any good library should be to get so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.

You could argue that this is what happened with Flash. It certainly happened with jQuery and Sass. I’m pretty sure we’ll see the same cycle play out with frameworks like React.

Lighthouse bookmarklet

I use Firefox. You should too. It’s fast, secure, and more privacy-focused than the leading browser from the big G.

When it comes to web development, the CSS developer tooling in Firefox is second to none. But when it comes to JavaScript and network-related debugging (like service workers), Chrome’s tools are better than Firefox’s (for now). For example, Chrome has a tab in its developer tools that lets you run Lighthouse on the currently open tab.

Yesterday, I got the Calibre newsletter, which always has handy performance-related links from Karolina. She pointed to a Lighthouse extension for Firefox. “Excellent!”, I thought, and I immediately installed it. But I had some qualms about installing a plug-in from Google into a browser from Mozilla, particularly as the plug-in page says:

This is not a Recommended Extension. Make sure you trust it before installing

Well, I gave it a go. It turns out that all it actually does is redirect to the online version of Lighthouse. “Hang on”, I thought. “This could just be a bookmarklet!”

So I immediately uninstalled the browser extension and made this bookmarklet:

Lighthouse

Drag that up to your desktop browser’s bookmarks toolbar. Press it whenever you’re on a site that you want to test.

Telling the story of performance

At Clearleft, we’ve worked with quite a few clients on site redesigns. It’s always a fascinating process, particularly in the discovery phase. There’s that excitement of figuring out what’s currently working, what’s not working, and what’s missing completely.

The bulk of this early research phase is spent diving into the current offering. But it’s also the perfect time to do some competitor analysis—especially if we want some answers to the “what’s missing?” question.

It’s not all about missing features though. Execution is equally important. Our clients want to know how their users’ experience shapes up compared to the competition. And when it comes to user experience, performance is a huge factor. As Andy says, performance is a UX problem.

There’s no shortage of great tools out there for measuring (and monitoring) performance metrics, but they’re mostly aimed at developers. Quite rightly. Developers are the ones who can solve most performance issues. But that does make the tools somewhat impenetrable if you don’t speak the language of “time to first byte” and “first contentful paint”.

When we’re trying to show our clients the performance of their site—or their competitors—we need to tell a story.

Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.

You can go to webpagetest.org/easy if you don’t need to tweak settings much beyond the typical site visit (slow 3G on mobile). Pop in your client’s URL and, when the test is done, you get a valuable but impenetrable waterfall chart. It’s not exactly the kind of thing I’d want to present to a client.

Fortunately there’s an attention-grabbing output from each test: video. Download the video of your client’s site loading. Then repeat the test with the URL of a competitor. Download that video too. Repeat for as many competitor URLs as you think appropriate.

Now take those videos and play them side by side. Presentation software like Keynote is perfect for showing multiple videos like this.

This is so much more effective than showing a table of numbers! Clients get to really feel the performance difference between their site and their competitors.

Running all those tests can take time though. But there are some other tools out there that can give a quick dose of performance information.

SpeedCurve recently unveiled Page Speed Benchmarks. You can compare the performance of sites within a particular sector like travel, retail, or finance. By default, you’ll get a filmstrip view of all the sites loading side by side. Click through on each one and you can get the video too. It might take a little while to gather all those videos, but it’s quicker than using Web Page Test directly. And it might be that the filmstrip view is impactful enough for telling your performance story.

If, during your discovery phase, you find that performance is being badly affected by third-party scripts, you’ll need some way to communicate that. Request Map Generator is fantastic for telling that story in a striking visual way. Pop the URL in there and then take a screenshot of the resulting visualisation.

The beginning of a redesign project is also the time to take stock of current performance metrics so that you can compare the numbers after your redesign launches. Crux.run is really great for tracking performance over time. You won’t get any videos but you will get some very appealing charts and graphs.

Web Page Test, Page Speed Benchmarks, and Request Map Generator are great for telling the story of what’s happening with performance right now. Crux.run balances that with the story of performance over time.

Measuring performance is important. Communicating the story of performance is equally important.

Request mapping

The Request Map Generator is a terrific tool. It’s made by Simon Hearne and uses the WebPageTest API.

You pop in a URL, it fetches the page and maps out all the subsequent requests in a nifty interactive diagram of circles, showing how many requests third-party scripts are themselves generating. I’ve found it to be a very effective way of showing the impact of third-party scripts to people who aren’t interested in looking at waterfall diagrams.

I was wondering… Wouldn’t it be great if this were built into browsers?

We already have a “Network” tab in our developer tools. The purpose of this tab is to show requests coming in. The browser already has all the information it needs to make a diagram of requests in the same way that the request map generator does.

In Firefox, there’s a little clock icon in the bottom left corner of the “Network” tab. Clicking that shows a pie-chart view of requests. That’s useful, but I’d love it if there were the option to also see the connected circles that the request map generator shows.

Just a thought.

Split

When I talk about evaluating technology for front-end development, I like to draw a distinction between two categories of technology.

On the one hand, you’ve got the raw materials of the web: HTML, CSS, and JavaScript. This is what users will ultimately interact with.

On the other hand, you’ve got all the tools and technologies that help you produce the HTML, CSS, and JavaScript: pre-processors, post-processors, transpilers, bundlers, and other build tools.

Personally, I’m much more interested and excited by the materials than I am by the tools. But I think it’s right and proper that other developers are excited by the tools. A good balance of both is probably the healthiest mix.

I’m never sure what to call these two categories. Maybe the materials are the “external” technologies, because they’re what users will interact with. Whereas all the other technologies—that mostly live on a developer’s machine—are the “internal” technologies.

Another nice phrase is something I heard during Chris’s talk at An Event Apart in Seattle, when he quoted Brad, who talked about the front of the front end and the back of the front end.

I’m definitely more of a front-of-the-front-end kind of developer. I have opinions on the quality of the materials that get served up to users; the output should be accessible and performant. But I don’t particularly care about the tools that produced those materials on the back of the front end. Use whatever works for you (or whatever works for your team).

As a user-centred developer, my priority is doing what’s best for end users. That’s not to say I don’t value developer convenience. I do. But I prioritise user needs over developer needs. And in any case, those two needs don’t even come into conflict most of the time. Like I said, from a user’s point of view, it’s irrelevant what text editor or version control system you use.

Now, you could make the argument that anything that is good for developer convenience is automatically good for user experience because faster, more efficient development should result in better output. While that’s true in theory, I highly recommend Alex’s post, The “Developer Experience” Bait-and-Switch.

Where it gets interesting is when a technology that’s designed for developer convenience is made out of the very materials being delivered to users. For example, a CSS framework like Bootstrap is made of CSS. That’s different to a tool like Sass which outputs CSS. Whether or not a developer chooses to use Sass is irrelevant to the user—the final output will be CSS either way. But if a developer chooses to use a CSS framework, that decision has a direct impact on the user experience. The user must download the framework in order for the developer to get the benefit.

So whereas Sass sits at the back of the front end—where I don’t care what you use—Bootstrap sits at the front of the front end. For tools like that, I don’t think saying “use whatever works for you” is good enough. It’s got to be weighed against the cost to the user.

Historically, it’s been a similar story with JavaScript libraries. They’re written in JavaScript, and so they’re going to be executed in the browser. If a developer wanted to use jQuery to make their life easier, the user paid the price in downloading the jQuery library.

But I’ve noticed a welcome change with some of the bigger JavaScript frameworks. Whereas the initial messaging around frameworks like React touted the benefits of state management and the virtual DOM, I feel like that’s not as prevalent now. You’re much more likely to hear people—quite rightly—talk about the benefits of modularity and componentisation. If you combine that with the rise of Node—which means that JavaScript is no longer confined to the browser—then these frameworks can move from the front of the front end to the back of the front end.

We’ve certainly seen that at Clearleft. We’ve worked on multiple React projects, but in every case, the output was server-rendered. Developers get the benefit of working with a tool that helps them. Users don’t pay the price.

For me, this question of whether a framework will be used on the client side or the server side is crucial.

Let me tell you about a Clearleft project that sticks in my mind. We were working with a big international client on a product that was going to be rolled out to students and teachers in developing countries. This was right up my alley! We did plenty of research into network conditions and typical device usage. That then informed a tight performance budget. Every design decision—from web fonts to images—was informed by that performance budget. We were producing lean, mean markup, CSS, and JavaScript. But we weren’t the ones implementing the final site. That was being done by the client’s offshore software team, and they insisted on using React. “That’s okay”, I thought. “React can be used server-side so we can still output just what’s needed, right?” Alas, no. These developers did everything client side. When the final site launched, the log-in screen alone required megabytes of JavaScript just to render a form. It was, in my opinion, entirely unfit for purpose. It still pains me when I think about it.

That was a few years ago. I think that these days it has become a lot easier to make the decision to use a framework on the back of the front end. Like I said, that’s certainly been the case on recent Clearleft projects that involved React or Vue.

It surprises me, then, when I see the question of server rendering or client rendering treated almost like an implementation detail. It might be an implementation detail from a developer’s perspective, but it’s a key decision for the user experience. The performance cost of putting your entire tech stack into the browser can be enormous.

Alex Sanders from the development team at The Guardian published a post recently called Revisiting the rendering tier. In it, he describes how they’re moving to React. Now, if this were a move to client-rendered React, that would make a big impact on the user experience. The thing is, I couldn’t tell from the article whether React was going to be used in the browser or on the server. The article talks about “rendering”—which is something that browsers do—and “the DOM”—which is something that only exists in browsers.

So I asked. It turns out that this plan is very much about generating HTML and CSS on the server before sending it to the browser. Excellent!

With that question answered, I’m cool with whatever they choose to use. In this case, they’re choosing to use CSS-in-JS (although, to be pedantic, there’s no C anymore so technically it’s SS-in-JS). As long as the “JS” part is JavaScript on a server, then it makes no difference to the end user, and therefore no difference to me. Not my circus, not my monkeys. For users, the end result is the same whether styling is applied via a selector in an external stylesheet or, for example, via an inline style declaration (and in some situations, a server-rendered CSS-in-JS solution might be better for performance). And so, as a user-centred developer, this is something that I don’t need to care about.

Except…

I have misgivings. But just to be clear, these misgivings have nothing to do with users. My misgivings are entirely to do with another group of people: the people who make websites.

There’s a second-order effect. By making React—or even JavaScript in general—a requirement for styling something on a web page, the barrier to entry is raised.

At least, I think that the barrier to entry is raised. I completely acknowledge that this is a subjective judgement. In fact, the reason why a team might decide to make JavaScript a requirement for participation might well be because they believe it makes it easier for people to participate. Let me explain…

It wasn’t that long ago that devs coming from a Computer Science background were deriding CSS for its simplicity, complaining that “it’s broken” and turning their noses up at it. That rhetoric, thankfully, is waning. Nowadays they’re far more likely to acknowledge that CSS might be simple, but it isn’t easy. Concepts like the cascade and specificity are real head-scratchers, and any prior knowledge from imperative programming languages won’t help you in this declarative world—all your hard-won experience and know-how isn’t fungible. Instead, it seems as though all this cascading and specificity is butchering the modularity of your nicely isolated components.
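Here’s a classic example of the kind of thing that trips people up (the selectors are invented). Reading top to bottom, you’d expect red to win, but specificity trumps source order:

#masthead h1 {
    color: blue;
}

/* this later rule loses: a type selector can't out-rank an ID selector */
h1 {
    color: red;
}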

It’s no surprise that programmers with this kind of background would treat CSS as damage and find ways to route around it. The many flavours of CSS-in-JS are testament to this. From a programmer’s point of view, this solution has made things easier. Best of all, as long as it’s being done on the server, there’s no penalty for end users. But now the price is paid in the diversity of your team. In order to participate, a Computer Science programming mindset is now pretty much a requirement. For someone coming from a more declarative background—with really good HTML and CSS skills—everything suddenly seems needlessly complex. And as Tantek observed:

Complexity reinforces privilege.

The result is a form of gatekeeping. I don’t think it’s intentional. I don’t think it’s malicious. It’s being done with the best of intentions, in pursuit of efficiency and productivity. But these code decisions are reflected in hiring practices that exclude people with different but equally valuable skills and perspectives.

Rachel describes HTML, CSS and our vanishing industry entry points:

If we make it so that you have to understand programming to even start, then we take something open and enabling, and place it back in the hands of those who are already privileged.

I think there’s a comparison here with toxic masculinity. Toxic masculinity is obviously terrible for women, but it’s also really shitty for men in the way it stigmatises any male behaviour that doesn’t fit its worldview. Likewise, if the only people your team is interested in hiring are traditional programmers, then those programmers are going to resent having to spend their time dealing with semantic markup, accessibility, styling, and other disciplines that they never trained in. Heydon correctly identifies this as reluctant gatekeeping:

By assuming the role of the Full Stack Developer (which is, in practice, a computer scientist who also writes HTML and CSS), one takes responsibility for all the code, in spite of its radical variance in syntax and purpose, and becomes the gatekeeper of at least some kinds of code one simply doesn’t care about writing well.

This hurts everyone. It’s bad for your team. It’s even worse for the wider development community.

Last year, I was asked “Is there a fear or professional challenge that keeps you up at night?” I responded:

My greatest fear for the web is that it becomes the domain of an elite priesthood of developers. I firmly believe that, as Tim Berners-Lee put it, “this is for everyone.” And I don’t just mean it’s for everyone to use—I believe it’s for everyone to make as well. That’s why I get very worried by anything that raises the barrier to entry to web design and web development.

I’ve described a number of dichotomies here:

  • Materials vs. tools,
  • Front of the front end vs. back of the front end,
  • User experience vs. developer experience,
  • Client-side rendering vs. server-side rendering,
  • Declarative languages vs. imperative languages.

But the split that worries me the most is this:

  • The people who make the web vs. the people who are excluded from making the web.

Dev perception

Chris put together a terrific round-up of posts recently called Simple & Boring. It links off to a number of great articles on the topic of complexity (and simplicity) in web development.

I had linked to quite a few of the articles myself already, but one I hadn’t seen was from David DeSandro who wrote New tech gets chatter:

You don’t hear about TextMate because TextMate is old. What would I tweet? Still using TextMate. Still good.

I think that’s a very good point.

It’s relatively easy to write and speak about new technologies. You’re excited about them, and there’s probably an eager audience who can learn from what you have to say.

It’s trickier to write something insightful about a tried and trusted (perhaps even boring) technology that’s been around for a while. You could maybe write little tips and tricks, but I bet your inner critic would tell you that nobody’s interested in hearing about that old tech. It’s boring.

The result is that what’s being written about is not a reflection of what’s being widely used. And that’s okay …as long as you know that’s the case. But I worry that there’s a perception problem. Because of the outsize weighting of new and exciting technologies, a typical developer could feel that their skills are out of date and the technologies they’re using are passé …even if those technologies are actually in wide use.

I don’t know about you, but I constantly feel like I’m behind the curve because I’m not currently using TypeScript or GraphQL or React. Those are all interesting technologies, to be sure, but the time to pick any of them up is when they solve a specific problem I’m having. Learning a new technology just to mitigate a fear of missing out isn’t a scalable strategy. It’s reasonable to investigate a technology because you genuinely think it’s exciting; it’s quite another matter to feel like you must investigate a technology in order to survive. That way lies burn-out.

I find it very grounding to talk to Drew and Rachel about the people using their Perch CMS product. These are working developers, but they are far removed from the world of tools and frameworks forged in the startup world.

In a recent (excellent) article comparing the performance of Formula One websites, Jake made this observation at the end:

However, none of the teams used any of the big modern frameworks. They’re mostly Wordpress & Drupal, with a lot of jQuery. It makes me feel like I’ve been in a bubble in terms of the technologies that make up the bulk of the web.

I think this is very astute. I also think it’s completely understandable to form ideas about what matters to developers by looking at what’s being discussed on Twitter, what’s being starred on Github, what’s being spoken about at conferences, and what’s being written about on Ev’s blog. But it worries me when I see browser devrel teams focusing their efforts on what appears to be the needs of typical developers based on the amount of ink spilled and breath expelled.

I have a suspicion that there’s a silent majority of developers who are working with “boring” technologies on “boring” products in “boring” industries …you know, healthcare, government, education, and other facets of everyday life that any other industry would value more highly than Uber for dogs.

Trys wrote a great blog post called City life, where he compares his experience of doing CMS-driven agency work with his experience working at a startup in Shoreditch:

I was chatting to one of the team about my previous role. “I built two websites a month in WordPress”.

They laughed… “WordPress! Who uses that anymore?!”

Nearly a third of the web as it turns out - but maybe not on the Silicon Roundabout.

I’m not necessarily suggesting that there should be more articles and talks about older, more established technologies. Conferences in particular are supposed to give audiences a taste of what’s coming—they can be a great way of quickly finding out what’s exciting in the world of development. But we shouldn’t feel bad if those topics don’t match our day-to-day reality.

Ultimately what matters is building something—a website, a web app, whatever—that best serves end users. If that requires a new and exciting technology, that’s great. But if it requires an old and boring technology, that’s also great. What matters here is appropriateness.

When we’re evaluating technologies for appropriateness, I hope that we will do so through the lens of what’s best for users, not what we feel compelled to use based on a gnawing sense of irrelevancy driven by the perceived popularity of newer technologies.

Ch-ch-ch-changes

It’s browser updatin’ time! Firefox 65 just dropped. So did Chrome 72. Safari 12.1 is shipping with iOS 12.2.

It’s interesting to compare the release notes for each browser and see the different priorities reflected in them (this is another reason why browser diversity is A Good Thing).

A lot of the Firefox changes are updates to dev tools; they just keep getting better and better. In fact, I’m not sure “dev tools” is the right word for them. With their focus on layout, typography, and accessibility, “design tools” might be a better term.

Oh, and Firefox is shipping support for some CSS properties that really help with print style sheets, so I’m disproportionately pleased about that.
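If I remember right, the fragmentation properties are among them, enabling rules like this (a sketch) that stop figures and tables being sliced in half across page breaks:

/* keep figures and tables intact when printing */
@media print {
    figure,
    table {
        break-inside: avoid;
    }
}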

In Safari’s changes, I’m pleased to see that the datalist element is finally getting implemented. I’ve been a fan of that element for many years now. (Am I a dork for having favourite HTML elements? Or am I a dork for even having to ask that question?)

And, of course, it wouldn’t be a Safari release without a new made up meta tag. From the people who brought you such hits as viewport and apple-mobile-web-app-capable, comes …supported-color-schemes (Apple likes to make up meta tags almost as much as Google likes to make up rel values).

There’ll be a whole bunch of improvements in how progressive web apps will behave once they’ve been added to the home screen. We’ll finally get some state persistence if you navigate away from the window!

Updated the behavior of websites saved to the home screen on iOS to pause in the background instead of relaunching each time.

Maximiliano Firtman has a detailed list of the good, the bad, and the “not sure yet if good” for progressive web apps on iOS 12.2 beta. Thomas Steiner has also written up the progress of progressive web apps in iOS 12.2 beta. Both are published on Ev’s blog.

At first glance, the release notes for Chrome 72 are somewhat paltry. The big news doesn’t even seem to be listed there. Maximiliano Firtman again:

Chrome 72 for Android shipped the long-awaited Trusted Web Activity feature, which means we can now distribute PWAs in the Google Play Store!

Very interesting indeed! I’m not sure if I’m ready to face the Kafkaesque process of trying to add something to the Google Play Store just yet, but it’s great to know that I can. Combined with the improvements coming in iOS 12.2, these are exciting times for progressive web apps!

Optimise without a face

I’ve been playing around with the newly-released Squoosh, the spiritual successor to Jake’s SVGOMG. You can drag images into the browser window, and eyeball the changes that any optimisations might make.

On a project that Cassie is working on, it worked really well for optimising some JPEGs. But there were a few images that would require a bit more fine-grained control of the optimisations. Specifically, pictures with human faces in them.

I’ve written about this before. If there’s a human face in an image, I open that image in a graphics editing tool like Photoshop, select everything but the face, and add a bit of blur. Because humans are hard-wired to focus on faces, we’ll notice any jaggy artifacts on a face, but we’re far less likely to notice jagginess in background imagery: walls, materials, clothing, etc.

On the face of it (hah!), a browser-based tool like Squoosh wouldn’t be able to optimise for faces, but then Cassie pointed out something really interesting…

When we were both at FFConf on Friday, there was a great talk by Eleanor Haproff on machine learning with JavaScript. It turns out there are plenty of smart toolkits out there, and one of them is facial recognition. So I wonder if it’s possible to build an in-browser tool with this workflow:

  • Drag or upload an image into the browser window,
  • A facial recognition algorithm finds any faces in the image,
  • Those portions of the image remain crisp,
  • The rest of the image gets a slight blur,
  • Download the optimised image.

Maybe the selecting/blurring part would need canvas? I don’t know.
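
Still, here’s a rough sketch of how the canvas part might work, assuming a hypothetical detectFaces() function supplied by one of those machine learning toolkits (the function name, blur radius, and JPEG quality are all placeholders):

// Rough sketch: blur everything except detected faces.
// detectFaces() is hypothetical; assume it resolves to an array
// of bounding boxes: {x, y, width, height}.
async function blurAroundFaces(image) {
    const canvas = document.createElement('canvas');
    canvas.width = image.naturalWidth;
    canvas.height = image.naturalHeight;
    const context = canvas.getContext('2d');

    // Draw a slightly blurred copy of the whole image first.
    context.filter = 'blur(2px)';
    context.drawImage(image, 0, 0);

    // Then redraw each face region, unblurred, on top.
    context.filter = 'none';
    const faces = await detectFaces(image);
    for (const {x, y, width, height} of faces) {
        context.drawImage(image, x, y, width, height, x, y, width, height);
    }

    // Hand back a JPEG blob ready for downloading.
    return new Promise(resolve => canvas.toBlob(resolve, 'image/jpeg', 0.8));
}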

Anyway, I thought this was a brilliant bit of synthesis from Cassie, and now I’ve got two questions:

  1. Does this exist yet? And, if not,
  2. Does anyone want to try building it?

Switching

Chris has written about switching code editors. I’m a real stick-in-the-mud when it comes to switching editors. Partly that’s because I’m generally pretty happy with whatever I’m using (right now it’s Atom), but it’s also because I just don’t get that excited about software like this. I probably should care more; I spend plenty of time inside a code editor. And I should really take the time to get to grips with features like keyboard shortcuts—I’m sure I’m working very inefficiently. But, like I said, I find it hard to care enough, and on the whole, I’m content.

I was struck by this observation from Chris:

When moving, I have to take time to make sure it works pretty much like the old one.

That reminded me of a recent switch I made, not with code editors, but with browsers.

I’ve been using Chrome for years. One day it started crashing a lot. So I decided to make the switch to Firefox. Looking back, I’m glad to have had this prompt—I think it’s good to shake things up every now and then, so I don’t get too complacent (says the hypocrite who can’t be bothered to try a new code editor).

Just as Chris noticed with code editors, it was really important that I could move bookmarks (and bookmarklets!) over to my new browser. On the whole, it went pretty smoothly. I had to seek out a few browser extensions but that was pretty much it. And because I use a password manager, logging into all my usual services wasn’t a hassle.

Of all the pieces of software on my computer, the web browser is the one where I definitely spend the most time: reading, linking, publishing. At this point, I’m very used to life with Firefox as my main browser. It’s speedy and stable, and the dev tools are very similar to Chrome’s.

Maybe I’ll switch to Safari at some point. Like I said, I think it’s good to shake things up and get out of my comfort zone.

Now, if I really wanted to get out of my comfort zone, I’d switch operating systems like Dave did with his move to Windows. And I should really try using a different phone OS. Again, this is something that Dave tried with his switch to Android (although that turned out to be unacceptably creepy), and Paul did it ages ago using a Windows phone for a week.

There’s probably a balance to be struck here. I think it’s good to change code editors, browsers, even operating systems and phones every now and then, but I don’t want to feel like I’m constantly in learning mode. There’s something to be said for using tools that are comfortable and familiar, even if they’re outdated.

Frustration

I had some problems with my bouzouki recently. Now, I know my bouzouki pretty well. I can navigate the strings and frets to make music. But this was a problem with the pickup under the saddle of the bouzouki’s bridge. So it wasn’t so much a musical problem as it was an electronics problem. I know nothing about electronics.

I found it incredibly frustrating. Not only did I have no idea how to fix the problem, but I also had no idea of the scope of the problem. Would it take five minutes or five days? Who knows? Not me.

My solution to a problem like this is to pay someone else to fix it. Even then I have to go through the process of having the problem explained to me by someone who understands and cares about electronics much more than me. I nod my head and try my best to look like I’m taking it all in, even though the truth is I have no particular desire to get to grips with the inner workings of pickups—I just want to make some music.

That feeling of frustration I get from having wiring issues with a musical instrument is the same feeling I get whenever something goes awry with my web server. I know just enough about servers to be dangerous. When something goes wrong, I feel very out of my depth, and again, I have no idea how long it will take to fix the problem: minutes, hours, days, or weeks.

I had a very bad day yesterday. I wanted to make a small change to the Clearleft website—one extra line of CSS. But the build process for the website is quite convoluted (and clever), automatically pulling in components from the site’s pattern library. Something somewhere in the pipeline went wrong—I still haven’t figured out what—and for a while there, the Clearleft website was down, thanks to me. (Luckily for me, Danielle saved the day …again. I’d be lost without her.)

I was feeling pretty down after that stressful day. I felt like an idiot for not knowing or understanding the wiring beneath the site.

But, on the other hand, considering I was only trying to edit a little bit of CSS, maybe the problem didn’t lie entirely with me.

There’s a principle underlying the architecture of the World Wide Web called The Rule of Least Power. It somewhat counterintuitively states that you should:

choose the least powerful language suitable for a given purpose.

Perhaps, given the relative simplicity of the task I was trying to accomplish, the plumbing was over-engineered. That complexity wouldn’t matter if I could circumvent it, but without the build process, there’s no way to change the markup, CSS, or JavaScript for the site.

Still, most of the time, the build process isn’t a hindrance, it’s a help: concatenation, minification, linting and all that good stuff. Most of my frustration when something in the wiring goes wrong is because of how it makes me feel …just like with the pickup in my bouzouki, or the server powering my website. It’s not just that I find this stuff hard, but that I also feel like it’s stuff I’m supposed to know, rather than stuff I want to know.

On that note…

Last week, Paul wrote about getting to grips with JavaScript. On the very same day, Brad wrote about his struggle to learn React.

I think it’s really, really, really great when people share their frustrations and struggles like this. It’s very reassuring for anyone else out there who’s feeling similarly frustrated and worried that the problem lies with them. Also, this kind of confessional feedback is absolute gold dust for anyone looking to write explanations or documentation for JavaScript or React while battling the curse of knowledge. As Paul says:

The challenge now is to remember the pain and anguish I endured, and bear that in mind when helping others find their own path through the knotted weeds of JavaScript.

Choosing tools for scaling design

Tools and processes are intertwined. A company or a department or an individual has a way of doing things—that’s the process. They also have software to carry out the process—those are the tools.

Ideally, they should be loosely coupled. You should be able to change your tools without necessarily changing your process. So swapping out, say, one framework or library for another shouldn’t involve fundamentally changing the way you work. Likewise, trying a new way of working shouldn’t require you to use unfamiliar tools.

When it comes to scaling design within organisations, the challenges are almost always around switching processes (well, really it’s about trying to change culture, but that starts with changing processes—any sufficiently advanced process is indistinguishable from culture). All too often, though, I see people getting hung up on the tools.

We need to get more efficient in how we deliver designs …so let’s switch over to this particular design tool.

We should have a design system …so let’s get everyone using this particular JavaScript framework.

I understand this desire to shortcut the work of figuring out processes and jump straight to production solutions. For one thing, it allows you to create an easy list of requirements when it comes to recruiting talent: “Join our company—you must demonstrate experience and proficiency in this tool or that library.”

But when tools and processes become tightly coupled like this, there’s a real danger of stagnation. If a process can be defined as “the way we do things around here”, that’s not something you want to tie to any particular tool or technology. Otherwise, before you know it, you’re in the frustrating situation of using outdated tools, but you can’t swap them out for newer or better-suited technologies without disrupting everyone’s work.

This is technical debt (although it applies just as much to design). You’re paying a penalty in the present because of a decision that somebody made in the past. The problem isn’t so much with the decision itself, but with the longevity of its effects.

I think it’s important to remember what a tool is: it’s a piece of technology that enables you to work faster or better. You should enjoy using your tools, but you shouldn’t be utterly dependent on any particular one. Otherwise, the tail starts wagging the dog—you are now in service to the tool, instead of the other way around.

Treat your tools like cattle, not pets. Don’t get too attached to any one technology to the detriment of missing out on others.

Mind you, if you constantly tried every single new tool or technology out there, you’d never settle on anything—I’m pretty sure that three new JavaScript frameworks have been released since you started reading this paragraph.

The tools you choose at any particular time should be suited to what you’re trying to accomplish at that time. In other words, you’ve got to figure out what you’re trying to accomplish first (the vision), then figure out how you’re going to accomplish it (the process), and only then figure out which tools are the best fit. If you jump straight to choosing tools, you could end up trying to tighten a screw with a hammer.

Alas, I’ve seen plenty of consultants who conflate strategy with tooling. They’re brought in to solve process problems and, surprise, surprise, the solution always seems to involve purchasing the software that their company sells. I’ve been guilty of this myself: I see an organisation struggling to systemise their design patterns, and I think “Oh, they should use Fractal!” …but that’s jumping the gun. They might be better served with something simpler, or something more complex (I mean, Fractal is very, very flexible but it’s still just one option—there are plenty of other pattern library tools out there).

Once you separate out the tools from the process, there’s an added benefit. Making the right technology choice is no longer a life-or-death decision. You can suck it and see. Try out the technology and see if it works. If it’s working, great! Carry on using it. If it’s not working, that’s okay too. Try something different.

I realise I’m oversimplifying things, but I honestly believe that the real challenge is not choosing the right tools, but figuring out the right process for your team.

Design systems

Talking about scaling design can get very confusing very quickly. There are a bunch of terms that get thrown around: design systems, pattern libraries, style guides, and components.

The generally-accepted definition of a design system is that it’s the outer circle—it encompasses pattern libraries, style guides, and any other artefacts. But there’s something more. Just because you have a collection of design patterns doesn’t mean you have a design system. A system is a framework. It’s a rulebook. It’s what tells you how those patterns work together.

This is something that Cennydd mentioned recently:

Here’s my thing with the modularisation trend in design: where’s the gestalt?

In my mind, the design system is the gestalt. But Cennydd is absolutely right to express concern—I think a lot of people are collecting patterns and calling the resulting collection a design system. No. That’s a pattern library. You still need to have a framework for how to use those patterns.

I understand the urge to fixate on patterns. They’re small enough to be manageable, and they’re tangible—here’s a carousel; here’s a date-picker. But a design system is big and intangible.

Games are great examples of design systems. They’re frameworks. A game is a collection of rules and constraints. You can document those rules and constraints, but you can’t point to something and say, “That is football” or “That is chess” or “That is poker.”

Even though they consist entirely of rules and constraints, football, chess, and poker still produce an almost infinite possibility space. That’s quite overwhelming. So it’s easier for us to grasp instances of football, chess, and poker. We can point to a particular occurrence and say, “That is a game of football”, or “That is a chess match.”

But if you tried to figure out the rules of football, chess, or poker just from watching one particular instance of the game, you’d have your work cut out for you. It’s not impossible, but it is challenging.

Likewise, it’s not very useful to create a library of patterns without providing any framework for using those patterns.

I would go so far as to say that the actual code for the patterns is the least important part of a design system (or, certainly, it’s the part that should be most malleable and open to change). It’s more important that the patterns have been identified, named, described, and crucially, accompanied by some kind of guidance on usage.

I could easily imagine using a tool like Fractal to create a library of text snippets with no actual code. Those pieces of text—which provide information on where and when to use a pattern—could be more valuable than providing a snippet of code without any context.

One of the very first large-scale pattern libraries I can remember seeing on the web was Yahoo’s Design Pattern Library. Each pattern outlined

  1. the problem being solved;
  2. when to use this pattern;
  3. when not to use this pattern.

Only then, almost incidentally, did they link off to the code for that pattern. But it was entirely possible to use the system of patterns without ever using that code. The code was just one instance of the pattern. The important part was the framework that helped you understand when and where it was appropriate to use that pattern.
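
To make that concrete, here’s what an entry in a code-free pattern library might look like (a completely invented example):

Pattern: carousel
Problem: people need to browse a set of related items, like photographs, in a limited amount of space.
Use when: there are a handful of visually distinctive items.
Avoid when: the items need to be compared side by side, or there are too many to comfortably swipe through.
Code: one sample implementation, offered as a starting point rather than the final word.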

I think we lose sight of the real value of a design system when we focus too much on the components. The components are the trees. The design system is the forest. As Paul asked:

What methodologies might we uncover if we were to focus more on the relationships between components, rather than the components themselves?

Understandable excitement

An Event Apart Seattle just wrapped. It was a three-day special edition and it was really rather good. Lots of the speakers (myself included) were unveiling brand new talks, so there was a real frisson of excitement.

It was interesting to see repeating, overlapping themes. From a purely technical perspective, three technologies that were front and centre were:

  • CSS grid,
  • variable fonts, and
  • service workers.

From listening to other attendees, the overwhelming message received was “These technologies are here—they’ve arrived.” Now, depending on your mindset, that understanding can be expressed as “Oh shit! These technologies are here!” or “Yay! Finally! These technologies are here!”

My reaction is very firmly the latter. That in itself is an interesting data-point, because (as discussed in my talk) my reaction towards new technological advances isn’t always one of excitement—quite often it’s one of apprehension, even fear.

I’ve been trying to self-analyse to figure out which kinds of technologies trigger which kind of reaction. I don’t have any firm answers yet, but it’s interesting to note that the three technologies mentioned above (CSS grid, variable fonts, and service workers) are all additions to the core languages of the web—the materials we use to build the web. Frameworks, libraries, build tools, and other such technologies are more like tools than materials. I tend to get less excited about advances in those areas. Sometimes advances in those areas not only fail to trigger excitement, they make me feel overwhelmed and worried about falling behind.

Figuring out this split between materials and tools has helped me come to terms with my gut emotional reaction to the latest technological advances on the web. I think it’s okay that I don’t get excited about everything. And given the choice, I think maybe it’s healthier to be more excited about advances in the materials—HTML, CSS, and JavaScript APIs—than advances in tooling …although it is, of course, perfectly possible to get equally excited about both (that’s just not something I seem to be able to do).

Another split I’ve noticed is between technologies that directly benefit users, and technologies that directly benefit developers. I think there was a bit of a meta-thread running through the talks at An Event Apart about CSS grid, variable fonts, and service workers: all of those advances allow us developers to accomplish more with less. They’re good for performance, in other words. I get much more nervous about CSS frameworks and JavaScript libraries that allow us to accomplish more, but require the user to download the framework or library first. It feels different when something is baked into browsers—support for CSS features, or JavaScript APIs. Then it feels like much more of a win-win situation for users and developers. If anything, the onus is on developers to take the time and do the work and get to grips with these browser-native technologies. I’m okay with that.

Anyway, all of this helps me understand my feelings at the end of An Event Apart Seattle. I’m fired up and eager to make something with CSS grid, variable fonts, and—of course—service workers.

A workshop on evaluating technology

After hacking away at Indie Web Camp Düsseldorf, I stuck around for Beyond Tellerrand. I ended up giving a talk, stepping in for Ellen. I was a poor substitute, but I hope I entertained the lovely audience for 45 minutes.

After Beyond Tellerrand, I got on a train to Nuremberg …along with a dozen of my peers who were also at the event.

All aboard the Indie Web Train from Düsseldorf to Nürnberg.

I arrived right in the middle of Web Week Nürnberg. Among the many events going on was a workshop that Joschi arranged for me to run called Evaluating Technology. The workshop version of my Beyond Tellerrand talk, basically.

This was an evolution of a workshop I ran a while back. I have to admit, I was a bit nervous going into this. I had no tangible material prepared; no slides, no handouts, nothing. Instead the workshop is a collaborative affair. In order for it to work, the attendees needed to jump in and co-create it with me. Luckily for me, I had a fantastic and enthusiastic group of people at my workshop.

We began with a complete braindump. “Name some tools and technologies,” I said. “Just shout ‘em out.” Shout ‘em out, they did. I struggled to keep up just writing down everything they said. This was great!

The next step was supposed to be dot-voting on which technologies to cover, but there were so many of them, we introduced an intermediate step: grouping the technologies together.

Once the technologies were grouped into categories like build tools, browser APIs, methodologies etc., we voted on which categories to cover, only then diving deeper into specific technologies.

I proposed a number of questions to ask of each technology we covered. First of all, who benefits from the technology? Is it a tool for designers and developers, or is it a tool for the end user? Build tools, task runners, version control systems, text editors, transpilers, and pattern libraries all fall into the first category—they make life easier for the people making websites. Browser features generally fall into the second category—they improve the experience for the end user.

Looking at user-facing technologies, we asked: how well do they fail? In other words, can you add this technology as an extra layer of enhancement on top of what you’re building or do you have to make it a foundational layer that’s potentially a single point of failure?
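
Service workers are a good example of a technology that fails well; the feature detection means the enhancement only kicks in where it’s supported (the file path here is just illustrative):

if ('serviceWorker' in navigator) {
    // Browsers without support skip this entirely; the site still works.
    navigator.serviceWorker.register('/serviceworker.js');
}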

For both classes of technologies, we asked the question: what are the assumptions? What fundamental philosophy has been baked into the technology?

Now, the point of this workshop is not for me to answer those questions. I have a limited range of experience with the huge amount of web technologies out there. But collectively all of us attending the workshop will have a good range of experience and knowledge.

Interesting then that the technologies people voted for were:

  • service workers,
  • progressive web apps,
  • AMP,
  • web components,
  • pattern libraries and design systems.

Those are topics I actually do have some experience with. Lots of the attendees had heard of these things and were really interested in finding out more about them, but they hadn’t necessarily used them yet.

And so I ended up doing a lot of the talking …which wasn’t the plan at all! That was just the way things worked out. I was more than happy to share my opinions on those topics, but it was a bit of a shame that I ended up monopolising the discussion. I felt for everyone having to listen to me ramble on.

Still, by the end of the day we had covered quite a few topics. Better yet, we had a good framework for categorising and evaluating web technologies. The specific technologies we covered were interesting enough, but the general approach provided the lasting value.

All in all, a great day with a great group of people.

I’m already looking forward to running this workshop again. If you think it would be valuable for your company, get in touch.