Tags: frontend

An nth-letter selector in CSS

Variable fonts are a very exciting and powerful new addition to the toolbox of web design. They were very much at the centre of discussion at this year’s Ampersand conference.

A lot of the demonstrations of the power of variable fonts show how they can be used to make letter-by-letter adjustments. The Ampersand website itself does this with the logo. See also: the brilliant demos by Mandy. It’s getting to the point where logotypes can be sculpted and adjusted just-so using CSS and raw text—no images required.

I find this to be thrilling, but there’s a fly in the ointment. In order to style something in CSS, you need a selector to target it. If you’re going to style individual letters, you need to wrap each one in an HTML element so that you can then select it in CSS.

For the Ampersand logo, we had to wrap each letter in a span (and then, because that might cause each letter to be read out individually instead of all of them as a single word, we applied some ARIA shenanigans to the containing element). There’s even a JavaScript library—Splitting.js—that will do this for you.
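
To give a flavour of what that looks like, here’s a rough sketch of the span-wrapping (my own illustration, not the actual Ampersand code; the .logo selector is made up):

const element = document.querySelector('.logo'); // made-up selector
const word = element.textContent;

// Let assistive technology read the whole word, not letter by letter.
element.setAttribute('aria-label', word);

// Wrap each letter in a span so it can be targeted from CSS.
element.textContent = '';
word.split('').forEach( letter => {
  const span = document.createElement('span');
  span.setAttribute('aria-hidden', 'true');
  span.textContent = letter;
  element.appendChild(span);
});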

But if the whole point of using HTML is that the content is accessible, copyable, and pastable, isn’t it a bit of a shame that we then compromise the markup—and the accessibility—by wrapping individual letters in presentational tags?

What if there were an ::nth-letter selector in CSS?

There’s some prior art here. We’ve already got ::first-letter (and now the initial-letter property or whatever it ends up being called). If we can target the first letter in a piece of text, why not the second, or third, or nth?
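
If it existed, using it might look something like this (purely hypothetical syntax, implemented nowhere):

/* Hypothetical: nudge the weight of a heading’s second letter.
   No browser implements ::nth-letter. */
h1::nth-letter(2) {
  font-variation-settings: 'wght' 700;
}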

It raises some questions. What constitutes a letter? Would it be better if we talked about ::first-character, ::initial-character, ::nth-character, and so on?

Even then, there are some tricksy things to figure out. What’s the third character in this piece of markup?

<p>AB<span>CD</span>EF</p>

Is it “C”, because that’s the third character regardless of nesting? Or is it “E”, because technically that’s the third character token that’s a direct child of the parent element?

I imagine that implementing ::nth-letter (or ::nth-character) would be quite complex so there would probably be very little appetite for it from browser makers. But it doesn’t seem as problematic as some selectors we’ve already got.

Take ::first-line, for example. It falls foul of one of the biggest problems with adding new CSS selectors: it’s a selector that depends on layout.

Think about it. The browser has to first calculate how many characters are in the first line of an element (like, say, a paragraph). Having figured that out, the browser can then apply the styles declared in the ::first-line selector. But those styles may involve font sizing updates that change the number of characters in the first line. Paradox!

(Technically, only a subset of CSS properties can be applied to ::first-line, but that subset includes font-size, so the paradox remains.)

I checked to see if ::first-line was included in one of my favourite documents: Incomplete List of Mistakes in the Design of CSS. It isn’t.

So compared to the logic-bending paradoxes of ::first-line, an ::nth-letter selector would be relatively straightforward. But that in itself isn’t a good enough reason for it to exist. As the CSS Working Group FAQs say:

The fact that we’ve made one mistake isn’t an argument for repeating the mistake.

A new selector needs to solve specific use cases. I would argue that all the letter-by-letter uses of variable fonts that we’re seeing demonstrate the use cases, and the number of these examples is only going to increase. The very fact that JavaScript libraries exist to solve this problem shows that there’s a need here (and we’ve seen the pattern of common JavaScript use-cases ending up in CSS before—rollovers, animation, etc.).

Now, I know that browser makers would like us to figure out how proposed CSS features should work by polyfilling a solution with Houdini. But would that work for a selector? I don’t know much about Houdini so I asked Una. She pointed me to a proposal by Greg and Tab for a full-on parser in Houdini. But that’s a loooong way off. Until then, we must petition our case to the browser gods.

This is not a new suggestion.

Anne van Kesteren proposed ::nth-letter way back in 2003:

While I’m talking about CSS, I would also like to have ::nth-line(n), ::nth-letter(n) and ::nth-word(n), any thoughts?

Trent called for ::nth-letter in January 2011:

I think this would be the ideal solution from a web designer’s perspective. No javascript would be required, and 100% of the styling would be handled right where it should be—in the CSS.

Chris repeated the call in October of 2011:

Of all of these “new” selectors, ::nth-letter is likely the most useful.

In 2012, Bram linked to a blog post (now unavailable) from Adobe indicating that they were working on ::nth-letter for Webkit. That was the last anyone’s seen of this elusive pseudo-element.

In 2013, Chris (again) included ::nth-letter in his wishlist for CSS. So say we all.

Declaration

I like the robustness that comes with declarative languages. I also like the power that comes with imperative languages. Best of all, I like having the choice.

Take the video and audio elements, for example. If you want, you can embed a video or audio file into a web page using a straightforward declaration in HTML.

<audio src="..." controls><!-- fallback goes here --></audio>

Straightaway, that covers 80%-90% of use cases. But if you need to do more—like, provide your own custom controls—there’s a corresponding API that’s exposed in JavaScript. Using that API, you can do everything that you can do with the HTML element, and a whole lot more besides.
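
As a sketch of how those custom controls might start (the element IDs are made up):

// A sketch of the media element API (element IDs are made up):
const audio = document.querySelector('#player');
const button = document.querySelector('#playpause');

button.addEventListener('click', () => {
  if (audio.paused) {
    audio.play(); // everything the built-in controls do…
  } else {
    audio.pause();
  }
  // …plus properties like currentTime, volume, and playbackRate.
});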

It’s a similar story with animation. CSS provides plenty of animation power, but it’s limited in the events that can trigger the animations. That’s okay. There’s a corresponding JavaScript API that gives you more power. Again, the CSS declarations cover 80%-90% of use cases, but for anyone in that 10%-20%, the web animation API is there to help.
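
For instance, here’s a hedged sketch of triggering a fade with the web animation API instead of a CSS declaration (the selector and values are made up):

// A sketch of the web animation API (the selector is made up):
const message = document.querySelector('.message');
message.animate(
  [
    { opacity: 0, transform: 'translateY(1em)' },
    { opacity: 1, transform: 'translateY(0)' }
  ],
  { duration: 300, easing: 'ease-out', fill: 'forwards' }
);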

Client-side form validation is another good example. For most of us, the HTML attributes—required, type, etc.—are probably enough most of the time.

<input type="email" required />

When we need more fine-grained control, there’s a validation API available in JavaScript (yes, yes, I know that the API itself is problematic, but you get the point).
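
Here’s a minimal sketch of that finer-grained control using the constraint validation API (the custom message is made up):

// A sketch of the constraint validation API:
const field = document.querySelector('input[type="email"]');

field.addEventListener('input', () => {
  if (field.validity.typeMismatch) {
    // Replace the browser's default message with your own.
    field.setCustomValidity('Please enter a valid email address.');
  } else {
    field.setCustomValidity(''); // an empty string means "valid"
  }
});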

I really like this design pattern. Cover 80% of the use cases with a declarative solution in HTML, but also provide an imperative alternative in JavaScript that gives more power. HTML5 has plenty of examples of this pattern. But I feel like the history of web standards has a few missed opportunities too.

Geolocation is a good example of an unbalanced feature. If you want to use it, you must use JavaScript. There is no declarative alternative. This doesn’t exist:

<input type="geolocation" />

That’s a shame. Anyone writing a form that asks for the user’s location—in order to submit that information to a server for processing—must write some JavaScript. That’s okay, I guess, but it’s always going to be that bit more fragile and error-prone compared to markup.
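
That JavaScript would look something like this (the form field names are made up):

// A sketch of the geolocation API filling in (made-up) form fields:
navigator.geolocation.getCurrentPosition(
  position => {
    document.querySelector('[name="latitude"]').value = position.coords.latitude;
    document.querySelector('[name="longitude"]').value = position.coords.longitude;
  },
  geoError => {
    console.error(geoError); // permission denied, or something went wrong
  }
);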

(And just in case you’re thinking of the fallback—which would be for the input element to be rendered as though its type value were text—and you think it’s ludicrous to expect users with non-supporting browsers to enter latitude and longitude coordinates by hand, I direct your attention to input type="color": in non-supporting browsers, it’s rendered as input type="text" and users are expected to enter colour values by hand.)

Geolocation is an interesting use case because it only works on HTTPS. There are quite a few JavaScript APIs that quite rightly require a secure context—like service workers—but I can’t think of a single HTML element or attribute that requires HTTPS (although that will soon change if we don’t act to stop plans to create a two-tier web). But that can’t have been the thinking behind geolocation being JavaScript only; when geolocation first shipped, it was available over HTTP connections too.

Anyway, that’s just one example. Like I said, it’s not that I’m in favour of declarative solutions instead of imperative ones; I strongly favour the choice offered by providing declarative solutions as well as imperative ones.

In recent years there’s been a push to expose low-level browser features to developers. They’re inevitably exposed as JavaScript APIs. In most cases, that makes total sense. I can’t really imagine a declarative way of accessing the fetch or cache APIs, for example. But I think we should be careful that it doesn’t become the only way of exposing new browser features. I think that, wherever possible, the design pattern of exposing new features declaratively and imperatively offers the best of both worlds—ease of use for the simple use cases, and power for the more complex use cases.

Previously, it was up to browser makers to think about these things. But now, with the advent of web components, we developers are gaining that same level of power and responsibility. So if you’re making a web component that you’re hoping other people will also use, maybe it’s worth keeping this design pattern in mind: allow authors to configure the functionality of the component using HTML attributes and JavaScript methods.
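
By way of illustration, here’s a sketch of a component that can be configured both ways (the element name and its API are entirely made up):

// The element name and API here are entirely made up for illustration.
class ToggleSection extends HTMLElement {
  // Declarative: <toggle-section open>…</toggle-section>
  static get observedAttributes() {
    return ['open'];
  }
  connectedCallback() {
    this.hidden = !this.hasAttribute('open');
  }
  attributeChangedCallback(name, oldValue, newValue) {
    if (name === 'open') {
      this.hidden = (newValue === null);
    }
  }
  // Imperative: document.querySelector('toggle-section').toggle()
  toggle() {
    if (this.hasAttribute('open')) {
      this.removeAttribute('open');
    } else {
      this.setAttribute('open', '');
    }
  }
}
customElements.define('toggle-section', ToggleSection);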

Service workers in Samsung Internet browser

I was getting reports of some odd behaviour with the service worker on thesession.org, the Irish music website I run. Someone emailed me to say that they kept getting the offline page, even when their internet connection was perfectly fine and the site was up and running.

They didn’t mind answering my pestering follow-on questions to isolate the problem. They told me that they were using the Samsung Internet browser on Android. After a little searching, I found this message on a Github thread about using waitUntil. It’s from someone who works on the Samsung Internet team:

Sadly, the asynchronous waitUntil() is not implemented yet in our browser. Yes, we will implement it but our release cycle is so far. So, for a long time, we might not resolve the issue.

A-ha! That explains the problem. See, here’s the pattern I was using:

  1. When someone requests a file,
  2. fetch that file from the network,
  3. create a copy of the file and cache it,
  4. return the contents.

Step 1 is the event listener:

// 1. When someone requests a file
addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  fetchEvent.respondWith(

Steps 2, 3, and 4 are inside that respondWith:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  caches.open(cacheName)
  .then( cache => {
    cache.put(request, copy);
  });
  // 4. return the contents.
  return responseFromFetch;
})

Step 4 might well complete while step 3 is still running (remember, everything in a service worker script is asynchronous so even though I’ve written out the steps sequentially, you never know what order the steps will finish in). That’s why I’m wrapping that third step inside fetchEvent.waitUntil:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  fetchEvent.waitUntil(
    caches.open(cacheName)
    .then( cache => {
      cache.put(request, copy);
    })
  );
  // 4. return the contents.
  return responseFromFetch;
})

If a browser (like Samsung Internet) doesn’t understand the bit where I say fetchEvent.waitUntil, then it will throw an error and execute the catch clause. That’s where I have my fifth and final step: “try looking in the cache instead, but if that fails, show the offline page”:

.catch( fetchError => {
  console.log(fetchError);
  return caches.match(request)
  .then( responseFromCache => {
    return responseFromCache || caches.match('/offline');
  });
})

Normally in this kind of situation, I’d use feature detection to check whether a browser understands a particular API method. But it’s a bit tricky to test for support for asynchronous waitUntil. That’s okay. I can use a try/catch statement instead. Here’s what my revised code looks like:

fetch(request)
.then( responseFromFetch => {
  let copy = responseFromFetch.clone();
  try {
    fetchEvent.waitUntil(
      caches.open(cacheName)
      .then( cache => {
        cache.put(request, copy);
      })
    );
  } catch (error) {
    console.log(error);
  }
  return responseFromFetch;
})

Now I’ve managed to localise the error. If a browser doesn’t understand the bit where I say fetchEvent.waitUntil, it will execute the code in the catch clause, and then carry on as usual. (I realise it’s a bit confusing that there are two different kinds of catch clauses going on here: on the outside there’s a .then()/.catch() combination; inside is a try{}/catch{} combination.)

At some point, when support for async waitUntil statements is universal, this precautionary measure won’t be needed, but for now wrapping them inside try doesn’t do any harm.

There are a few places in chapter five of Going Offline—the chapter about service worker strategies—where I show examples using async waitUntil. There’s nothing wrong with the code in those examples, but if you want to play it safe (especially while Samsung Internet doesn’t support async waitUntil), feel free to wrap those examples in try/catch statements. But I’m not going to make those changes part of the errata for the book. In this case, the issue isn’t with the code itself, but with browser support.

A framework for web performance

Here at Clearleft, we’ve recently been doing some front-end consultancy. That prompted me to jot down thoughts on design principles and performance:

We continued with some more performance work this week. Having already covered some of the nitty-gritty performance tactics like font-loading, image optimisation, etc., we wanted to take a step back and formulate an ongoing strategy for performance.

When it comes to web performance, the eternal question is “What should we measure?” The answer to that question will determine where you then concentrate your efforts—whatever it is you’re measuring, that’s what you’ll be looking to improve.

I started by drawing a distinction between measurements of quantities and measurements of time. Quantities are quite easy to measure. You can measure these quantities using nothing more than browser dev tools:

  • overall file size (page weight + assets), and
  • number of requests.

I think it’s good to measure these quantities, and I think it’s good to have a performance budget for them. But I also think they’re table stakes. They don’t actually tell you much about the impact that performance is having on the user experience. For that, we need to enumerate moments in time:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, and
  • time to first meaningful interaction.

There’s one more moment in time, which is the time until DOM content is loaded. But I’m not sure that has a direct effect on how performance is perceived, so it feels like it belongs more in the category of quantities than time.

Next, we listed out all the factors that could affect each of the moments in time. For example, the time to first byte depends on the speed of the network that the user is on. It also depends on how speedily your server (or Content Delivery Network) can return a response. Meanwhile, time to first render is affected by the speed of the user’s network, but it’s also affected by how many blocking elements are on the critical path.

By listing all the factors out, we can draw a distinction between the factors that are outside of our control, and the factors that we can do something about. So while we might not be able to do anything about the speed of the user’s network, we might well be able to optimise the speed at which our server returns a response, or we might be able to defer some assets that are currently blocking the critical path.

Factors
1st byte
  • server speed
  • network speed
1st render
  • network speed
  • critical path assets
1st meaningful paint
  • network speed
  • font-loading strategy
  • image optimisation
1st meaningful interaction
  • network speed
  • device processing power
  • JavaScript size

So far, everything in our list of performance-affecting factors is related to the first visit. It’s worth drawing up a second list to document all the factors for subsequent visits. This will look the same as the list for first visits, but with the crucial difference that caching now becomes a factor.

First visit factors and repeat visit factors

1st byte
  • first visit: server speed, network speed
  • repeat visit: server speed, network speed, caching
1st render
  • first visit: network speed, critical path assets
  • repeat visit: network speed, critical path assets, caching
1st meaningful paint
  • first visit: network speed, font-loading strategy, image optimisation
  • repeat visit: network speed, font-loading strategy, image optimisation, caching
1st meaningful interaction
  • first visit: network speed, device processing power, JavaScript size
  • repeat visit: network speed, device processing power, JavaScript size, caching

Alright. Now it’s time to get some numbers for each of the four moments in time. I use Web Page Test for this. Choose a realistic setting, like 3G on an Android from the East coast of the USA. Under advanced settings, be sure to select “First View and Repeat View” so that you can put those numbers in two different columns.

Here are some numbers for adactio.com:

  • 1st byte: 1.476 seconds (first visit), 1.215 seconds (repeat visit)
  • 1st render: 2.633 seconds (first visit), 1.930 seconds (repeat visit)
  • 1st meaningful paint: 2.633 seconds (first visit), 1.930 seconds (repeat visit)
  • 1st meaningful interaction: 2.868 seconds (first visit), 2.083 seconds (repeat visit)

I’m getting the same numbers for first render as first meaningful paint. That tells me that there’s no point in trying to optimise my font-loading, for example …which makes total sense, because adactio.com isn’t using any web fonts. But on a different site, you might see a big gap between those numbers.

I am seeing a gap between time to first byte and time to first render. That tells me that I might be able to get some blocking requests off the critical path. Sure enough, I’m currently referencing an external stylesheet in the head of adactio.com—if I were to inline critical styles and defer the loading of that stylesheet, I should be able to narrow that gap.
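
One hedged sketch of that deferral (the stylesheet path is made up; the critical styles would be inlined in the head):

// Load the (made-up) stylesheet only after first render,
// keeping its request off the critical path.
window.addEventListener('load', () => {
  const stylesheet = document.createElement('link');
  stylesheet.rel = 'stylesheet';
  stylesheet.href = '/css/styles.css';
  document.head.appendChild(stylesheet);
});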

A straightforward site like adactio.com isn’t going to have much to worry about when it comes to the time to first meaningful interaction, but on other sites, this can be a significant bottleneck. If you’re sending UI elements in the initial HTML, but then waiting for JavaScript to “hydrate” those elements into working, the user can end up in an uncanny valley of tapping on page elements that look fine, but aren’t ready yet.

My point is, you’re going to see very different distributions of numbers depending on the kind of site you’re testing. There’s no one-size-fits-all metric to focus on.

Now that you’ve got numbers for how your site is currently performing, you can create two new columns: one of those is a list of first-visit targets, the other is a list of repeat-visit targets for each moment in time. Try to keep them realistic.

For example, if I could reduce the time to first render on adactio.com by 0.5 seconds, my goals would look like this:

  • 1st byte: 1.476 seconds (first visit goal), 1.215 seconds (repeat visit goal)
  • 1st render: 2.133 seconds (first visit goal), 1.430 seconds (repeat visit goal)
  • 1st meaningful paint: 2.133 seconds (first visit goal), 1.430 seconds (repeat visit goal)
  • 1st meaningful interaction: 2.368 seconds (first visit goal), 1.583 seconds (repeat visit goal)

See how the 0.5 seconds saving cascades down into the other numbers?

Alright! Now I’ve got something to aim for. It might also be worth having an extra column to record which of the moments in time are high priority, which are medium priority, and which are low priority.

Priority
  • 1st byte: Medium
  • 1st render: High
  • 1st meaningful paint: Low
  • 1st meaningful interaction: Low

Your goals and priorities may be quite different.

I think this is a fairly useful framework for figuring out where to focus when it comes to web performance. If you’d like to give it a go, I’ve made a web performance chart for you to print out and fill in. Here’s a PDF version if that’s easier for printing. Or you can download the HTML version if you want to edit it.

I have to say, I’m really enjoying the front-end consultancy work we’ve been doing at Clearleft around performance and related technologies, like offline functionality. I’d like to do more of it. If you’d like some help in prioritising performance at your company, please get in touch. Let’s make the web faster together.

The top four web performance challenges

Danielle and I have been doing some front-end consultancy for a local client recently.

We’ve both been enjoying it a lot—it’s exhausting but rewarding work. So if you’d like us to come in and spend a few days with your company’s dev team, please get in touch.

I’ve certainly enjoyed the opportunity to watch Danielle in action, leading a workshop on refactoring React components in a pattern library. She’s incredibly knowledgeable in that area.

I’m clueless when it comes to React, but I really enjoy getting down to the nitty-gritty of browser features—HTML, CSS, and JavaScript APIs. Our skillsets complement one another nicely.

This recent work was what prompted my thoughts around the principles of robustness and least power. We spent a day evaluating a continuum of related front-end concerns: semantics, accessibility, performance, and SEO.

When it came to performance, a lot of the work was around figuring out the most suitable metric to prioritise:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, or
  • time to first meaningful interaction.

And that doesn’t even cover the more easily-measurable numbers like:

  • overall file size,
  • number of requests, or
  • pagespeed insights score.

One outcome was to realise that there’s a tendency (in performance, accessibility, or SEO) to focus on what’s easily measurable, not because it’s necessarily what matters, but precisely because it is easy to measure.

Then we got down to some nuts’n’bolts technology decisions. I took a step back and looked at the state of performance across the web. I thought it would be fun to rank the most troublesome technologies in order of tricksiness. I came up with a top four list.

Here we go, counting down from four to the number one spot…

4. Web fonts

Coming in at number four, it’s web fonts. Sometimes it’s the combined weight of multiple font files that’s the problem, but more often than not, it’s the perceived performance that suffers (mostly because of when the web fonts appear).

Fortunately there’s a straightforward question to ask in this situation: WWZD—What Would Zach Do?

3. Images

At the number three spot, it’s images. There are more of them and they just seem to be getting bigger all the time. And yet, we have more tools at our disposal than ever—better file formats, and excellent browser support for responsive images. Heck, we’re even getting the ability to lazy load images in HTML now.

So, as with web fonts, it feels like the impact of images on performance can be handled, as long as you give them some time and attention.

2. Our JavaScript

Just missing out on making the top spot is the JavaScript that we send down the pipe to our long-suffering users. There’s nothing wrong with the code itself—I’m sure it’s very good. There’s just too damn much of it. And that’s a real performance bottleneck, especially on mobile.

So stop sending so much JavaScript—a solution as simple as Monty Python’s instructions for playing the flute.

1. Other people’s JavaScript

At number one with a bullet, it’s all the crap that someone else tells us to put on our websites. Analytics. Ads. Trackers. Beacons. “It’s just one little script”, they say. And then that one little script calls in another, and another, and another.

It’s so disheartening when you’ve devoted your time and energy into your web font loading strategy, and optimising your images, and unbundling your JavaScript …only to have someone else’s JavaScript just shit all over your nice performance budget.

Here’s the really annoying thing: when I go to performance conferences, or participate in performance discussions, you know who’s nowhere to be found? The people making those third-party scripts.

The narrative around front-end performance is that it’s up to us developers to take responsibility for how our websites perform. But by far the biggest performance impact comes from third-party scripts.

There is a solution to this, but it’s not a technical one. We could refuse to add overweight (and in many cases, unethical) third-party scripts to the sites we build.

I have many, many issues with Google’s AMP project, but I completely acknowledge that it solves a political problem:

No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts. This is A-okay with me.

The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web.

But how can we take that lesson from AMP and apply it to all our web pages? If we simply refuse to be the one to add those third-party scripts, we get fired, and somebody else comes in who is willing to poison web pages with third-party scripts. There’s nothing to stop companies doing that.

Unless…

Suppose we were to all make a pact that we would stand in solidarity with any of our fellow developers in that sort of situation. A sort of joining-together. A union, if you will.

There is power in a factory, power in the land, power in the hands of the worker, but it all amounts to nothing if together we don’t stand.

There is power in a union.

Robustness and least power

There’s a great article by Steven Garrity over on A List Apart called Design with Difficult Data. It runs through the advantages of using unusual content to stress-test interfaces, referencing Postel’s Law, AKA the robustness principle:

Be conservative in what you send, be liberal in what you accept.

Even though the robustness principle was formulated for packet-switching, I see it at work in all sorts of disciplines, including design. A good example is in best practices for designing forms:

Every field you ask users to fill out requires some effort. The more effort is needed to fill out a form, the less likely users will complete the form. That’s why the foundational rule of form design is shorter is better — get rid of all inessential fields.

In other words, be conservative in the number of form fields you send to users. But then, when it comes to users filling in those fields:

It’s very common for a few variations of an answer to a question to be possible; for example, when a form asks users to provide information about their state, and a user responds by typing their state’s abbreviation instead of the full name (for example, CA instead of California). The form should accept both formats, and it’s the developer’s job to convert the data into a consistent format.

In other words, be liberal in what you accept from users.

I find the robustness principle to be an immensely powerful way of figuring out how to approach many design problems. When it comes to figuring out what specific tools or technologies to use, there’s an equally useful principle: the rule of least power:

Choose the least powerful language suitable for a given purpose.

On the face of it, this sounds counter-intuitive; why forego a powerful technology in favour of something less powerful?

Well, power comes with a price. Powerful technologies tend to be more complex, which means they can be trickier to use and trickier to swap out later.

Take the front-end stack, for example: HTML, CSS, and JavaScript. HTML and CSS are declarative, so you don’t get as much precise control as you get with an imperative language like JavaScript. But JavaScript comes with a steeper learning curve and a stricter error-handling model than HTML or CSS.

As a general rule, it’s always worth asking if you can accomplish something with a less powerful technology:

In the web front-end stack — HTML, CSS, JS, and ARIA — if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

  • Instead of using JavaScript to do animation, see if you can do it in CSS instead.
  • Instead of using JavaScript to do simple client-side form validation, try to use HTML input types and attributes like required.
  • Instead of using ARIA to give a certain role value to a div or span, try to use a more suitable HTML element instead.

It sounds a lot like the KISS principle: Keep It Simple, Stupid. But whereas the KISS principle can be applied within a specific technology—like keeping your CSS manageable—the rule of least power is all about evaluating technology; choosing the most appropriate technology for the task at hand.

There are some associated principles, like YAGNI: You Ain’t Gonna Need It. That helps you avoid picking a technology that’s too powerful for your current needs, but which might be suitable in the future: premature optimisation. Or, as Rachel put it, stop solving problems you don’t yet have:

So make sure every bit of code added to your project is there for a reason you can explain, not just because it is part of some standard toolkit or boilerplate.

There’s no shortage of principles, laws, and rules out there, and I find many of them very useful, but if I had to pick just two that are particularly applicable to my work, they would be the robustness principle and the rule of least power.

After all, if they’re good enough for Tim Berners-Lee…

Console methods

Whenever I create a fetch event inside a service worker, my code roughly follows the same pattern. There’s a then clause which gets executed if the fetch is successful, and a catch clause in case anything goes wrong:

fetch( request)
.then( fetchResponse => {
    // Yay! It worked.
})
.catch( fetchError => {
    // Boo! It failed.
});

In my book—Going Offline—I’m at pains to point out that those arguments being passed into each clause are yours to name. In this example I’ve called them fetchResponse and fetchError but you can call them anything you want.

I always do something with the fetchResponse inside the then clause—either I want to return the response or put it in a cache.

But I rarely do anything with fetchError. Because of that, I’ve sometimes made the mistake of leaving it out completely:

fetch( request)
.then( fetchResponse => {
    // Yay! It worked.
})
.catch( () => {
    // Boo! It failed.
});

Don’t do that. I think there’s some talk of making the error argument optional, but for now, some browsers will get upset if it’s not there.

So always include that argument, whether you call it fetchError or anything else. And seeing as it’s an error, this might be a legitimate case for outputting it to the browser’s console, even in production code.

And yes, you can output to the console from a service worker. Even though a service worker can’t access anything relating to the document object (and there’s no window either), the console object is still available in the worker’s global scope.

My muscle memory when it comes to sending something to the console is to use console.log:

fetch( request)
.then( fetchResponse => {
    return fetchResponse;
})
.catch( fetchError => {
    console.log(fetchError);
});

But in this case, the console.error method is more appropriate:

fetch( request)
.then( fetchResponse => {
    return fetchResponse;
})
.catch( fetchError => {
    console.error(fetchError);
});

Now when there’s a connectivity problem, anyone with a console window open will see the error displayed bold and red.

If that seems a bit strident to you, there’s always console.warn which will still make the output stand out, but without being quite so alarmist:

fetch( request)
.then( fetchResponse => {
    return fetchResponse;
})
.catch( fetchError => {
    console.warn(fetchError);
});

That said, in this case, console.error feels like the right choice. After all, it is technically an error.

Greater expectations

I got an intriguing email recently from someone who’s a member of The Session, the community website about Irish traditional music that I run. They said:

When I recently joined, I used my tablet to join. Somewhere I was able to download The Session app onto my tablet.

But there is no native app for The Session. Although, as it’s a site that I built, it is, of course, a progressive web app.

They went on to say:

I wanted to put the app on my phone but I can’t find the app to download it. Can I have the app on more than one device? If so, where is it available?

I replied saying that yes, you can absolutely have it on more than one device:

But you don’t find The Session app in the app store. Instead you go to the website https://thesession.org and then add it to your home screen from your browser.

My guess is that this person had added The Session to the home screen of their Android tablet, probably following the “add to home screen” prompt. I recently added some code to use the window.beforeinstallprompt event so that the “add to home screen” prompt would only be shown to visitors who sign up or log in to The Session—a good indicator of engagement, I reckon, and it should reduce the chance of the prompt being dismissed out of hand.
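
Something along these lines (a sketch, not the actual code from The Session; the showInstallPrompt hook is a stand-in for whatever engagement signal you use):

// A sketch of deferring the "add to home screen" prompt:
let deferredPrompt;

addEventListener('beforeinstallprompt', promptEvent => {
  promptEvent.preventDefault(); // stop the browser prompting straight away
  deferredPrompt = promptEvent; // stash the event for later
});

// Then, once the visitor has signed up or logged in…
function showInstallPrompt() {
  if (deferredPrompt) {
    deferredPrompt.prompt();
    deferredPrompt = null;
  }
}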

So this person added The Session to their home screen—probably as a result of being prompted—and then used it just like any other app. At some point, they didn’t even remember how the app got installed:

Success! I did it. Thanks. My problem was I was looking for an app to download.

On the one hand, this is kind of great: here’s an example where, in the user’s mind, there’s literally no difference between the experience of using a progressive web app and using a native app. Win!

But on the other hand, the expectation is still that apps are to be found in an app store, not on the web. This expectation is something I wrote about recently (and Justin wrote a response to that post). I finished by saying:

Perhaps the inertia we think we’re battling against isn’t such a problem as long as we give people a fast, reliable, engaging experience.

When this member of The Session said “My problem was I was looking for an app to download”, I responded by saying:

Well, I take that as a compliment—the fact that once the site is added to your home screen, it feels just like a native app. :-)

And they said:

Yes, it does!

The history of design systems at Clearleft

Danielle has posted a brief update on Fractal:

We decided to ask the Fractal community for help, and the response has been overwhelming. We’ve received so many offers of support in all forms that we can safely say that development will be starting up again shortly.

It’s so gratifying to see that other people are finding Fractal to be as useful to them as it is to us. We very much appreciate all their support!

Although Fractal itself is barely two years old, it’s part of a much longer legacy at Clearleft.

It all started with Natalie. She gave a presentation back in 2009 called Practical Maintainable CSS. She talks about something called a pattern portfolio—a deliverable that expresses every component and documents how the markup and CSS should be used.

When Anna was interning at Clearleft, she was paired up with Natalie so she was being exposed to these ideas. She then expanded on them, talking about Front-end Style Guides. She literally wrote the book on the topic, and started curating the fantastic collection of examples at styleguides.io.

When Paul joined Clearleft, it was a perfect fit. He was already obsessed with style guides (like the BBC’s Global Experience Language) and started writing and talking about styleguides for the web:

At Clearleft, rather than deliver an inflexible set of static pages, we present our code as a series of modular components (a ‘pattern portfolio’) that can be assembled into different configurations and page layouts as required.

Such systematic thinking was instigated by Natalie, yet this is something we continually iterate upon.

To see the evolution of Paul’s thinking, you can read his three part series from last year on designing systems:

  1. Theory, Practice, and the Unfortunate In-between,
  2. Layers of Longevity, and
  3. Components and Composition.

Later, Charlotte joined Clearleft as a junior developer, and up until that point, hadn’t been exposed to the idea of pattern libraries or design systems. But it soon became clear that she had found her calling. She wrote a brilliant article for A List Apart called From Pages to Patterns: An Exercise for Everyone and she started speaking about design systems at conferences like Beyond Tellerrand. Here, she acknowledges the changing terminology over the years:

Pattern portfolio is a term used by Natalie Downe when she started using the technique at Clearleft back in 2009.

Front-end style guides is another term I’ve heard a lot.

Personally, I don’t think it matters what you call your system as long as it’s appropriate to the project and everyone uses it. Today I’m going to use the term “pattern library”.

(Mark was always a fan of the term “component library”.)

Now Charlotte is a product manager at Ansarada in Sydney and the product she manages is …the design system!

Thinking back to my work on starting design systems, I didn’t realise straight away that I was working on a product. Yet, the questions we ask are similar to those we ask of any product when we start out. We make decisions on things like: design, architecture, tooling, user experience, code, releases, consumption, communication, and more.

It’s been fascinating to watch the evolution of design systems at Clearleft, accompanied by an evolution in language: pattern portfolios; front-end style guides; pattern libraries; design systems.

There’s been a corresponding evolution in prioritisation. Where Natalie was using pattern portfolios as a deliverable for handover, Danielle is now involved in the integration of design systems within a client’s team. The focus on efficiency and consistency that Natalie began is now expressed in terms of design ops—creating living systems that everyone is involved in.

When I step back and look at the history of design systems on the web, there are some obvious names that have really driven their evolution and adoption, like Jina Anne, Brad Frost, and Alla Kholmatova. But I’m amazed at how many people have been through Clearleft’s doors and contributed so, so much to this field:

Natalie Downe, Anna Debenham, Paul Lloyd, Mark Perkins, Charlotte Jackson, and Danielle Huntrods …thank you all!

Components and concerns

We tend to like false dichotomies in the world of web design and web development. I’ve noticed one recently that keeps coming up in the realm of design systems and components.

It’s about separation of concerns. The web has a long history of separating structure, presentation, and behaviour through HTML, CSS, and JavaScript. It has served us very well. If you build in that order, ensuring that something works (to some extent) before adding the next layer, the result will be robust and resilient.

But in this age of components, many people are pointing out that it makes sense to separate things according to their function. Here’s Diana Mounter in her excellent article about design systems at Github:

Rather than separating concerns by languages (such as HTML, CSS, and JavaScript), we’re working towards a model of separating concerns at the component level.

This echoes a point made previously in a slidedeck by Cristiano Rastelli.

Separating interfaces according to the purpose of each component makes total sense …but that doesn’t mean we have to stop separating structure, presentation, and behaviour! Why not do both?

There’s nothing in the “traditional” separation of concerns on the web (HTML/CSS/JavaScript) that restricts it only to pages. In fact, I would say it works best when it’s applied on smaller scales.

In her article, Pattern Library First: An Approach For Managing CSS, Rachel advises starting every component with good markup:

Your starting point should always be well-structured markup.

This ensures that your content is accessible at a very basic level, but it also means you can take advantage of normal flow.

That’s basically an application of starting with the rule of least power.

In chapter 6 of Resilient Web Design, I outline the three-step process I use to build on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

That chapter is filled with examples of applying those steps at the level of an entire site or product, but it doesn’t need to end there:

We can apply the three‐step process at the scale of individual components within a page. “What is the core functionality of this component? How can I make that functionality available using the simplest possible technology? Now how can I enhance it?”

There’s another shared benefit to separating concerns when building pages and building components. In the case of pages, asking “what is the core functionality?” will help you come up with a good URL. With components, asking “what is the core functionality?” will help you come up with a good name …something that’s at the heart of a good design system. In her brilliant Design Systems book, Alla advocates asking “what is its purpose?” in order to get a good shared language for components.

My point is this:

  • Separating structure, presentation, and behaviour is a good idea.
  • Separating an interface into components is a good idea.

Those two good ideas are not in conflict. Presenting them as though they were binary choices is like saying “I used to eat Italian food, but now I drink Italian wine.” They work best when they’re done in combination.

The trimCache function in Going Offline

Paul Yabsley wrote to let me know about an error in Going Offline. It’s rather embarrassing because it’s code that I’m using in the service worker for adactio.com but for some reason I messed it up in the book.

It’s the trimCache function in Chapter 7: Tidying Up. That’s the reusable piece of code that recursively reduces the number of items in a specified cache (cacheName) to a specified amount (maxItems). On pages 95 and 96 I describe the process of creating the function which, in the book, ends up like this:

 function trimCache(cacheName, maxItems) {
   cacheName.open( cache => {
     cache.keys()
     .then( items => {
       if (items.length > maxItems) {
         cache.delete(items[0])
         .then(
           trimCache(cacheName, maxItems)
         ); // end delete then
       } // end if
     }); // end keys then
   }); // end open
 } // end function

See the problem? It’s right there at the start when I try to open the cache like this:

cacheName.open( cache => {

That won’t work. The open method only works on the caches object—I should be passing the name of the cache into the caches.open method. So the code should look like this:

caches.open( cacheName )
.then( cache => {

Everything else remains the same. The corrected trimCache function is here:

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
  .then( cache => {
    cache.keys()
    .then(items => {
      if (items.length > maxItems) {
        cache.delete(items[0])
        .then(
          trimCache(cacheName, maxItems)
        ); // end delete then
      } // end if
    }); // end keys then
  }); // end open then
} // end function
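
For what it’s worth, calling the function is as simple as this (the cache name and limit here are illustrative):

// Illustrative usage: keep the 'pages' cache down to 35 items.
trimCache('pages', 35);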

Sorry about that! I must’ve had some kind of brainfart when I was writing (and describing) that one line of code.

You may want to deface your copy of Going Offline by taking a pen to that code example. Normally I consider the practice of writing in books to be barbarism, but in this case …go for it.

Name That Script! by Trent Walton

Trent is about to pop his AEA cherry and give a talk at An Event Apart in Boston. I’m going to attempt to liveblog this:

How many third-party scripts are loading on our web pages these days? How can we objectively measure the value of these (advertising, a/b testing, analytics, etc.) scripts—considering their impact on web performance, user experience, and business goals? We’ve learned to scrutinize content hierarchy, browser support, and page speed as part of the design and development process. Similarly, Trent will share recent experiences and explore ways to evaluate and discuss the inclusion of 3rd-party scripts.

Trent is going to speak about third-party scripts, which is funny, because a year ago, he never would’ve thought he’d be talking about this. But he realised he needed to pay more attention to:

any request made to an external URL.

Or how about this:

A resource included with a web page that the site owner doesn’t explicitly control.

When you include a third-party script, the third party can change the contents of that script.

Here are some uses:

  • advertising,
  • A/B testing,
  • analytics,
  • social media,
  • content delivery networks,
  • customer interaction,
  • comments,
  • tag managers,
  • fonts.

You get data from things like analytics and A/B testing. You get income from ads. You get content from CDNs.

But Trent has concerns. First and foremost, the user experience effects of poor performance. Also, there are the privacy implications.

Why does Trent—a designer—care about third party scripts? Well, over the years, the areas that Trent pays attention to have expanded. He’s progressed from image comps to frontend to performance to accessibility to design systems to the command line and now to third parties. But Trent has no impact on those third-party scripts. That’s very different to all those other areas.

Trent mostly builds prototypes. Those then get handed over for integration. Sometimes that means hooking it up to a CMS. Sometimes it means adding in analytics and ads. It gets really complex when you throw in third-party comments, payment systems, and A/B testing tools. Oftentimes, those third-party scripts can outweigh all the gains made beforehand. It happens with no discussion. And yet we spent half a meeting discussing a border radius value.

Delivering a performant, accessible, responsive, scalable website isn’t enough: I also need to consider the impact of third-party scripts.

Trent has spent the last few months learning about third parties so he can be better equipped to discuss them.

UX, performance and privacy impact

We feel the UX impact every day we browse the web (if we turn off our content blockers). The Food Network site has an interstitial asking you to disable your ad blocker. They promise they won’t spawn any pop-up windows. Trent turned his ad blocker off—the page was now 15 megabytes in size. And to top it off …he got a pop-up.

Privacy can be harder to perceive. We brush aside cookie notifications. What if the wording was “accept trackers” instead of “accept cookies”?

Remarketing is that experience when you’re browsing for a spatula and then every website you visit serves you ads for spatulas. That might seem harmless, but allowing access to our browsing history has serious privacy implications.

Web builders are on the front lines. It’s up to us to advocate for data protection and privacy like we do for web standards. Don’t wait to be told.

Categories of third parties

Ghostery categorises third-party providers: advertising, comments, customer interaction, essential, site analytics, social media. You can dive into each layer and see the specific third-party services on the page you’re viewing.

Analyse and itemise third-party scripts

We have “view source” for learning web development. For third parties, you need some tool to export the data. HAR files (HTTP ARchive) are JSON files that you can create from most browsers’ network request panel in dev tools. But what do you do with a .har file? The site har.tech has plenty of resources for you. That’s where Trent found the Mac app, Charles. It can open .har files. Best of all, you can export to CSV so you can share spreadsheets of the data.

You can visualise third-party requests with Simon Hearne’s excellent Request Map. It’s quite impactful for delivering a visceral reaction in a meeting—so much more effective than just saying “hey, we have a lot of third parties.” Request Map can also export to CSV.

Know industry averages

Trent wanted to know what was “normal.” He decided to analyse HAR files for Alexa’s top 50 US websites. The result was a massive spreadsheet of third-party providers. There were 213 third-party domains (which is not even the same as the number of requests). There was an average of 22 unique third-party domains per site. The usual suspects were everywhere—Google, Amazon, Facebook, Adobe—but there were many others. You can find an alphabetical index on better.fyi/trackers. Often the lesser-known domains turn out to be owned by the bigger domains.

News sites and shopping sites have the most third-party scripts, unsurprisingly.

Understand benefits

Trent realised he needed to listen and understand why third-party scripts are being included. He found out what tag managers do. They’re funnels that allow you to cram even more third-party scripts onto your website. Trent worried that this was a Pandora’s box. The tag manager interface is easy to access and use. But he was told that it’s more like a way of organising your third-party scripts under one dashboard. But still, if you get too focused on the dashboard, you could lose focus of the impact on load times. So don’t blame the tool: it’s all about how it’s used.

Take action

Establish a centre of excellence. Put standards in place—in a cross-discipline way—to define how third-party scripts are evaluated. For example:

  1. Determine the value to the business.
  2. Avoid redundant scripts and services.
  3. Fit within the established performance budget.
  4. Comply with the organisational privacy policy.

Document those decisions, maybe even in your design system.

Also, include third-party scripts within your prototypes to get a more accurate feel for the performance implications.

On a live site, you can audit third-party scripts on a regular basis. Check to see if any are redundant or if they’re exceeding the performance budget. You can monitor performance with tools like Calibre and Speed Curve to cover the time in between audits.

Make your case

Do competitive analysis. Look at other sites in your sector. It’s a compelling way to make a case for change. WPO Stats is very handy for anecdata.

You can gather comparative data with Web Page Test: you can run a full test, and you can run a test with certain third parties blocked. Use the results to kick off a discussion about the impact of those third parties.

Talk it out

Work to maintain an ongoing discussion with the entire team. As Tim Kadlec says:

Everything should have a value, because everything has a cost.

Building More Expressive Products by Val Head

It’s day two of An Event Apart in Boston and Val is giving a new talk about building expressive products:

The products we design today must connect with customers across different screen sizes, contexts, and even voice or chat interfaces. As such, we create emotional expressiveness in our products not only through visual design and language choices, but also through design details such as how interface elements move, or the way they sound. By using every tool at our disposal, including audio and animation, we can create more expressive products that feel cohesive across all of today’s diverse media and social contexts. In this session, Val will show how to harness the design details from different media to build overarching themes—themes that persist across all screen sizes and user and interface contexts, creating a bigger emotional impact and connection with your audience.

I’m going to attempt to live blog her talk. Here goes…

This is about products that intentionally express personality. When you know what your product’s personality is, you can line up your design choices to express that personality intentionally (as opposed to leaving it to chance).

Tunnel Bear has a theme around a giant bear that will protect you from all the bad things on the internet. It makes a technical product very friendly—very different from most VPN companies.

Mailchimp have been doing this for years, but with a monkey (ape, actually, Val), not a bear—Freddie. They’ve evolved and changed it over time, but it always has personality.

But you don’t need a cute animal to express personality. Authentic Weather is a sarcastic weather app. It’s quite sweary and that stands out. They use copy, bold colours, and giant type.

Personality can be more subtle, like with Stripe. They use slick animations and clear, concise design.

Being expressive means conveying personality through design. Type, colour, copy, layout, motion, and sound can all express personality. Val is going to focus on the last two: motion and sound.

Expressing personality with motion

Animation can be used to tell your story. We can do that through:

  • Easing choices (ease-in, ease-out, bounce, etc.),
  • Duration values, and offsets,
  • The properties we animate.

Here are four personality types…

Calm, soft, reassuring

You can use opacity, soft blurs, small movements, and easing curves with gradual changes. You can use:

  • fade,
  • scale + fade,
  • blur + fade,
  • blur + scale + fade.

Pro tip for blurs: the end of blurs always looks weird. Fade out with opacity before your blur gets weird.

You can use Penner easing equations to do your easings. See them in action on easings.net. They’re motion graphs plotting animation against time. The flatter the curve, the more linear the motion. They have a lot more range than the defaults you get with CSS keyword values.

For calm, soft, and reassuring, you could use easeInQuad, easeOutQuad, or easeInOutQuad. But that’s like saying “you could use dark blue.” These will get you close, but you need to work on the detail.

Confident, stable, strong

You can use direct movements, straight lines, symmetrical ease-in-outs. You should avoid blurs, bounces, and overshoots. You can use:

  • quick fade,
  • scale + fade,
  • direct start and stops.

You can use Penner equations like easeInCubic, easeOutCubic and easeInOutCubic.

Lively, energetic, friendly

You can use overshoots, anticipation, and “snappy” easing curves. You can use:

  • overshoot,
  • overshoot + scale,
  • anticipation,
  • anticipation + overshoot.

To get the sense of overshoots and anticipations you can use easing curves like easeInBack, easeOutBack, and easeInOutBack. Those aren’t the only ones though. Anything that sticks out the bottom of the graph will give you anticipation. Anything that sticks out the top of the graph will give you overshoot.

If cubic bezier curves don’t get you quite what you’re going for, you can add keyframes to your animation. You could have keyframes for: 0%, 90%, and 100% where the 90% point is past the 100% point.
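
As a sketch (the property and values are illustrative):

/* A sketch of an overshoot: the 90% frame travels past the final position. */
@keyframes overshoot {
  0%   { transform: translateX(0); }
  90%  { transform: translateX(110px); } /* past the end point */
  100% { transform: translateX(100px); }
}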

Stripe uses a touch of overshoot on their charts and diagrams; nice and subtle. Slack uses a bit of overshoot to create a sense of friendliness in their loader.

Playful, fun, lighthearted

You can use bounces, shape morphs, squashes and stretches. This is probably not the personality for a bank. But it could be for a game, or some other playful product. You can use:

  • bounce,
  • elastic,
  • morph,
  • squash and stretch (springs).

You can use easing equations for the first two, but for the others, they’re really hard to pull off with just CSS. You probably need JavaScript.

The easing curve for elastic movement is a more complicated Penner equation that can’t be done in CSS. GreenSock will help you visualise your elastic easings. For springs, you probably need a dedicated library for spring motions.

Expressing personality with sound

We don’t talk about sound much in web design. There are old angry blog posts about it. And not every website should use sound. But why don’t we even consider it on the web?

We were burnt by those terrible Flash sites with sound on every single button mouseover. And yet the Facebook native app does that today …but in a much more subtle way. The volume is mixed lower, and the sound is flatter; more like a haptic feel. And there’s more variation in the sounds. Just because we did sound badly in the past doesn’t mean we can’t do it well today.

People say they don’t want their computers making sound in an office environment. But isn’t responsive design all about how we don’t just use websites on our desktop computers?

Amber Case has a terrific book about designing products with sound, and she’s all about calm technology. She points out that the larger the display, the less important auditory and tactile feedback becomes. But on smaller screens, the need increases. Maybe that’s why we’re fine with mobile apps making sound but not with our desktop computers doing it?

People say that sound is annoying. That’s like saying siblings are annoying. Sound is annoying when it’s:

  • not appropriate for the situation,
  • played at the wrong time,
  • too loud, or
  • out of the user’s control.

But all of those are design decisions that we can control.

So what can we do with sound?

Sound can enhance what we perceive from animation. The “breathe” mode in the Calm meditation app has some lovely animation, and some great sound to go with it. The animation is just a circle getting smaller and bigger—if you took the sound away, it wouldn’t be very impressive.

Sound can also set a mood. Sirin Labs has an extreme example for the Solarin device with futuristic sounds. It’s quite reminiscent of the Flash days, but now it’s all done with browser technologies.

Sound is a powerful brand differentiator. Val now plays sounds (without visuals) from:

  • Slack,
  • Outlook Calendar.

They have strong associations for us. These are earcons: icons for the ears. They can be designed to provoke specific emotions. There was a great explanation on the Blackberry website, of all places (they had a whole design system around their earcons).

Here are some uses of sounds…

Alerts and notifications

You have a new message. You have new email. Your timer is up. You might not be looking at the screen, waiting for those events.

Navigating space

Apple TV has layers of menus. You go “in” and “out” of the layers. As you travel “in” and “out”, the animation is reinforced with sound—an “in” sound and an “out” sound.

Confirming actions

When you buy with Apple Pay, you get auditory feedback. Twitter uses sound for the “pull to refresh” action. It gives you confirmation in a tactile way.

Marking positive moments

This is a great way of making a positive impact in your users’ minds—celebrate the accomplishments. Clear—by Realmac Software—gives lovely rising auditory feedback as you tick things off your to-do list. Compare that to hardware products that only make sounds when something goes wrong—they don’t celebrate your accomplishments.

Here are some best practices for user interface sounds:

  • UI sounds should be short: less than 400ms.
  • End on an ascending interval for positive feedback or beginnings.
  • End on a descending interval for negative feedback, endings, or closings.
  • Give the user controls to stop or customise the sound.

When it comes to being expressive with sounds, different intervals can evoke different emotions:

  • Consonant intervals feel pleasant and positive.
  • Dissonant intervals feel strong, active, or negative.
  • Large intervals feel powerful.
  • Octaves convey lightheartedness.
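Putting those guidelines together, here’s a minimal Web Audio sketch of a positive earcon: two short tones ending on an ascending fifth, all over in under 400ms. The frequencies and gain values are illustrative, and it should run in response to a user action so the browser allows audio to start:

const audioContext = new AudioContext();

function tone(frequency, start, length) {
    const oscillator = audioContext.createOscillator();
    const gain = audioContext.createGain();
    oscillator.frequency.value = frequency;
    // Start at a modest volume, then fade out quickly.
    gain.gain.setValueAtTime(0.2, audioContext.currentTime + start);
    gain.gain.exponentialRampToValueAtTime(0.001, audioContext.currentTime + start + length);
    oscillator.connect(gain).connect(audioContext.destination);
    oscillator.start(audioContext.currentTime + start);
    oscillator.stop(audioContext.currentTime + start + length);
}

tone(523.25, 0, 0.15);   // C5
tone(783.99, 0.15, 0.2); // G5: an ascending fifth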

If you don’t want to design your own sounds, people have made them for you. Octave is a free library of UI sounds. You can buy sounds from motionsound.io, targeted specifically at sounds to go with motion.

Let’s wrap up by exploring where to find your product’s personality:

  • What is it trying to help users accomplish?
  • What is it like? (its mood and disposition)

You can run workshops to answer these questions. You can also do research with your users. You might have one idea about your product’s personality that’s different from your customers’. You need to project a believable personality. Talk to your customers.

Designing for Emotion has some great exercises for finding personality. Conversational Design also has some great exercises in it. Once you have the words to describe your personality, it gets easier to design for it.

So have a think about using motion and sound to express your product’s personality. Be intentional about it. It will also make the web a more interesting place.

Why Design Systems Fail by Una Kravets

I’m at An Event Apart Boston where Una is about to talk about design systems, following on from Yesenia’s excellent design systems talk. I’m going to attempt to liveblog it…

Una works at the Bustle Digital Group, which publishes a lot of different properties. She used to work at Watson, at Bluemix and at Digital Ocean. They all have something in common (other than having blue in their logos). They all had design systems that failed.

Design systems are so hot right now. They allow us to think in a componentised way, and to grow quickly. There are plenty of examples out there, like Polaris from Shopify, the Lightning design system from Salesforce, Garden from Zendesk, Gov.uk, and Code For America. Check out Anna’s excellent styleguides.io for more examples.

What exactly is a design system?

It’s a broad term. It can be a styleguide or visual pattern library. It can be design tooling (like a Sketch file). It can be a component library. It can be documentation of design or development usage. It can be voice and tone guidelines.

Styleguides

When Una was in college, she had a print rebranding job—letterheads, stationery, etc. She also had to provide design guidelines. She put this design guide on the web. It had colours, heading levels, type, logo treatments, and so on. It wasn’t for an application, but it was a design system.

Design tooling

Primer by Github is a good example of this. You can download pre-made icons, colours, etc.

Component library

This is where you get the code. Una worked on BUI at Digital Ocean, which described the interaction states of each component and how to customise interactions with JavaScript.

Code usage guidelines

AirBnB has a really good example of this. It’s a consistent code style. You can even include it in your build step with eslint-config-airbnb and stylelint-config-airbnb.

Design usage documentation

Carbon by IBM does a great job of this. It describes the criteria for deciding when to use a pattern. It’s driven by user experience considerations. They also have general guidelines on loading in components—empty states, etc. And they include animation guidelines (separately from Carbon), built on the history of IBM’s magnetic tape machines and typewriters.

Voice and tone guidelines

Of course Mailchimp is the classic example here. They break up voice and tone. Voice is not just what the company is, but what the company is not:

  • Fun but not silly,
  • Confident but not cocky,
  • Smart but not stodgy,
  • and so on.

Voiceandtone.com describes the user’s feelings at different points and how to communicate with them. There are guidelines for app users, and guidelines for readers of the company newsletter, and guidelines for readers of the blog, and so on. They even have examples of when things go wrong. The guidelines provide tips on how to help people effectively.

Why do design systems fail?

Una now asks who in the room has ever started a diet. And who has ever finished a diet? (A lot of hands go down).

Nobody uses it

At Digital Ocean, there was a design system called Buoy version 1. Una helped build a design system called Float. There was also a BUI version 2. Buoy was for product; Float was for the marketing site. A classic example of xkcd 927: competing standards. Nobody was using them.

Una checked the CSS of the final output and the design system code only accounted for 28% of the codebase. Most of the CSS was overriding the CSS in the design system.

Happy design systems scale good standards, unify component styles and code, and reduce code cruft. So why were people adding on instead of using the existing system? Because everyone was being judged on different metrics. Some teams were judged on shipping features rather than producing clean code, so the advantages of a happy design system didn’t apply to them.

Investment

It’s like going to the gym. Small incremental changes make a big difference over the long term. If you just work out for three months and then stop, you’ll lose all your progress. It’s like that with design systems. A design system has to stay in sync with the live site; if you don’t keep it up to date, people just won’t use it.

It’s really important to have a solid core. Accessibility needs to be built in from the start. And the design system needs ownership and dedicated commitment. That has to come from the organisation.

You have to start somewhere.

Communication

Communication is multidimensional; it’s not one-way. The design system owner (or team) needs to act as a bridge between designers and developers. Nobody likes to be told what to do. People need to be involved, and feel like their needs are being addressed. Make people feel like they have control over the process …even if they don’t; it’s like perceived performance—this is perceived involvement.

Ask. Listen. Make your users feel heard. Incorporate feedback.

Buy-in

Good communication is important for getting buy-in from the people who will use the design system. You also need buy-in from the product owners.

Showing is more powerful than telling. Hackathons are like candy to a budding design system—a chance to demonstrate the benefits of a design system (and get feedback). After a hackathon at Digital Ocean, everyone was talking about the design system. Weeks afterwards, one of the developers replaced Bootstrap with BUI, removing 20,000 lines of code! After seeing the impact of a design system, the developers will tell their co-workers all about it.

Solid architecture

You need to build with composability and change in mind. Primer, by Github, has a core package, and then add-ons for, say, marketing or product. That separation of concerns is great. BUI used a similar module-based approach: a core codebase, separate from iconography and grid.

Semantic versioning is another important part of having a solid architecture for your design system. You want to be able to push out minor updates without worrying about breaking changes.

Major.Minor.Patch

Use the same convention in your design files, like Sketch.

What about tech stack choice? Every company has different needs, but one thing Una recommends is: don’t wait to namespace! All your components should have some kind of prefix in the class names so they don’t clash with existing CSS.
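For example (the “ds-” prefix here is purely illustrative):

/* Every class in the system carries the prefix,
   so it can't collide with pre-existing styles. */
.ds-button {
    padding: 0.5em 1em;
    border-radius: 4px;
}
.ds-button--primary {
    background-color: #0069ff;
    color: #fff;
}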

Una mentions Solid by Buzzfeed, which I personally think is dreadful (count the number of !important declarations—you can call it “immutable” all you want).

AtlasKit by Atlassian goes all in on React. They’re trying to integrate Sketch into it, but design tooling isn’t solved yet (AirBnB are working on this too). We’re still trying to figure out how to merge the worlds of design and code.

Reduce Friction

This is what it’s all about. Using the design system has to be the path of least resistance. If the new design system is harder to use than what people are already doing, they won’t use it.

Provide hooks and tools for the people who will be using the design system. That might be mixins in Sass or it might be a script on a CDN that people can just link to.

Start early, update often. Design systems can be built retrospectively but it’s easier to do it when a new product is being built.

Bugs and cruft always increase over time. You need a mechanism in place to keep on top of it. Not just technical bugs, but visual inconsistencies.

So the five pillars of ensuring a successful design system are:

  1. Investment
  2. Communication
  3. Buy-in
  4. Solid architecture
  5. Reduce friction

When you’re starting, begin with a goal:

We are building a design system because…

Then review what you’ve already got (your existing codebase). For example, if the goal of having a design system is to increase page performance, use WebPageTest to measure how the current site is performing. If the goal is to reduce accessibility problems, use webaim.org to measure the accessibility of your current site (see also: pa11y). If the goal is to reduce the amount of CSS in your codebase, use cssstats.com to test how your current site is doing. Now that you’ve got stats, use them to get buy-in. You can also start by doing an interface inventory. Print out pages and cut them up.

Once you’ve got buy-in and commitment (in writing), then you can make technical decisions.

You can start with your atomic elements. Buttons are like the “Hello world!” of design systems. You’ve got colours, type, and different states.

Then you can compose elements by putting the base elements together.

Do you include layout in the system? That’s a challenge, and it depends on your team. If you do include layout, to what extent?

Regardless of layout, you still need to think about space: the space between base elements within a component.

Bake in accessibility: every hover state should have an equal (not opposite) focus state.
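In CSS terms, that’s as simple as always declaring the two together (reusing the illustrative prefix from earlier):

/* Focus gets the same affordance as hover, never less. */
.ds-button:hover,
.ds-button:focus {
    background-color: #0050cc;
}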

Think about states, like loading states.

Then you can start documenting. Then inform the users of the system. Carbon has a dashboard showing which components are new, which components are deprecated, and which components are being updated.

Keep consistent communication. Design and dev communication has to happen. Continuous iteration, support and communication are the most important factors in the success of a design system. Code is only 10% of a system.

Also, don’t feel like you need to copy other design systems out there. Your needs are probably very different. As Diana says, comparing your design system to the polished public ones is like comparing your life to someone’s Instagram account. To that end, Una says something potentially controversial:

You might not need a design system.

If you’re the only one at your organisation that cares about the benefits of a design system, you won’t get buy-in, and if you don’t get buy-in, the design system will fail. Maybe there’s something more appropriate for your team? After all, not everyone needs to go to the gym to get fit. There are alternatives.

Find what works for you and keep at it.

New tools for art direction on the web

I’m in Boston right now, getting ready to speak at An Event Apart. This will be my second (and last) Event Apart of the year—the other time was in Seattle back in April. After that event, I wrote about how inspired I was:

It was interesting to see repeating, overlapping themes. From a purely technical perspective, three technologies that were front and centre were:

  • CSS grid,
  • variable fonts, and
  • service workers.

From listening to other attendees, the overwhelming message received was “These technologies are here—they’ve arrived.”

I was itching to combine those technologies on a project. Coincidentally, it was around that time that I started planning to publish The Gęsiówka Story. I figured I could use that as an opportunity to tinker with those front-end technologies that I was so excited about.

But I was cautious. I didn’t want to use the latest exciting technology just for the sake of it. I was very aware of the gravity of the material I was dealing with. Documenting the story of Gęsiówka was what mattered. Any front-end technologies I used had to be in support of that.

First of all, there was the typesetting. I don’t know about you, but I find choosing the right typefaces to be overwhelming. Despite all the great tips and techniques out there for choosing and pairing typefaces, I still find myself agonising over the choice—what if there’s a better choice that I’m missing?

In this case, because I wanted to use a variable font, I had a constraint that helped reduce the possibility space. I started to comb through v-fonts.com to find a suitable typeface—I was fairly sure I wanted a serious serif.

I had one other constraint. The font file had to include English, Polish, and German glyphs. That pretty much sealed the deal for Source Serif. That only has one variable axis—weight—but I decided that this could also be an interesting constraint: how much could I wrangle out of a single typeface just using various weights?

I ended up using font weights of 75, 250, 315, 325, 340, 350, 400, and 525. Most of them were for headings or one-off uses, with a font-weight of 315 for the body copy.

(And can I just say once again how impressed I am that the founding fathers of CSS were far-sighted enough to keep those font weight ranges free for future use?)
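In the CSS, those are just ordinary numeric font-weight values; the variable font makes every value in the declared range available. A sketch of what that looks like (the file path and format string here are illustrative):

@font-face {
    font-family: "Source Serif";
    src: url("/fonts/source-serif-variable.woff2") format("woff2-variations");
    font-weight: 1 999; /* the whole weight range is available */
}

body {
    font-family: "Source Serif", serif;
    font-weight: 315;
}

h1 {
    font-weight: 75;
}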

Getting the typography right posed an interesting challenge. This was a fairly long piece of writing, so it really needed to be readable without getting tiring. But at the same time, I didn’t want it to be exactly pleasant to read—that wouldn’t do the subject matter justice. I wanted the reader to feel the seriousness of the story they were reading, without being fatigued by its weight.

Colour and type went a long way to communicating that feeling. The grid sealed the deal.

The Gęsiówka Story is mostly one single column of text, so on the face of it, there isn’t much opportunity to go crazy with CSS Grid. But I realised I could use a grid to create a winding effect for the text. I had to be careful though: I didn’t want it to become uncomfortable to read. I wanted to create a slightly unsettling effect.

Every section element is turned into a seven-column grid container:

section {
    display: grid;
    grid-column-gap: 2em;
    grid-template-columns: 2em repeat(5, 1fr) 2em;
}

The first and last columns are the same width as the gutters (2em), effectively creating “outer” gutters for the grid. Each paragraph within the section takes up six of the seven columns. I use nth-of-type to alternate which six columns are used (the first six or the last six). That creates the staggered indentation:

section > p {
    grid-column: 1/7;
}
section > p:nth-of-type(even) {
    grid-column: 2/8;
}

Staggered grid.

That might seem like overkill just to indent every second paragraph by 4em, but I then used the same grid dimensions to lay out figure elements with images and captions:

section > figure {
    display: grid;
    grid-column-gap: 2em;
    grid-template-columns: 2em repeat(5, 1fr) 2em;
}

Then I can lay out differently proportioned images across different ranges of the grid:

section > figure.landscape > img {
    grid-column: 1/5;
}
section > figure.landscape > figcaption {
    grid-column: 5/8;
}
section > figure.portrait > img {
    grid-column: 1/4;
}
section > figure.portrait > figcaption {
    grid-column: 4/8;
}

Because they’re positioned on the same grid as the paragraphs, everything lines up nicely (and yes, if subgrid existed, I wouldn’t have to redeclare the grid dimensions for the figures).
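(For the record, a subgrid version might look something like this. It’s speculative, since no browser supports it as I write:)

section > figure {
    grid-column: 1 / 8; /* span all seven columns of the parent grid */
    display: grid;
    grid-template-columns: subgrid; /* inherit the parent's tracks */
}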

Finally, I wanted to make sure that the whole thing could be read offline. After all, once you’ve visited the URL once, there’s really no reason to make any more requests to the server. Static documents—and books—are the perfect candidates for an “offline first” approach: always look in the cache, and only go to the network as a last resort.

In this case I used a variation of my minimal viable service worker script, and the result is a very short set of instructions. There’s a little bit of pre-caching going on: I grab the variable font and the HTML page itself (which includes the CSS inlined).
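The pre-caching part of a service worker script like that might look something like this (the cache name and file paths are illustrative):

const cacheName = 'gesiowka-v1';

addEventListener('install', event => {
    event.waitUntil(
        caches.open(cacheName)
        .then(cache => cache.addAll([
            '/',  // the HTML page itself (with the CSS inlined)
            '/fonts/source-serif-variable.woff2'
        ]))
    );
});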

So there you have it: variable fonts, CSS grid, and service workers: three exciting front-end technologies, all of which can be applied as progressive enhancements on top of the core content.

Once again, I find that it’s personal projects that offer the most opportunities to try out new or interesting techniques. And The Gęsiówka Story is a very personal project indeed.

Praise for Going Offline

I’m very, very happy to see that my new book Going Offline is proving to be accessible and unintimidating to a wide audience—that was very much my goal when writing it.

People have been saying nice things on their blogs, which is very gratifying. It’s even more gratifying to see people use the knowledge gained from reading the book to turn those blogs into progressive web apps!

Sara Soueidan:

It doesn’t matter if you’re a designer, a junior developer or an experienced engineer — this book is perfect for anyone who wants to learn about Service Workers and take their Web application to a whole new level.

I highly recommend it. I read the book over the course of two days, but it can easily be read in half a day. And as someone who rarely ever reads a book cover to cover (I tend to quit halfway through most books), this says a lot about how good it is.

Eric Lawrence:

I was delighted to discover a straightforward, very approachable reference on designing a ServiceWorker-backed application: Going Offline by Jeremy Keith. The book is short (I’m busy), direct (“Here’s a problem, here’s how to solve it“), opinionated in the best way (landmine-avoiding “Do this“), and humorous without being confusing. As anyone who has received unsolicited (or solicited) feedback from me about their book knows, I’m an extremely picky reader, and I have no significant complaints on this one. Highly recommended.

Ben Nadel:

If you’re interested in the “offline first” movement or want to learn more about Service Workers, Going Offline by Jeremy Keith is a really gentle and highly accessible introduction to the topic.

Daniel Koskine:

Jeremy nails it again with this beginner-friendly introduction to Service Workers and Progressive Web Apps.

Donny Truong:

Jeremy’s technical writing is as superb as always. Similar to his first book for A Book Apart, which cleared up all my confusions about HTML5, Going Offline helps me put the pieces of the service workers’ puzzle together.

People have been saying nice things on Twitter too…

Aaron Gustafson:

It’s a fantastic read and a simple primer for getting Service Workers up and running on your site.

Ethan Marcotte:

Of course, if you’re looking to take your website offline, you should read @adactio’s wonderful book

Lívia De Paula Labate:

Ok, I’m done reading @adactio’s Going Offline book and as my wife would say, it’s the bomb dot com.

If that all sounds good to you, get yourself a copy of Going Offline in paperback, or ebook (or both).

Detecting image requests in service workers

In Going Offline, I dive into the many different ways you can use a service worker to handle requests. You can filter by the URL, for example; treating requests for pages under /blog or /articles differently from other requests. Or you can filter by file type. That way, you can treat requests for, say, images very differently to requests for HTML pages.

One of the ways to check what kind of request you’re dealing with is to see what’s in the accept header. Here’s how I show the test for HTML pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Handle your page requests here.
}

So, logically enough, I show the same technique for detecting image requests:

if (request.headers.get('Accept').includes('image')) {
    // Handle your image requests here.
}

That should catch any files that have image in the request’s accept header, like image/png or image/jpeg or image/svg+xml and so on.

But there’s a problem. Both Safari and Firefox now use a much broader accept header: */*

My if statement evaluates to false in those browsers. Sebastian Eberlein wrote about his workaround for this issue, which involves looking at file extensions instead:

if (request.url.match(/\.(jpe?g|png|gif|svg)$/)) {
    // Handle your image requests here.
}

So consider this post a patch for chapter five of Going Offline (page 68 specifically). Wherever you see:

if (request.headers.get('Accept').includes('image'))

Swap it out for:

if (request.url.match(/\.(jpe?g|png|gif|svg)$/))

And feel free to add any other image file extensions (like webp) in there too.

Registering service workers

In chapter two of Going Offline, I talk about registering your service worker wrapped up in some feature detection:

<script>
if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js');
}
</script>

But I also make reference to a declarative way of doing this that isn’t very widely supported:

<link rel="serviceworker" href="/serviceworker.js">

No need for feature detection there. Thanks to the liberal error-handling model of HTML (and CSS), browsers will just ignore what they don’t understand, which isn’t the case with JavaScript.

Alas, it looks like that nice declarative alternative isn’t going to be making its way into browsers anytime soon. It has been removed from the HTML spec. That’s a shame. I have a preference for declarative solutions where possible—they’re certainly easier to teach. But in this case, the JavaScript alternative isn’t too onerous.

So if you’re reading Going Offline, when you get to the bit about someday using the rel value, you can cast a wistful gaze into the distance, or shed a tiny tear for what might have been …and then put it out of your mind and carry on reading.

Clearleft.com is a progressive web app

What’s that old saying? The cobbler’s children have no shoes that work offline. Or something.

It’s been over a year since the Clearleft site relaunched and I listed some of the next steps I had planned:

Service worker. It’s a no-brainer. Now that the Clearleft site is (finally!) running on HTTPS, having a simple service worker to cache static assets like CSS, JavaScript and some images seems like the obvious next step.

You know how it is. Those no-brainer tasks are exactly the kind of thing that end up on a to-do list without ever quite getting to-done. Meanwhile I’ve been writing and speaking about how any website can be a progressive web app. I think Alanis Morissette used to sing about this sort of situation.

Enough is enough! Clearleft.com is now a progressive web app. It has a manifest file and a service worker script.

The service worker logic is fairly straightforward, and taken almost verbatim from Going Offline. As you navigate around the site, the service worker applies different logic depending on the kind of file you’re requesting:

  • Pages are served fresh from the network, falling back to the cache when there’s a problem.
  • Everything else is served from the cache where possible, resorting to the network only if there’s no match in the cache—quite the performance boost!

In both cases, if a page or a file is retrieved from the network, it gets put into a cache. I’ve got one cache for pages, and another for everything else. And even if a file is retrieved from that cache, I still fire off a fetch request to grab a fresh copy for the cache. So while there’s a chance that a stale file might be served up, it will only ever be slightly stale, and the next time it’s requested, it’ll be fresh.
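Here’s a simplified sketch of that fetch logic (the cache names are illustrative, and the real script handles more edge cases):

addEventListener('fetch', event => {
    const request = event.request;
    // Pages: go to the network first, falling back to the cache.
    if (request.headers.get('Accept').includes('text/html')) {
        event.respondWith(
            fetch(request)
            .then(responseFromFetch => {
                const copy = responseFromFetch.clone();
                caches.open('pages')
                .then(pagesCache => pagesCache.put(request, copy));
                return responseFromFetch;
            })
            .catch(() => caches.match(request))
        );
        return;
    }
    // Everything else: serve from the cache where possible,
    // but still fetch a fresh copy to update the cache.
    event.respondWith(
        caches.match(request)
        .then(responseFromCache => {
            const fetchPromise = fetch(request)
            .then(responseFromFetch => {
                const copy = responseFromFetch.clone();
                caches.open('files')
                .then(filesCache => filesCache.put(request, copy));
                return responseFromFetch;
            });
            return responseFromCache || fetchPromise;
        })
    );
});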

In the worst-case scenario, when a page can’t be retrieved from the network or the cache, you end up seeing a custom offline page. There you can see a list of any pages that are cached (meaning you can revisit them even without an internet connection).

A custom offline page showing a list of URLs.

It’s not ideal—page titles would be friendlier than URLs—but it’s a start. I’m sure I’ll revisit it soon. Honest.

Oh, and after a year of procrastinating about doing this, guess how long it took? About half a day. Admittedly, this isn’t my first progressive web app, and the more you build ‘em, the easier it gets. Still, it’s a classic example of a small investment of time leading to a big improvement in performance and user experience.

If you think your company’s website could benefit from being a progressive web app (and believe me, it definitely could), you have a couple of options:

  1. Arm yourself with a copy of Going Offline and give it a go yourself. Or
  2. Get in touch with Clearleft. We can help you. (See, I can say that with a straight face now that we’re practicing what we preach.)

Either way, don’t dilly dally …like I did.

Expectations

I noticed something interesting recently about how I browse the web.

It used to be that I would notice if a site were responsive. Or, before responsive web design was a thing, I would notice if a site was built with a fluid layout. It was worthy of remark, because it was exceptional—the default was fixed-width layouts.

But now, that has flipped completely around. Now I notice if a site isn’t responsive. It feels …broken. It’s like coming across an embedded map that isn’t a slippy map. My expectations have reversed.

That’s kind of amazing. If you had told me ten years ago that liquid layouts and media queries would become standard practice on the web, I would’ve found it very hard to believe. I spent the first decade of this century ranting in the wilderness about how the web was a flexible medium, but I felt like the laughable guy on the street corner with an apocalyptic sandwich board. Well, who’s laughing now?

Anyway, I think it’s worth stepping back every now and then and taking stock of how far we’ve come. Mind you, in terms of web performance, the trend has unfortunately been in the wrong direction—big, bloated websites have become the norm. We need to change that.

Now, maybe it’s because I’ve been somewhat obsessed with service workers lately, but I’ve started to notice my expectations around offline behaviour changing recently too. It’s not that I’m surprised when I can’t revisit an article without an internet connection, but I do feel disappointed—like an opportunity has been missed.

I really notice it when I come across little self-contained browser-based games.

Those games are great! I particularly love Battleship Solitaire—it has a zen-like addictive quality to it. If I load it up in a browser tab, I can then safely go offline because the whole game is delivered in the initial download. But if I try to navigate to the game while I’m offline, I’m out of luck. That’s a shame. These snack-sized casual games feel like the perfect use-case for working offline (or, even if there is an internet connection, they could still be speedily served up from a cache).

I know that my expectations about offline behaviour aren’t shared by most people. The idea of visiting a site even when there’s no internet connection doesn’t feel normal …yet.

But perhaps that expectation will change. It’s happened before.

(And if you want to be ready when those expectations change, I’ve written Going Offline for you.)