
Designing Intrinsic Layouts by Jen Simmons

Alright, it’s time for the final talk of day one of An Event Apart in Seattle. The trifecta of CSS talks is going to finish with Jen Simmons talking about Designing Intrinsic Layouts. Here’s the description:

Twenty-five years after the web began, we finally have a real toolkit for creating layouts. Combining CSS Grid, Flexbox, Multicolumn, Flow layout and Writing Modes gives us the technical ability to build layouts today without the horrible hacks and compromises of the past. But what does this mean for our design medium? How might we better leverage the art of graphic design? How will we create something practical, useable, and realistically doable?

In a talk full of specific examples, Jen will walk you through the thinking process of creating accessible & reusable page and component layouts. For the last four years, Jen’s been getting audiences excited about what, when, and why. Now it’s time for how.

I’m not sure if live-blogging is going to work given the visual nature of this talk, but I’ll give it a try…

How many of us have written CSS using display: grid? Quite a few. How many people feel they have a good grip on it? Not so many.

Jen has spent the last few years encouraging people to really push the boundaries of graphic design on the web now that we have the tools to do so in CSS. But Jen is not here today to talk about amazing new things. Instead she’s going to show the “how?” The code is on labs.jensimmons.com.

Let’s look at laying out a header. You might have a header element with a logo image, the site name in an h1, and a nav element with the navigation. The logo, the site name, and the navigation are direct children of the parent header. By default we get everything stacked vertically.

<header role="banner">
    <img class="logo" src="..." alt="...">
    <h1>Site title</h1>
    <nav>
        <ul>
            <li><a href="...">Home</a></li>
            <li><a href="...">Episodes</a></li>
            <li><a href="...">Guests</a></li>
            <li><a href="...">Subscribe</a></li>
            <li><a href="...">About</a></li>
        </ul>
    </nav>
</header>

Why should we care about starting with semantic HTML? It matters for reusability and accessibility, but also for reader modes in browsers. These tools remove the cruft. If you mark up your content well, it will play nicely with reader modes. Interestingly, there are no metrics around how many people are using reader modes (by design). Mozilla has a product called Pocket that’s a “read later” app. It can also turn saved articles into a podcast for you. Well marked up content matters for audio playback.

Now let’s start applying some CSS. Fonts, colours, stuff like that. Everything is still stacking vertically though. It’s flow content. This can be our fallback layout. Now let’s apply our own layout. We could use float: left on the logo. Now we need some margins. We can try applying widths and floats to the h1 and the nav. Now we need a clearfix to get the parent to stretch to the full height of the content. It’s hacky. floats suck. But it’s all we had until we got flexbox. But even using flexbox for this kind of layout is a hack too. What we really need is CSS grid.
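For reference, the float-based version sketched above might look something like this (the percentage widths match the values mentioned below; the clearfix is the classic pattern):

.logo {
    float: left;
    width: 8%;
}

h1 {
    float: left;
    width: 75%;
}

nav {
    float: left;
    width: 17%;
}

/* clearfix to make the header stretch around its floated children */
header::after {
    content: '';
    display: table;
    clear: both;
}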

Apply display: grid to the header. Use Firefox’s dev tools to inspect the grid. Seeing the grid helps in understanding what’s going on. We’ve got three grid items in three separate rows. Notice that we don’t have margin collapsing any more. We can get rid of margins and use grid-gap instead. But what we want is three columns, not three rows. We’ll try:

grid-template-columns: 1fr 1fr 1fr;

Looks okay. Not exactly the spacing we want though. We want the logo and the navigation to take up less space than the site name. We could translate our old percentage values into fr equivalents. 8% becomes 8fr. 75% becomes 75fr. 17% becomes 17fr. But the logo and the nav never shrink below their actual size. Layout isn’t working like we’re used to. The content will never get smaller than the minimum content size. So the amount of space assigned to each column is no longer linear.
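Translating those percentages directly would look something like this sketch:

header {
    grid-template-columns: 8fr 75fr 17fr;
}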

Let’s change those fr units to percentages. Now we need to get rid of our gaps. But now when the layout gets small, everything squishes up. This is what we need breakpoints for. But now we can do something else. Let’s make the logo max-content. That column will now be the size of the logo. The other columns can remain fr:

header {
    grid-template-columns: max-content 3fr 1fr;
}

Let’s also define the last one—the navigation—as max-content. We can toss auto in for the middle content (the site name):

header {
    grid-template-columns: max-content auto max-content;
}

Let’s lay out the navigation horizontally. The best tool for this is flexbox. The li elements are the flex items. So the ul needs to be the flex container.

ul {
    display: flex;
}

Looking good. Let’s allow items to wrap onto another line:

ul {
    display: flex;
    flex-wrap: wrap;
}

Let’s change that navigation from max-content to auto so that it doesn’t get too long:

header {
    grid-template-columns: max-content auto auto;
}

(How you want the navigation items to wrap determines whether you use flexbox or grid. Both are perfectly valid choices. There’s nothing wrong with using grid for the small stuff.)
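As a sketch, the grid version of the navigation might look like this (the 6em minimum is an assumption, not a value from the talk):

ul {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(6em, 1fr));
}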

Just look at how little code we need for this layout!

Let’s try a different layout. We’ll put the navigation on a new row.

header {
    grid-template-columns: min-content auto;
    grid-template-rows: auto auto;
}

Let’s get the logo to span across both rows:

.logo {
    grid-row: span 2;
}

Need to finesse the alignment of the navigation? No problem. Play with the align-self property on the nav element.
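For example (center is just one possible value):

nav {
    align-self: center;
}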

Again, look at how little code this is!

But these layouts are safe. We need to break out of our habits. What about disjointed text? Let’s take the h1. Apply some typography and colour. Also apply overflow-wrap: anywhere. Now the text can break within words.

We can take this further. But to do that, we have to wrap all of the letters in separate spans. Yuck!

Apply display: grid to the h1 that contains all those span children. Say grid-template-columns: repeat(8, 1fr) to make an eight-column repeating grid.

Let’s make it more interesting. We can target each individual letter with grid-column and grid-row. There are many ways to tell the browser where to place grid items. We can tell them the start lines. By default it will take up one cell.
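A sketch of placing one letter (the span index and line numbers here are illustrative, not from the talk):

h1 span:nth-child(2) {
    grid-column: 3;
    grid-row: 2;
}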

Let’s add some images. Let’s rotate items. Place items wherever we want them. Mess around with the units to see what happens. Play with opacity when elements overlap. See the possibilities!

But, people cry, what about Internet Explorer? Use @supports:

/* code for every browser */

@supports (display: grid) {
    /* code for modern browsers */
}

Set up a fallback outside the @supports block. Toggle grid on and off in dev tools to see how the fallback will look. If this kind of thinking is new to you, please watch youtube.com/layoutland where Jen talks about resilient CSS.

Let’s try something else. Jen got an email announcement for an event. It had an interesting bit of layout with some text overlapping an image.

Mark up the content: some headings and images. By default the images are displayed inline. The headings are displayed as blocks.

Think about where your grid lines will need to go. How many lines will you need? How about setting your columns with 1fr 3fr 3fr 1fr? Now how many rows do you need? You define the grid on the container and tell the items where to go. Again, it’s not much code. Tweak it. How about setting the columns to be 1fr minmax(100px, 400px) min-content? You have to mess around to see what’s possible. You can use all sorts of units for columns: fixed lengths (pixels, ems, rems), min-content, max-content, percentages, fr units, minmax(), and auto. Play around with the combinations.
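For instance, that last suggestion as a sketch (the class name is hypothetical):

.container {
    display: grid;
    grid-template-columns: 1fr minmax(100px, 400px) min-content;
}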

Jen shows a whole bunch of her demos. Check them out. Use web inspector to play around with them.

And with that, the first day of An Event Apart Seattle is done!

Making Things Better: Redefining the Technical Possibilities of CSS by Rachel Andrew

The CSStastic afternoon of day one of An Event Apart in Seattle continues with Rachel Andrew. Her talk is Making Things Better: Redefining the Technical Possibilities of CSS. The description reads:

For years we’ve explained that the web is not like print; that a particular idea is not how things work on the web; that certain things are simply not possible. Over the last few years, rapid browser implementation of advances in CSS have given us the ability to do many of these previously impossible things. We can use our new powers to build the same designs faster, or we can start to ask ourselves what we might do if we were solving these problems afresh.

In this talk, Rachel will look at the things coming into browsers right now which change the way we see web design. CSS subgrids allowing nested grids to use the track definition of their parent; logical properties and values moving the web away from the physical dimensions of a computer screen; screen experiences which behave more like an app, or even paged media, due to scroll snapping and multidimensional control. By understanding the new medium of web design we can start to imagine the future, and even help to shape it.

I’m not sure if it even makes sense to try to live-blog a code talk, but I’ll give it my best shot…

Rachel has been talking about CSS at An Event Apart for over three years now. Our understanding of the possibilities of CSS has changed a lot in that time. Our use of floats for layout is being consigned to history. It’s no less monumental than the change from tables to CSS. Tableless web design often meant simplifying our designs. We were used to designing in a graphic design tool and then slicing it up into table cells. CSS couldn’t give us the same fine-grained control so we simplified our designs. It got us to start thinking of the web as its own medium. That idea really progressed with responsive web design.

But perhaps we CSS advocates downplayed some of the issues. We weren’t trying to create new CSS, we were just trying to get people to use CSS.

What we term “good web design” is based in the technical limitations of CSS. We say “the web is not print” when we see a design that’s quite print-like. People expect to have to hack at CSS to get it to do what we want. But times have changed. We have solved many of those problems (but that doesn’t mean we got all of them!).

Rachel spends a lot of time telling designers: you never know how tall anything is on the web. It used to be a real challenge to get the top and bottom of boxes to line up. We’d have to fix the height of the boxes. And if too much content goes in, the content overflows. Then we end up limiting the amount of content at the CMS side. We hacked around the problem. A technical limitation influenced our design, and even our content management. Then we got flexbox. Not only did the problem disappear, but the default behaviour is exactly what we struggled with for years: equal height columns.

How big is this box? You’ve seen the “CSS is awesome” mug, right?

Our previous layout systems relied on percentage lengths for widths. Those values had to add up to 100%, and no more. People tried to do the same thing with flexbox. People made “grid systems” with flexbox that gave widths to everything. “Flexbox is weird!”, people said. But the real problem is that flexbox is not the layout method you think it is. It’s for taking a bunch of oddly-sized things and returning the most reasonable layout for those things. It assigns space in a smart way. That solves the problem of needing to give everything a width. It figures it out for you. If you decide to put widths on everything, you’re kind of working against flexbox.
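A minimal sketch of working with the grain of flexbox (the class name is hypothetical):

.cards {
    display: flex;
    flex-wrap: wrap;
}

.cards > * {
    /* let flexbox assign space based on content instead of fixing widths */
    flex: 1 1 auto;
}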

We’re so used to having to hack everything in CSS, we had to take a step back with flexbox and realise that hacks aren’t necessary.

CSS tries to avoid data loss. That’s why the “CSS is awesome” text overflows the box. You don’t want the text to vanish. Visible overflow is messy, but it’s better than making some content disappear.

In the box alignment specification, there’s the concept of safe and unsafe alignment. Safe:

.container {
    display: flex;
    flex-direction: column;
    align-items: safe center;
}

You give the browser permission to align items to the start if necessary. But you can override that with unsafe:

.container {
    display: flex;
    flex-direction: column;
    align-items: unsafe center;
}

Overflow is going to happen. Now it’s up to you what happens when it does.

The “content honking out of the box” problem described in the “CSS is awesome” meme is now controlled with min-content:

width: min-content;

The box expands to encompass the widest content.

You have many choices. But what if the text isn’t running left to right? It might not be a problem we run into for English text. For years, CSS had that English-centric assumption baked in. Now CSS has been updated to not make that assumption. The web is not left to right. Flexbox and grid take an agnostic approach to the writing mode of the document. There’s no “top”, “bottom”, “left” or “right”. There’s “start”, “end”, “inline” and “block”. Now we have a new spec for logical properties and values. It maps old physical values (top, bottom, etc.) to the newer agnostic values.

So even if you use writing-mode to flip direction, width is still a width. Use inline-size instead of width and everything keeps working: the width maps to height when you apply a different writing-mode value. Eventually we’ll use those flow-relative values more than the old values. Solutions need to include different writing modes.
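A sketch of that flow-relative thinking (the values here are illustrative):

.box {
    writing-mode: vertical-rl;
    /* follows the writing mode, where width would stay stubbornly physical */
    inline-size: 20em;
}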

There is no fold. We’ve said that for years, right? But we know where the fold is. We’ve got viewport units that represent the width and height of the browser viewport. We can start to make designs that make use of the viewport. You can size a screen full of images exactly to fit the visible space. Combine it with scroll-snap to get the page views to snap as the user scrolls. You get paged layout. That’s interesting for Rachel because she’s used to designing for paged layout in print versus continuous layout on the web.
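A minimal sketch of that screen-by-screen snapping (class names hypothetical):

.scroller {
    height: 100vh;
    overflow-y: scroll;
    scroll-snap-type: y mandatory;
}

.screen {
    height: 100vh;
    scroll-snap-align: start;
}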

What’s next for CSS grid? Grid layout has been the biggest problem-solver of recent years. But that doesn’t mean it has solved all the layout problems. New problems appear as we start to work with CSS grid. We often end up nesting grids. But the nested grids don’t have any knowledge of one another. We’re back to: you never know how tall things are on the web. We need a way to have a relationship. Some kind of, I don’t know, subgrid.

You could use display: contents. It removes a box from the visual display, allowing grandchildren to act like children. The browser support is good, but there’s a stonking accessibility bug so don’t use it in production. Also you can’t apply visual styles to anything that’s got display: contents on it. But grid-template-rows: subgrid will solve this problem. The spec is in good shape. We’re waiting for the first browser implementations.
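Based on the spec, the syntax should look something like this sketch (class names hypothetical):

.grid {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    grid-template-rows: repeat(3, auto);
}

.grid > .item {
    grid-row: span 3;
    display: grid;
    /* the nested grid adopts the row tracks of its parent */
    grid-template-rows: subgrid;
}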

You will hit problems. Find new technical limitations. It’s just that we can’t do that stuff yet. We get the new stuff when we create it. Write up the problems you come across.

We’re finding the edges. Rachel is going to share her problems.

Rachel wants to put some text into her image grid. No problem. But then if there’s too much text, it might not fit in a height-restricted row. We can adjust the row to not be height-restricted, but then we lose the nice viewport-fitting layout.

In continuous media—which is what the web is—content inside multicol gets longer and longer. You can fix the height of the container but then the columns get created horizontally. What if you could say, I want my multicol container to be, say, 100vh high, but if the content overflows, create a new 100vh high container below. Overflow in the block dimension. Maybe that’ll be in the next version of the multicol spec.

Multicol doesn’t solve Rachel’s image grid situation. What she needs is a way for the text to fill up a box and then flow into another box. The content needs to be semantically marked up—not broken into separate chunks for layout—and we want the browser to figure out where to break that content and fill up available space.

It comes as a surprise to people that a lot of paged media—books, magazines—are laid out with CSS. It’s in the paged media module. Prince is a good example of a user agent that supports paged media. There’s the concept of a page box: a physical page into which content can go. You get to define the boxes with physical dimensions like inches or centimeters. You create a bunch of margin boxes with generated content. Enough pages are created to hold all the content. You create your page model and flow the content through it.

Maybe apps and websites with defined screens are not that different from paged contexts. There have been attempts to create CSS specs that would allow this kind of content-flowing behaviour. CSS regions was one attempt. There was -ms-flow-into and -ms-flow-from in the IE and Edge implementations. You had to apply -ms-flow-from to an iframe element to use it as a region.

Regions needs ready-prepared boxes for the content to flow into. But how can you know how many boxes to prepare in advance? You don’t know how big things will be on the web. Rachel has been told that there’s nothing wrong with the CSS regions spec because you can define a final bucket for all leftover content. That doesn’t seem like a viable solution.

CSS regions predated CSS grid and didn’t take off. Now that we’ve got grid, something like regions makes a lot of sense.

Web design has been involved in a constant battle with overflow. Whether it’s overflowing boxes, or there (not) being a fold, or multicol layout. Rachel thinks we can figure out a way to get regions to work. Perhaps regions paved the way for something better. Maybe it was just ahead of its time. There are a lot of things hidden away in CSS specs that never made it out: things that didn’t make sense until more advanced CSS came along.

Regions—like multicol—relies on fragmentation. Ever tried to stop a heading being the last thing on a page in a print stylesheet? We need good support for break-inside, break-before, and break-after.

We create new things to solve problems. Maybe you don’t see the value of something like regions, but I bet there’s been something where you thought “I wish CSS could do this!”.

Rob wrote up a problem that he had with trying to have a floated element maintain its floatiness inside a grid. He saw it as a grid problem. Rachel saw it as an exclusions problem. Rob’s write-up was really valuable to demonstrate the need for exclusions. Writing things up is hugely valuable for pushing things forward. Write up your ideas—they’ll show us the use case.

Ask “why can’t I do that?” Let’s not fall into the temptation of making things grid-like just because we have CSS grid now. Keep pushing at the boundaries.

Many of the things Rachel has shown—grid, exclusions, regions—were implemented by Microsoft. With Edge moving to a Chromium rendering engine, we must make sure that we maintain diversity of thought in the standards process. Voices other than those of rendering engines need to contribute to the discussion.

At a W3C meeting or standards discussion, the room should not be 60-70% Googlers.

More than ever, the web needs diversity of thought. Rachel isn’t having a dig at Google. This isn’t a fight between good and evil. It’s a fight against any monoculture. So please contribute. Get involved. Together we can work for the future of the web platform.

Generation Style by Eric Meyer

It’s time for the afternoon talks at An Event Apart in Seattle. We’re going to have back-to-back CSS, kicking off with Eric Meyer. His talk is called Generation Style. The blurb says:

Consider, if you will, CSS generated content. We can, and sometimes even do, use it to insert icons before or after pieces of text. Occasionally we even use it add a bit of extra information. And once upon a time, we pressed it into service as a hack to get containers to wrap around their floated children. That’s all fine—but what good is generated content, really? What can we do with it? What are its limitations? And how far can we push content generation in a new landscape full of flexible boxes, grids, and more? Join Eric as he turns a spotlight on generated content and shows how it can be a generator of creativity as well as a powerful, practical tool for everyday use.

Wish me luck, ‘cause I’m going to try to capture the sense of this presentation…

So we had a morning of personas and user journeys. This afternoon: code, baby! Eric is going to dive into a very specific corner of CSS—generated content. For an hour. Let’s do it!

He shows the CSS Generated Content Module Level 3. Eric wants to focus on one bit: the pseudo-elements ::before and ::after. What does pseudo-element mean?

You might have used one of these pseudo-elements for blockquotes. Perhaps you’ve put a great big quotation mark in front of them.

blockquote::after {
    content: "“";
    font-size: 4em;
    opacity: 0.67;
/* placement styles here */
}

Why is Eric using ::after? Because you can. You can put the ::after content wherever you want. But if your placement styles fail, this isn’t a good place for the generated content. So don’t do this. Use ::before.

Another example of using generated content is putting icons beside certain links:

a[href$=".pdf"]::after {
    content: url(i/icon.png);
    height: 1em;
    margin-right: 0.5em;
    vertical-align: top;
}

But these icons look yucky. But if you use larger images, they will be shown full size. You only have so much control over what happens in there. I mean, that’s true of all CSS: think of CSS as a series of strong suggestions. But here, we have even less control than we’re used to. Why isn’t the image 1em tall like I’ve specified in the CSS? Well, the generated content box is 1em tall but the image is breaking out of this box. How about this:

a[href]::after * {
    max-width: 100%;
    max-height: 100%;
}

This doesn’t work. The image isn’t an element so it can’t be selected for.

The way around it is to use background images instead:

a[href$=".pdf"]::after {
    content: '';
    height: 1em; width: 1em;
    margin-right: 0.5em;
    vertical-align: top;
    background: center/contain;
    background-image: url(i/icon.png);
}

Notice there’s a right margin there. That stretches out the width of the whole link. That’s exactly the same as if there were an actual span in there:

a[href$=".pdf"] span {
    height: 1em; width: 1em;
    margin-right: 0.5em;
    vertical-align: top;
    background: center/contain;
    background-image: url(i/icon.png);
}

So why use generated content instead of a span? So that you don’t have to put extra spans in your markup.

Generated content is great for things that work great when they’re there, but still work fine if they’re not. It’s progressive enhancement.

You’ve almost certainly used generated content for the clearfix hack.

.clearfix::after {
    content: '';
    display: table;
    clear: both;
}

Ask your parents. It’s when we wanted to make the containing element for a group of floated elements encompass the height of those elements. Ancient history, right? Well, Eric is showing an example of a certain large media company today. There are a lot of clearfixes in there.

Eric makes the clearfix visible:

.clearfix::after {
    content: '';
    display: table;
    clear: both;
    border: 10px solid purple;
}

It looks like a span: a 10 pixel wide box. Now change the display property:

.clearfix::after {
    content: '';
    display: block;
    clear: both;
    border: 10px solid purple;
}

Now it behaves more like a div than a span.

The big question here is: who cares?

Let’s say we’re making a site about corduroy pillows (I hear they’re really making headlines).

<header>
<h1>Corduroy pillows</h1>
<p>Lorem ipsum...</p>
</header>

We can add a box under the header:

header::after {
    content: " ";
    display: block;
    height: 1em;
}

You can do stuff with that extra content, like using a linear gradient:

header::after {
    content: " ";
    display: block;
    height: 1em;
    background: linear-gradient(to right, #DDD, #000, #DDD) center / 100% 1px no-repeat;
}

The colour stops are #DDD, #000, and #DDD. You get a nice gradated line under the header. You can chain a bunch of radial gradients together to get some nice effects. You could mix in some background images too. Now you’ve got some on-brand separators. You could use generated content to add some “under construction” separators.

By the way, ever struggled to keep track of the order of backgrounds? Think about how you would order layers in Photoshop.

How about if we could use generated content to make design tools?

div[id]::before {
    content: attr(id);
}

Now the generated content is taken from the id attribute. You can make it look like Firebug:

div[id]::before {
    content: '#' attr(id);
    font: 0.75rem monospace;
    position: absolute;
    top: 0;
    left: 0;
    border: 1px dashed red;
    padding: 0 0.25em;
    background: #FFD;
}

You can even make the content cover the whole box with bottom and right values too:

div[id]::before {
    content: '#' attr(id);
    font: 0.75rem monospace;
    position: absolute;
    top: 0;
    left: 0;
    bottom: 0;
    right: 0;
    border: 1px dashed red;
    padding: 0 0.25em;
    background: #FFD8;
}

(And yes, that is a hex value with opacity.)

Let’s make it less code-y:

div[id]::before {
    content: attr(id);
    font: bold 1.5rem Georgia serif;
    position: absolute;
    top: 0;
    left: 0;
    bottom: 0;
    right: 0;
    border: 1px dashed red;
    padding: 0 0.25em;  
    background: #FFD8;
}

Throw in some text-shadow. Maybe some radial gradients. We’re at the wireframe stage. Let’s drop in some SVG images to show lines across the boxes.

How about automating design touches?

pre {
    padding: 0.75em 1.5em;
    background: #EEE;
    font: medium Consolas, monospace;
    position: relative;
}

Let’s say that applies to:

<pre class="css">
...
</pre>

You can generate labels with that class attribute:

pre::before {
    content: attr(class);
    display: block;
    padding: 0.25em 0 0.15em;
    font: bold 1em Noah, sans-serif;
    text-align: center;
    text-transform: uppercase;
}

Let’s align it to the top of its parent with negative margins:

pre::before {
    content: attr(class);
    display: block;
    padding: 0.25em 0 0.15em;
    margin: -0.75em -1.5em 1em;
    font: bold 1em Noah, sans-serif;
    text-align: center;
    text-transform: uppercase;
}

Or you can use absolute positioning:

pre::before {
    content: attr(class);
    display: block;
    padding: 0.25em 0 0.15em;
    position: absolute;
    top: 0;
    right: 0;
    left: 0;
    font: bold 1em Noah, sans-serif;
    text-align: center;
    text-transform: uppercase;
}

Now let’s change the writing mode:

pre::before {
    content: attr(class);
    display: block;
    padding: 0.25em 0 0.15em;
    position: absolute;
    top: 0;
    right: 0;
    bottom: 0;
    writing-mode: vertical-rl;
    font: bold 1em Noah, sans-serif;
    text-align: center;
    text-transform: uppercase;
}

Now the text is running down the side, but it’s turned on its side. You can transform it:

pre::before {
    content: attr(class);
    display: block;
    padding: 0.25em 0 0.15em;
    position: absolute;
    top: 0;
    right: 0;
    bottom: 0;
    writing-mode: vertical-rl;
    transform: rotate(180deg);
    font: bold 1em Noah, sans-serif;
    text-align: center;
    text-transform: uppercase;
}

But if you do this, be careful. Your left margin is no longer on the left. Everything’s flipped around.

You could also update the generated content according to the value of the class attribute:

pre.css::before {
    content: '{ CSS }';
}

pre.html::before {
    content: '< HTML >';
}

pre.js::before,
pre.javascript::before {
    content: '({ JS })();';
}

It’s presentational, so CSS feels like the right place to do this. But you can’t generate markup—just text. Angle brackets will be displayed in their raw form.

But positioning is so old-school. Let’s use CSS grid:

pre {
    display: grid;
    grid-template-columns: min-content 1fr;
    grid-gap: 0.75em;
}

pre::before {
    content: attr(class);
    margin: -1em 0;
    padding: 0.25em 0.1em 0.25em 0;
    writing-mode: vertical-rl;
    transform: rotate(180deg);
    font: bold 1em Noah, sans-serif;
    text-align: center;
    text-transform: uppercase;
}

Heck, you could get rid of the negative margins by putting the code content inside a code element and giving that a margin of 1em.

You can see generated content in action on the website of An Event Apart:

li.news::before {
    content: attr(data-cat);
    background-color: orange;
    color: white;
}

The data-cat attribute (which contains a category value) is displayed in the generated content.

Cool. That’s all stuff we can do now. What about next?

Well, suppose you had to put some legalese on your website. You could generate the numbers of nested sections:

h1 { counter-reset: section; }
h2 { counter-reset: subsection; }

Increment the numbers each time:

h2 { counter-increment: section; }
h3 { counter-increment: subsection; }

And display those values:

h2::before {
    content: counter(section) ".";
}
h3::before {
    content: counter(section) ":" counter(subsection, upper-roman);
}

Soon you’ll be able to cycle through a list of counter styles of your own creation with a @counter-style block.
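A sketch of what that looks like in the spec (the name and symbols here are arbitrary):

@counter-style dashes {
    system: cyclic;
    symbols: "-" "--" "---";
    suffix: " ";
}

h2::before {
    content: counter(section, dashes);
}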

But remember, if you really need that content to be visible for everyone, don’t rely on generated content: put it in your markup. It’s for styles.

So, generated content. It’s pretty cool. You can do some surprising things with it. Maybe ::before this talk, you didn’t think about generated content much, but ::after this talk, you will.

Accessibility on The Session

I spent some time this weekend working on an accessibility issue over on The Session. Someone using VoiceOver on iOS was having a hard time with some multi-step forms.

These forms have been enhanced with some Ajax to add some motion design: instead of refreshing the whole page, the next form is grabbed from the server while the previous one swooshes off the screen.

You can see similar functionality—without the animation—wherever there’s pagination on the site.

The pagination is using Ajax to enhance regular prev/next links—here’s the code.

The multi-step forms are using Ajax to enhance regular form submissions—here’s the code for that.

Both of those are using a wrapper I wrote for XMLHttpRequest.

That wrapper also adds some ARIA attributes. The region of the page that will be updated gets an aria-live value of polite. Then, whenever new content is being injected, the same region gets an aria-busy value of true. Once the update is done, the aria-busy value gets changed back to false.
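A minimal sketch of that pattern (the names here are hypothetical, not the actual code from The Session):

var region = document.querySelector('.ajax-region');
region.setAttribute('aria-live', 'polite');

function updateRegion(newContent) {
    // flag the region as busy while the new content is injected
    region.setAttribute('aria-busy', 'true');
    region.innerHTML = newContent;
    region.setAttribute('aria-busy', 'false');
}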

That all seems to work fine, but I was also giving the same region of the page an aria-atomic value of true. My thinking was that, because the whole region was going to be updated with new content from the server, it was safe to treat it as one self-contained unit. But it looks like this is what was causing the problem, especially when I was also adding and removing class values on the region in order to trigger animations. VoiceOver seemed to be getting a bit confused and overly verbose.

I’ve removed the aria-atomic attribute now. True to its name, I’m guessing it’s better suited to small areas of a document rather than big chunks. (If anyone has a good primer on when to use and when to avoid aria-atomic, I’m all ears).

I was glad I was able to find a fix—hopefully one that doesn’t negatively impact the experience in other screen readers. As is so often the case, the issue was with me trying to be too clever with ARIA, and the solution was to ease up on adding so many ARIA attributes.

It also led to a nice discussion with some of the screen-reader users on The Session.

For me, all of this really highlights the beauty of the web, when everyone is able to contribute to a community like The Session, regardless of what kind of software they may be using. In the tunes section, that’s really helped by the use of ABC notation, as I wrote five years ago:

One of those screen-reader users got in touch with me shortly after joining to ask me to explain what ABC was all about. I pointed them at some explanatory links. Once the format “clicked” with them, they got quite enthused. They pointed out that if the sheet music were only available as an image, it would mean very little to them. But by providing the ABC notation alongside the sheet music, they could read the music note-for-note.

That’s when it struck me that ABC notation is effectively alt text for sheet music!

Then, for those of us who can read sheet music, the text of the ABC notation is automatically turned into an SVG image using the brilliant abcjs. It’s like an enhancement that’s applied, I dunno, what’s the word …progressively.

A tiny lesson in query selection

We have a saying at Clearleft:

Everything is a tiny lesson.

I bet you learn something new every day, even if it’s something small. These small tips and techniques can easily get lost. They seem almost not worth sharing. But it’s the small stuff that takes the least effort to share, and often provides the most reward for someone else out there. Take for example, this great tip for getting assets out of Sketch that Cassie shared with me.

Cassie was working on a piece of JavaScript yesterday when we spotted a tiny lesson that tripped up both of us. The script was a fairly straightforward piece of DOM scripting. As a general rule, we do a sort of feature detection near the start of the script. Let’s say you’re using querySelector to get a reference to an element in the DOM:

var someElement = document.querySelector('.someClass');

Before going any further, check to make sure that the reference isn’t falsey (in other words, make sure that DOM node actually exists):

if (!someElement) return;

That will exit the script if there’s no element with a class of someClass on the page.

The situation that tripped us up was like this:

var myLinks = document.querySelectorAll('a.someClass');

if (!myLinks) return;

That should exit the script if there are no A elements with a class of someClass, right?

As it turns out, querySelectorAll is subtly different to querySelector. If you give querySelector a selector that doesn’t match any element, it will return a value of null (I think). But querySelectorAll always returns an array (well, technically it’s a NodeList but same difference mostly). So if the selector you pass to querySelectorAll doesn’t match anything, it still returns an array, but the array is empty. That means instead of just testing for its existence, you need to test that it’s not empty by checking its length property:

if (!myLinks.length) return;

That’s a tiny lesson.

Timelines of the web

Recreating the original WorldWideWeb browser was an exercise in digital archeology. With a working NeXT machine in the room, Kimberly was able to examine the source code for the first ever browser and discover a treasure trove within. Like this gem in HTUtils.h:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

Sure enough, by June of 1992 port 80 was documented as being officially assigned to the World Wide Web (Gopher got port 70). Jean-François Groff—who worked on the World Wide Web project with Tim Berners-Lee—told us that this was a moment they were very pleased about. It felt like this project of theirs was going places.

Jean-François also told us that the WorldWideWeb browser/editor was kind of like an advanced prototype. The idea was to get something up and running as quickly as possible. Well, the NeXT operating system had a very robust Text Object, so the path of least resistance for Tim Berners-Lee was to take the existing word-processing software and build a hypertext component on top of it. Likewise, instead of creating a brand new format, he used the existing SGML format and added one new piece: linking with A tags.

So the WorldWideWeb application was kind of like a word processor and document viewer mashed up with hypertext. Ted Nelson complains to this day that the original sin of the web was that it borrowed this page-based metaphor. But Nelson’s Project Xanadu, originally proposed in 1974, wouldn’t become a working reality until 2014—a gap of forty years. Whereas Tim Berners-Lee proposed his system in March 1989 and had working code within a year. There’s something to be said for being pragmatic and working with what you’ve got.

The web was also a mashup of ideas. Hypertext existed long before the web—Ted Nelson coined the term in 1963. There were conferences and academic discussions devoted to hypertext and hypermedia. But almost all the existing hypertext systems—including Tim Berners-Lee’s own ENQUIRE system from the early 80s—were confined to a local machine. Meanwhile networked computers were changing everything. First there was the ARPANET, then the internet. Tim Berners-Lee’s ambitious plan was to mash up hypertext with networks.

Going into our recreation of WorldWideWeb at CERN, I knew I wanted to convey this historical context somehow.

The World Wide Web officially celebrates its 30th birthday in March of this year. It’s kind of an arbitrary date: it’s the anniversary of the publication of Information Management: A Proposal. Perhaps a more accurate date would be the day the first website—and first web server—went online. But still. Let’s roll with this date of March 12, 1989. I thought it would be interesting not only to look at what’s happened between 1989 and 2019, but also to look at what happened between 1959 and 1989.

So now I’ve got two time cones that converge in the middle: 1959 – 1989 and 1989 – 2019. For the first time period, I made categories of influences: formats, hypertext, networks, and computing. For the second time period, I catalogued notable results: browsers, servers, and the evolution of HTML.

I did a little bit of sketching and quickly realised that these converging timelines could be represented somewhat like particle collisions. Once I had that idea in my head, I knew how I would be spending my time during the hack week.

Rather than jumping straight into the collider visualisation, I took some time to make a solid foundation to build on. I wanted to be sure that the timeline itself would be understandable even if it were, say, viewed in the first ever web browser.


I marked up each timeline as an ordered list of h-events:

<li class="h-event y1968">
  <a href="https://en.wikipedia.org/wiki/NLS_%28computer_system%29" class="u-url">
    <time class="dt-start" datetime="1968-12-09">1968</time>
    <abbr class="p-name" title="oN-Line System">NLS</abbr>
  </a>
</li>

With the markup in place, I could concentrate on making it look halfway decent. For small screens, the layout is very basic—just a series of lists. When the screen gets wide enough, I lay those lists out horizontally one on top of the other. In this view, you can more easily see when events coincide. For example, ENQUIRE, Usenet, and Smalltalk all happen in 1980. But the real beauty comes when the screen is wide enough to display everything at once. You can see how there was an explosion of activity in the early 90s. In 1994 alone, we get the release of Netscape Navigator, the creation of HTTPS, and the launch of Amazon.com.

The whole thing is powered by CSS transforms and positioning. Each year on a timeline has its own class that gets moved to the correct chronological point using calc(). I wanted to use translateX() but I couldn’t get the maths to work for that, so I had to use plain ol’ left and right:

.y1968 {
  left: calc((1968 - 1959) * (100%/30) - 5em);
}

For events before 1989, it’s the distance of the event from 1959. For events after 1989, it’s the distance of the event from 2019:

.y2014 {
  right: calc((2019 - 2014) * (100%/30) - 5em);
}

(Each h-event has a width of 5em so that’s where the extra bit at the end comes from.)

I had to do some tweaking for legibility: bunches of events happening around the same time period needed to be separated out so that they didn’t overlap too much.

As a finishing touch, I added a few little transitions when the page loaded so that the timeline fans out from its centre point.

Et voilà!

Progressive enhancement. Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

I fiddled with the content a bit after peppering Robert Cailliau with questions over lunch. And I got some very valuable feedback from Jean-François. Some examples he provided:

1971: Unix man pages, one of the first instances of writing documents with a markup language that is interpreted live by a parser before being presented to the user.

1980: Usenet News, because it was THE everyday discussion medium by the time we created the web technology, and the Web first embraced news as a built-in information resource, then various platforms built on the web rendered it obsolete.

1982: Literary Machines, Ted Nelson’s book which was on our desk at all times

I really, really enjoyed building this “collider” timeline. It was a chance for me to smash together my excitement for web history with my enjoyment of using the raw materials of the web: HTML and CSS in this case.

The timeline pales in comparison to the achievement of the rest of the team in recreating the WorldWideWeb application but I was just glad to be able to contribute a little something to the project.

Hello WorldWideWeb.

Ch-ch-ch-changes

It’s browser updatin’ time! Firefox 65 just dropped. So did Chrome 72. Safari 12.1 is shipping with iOS 12.2.

It’s interesting to compare the release notes for each browser and see the different priorities reflected in them (this is another reason why browser diversity is A Good Thing).

A lot of the Firefox changes are updates to dev tools; they just keep getting better and better. In fact, I’m not sure “dev tools” is the right word for them. With their focus on layout, typography, and accessibility, “design tools” might be a better term.

Oh, and Firefox is shipping support for some CSS properties that really help with print style sheets, so I’m disproportionately pleased about that.

In Safari’s changes, I’m pleased to see that the datalist element is finally getting implemented. I’ve been a fan of that element for many years now. (Am I a dork for having favourite HTML elements? Or am I a dork for even having to ask that question?)

And, of course, it wouldn’t be a Safari release without a new made up meta tag. From the people who brought you such hits as viewport and apple-mobile-web-app-capable, comes …supported-color-schemes (Apple likes to make up meta tags almost as much as Google likes to make up rel values).

There’ll be a whole bunch of improvements in how progressive web apps will behave once they’ve been added to the home screen. We’ll finally get some state persistence if you navigate away from the window!

Updated the behavior of websites saved to the home screen on iOS to pause in the background instead of relaunching each time.

Maximiliano Firtman has a detailed list of the good, the bad, and the “not sure yet if good” for progressive web apps on iOS 12.2 beta. Thomas Steiner has also written up the progress of progressive web apps in iOS 12.2 beta. Both are published on Ev’s blog.

At first glance, the release notes for Chrome 72 are somewhat paltry. The big news doesn’t even seem to be listed there. Maximiliano Firtman again:

Chrome 72 for Android shipped the long-awaited Trusted Web Activity feature, which means we can now distribute PWAs in the Google Play Store!

Very interesting indeed! I’m not sure if I’m ready to face the Kafkaesque process of trying to add something to the Google Play Store just yet, but it’s great to know that I can. Combined with the improvements coming in iOS 12.2, these are exciting times for progressive web apps!

Writing for hiring

Cassie joined Clearleft as a junior front-end developer last year. It’s really wonderful having her around. It’s a win-win situation: she’s enthusiastic and eager to learn; I’m keen to help her skill up in any way I can. And it’s working out great for the company—she has already demonstrated that she can produce quality HTML and CSS.

I’m very happy about Cassie’s success, not just on a personal level, but also from a business perspective. Hiring people into junior roles—when you’ve got the time and ability to train them—is an excellent policy. Hiring Charlotte back in 2014 was Clearleft’s first foray into hiring for a junior front-end dev position and it was a huge success. Cassie is demonstrating that it wasn’t just a fluke.

Alas, we can’t only hire junior developers. We’ve got a lot of work in the pipeline right now and we’re going to need a full-time seasoned developer who can hit the ground running. That’s why Clearleft is recruiting for a senior front-end developer.

As lead developer, Danielle will make the hiring decision, but because she’s so busy on project work right now—hence the need to hire more people—I’m trying to help her out any way I can. I offered to write the job description.

Seeing as I couldn’t just write “A clone of Danielle, please”, I had to think about what makes for a great front-end developer who uses their experience wisely. But I didn’t want to create a list of requirements, and I certainly didn’t want to create a list of specific technologies.

My first instinct was to look at other job ads and take my cue from them. But, let’s face it, most job ads are badly written, and prone to turning into laundry lists. So I decided to just write like I normally would. You know, like a human.

Here’s what I wrote. I hope it’s okay. I don’t really have much to compare it to, other than what I don’t want it to be.

Have a read of it and see what you think. And if you’re an experienced front-end developer who’d like to work by the seaside, you should apply for the role.

Code print

You know what I like? Print stylesheets!

I mean, I’m not a huge fan of trying to get the damn things to work consistently—thanks, browsers—but I love the fact that they exist (although I’ve come across a worrying number of web developers who weren’t aware of their existence). Print stylesheets are one more example of the assumption-puncturing nature of the web: don’t assume that everyone will be reading your content on a screen. News articles, blog posts, recipes, lyrics …there are many situations where a well-considered print stylesheet can make all the difference to the overall experience.

You know what I don’t like? QR codes!

It’s not because they’re ugly, or because they’ve been over-used by the advertising industry in completely inappropriate ways. No, I don’t like QR codes because they aren’t an open standard. Still, I must grudgingly admit that they’re a convenient way of providing a shortcut to a URL (albeit a completely opaque one—you never know if it’s actually going to take you to the URL it promises or to a Rick Astley video). And now that the parsing of QR codes is built into iOS without the need for any additional application, the barrier to usage is lower than ever.

So much as I might grit my teeth, QR codes and print stylesheets make for good bedfellows.

I picked up a handy tip from a Smashing Magazine article about print stylesheets a few years back. You can use the combination of @media print and generated content to provide a QR code for the URL of the page being printed out. Google’s Chart API provides a really handy shortcut for generating QR codes:

https://chart.googleapis.com/chart?cht=qr&chs=150x150&chl=http://example.com
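Putting that together in a print stylesheet might look something like this sketch (the selector and URL here are illustrative):

@media print {
    article::after {
        content: url('https://chart.googleapis.com/chart?cht=qr&chs=150x150&chl=http://example.com');
    }
}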

Except that there’s no telling how long that will continue to work. Google being Google, they’ve deprecated the simple image chart API in favour of the over-engineered JavaScript alternative. So just as I recently had to migrate all my maps over to Leaflet when Google changed their Maps API from under the feet of developers, the clock is ticking on when I’ll have to find an alternative to the Image Charts API.

For now, I’ve got the QR code generation happening on The Session for individual discussions, events, recordings, sessions, and tunes. For the tunes, there’s also a separate URL for each setting of a tune, specifically for printing out. I’ve added a QR code there too.

Experimenting with print stylesheets and QR codes.

I’ve been thinking about another potential use for QR codes. I’m preparing a new talk for An Event Apart Seattle. The talk is going to be quite practical—for a change—and I’m going to be encouraging people to visit some URLs. It might be fun to include the biggest possible QR code on a slide.

I’d better generate the images before Google shuts down that API.

Programming CSS

There’s a worrying tendency for “real” programmers to look down their noses at CSS. It’s just a declarative language, they point out, not a fully-featured programming language. Heck, it isn’t even a scripting language.

That may be true, but that doesn’t mean that CSS isn’t powerful. It’s just powerful in different ways to traditional languages.

Take CSS selectors, for example. At the most basic level, they work like conditional statements. Here’s a standard if statement:

if (condition) {
// code here
}

The condition needs to evaluate to true in order for the code in the curly braces to be executed. Sound familiar?

condition {
// styles here
}

That’s a very simple mapping, but what if the conditional statement is more complicated?

if (condition1 && condition2) {
// code here
}

Well, that’s what the descendant selector does:

condition1 condition2 {
// styles here
}

In fact, we can get even more specific than that by using the child combinator, the sibling combinator, and the adjacent sibling combinator:

  • condition1 > condition2
  • condition1 ~ condition2
  • condition1 + condition2

AND is just one part of Boolean logic. There’s also OR:

if (condition1 || condition2) {
// code here
}

In CSS, we use commas:

condition1, condition2 {
// styles here
}

We’ve even got the :not() pseudo-class to complete the set of Boolean possibilities. Once you add quantity queries into the mix, made possible by :nth-child and its ilk, CSS starts to look Turing complete. I’ve seen people build state machines using the adjacent sibling combinator and the :checked pseudo-class.
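For example, a quantity query that kicks in when a list has six or more items looks something like this sketch:

li:nth-last-child(n+6),
li:nth-last-child(n+6) ~ li {
    /* styles for every item in a list of six or more */
}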

Anyway, my point here is that CSS selectors are really powerful. And yet, quite often we deliberately choose not to use that power. The entire raison d’être for OOCSS, BEM, and SMACSS is to deliberately limit the power of selectors, restricting them to class selectors only.

On the face of it, this might seem like an odd choice. After all, we wouldn’t deliberately limit ourselves to a subset of a programming language, would we?

We would and we do. That’s what templating languages are for. Whether it’s PHP’s Smarty or Twig, or JavaScript’s Mustache, Nunjucks, or Handlebars, they all work by providing a deliberately small subset of features. Some pride themselves on being logic-less. If you find yourself trying to do something that the templating language doesn’t provide, that’s a good sign that you shouldn’t be trying to do it in the template at all; it should be in the controller.

So templating languages exist to enforce simplicity and ensure that the complexity happens somewhere else. It’s a similar story with BEM et al. If you find you can’t select something in the CSS, that’s a sign that you probably need to add another class name to the HTML. The complexity is confined to the markup in order to keep the CSS more straightforward, modular, and maintainable.

But let’s not forget that that’s a choice. It’s not that CSS is inherently incapable of executing complex conditions. Quite the opposite. It’s precisely because CSS selectors (and the cascade) are so powerful that we choose to put guard rails in place.

Prototypes and production

When we do front-end development at Clearleft, we’re usually delivering production code, often in the form of a component library. That means our priorities are performance, accessibility, robustness, and other markers of quality when it comes to web development.

But every so often, we use the materials of front-end development—HTML, CSS, and JavaScript—to produce something that isn’t intended for production. I’m talking about prototyping.

There are plenty of non-code prototyping tools out there, and our designers often reach for them to communicate subtleties like motion design. But when it comes to testing a prototype with real users, it’s hard to beat the flexibility of HTML, CSS, and JavaScript. Load it up in a browser and away you go.

We do a lot of design sprints, where time is of the essence. The prototype we produce on the penultimate day of the sprint definitely won’t be production quality, but it will be good enough to test.

What’s interesting is that—when it comes to prototyping—our usual front-end priorities can and should go out the window. The priority now is speed. If that means sacrificing semantics or performance, then so be it. If I’m building a prototype and I find myself thinking “now, what’s the right class name for this component?”, then I know I’m in the wrong mindset. That question might be valid for production code, but it’s a waste of time for prototypes.

So these two kinds of work require very different attitudes. For production work, quality is key. For prototyping, making something quickly is what matters.

Whereas I would think long and hard about the performance impacts of third-party libraries and frameworks on a public project, I won’t give it a second thought when it comes to a prototype. Throw all the JavaScript frameworks and CSS libraries you want at it (although I would argue that in-browser technologies like CSS Grid have made CSS libraries like Bootstrap less necessary, even for prototyping).

Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.

When a prototype is successful, works great, and tests well, there’s a real temptation to use the prototype code as the basis for the final product. Don’t do this! I’ve made that mistake in the past and it always ends badly. I ended up spending far more time trying to wrangle prototype code to a production level than if I had just started from a clean slate.

Build prototypes to test ideas, designs, interactions, and interfaces …and then throw the code away. The value of a prototype is in answering questions and testing hypotheses. Don’t fall for the sunk cost fallacy when it’s time to switch over into production mode.

Of course it should go without saying that you should never, ever release prototype code into production.

And yet…

More and more live sites seem to be built with a prototyping mindset. Weighty JavaScript frameworks are used regardless of appropriateness. Accessibility, if it’s even considered at all, is relegated to an afterthought. Fragile architectures are employed that rely on first loading and then executing JavaScript in order to render basic content. Developer experience is prioritised over user experience.

Heydon recently highlighted an article that offered this tip for aspiring web developers:

As for HTML, there’s not much to learn right away and you can kind of learn as you go, but before making your first templates, know the difference between in-line elements like span and how they differ from block ones like div.

That’s perfectly reasonable advice …if you’re building a prototype. But if you’re building something for public consumption, you have a duty of care to the end users.

Push without notifications

On the first day of Indie Web Camp Berlin, I led a session on going offline with service workers. This covered all the usual use-cases: pre-caching; custom offline pages; saving pages for offline reading.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

The Push API is what makes push notifications possible on the web. There are a lot of moving parts—browser, server, service worker—and, frankly, it’s way over my head. But I’m familiar with the general gist of how it works. Here’s a typical flow:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker creates a notification linking to the URL, interrupting the user, and generally adding to the weight of information overload.

Here’s what Sebastiaan wanted to investigate: what if that last step weren’t so intrusive? Here’s the alternate flow he wanted to test:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker fetches the contents of the URL provided in the push message and caches the page. Silently.

It worked.
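
Here’s a minimal sketch of what that kind of silent push handler might look like in the service worker (the payload shape and cache name are my assumptions, not necessarily what Sebastiaan wrote):

// Assume the server pushes a JSON payload like {"url": "/latest-article"}
addEventListener('push', pushEvent => {
  const data = pushEvent.data.json();
  pushEvent.waitUntil(
    caches.open('content')
    .then( cache => cache.add(data.url) ) // fetch the URL and cache the response; no notification shown
  );
});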

I think this could be a real game-changer. I don’t know about you, but I’m very, very wary of granting websites the ability to send me push notifications. In fact, I don’t think I’ve ever given a website permission to interrupt me with push notifications.

You’ve seen the annoying permission dialogues, right?

In Firefox, it looks like this:

Will you allow name-of-website to send notifications?

[Not Now] [Allow Notifications]

In Chrome, it’s:

name-of-website wants to

Show notifications

[Block] [Allow]

But in actual fact, these dialogues are asking for permission to do two things:

  1. Receive messages pushed from the server.
  2. Display notifications based on those messages.

There’s no way to ask for permission just to do the first part. That’s a shame. While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.

Think of the use cases:

  • I grant push permission to a magazine. When the magazine publishes a new article, it’s cached on my device.
  • I grant push permission to a podcast. Whenever a new episode is published, it’s cached on my device.
  • I grant push permission to a blog. When there’s a new blog post, it’s cached on my device.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

There’s plenty of opportunity for abuse—the cache could get filled with content the user never asked for. But websites can already do that, and they don’t need to be granted any permissions to do so; just by visiting a website, it can add multiple files to a cache.

So it seems that the reason for the permissions dialogue is all about displaying notifications …not so much about receiving push messages from the server.

I wish there were a way to implement this background-caching pattern without requiring the user to grant permission to a dialogue that contains the word “notification.”

I wonder if the act of adding a site to the home screen could implicitly grant permission to allow use of the Push API without notifications?

In the meantime, the proposal for periodic synchronisation (using background sync) could achieve similar results, but in a less elegant way; periodically polling for new content instead of receiving a push message when new content is published. Also, it requires permission. But at least in this case, the permission dialogue should be more specific, and wouldn’t include the word “notification” anywhere.
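
For what it’s worth, the proposed shape of that periodic synchronisation API looks something like this at the moment (a sketch based on the explainer; the names and shape could well change before anything ships):

// Register a periodic background sync from the page (proposal; subject to change)
navigator.serviceWorker.ready
.then( registration => {
  return registration.periodicSync.register('get-latest-content', {
    minInterval: 24 * 60 * 60 * 1000 // at most once a day, in milliseconds
  });
});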

Service workers and videos in Safari

Alright, so I’ve already talked about some gotchas when debugging service worker issues. But what if you don’t even realise the problem has anything to do with your service worker?

This is not a hypothetical situation. I encountered this very thing myself. Gather ‘round the campfire, children…

One of the latest case studies on the Clearleft site is a nice write-up by Luke of designing a mobile app for Virgin Holidays. The case study includes a lovely video that demonstrates the log-in flow. I implemented that using a video element (with a poster image). Nice and straightforward. Super easy. All good.

But I hadn’t done my due diligence in browser testing (I guess I didn’t even think of it in this case). Hana informed me that the video wasn’t working at all in Safari. The poster image appeared just fine, but when you clicked on it, the video didn’t load.

I ducked, ducked, and went, uncovering what appeared to be the root of the problem. It seems that Safari is fussy about having servers support something called “byte-range requests”.

I had put the video in question on an Amazon S3 server. I came to the conclusion that S3 mustn’t support these kinds of headers correctly, or something.

Now I had a diagnosis. The next step was figuring out a solution. I thought I might have to move the video off of S3 and onto a server that I could configure a bit more.

Luckily, I never got ‘round to even starting that process. That’s good. Because it turns out that my diagnosis was completely wrong.

I came across a recent post by Phil Nash called Service workers: beware Safari’s range request. The title immediately grabbed my attention. Safari: yes! Video: yes! But service workers …wait a minute!

There’s a section in Phil’s post entitled “Diagnosing the problem”, in which he says:

I first thought it could have something to do with the CDN I’m using. There were some false positives regarding streaming video through a CDN that resulted in some extra research that was ultimately fruitless.

That described my situation exactly. Except Phil went further and nailed down the real cause of the problem:

Nginx was serving correct responses to Range requests. So was the CDN. The only other problem? The service worker. And this broke the video in Safari.

Doh! I hadn’t even thought about service workers!

Phil came up with a solution, and he has kindly shared his code.

I decided to go for a dumber solution:

// Step out of the way entirely for video requests so the browser
// can make its own range requests directly to the network.
if ( request.url.match(/\.(mp4)$/) ) {
  return;
}

That tells the service worker to just step out of the way when it comes to video requests. Now the video plays just fine in Safari. It’s a bit of a shame, because I’m kind of penalising all browsers for Safari’s bug, but the Clearleft site isn’t using much video at all, and in any case, it might be good not to fill up the cache with large video files.

But what’s more important than any particular solution is correctly identifying the problem. I’m quite sure I never would’ve been able to fix this issue if Phil hadn’t gone to the trouble of sharing his experience. I’m very, very grateful that he did.

That’s the bigger lesson here: if you solve a problem—even if you think it’s hardly worth mentioning—please, please share your solution. It could make all the difference for someone out there.

Service workers and browser extensions

I quite enjoy a good bug hunt. Just yesterday, Cassie and I were doing some bugfixing together. As always, the first step was to try to reproduce the problem and then isolate it. Which reminds me…

There’ve been a few occasions when I’ve been trying to debug service worker issues. The problem is rarely in reproducing the issue—it’s isolating the cause that can be frustrating. I try changing a bit of code here, and a bit of code there, in an attempt to zero in on the problem, but with no luck. Before long, I’m tearing my hair out staring at code that appears to have nothing wrong with it.

And that’s when I remember: browser extensions.

I’m currently using Firefox as my browser, and I have extensions installed to stop tracking and surveillance (these technologies are usually referred to as “ad blockers”, but that’s a bit of a misnomer—the issue isn’t with the ads; it’s with the invasive tracking).

If you think about how a service worker does its magic, it’s as if it’s sitting in the browser, waiting to intercept any requests to a particular domain. It’s like the service worker is the first port of call for any requests the browser makes. But then you add a browser extension. The browser extension is also waiting to intercept certain network requests. Now the extension is the first port of call, and the service worker is relegated to be next in line.

This, apparently, can cause issues (presumably depending on how the browser extension has been coded). In some situations, network requests that should work just fine start to fail, executing the catch clauses of fetch statements in your service worker.

So if you’ve been trying to debug a service worker issue, and you can’t seem to figure out what the problem might be, it’s not necessarily an issue with your code, or even an issue with the browser.

From now on when I’m troubleshooting service worker quirks, I’m going to introduce a step zero, before I even start reproducing or isolating the bug. I’m going to ask myself, “Are there any browser extensions installed?”

I realise that sounds as basic as asking “Are you sure the computer is switched on?” but there’s nothing wrong with having a checklist of basic questions to ask before moving on to the more complicated task of debugging.

I’m going to make a checklist. Then I’m going to use it …every time.

An nth-letter selector in CSS

Variable fonts are a very exciting and powerful new addition to the toolbox of web design. They were very much at the centre of discussion at this year’s Ampersand conference.

A lot of the demonstrations of the power of variable fonts are showing how it can be used to make letter-by-letter adjustments. The Ampersand website itself does this with the logo. See also: the brilliant demos by Mandy. It’s getting to the point where logotypes can be sculpted and adjusted just-so using CSS and raw text—no images required.

I find this to be thrilling, but there’s a fly in the ointment. In order to style something in CSS, you need a selector to target it. If you’re going to style individual letters, you need to wrap each one in an HTML element so that you can then select it in CSS.

For the Ampersand logo, we had to wrap each letter in a span (and then, because that might cause each letter to be read out individually instead of all of them as a single word, we applied some ARIA shenanigans to the containing element). There’s even a JavaScript library—Splitting.js—that will do this for you.
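
The resulting markup looks something like this (a simplified sketch of the general technique, not the actual Ampersand code):

<h1 aria-label="Ampersand">
  <span aria-hidden="true">A</span><span aria-hidden="true">m</span><span aria-hidden="true">p</span>…
</h1>

The aria-label provides the single word to assistive technology, while the individually-wrapped letters are hidden from it entirely.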

But if the whole point of using HTML is that the content is accessible, copyable, and pastable, isn’t it a bit of a shame that we then compromise the markup—and the accessibility—by wrapping individual letters in presentational tags?

What if there were an ::nth-letter selector in CSS?

There’s some prior art here. We’ve already got ::first-letter (and now the initial-letter property or whatever it ends up being called). If we can target the first letter in a piece of text, why not the second, or third, or nth?
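
If it existed, the letter-by-letter styling could live entirely in the stylesheet. Something like this, perhaps (made-up syntax; no browser supports any of it):

h1::nth-letter(3) {
  font-variation-settings: 'wght' 700, 'wdth' 120;
}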

It raises some questions. What constitutes a letter? Would it be better if we talked about ::first-character, ::initial-character, ::nth-character, and so on?

Even then, there are some tricksy things to figure out. What’s the third character in this piece of markup?

<p>AB<span>CD</span>EF</p>

Is it “C”, because that’s the third character regardless of nesting? Or is it “E”, because technically that’s the third character token that’s a direct child of the parent element?

I imagine that implementing ::nth-letter (or ::nth-character) would be quite complex so there would probably be very little appetite for it from browser makers. But it doesn’t seem as problematic as some selectors we’ve already got.

Take ::first-line, for example. It runs headlong into one of the biggest problems with adding new CSS selectors: it’s a selector that depends on layout.

Think about it. The browser has to first calculate how many characters are in the first line of an element (like, say, a paragraph). Having figured that out, the browser can then apply the styles declared in the ::first-line selector. But those styles may involve font sizing updates that change the number of characters in the first line. Paradox!

(Technically, only a subset of CSS properties can be applied to ::first-line, but that subset includes font-size so the paradox remains.)

I checked to see if ::first-line was included in one of my favourite documents: Incomplete List of Mistakes in the Design of CSS. It isn’t.

So compared to the logic-bending paradoxes of ::first-line, an ::nth-letter selector would be relatively straightforward. But that in itself isn’t a good enough reason for it to exist. As the CSS Working Group FAQs say:

The fact that we’ve made one mistake isn’t an argument for repeating the mistake.

A new selector needs to solve specific use cases. I would argue that all the letter-by-letter uses of variable fonts that we’re seeing demonstrate the use cases, and the number of these examples is only going to increase. The very fact that JavaScript libraries exist to solve this problem shows that there’s a need here (and we’ve seen the pattern of common JavaScript use-cases ending up in CSS before—rollovers, animation, etc.).

Now, I know that browser makers would like us to figure out how proposed CSS features should work by polyfilling a solution with Houdini. But would that work for a selector? I don’t know much about Houdini so I asked Una. She pointed me to a proposal by Greg and Tab for a full-on parser in Houdini. But that’s a loooong way off. Until then, we must petition our case to the browser gods.

This is not a new suggestion.

Anne Van Kesteren proposed ::nth-letter way back in 2003:

While I’m talking about CSS, I would also like to have ::nth-line(n), ::nth-letter(n) and ::nth-word(n), any thoughts?

Trent called for ::nth-letter in January 2011:

I think this would be the ideal solution from a web designer’s perspective. No javascript would be required, and 100% of the styling would be handled right where it should be—in the CSS.

Chris repeated the call in October of 2011:

Of all of these “new” selectors, ::nth-letter is likely the most useful.

In 2012, Bram linked to a blog post (now unavailable) from Adobe indicating that they were working on ::nth-letter for WebKit. That was the last anyone’s seen of this elusive pseudo-element.

In 2013, Chris (again) included ::nth-letter in his wishlist for CSS. So say we all.

Declaration

I like the robustness that comes with declarative languages. I also like the power that comes with imperative languages. Best of all, I like having the choice.

Take the video and audio elements, for example. If you want, you can embed a video or audio file into a web page using a straightforward declaration in HTML.

<audio src="..." controls><!-- fallback goes here --></audio>

Straightaway, that covers 80%-90% of use cases. But if you need to do more—like, provide your own custom controls—there’s a corresponding API that’s exposed in JavaScript. Using that API, you can do everything that you can do with the HTML element, and a whole lot more besides.
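
For example, a custom play/pause button only takes a few lines (a sketch; it assumes a button element alongside the audio element above):

const audio = document.querySelector('audio');
const button = document.querySelector('button.play-pause');
button.addEventListener('click', () => {
  if (audio.paused) {
    audio.play();
  } else {
    audio.pause();
  }
});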

It’s a similar story with animation. CSS provides plenty of animation power, but it’s limited in the events that can trigger the animations. That’s okay. There’s a corresponding JavaScript API that gives you more power. Again, the CSS declarations cover 80%-90% of use cases, but for anyone in that 10%-20%, the web animation API is there to help.
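
Here’s the kind of thing I mean (a sketch; the element and timings are made up):

// Trigger an animation imperatively with the web animation API
document.querySelector('.alert').animate(
  [ { opacity: 0 }, { opacity: 1 } ],
  { duration: 300, easing: 'ease-out' }
);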

Client-side form validation is another good example. For most of us, the HTML attributes—required, type, etc.—are probably enough most of the time.

<input type="email" required />

When we need more fine-grained control, there’s a validation API available in JavaScript (yes, yes, I know that the API itself is problematic, but you get the point).
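
A typical use of that API looks something like this (a sketch; the custom message is just for illustration):

const email = document.querySelector('input[type="email"]');
email.addEventListener('input', () => {
  if (email.validity.typeMismatch) {
    email.setCustomValidity('Please enter a valid email address.');
  } else {
    email.setCustomValidity(''); // an empty string marks the field as valid again
  }
});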

I really like this design pattern. Cover 80% of the use cases with a declarative solution in HTML, but also provide an imperative alternative in JavaScript that gives more power. HTML5 has plenty of examples of this pattern. But I feel like the history of web standards has a few missed opportunities too.

Geolocation is a good example of an unbalanced feature. If you want to use it, you must use JavaScript. There is no declarative alternative. This doesn’t exist:

<input type="geolocation" />

That’s a shame. Anyone writing a form that asks for the user’s location—in order to submit that information to a server for processing—must write some JavaScript. That’s okay, I guess, but it’s always going to be that bit more fragile and error-prone compared to markup.
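
The JavaScript in question would look something like this (a sketch that writes the coordinates into form fields; the field names are made up):

navigator.geolocation.getCurrentPosition(
  position => {
    document.querySelector('[name="latitude"]').value = position.coords.latitude;
    document.querySelector('[name="longitude"]').value = position.coords.longitude;
  },
  error => {
    console.log(error.message); // the user may simply have declined permission
  }
);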

(And just in case you’re thinking of the fallback—which would be for the input element to be rendered as though its type value were text—and you think it’s ludicrous to expect users with non-supporting browsers to enter latitude and longitude coordinates by hand, I direct your attention to input type="color": in non-supporting browsers, it’s rendered as input type="text" and users are expected to enter colour values by hand.)

Geolocation is an interesting use case because it only works on HTTPS. There are quite a few JavaScript APIs that quite rightly require a secure context—like service workers—but I can’t think of a single HTML element or attribute that requires HTTPS (although that will soon change if we don’t act to stop plans to create a two-tier web). But that can’t have been the thinking behind geolocation being JavaScript only; when geolocation first shipped, it was available over HTTP connections too.

Anyway, that’s just one example. Like I said, it’s not that I’m in favour of declarative solutions instead of imperative ones; I strongly favour the choice offered by providing declarative solutions as well as imperative ones.

In recent years there’s been a push to expose low-level browser features to developers. They’re inevitably exposed as JavaScript APIs. In most cases, that makes total sense. I can’t really imagine a declarative way of accessing the fetch or cache APIs, for example. But I think we should be careful that it doesn’t become the only way of exposing new browser features. I think that, wherever possible, the design pattern of exposing new features declaratively and imperatively offers the best of the both worlds—ease of use for the simple use cases, and power for the more complex use cases.

Previously, it was up to browser makers to think about these things. But now, with the advent of web components, we developers are gaining that same level of power and responsibility. So if you’re making a web component that you’re hoping other people will also use, maybe it’s worth keeping this design pattern in mind: allow authors to configure the functionality of the component using HTML attributes and JavaScript methods.

Service workers in Samsung Internet browser

I was getting reports of some odd behaviour with the service worker on thesession.org, the Irish music website I run. Someone emailed me to say that they kept getting the offline page, even when their internet connection was perfectly fine and the site was up and running.

They didn’t mind answering my pestering follow-on questions to isolate the problem. They told me that they were using the Samsung Internet browser on Android. After a little searching, I found this message on a GitHub thread about using waitUntil. It’s from someone who works on the Samsung Internet team:

Sadly, the asynchronos waitUntil() is not implemented yet in our browser. Yes, we will implement it but our release cycle is so far. So, for a long time, we might not resolve the issue.

A-ha! That explains the problem. See, here’s the pattern I was using:

  1. When someone requests a file,
  2. fetch that file from the network,
  3. create a copy of the file and cache it,
  4. return the contents.

Step 1 is the event listener:

// 1. When someone requests a file
addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  fetchEvent.respondWith(

Steps 2, 3, and 4 are inside that respondWith:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  caches.open(cacheName)
  .then( cache => {
    cache.put(request, copy);
  })
  // 4. return the contents.
  return responseFromFetch;
})

Step 4 might well complete while step 3 is still running (remember, everything in a service worker script is asynchronous so even though I’ve written out the steps sequentially, you never know what order the steps will finish in). That’s why I’m wrapping that third step inside fetchEvent.waitUntil:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  fetchEvent.waitUntil(
    caches.open(cacheName)
    .then( cache => {
      cache.put(request, copy);
    })
  );
  // 4. return the contents.
  return responseFromFetch;
})

If a browser (like Samsung Internet) doesn’t understand the bit where I say fetchEvent.waitUntil, then it will throw an error and execute the catch clause. That’s where I have my fifth and final step: “try looking in the cache instead, but if that fails, show the offline page”:

.catch( fetchError => {
  console.log(fetchError);
  return caches.match(request)
  .then( responseFromCache => {
    return responseFromCache || caches.match('/offline');
  });
})

Normally in this kind of situation, I’d use feature detection to check whether a browser understands a particular API method. But it’s a bit tricky to test for support for asynchronous waitUntil. That’s okay. I can use a try/catch statement instead. Here’s what my revised code looks like:

fetch(request)
.then( responseFromFetch => {
  let copy = responseFromFetch.clone();
  try {
    // This works in browsers that support asynchronous waitUntil…
    fetchEvent.waitUntil(
      caches.open(cacheName)
      .then( cache => {
        cache.put(request, copy);
      })
    );
  } catch (error) {
    // …and this catches the error in browsers that don’t.
    console.log(error);
  }
  return responseFromFetch;
})

Now I’ve managed to localise the error. If a browser doesn’t understand the bit where I say fetchEvent.waitUntil, it will execute the code in the catch clause, and then carry on as usual. (I realise it’s a bit confusing that there are two different kinds of catch clauses going on here: on the outside there’s a .then()/.catch() combination; inside is a try{}/catch{} combination.)

At some point, when support for async waitUntil statements is universal, this precautionary measure won’t be needed, but for now wrapping them inside try doesn’t do any harm.

There are a few places in chapter five of Going Offline—the chapter about service worker strategies—where I show examples using async waitUntil. There’s nothing wrong with the code in those examples, but if you want to play it safe (especially while Samsung Internet doesn’t support async waitUntil), feel free to wrap those examples in try/catch statements. But I’m not going to make those changes part of the errata for the book. In this case, the issue isn’t with the code itself, but with browser support.

A framework for web performance

Here at Clearleft, we’ve recently been doing some front-end consultancy. That prompted me to jot down thoughts on design principles and performance:

We continued with some more performance work this week. Having already covered some of the nitty-gritty performance tactics like font-loading, image optimisation, etc., we wanted to take a step back and formulate an ongoing strategy for performance.

When it comes to web performance, the eternal question is “What should we measure?” The answer to that question will determine where you then concentrate your efforts—whatever it is you’re measuring, that’s what you’ll be looking to improve.

I started by drawing a distinction between measurements of quantities and measurements of time. Quantities are quite easy to measure. You can measure these quantities using nothing more than browser dev tools:

  • overall file size (page weight + assets), and
  • number of requests.

I think it’s good to measure these quantities, and I think it’s good to have a performance budget for them. But I also think they’re table stakes. They don’t actually tell you much about the impact that performance is having on the user experience. For that, we need to enumerate moments in time:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, and
  • time to first meaningful interaction.

There’s one more moment in time, which is the time until DOM content is loaded. But I’m not sure that has a direct effect on how performance is perceived, so it feels like it belongs more in the category of quantities than time.
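
As an aside, some of these moments in time can be read straight out of the browser with the navigation timing and paint timing APIs. Here’s a rough sketch (note that there’s no standardised measurement for first meaningful paint or first meaningful interaction; first contentful paint is the nearest observable proxy):

const [navigation] = performance.getEntriesByType('navigation');
console.log('time to first byte:', navigation.responseStart);

performance.getEntriesByType('paint').forEach( paint => {
  // logs 'first-paint' and 'first-contentful-paint'
  console.log(paint.name, paint.startTime);
});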

Next, we listed out all the factors that could affect each of the moments in time. For example, the time to first byte depends on the speed of the network that the user is on. It also depends on how speedily your server (or Content Delivery Network) can return a response. Meanwhile, time to first render is affected by the speed of the user’s network, but it’s also affected by how many blocking elements are on the critical path.

By listing all the factors out, we can draw a distinction between the factors that are outside of our control, and the factors that we can do something about. So while we might not be able to do anything about the speed of the user’s network, we might well be able to optimise the speed at which our server returns a response, or we might be able to defer some assets that are currently blocking the critical path.

Factors
1st byte
  • server speed
  • network speed
1st render
  • network speed
  • critical path assets
1st meaningful paint
  • network speed
  • font-loading strategy
  • image optimisation
1st meaningful interaction
  • network speed
  • device processing power
  • JavaScript size

So far, everything in our list of performance-affecting factors is related to the first visit. It’s worth drawing up a second list to document all the factors for subsequent visits. This will look the same as the list for first visits, but with the crucial difference that caching now becomes a factor.

1st byte
  • First visit: server speed, network speed
  • Repeat visit: server speed, network speed, caching
1st render
  • First visit: network speed, critical path assets
  • Repeat visit: network speed, critical path assets, caching
1st meaningful paint
  • First visit: network speed, font-loading strategy, image optimisation
  • Repeat visit: network speed, font-loading strategy, image optimisation, caching
1st meaningful interaction
  • First visit: network speed, device processing power, JavaScript size
  • Repeat visit: network speed, device processing power, JavaScript size, caching

Alright. Now it’s time to get some numbers for each of the four moments in time. I use Web Page Test for this. Choose a realistic setting, like 3G on an Android from the East coast of the USA. Under advanced settings, be sure to select “First View and Repeat View” so that you can put those numbers in two different columns.

Here are some numbers for adactio.com:

  • 1st byte: 1.476 seconds (first visit); 1.215 seconds (repeat visit)
  • 1st render: 2.633 seconds (first visit); 1.930 seconds (repeat visit)
  • 1st meaningful paint: 2.633 seconds (first visit); 1.930 seconds (repeat visit)
  • 1st meaningful interaction: 2.868 seconds (first visit); 2.083 seconds (repeat visit)

I’m getting the same numbers for first render as first meaningful paint. That tells me that there’s no point in trying to optimise my font-loading, for example …which makes total sense, because adactio.com isn’t using any web fonts. But on a different site, you might see a big gap between those numbers.

I am seeing a gap between time to first byte and time to first render. That tells me that I might be able to get some blocking requests off the critical path. Sure enough, I’m currently referencing an external stylesheet in the head of adactio.com—if I were to inline critical styles and defer the loading of that stylesheet, I should be able to narrow that gap.
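
One common pattern for deferring a stylesheet like that (popularised by the Filament Group) looks like this; treat it as a sketch rather than gospel:

<style>
  /* critical styles inlined here */
</style>
<link rel="stylesheet" href="/css/styles.css" media="print" onload="this.media='all'">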

A straightforward site like adactio.com isn’t going to have much to worry about when it comes to the time to first meaningful interaction, but on other sites, this can be a significant bottleneck. If you’re sending UI elements in the initial HTML, but then waiting for JavaScript to “hydrate” those elements into working, the user can end up in an uncanny valley of tapping on page elements that look fine, but aren’t ready yet.

My point is, you’re going to see very different distributions of numbers depending on the kind of site you’re testing. There’s no one-size-fits-all metric to focus on.

Now that you’ve got numbers for how your site is currently performing, you can create two new columns: one of those is a list of first-visit targets, the other is a list of repeat-visit targets for each moment in time. Try to keep them realistic.

For example, if I could reduce the time to first render on adactio.com by 0.5 seconds, my goals would look like this:

  • 1st byte: 1.476 seconds (first visit); 1.215 seconds (repeat visit)
  • 1st render: 2.133 seconds (first visit); 1.430 seconds (repeat visit)
  • 1st meaningful paint: 2.133 seconds (first visit); 1.430 seconds (repeat visit)
  • 1st meaningful interaction: 2.368 seconds (first visit); 1.583 seconds (repeat visit)

See how the 0.5 seconds saving cascades down into the other numbers?

Alright! Now I’ve got something to aim for. It might also be worth having an extra column to record which of the moments in time are high priority, which are medium priority, and which are low priority.

  • 1st byte: medium priority
  • 1st render: high priority
  • 1st meaningful paint: low priority
  • 1st meaningful interaction: low priority

Your goals and priorities may be quite different.

I think this is a fairly useful framework for figuring out where to focus when it comes to web performance. If you’d like to give it a go, I’ve made a web performance chart for you to print out and fill in. Here’s a PDF version if that’s easier for printing. Or you can download the HTML version if you want to edit it.

I have to say, I’m really enjoying the front-end consultancy work we’ve been doing at Clearleft around performance and related technologies, like offline functionality. I’d like to do more of it. If you’d like some help in prioritising performance at your company, please get in touch. Let’s make the web faster together.

The top four web performance challenges

Danielle and I have been doing some front-end consultancy for a local client recently.

We’ve both been enjoying it a lot—it’s exhausting but rewarding work. So if you’d like us to come in and spend a few days with your company’s dev team, please get in touch.

I’ve certainly enjoyed the opportunity to watch Danielle in action, leading a workshop on refactoring React components in a pattern library. She’s incredibly knowledgeable in that area.

I’m clueless when it comes to React, but I really enjoy getting down to the nitty-gritty of browser features—HTML, CSS, and JavaScript APIs. Our skillsets complement one another nicely.

This recent work was what prompted my thoughts around the principles of robustness and least power. We spent a day evaluating a continuum of related front-end concerns: semantics, accessibility, performance, and SEO.

When it came to performance, a lot of the work was around figuring out the most suitable metric to prioritise:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, or
  • time to first meaningful interaction.

And that doesn’t even cover the more easily-measurable numbers like:

  • overall file size,
  • number of requests, or
  • PageSpeed Insights score.

One outcome was to realise that there’s a tendency (in performance, accessibility, or SEO) to focus on what’s easily measurable, not because it’s necessarily what matters, but precisely because it is easy to measure.

Then we got down to some nuts’n’bolts technology decisions. I took a step back and looked at the state of performance across the web. I thought it would be fun to rank the most troublesome technologies in order of tricksiness. I came up with a top four list.

Here we go, counting down from four to the number one spot…

4. Web fonts

Coming in at number four, it’s web fonts. Sometimes it’s the combined weight of multiple font files that’s the problem, but more often than not, it’s the perceived performance that suffers (mostly because of when the web fonts appear).

Fortunately there’s a straightforward question to ask in this situation: WWZD—What Would Zach Do?

3. Images

At the number three spot, it’s images. There are more of them and they just seem to be getting bigger all the time. And yet, we have more tools at our disposal than ever—better file formats, and excellent browser support for responsive images. Heck, we’re even getting the ability to lazy load images in HTML now.
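
When that lands, native lazy loading should be as simple as an attribute (syntax as currently proposed, so it may yet change):

<img src="photo.jpg" alt="A description of the photo" width="800" height="600" loading="lazy">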

So, as with web fonts, it feels like the impact of images on performance can be handled, as long as you give them some time and attention.

2. Our JavaScript

Just missing out on making the top spot is the JavaScript that we send down the pipe to our long-suffering users. There’s nothing wrong with the code itself—I’m sure it’s very good. There’s just too damn much of it. And that’s a real performance bottleneck, especially on mobile.

So stop sending so much JavaScript—a solution as simple as Monty Python’s instructions for playing the flute.

1. Other people’s JavaScript

At number one with a bullet, it’s all the crap that someone else tells us to put on our websites. Analytics. Ads. Trackers. Beacons. “It’s just one little script”, they say. And then that one little script calls in another, and another, and another.

It’s so disheartening when you’ve devoted your time and energy into your web font loading strategy, and optimising your images, and unbundling your JavaScript …only to have someone else’s JavaScript just shit all over your nice performance budget.

Here’s the really annoying thing: when I go to performance conferences, or participate in performance discussions, you know who’s nowhere to be found? The people making those third-party scripts.

The narrative around front-end performance is that it’s up to us developers to take responsibility for how our websites perform. But by far the biggest performance impact comes from third-party scripts.

There is a solution to this, but it’s not a technical one. We could refuse to add overweight (and in many cases, unethical) third-party scripts to the sites we build.

I have many, many issues with Google’s AMP project, but I completely acknowledge that it solves a political problem:

No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts. This is A-okay with me.

The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web.

But how can we take that lesson from AMP and apply it to all our web pages? If we simply refuse to be the one to add those third-party scripts, we get fired, and somebody else comes in who is willing to poison web pages with third-party scripts. There’s nothing to stop companies doing that.

Unless…

Suppose we were to all make a pact that we would stand in solidarity with any of our fellow developers in that sort of situation. A sort of joining-together. A union, if you will.

There is power in a factory, power in the land, power in the hands of the worker, but it all amounts to nothing if together we don’t stand.

There is power in a union.

Robustness and least power

There’s a great article by Steven Garrity over on A List Apart called Design with Difficult Data. It runs through the advantages of using unusual content to stress-test interfaces, referencing Postel’s Law, AKA the robustness principle:

Be conservative in what you send, be liberal in what you accept.

Even though the robustness principle was formulated for packet-switching, I see it at work in all sorts of disciplines, including design. A good example is in best practices for designing forms:

Every field you ask users to fill out requires some effort. The more effort is needed to fill out a form, the less likely users will complete the form. That’s why the foundational rule of form design is shorter is better — get rid of all inessential fields.

In other words, be conservative in the number of form fields you send to users. But then, when it comes to users filling in those fields:

It’s very common for a few variations of an answer to a question to be possible; for example, when a form asks users to provide information about their state, and a user responds by typing their state’s abbreviation instead of the full name (for example, CA instead of California). The form should accept both formats, and it’s the developer job to convert the data into a consistent format.

In other words, be liberal in what you accept from users.

I find the robustness principle to be an immensely powerful way of figuring out how to approach many design problems. When it comes to figuring out what specific tools or technologies to use, there’s an equally useful principle: the rule of least power:

Choose the least powerful language suitable for a given purpose.

On the face of it, this sounds counter-intuitive; why forego a powerful technology in favour of something less powerful?

Well, power comes with a price. Powerful technologies tend to be more complex, which means they can be trickier to use and trickier to swap out later.

Take the front-end stack, for example: HTML, CSS, and JavaScript. HTML and CSS are declarative, so you don’t get as much precise control as you get with an imperative language like JavaScript. But JavaScript comes with a steeper learning curve and a harsher error-handling model: browsers skip over any HTML or CSS they don’t understand, whereas a single JavaScript error can stop a whole script from running.

As a general rule, it’s always worth asking if you can accomplish something with a less powerful technology:

In the web front-end stack — HTML, CSS, JS, and ARIA — if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

  • Instead of using JavaScript to do animation, see if you can do it in CSS instead.
  • Instead of using JavaScript to do simple client-side form validation, try to use HTML input types and attributes like required.
  • Instead of using ARIA to give a certain role value to a div or span, try to use a more suitable HTML element instead.
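
To make that last swap concrete, here’s a sketch of the kind of thing I mean:

<!-- Instead of this, which needs ARIA plus JavaScript for keyboard support… -->
<div role="button" tabindex="0">Save</div>

<!-- …use this, which gives you focus, keyboard handling, and semantics for free: -->
<button type="button">Save</button>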

It sounds a lot like the KISS principle: Keep It Simple, Stupid. But whereas the KISS principle can be applied within a specific technology—like keeping your CSS manageable—the rule of least power is all about evaluating technology; choosing the most appropriate technology for the task at hand.

There are some associated principles, like YAGNI: You Ain’t Gonna Need It. That helps you avoid picking a technology that’s too powerful for your current needs, but which might be suitable in the future: premature optimisation. Or, as Rachel put it, stop solving problems you don’t yet have:

So make sure every bit of code added to your project is there for a reason you can explain, not just because it is part of some standard toolkit or boilerplate.

There’s no shortage of principles, laws, and rules out there, and I find many of them very useful, but if I had to pick just two that are particularly applicable to my work, they would be the robustness principle and the rule of least power.

After all, if they’re good enough for Tim Berners-Lee…