
Dark mode

I had a very productive time at Indie Web Camp Amsterdam. The format really lends itself to getting the most of a weekend—one day of discussions followed by one day of hands-on making and doing. You should definitely come along to Indie Web Camp Brighton on October 19th and 20th to experience it for yourself.

By the end of the “doing” day, I had something fun to demo—a dark mode for my website.

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.

Applying the dark mode styles is pretty straightforward in theory. You put the styles inside this media query:

@media (prefers-color-scheme: dark) {
...
}

Rather than over-riding every instance of a colour in my style sheet, I decided I’d do a little bit of refactoring first and switch to using CSS custom properties (or variables, if you will).

:root {
  --background-color: #fff;
  --text-color: #333;
  --link-color: #b52;
}
body {
  background-color: var(--background-color);
  color: var(--text-color);
}
a {
  color: var(--link-color);
}

Then I can over-ride the custom properties without having to touch the already-declared styles:

@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #111416;
    --text-color: #ccc;
    --link-color: #f96;
  }
}

All in all, I have about a dozen custom properties for colours—variations for text, backgrounds, and interface elements like links and buttons.

By using custom properties and the prefers-color-scheme media query, I was 90% of the way there. But the devil is in the details.

I have SVGs of sparklines on my homepage. The SVG has a hard-coded colour value in the stroke attribute of the path element that draws the sparkline. Fortunately, this can be over-ridden in the style sheet:

svg.activity-sparkline path {
  stroke: var(--text-color);
}

The real challenge came with the images I use in the headers of my pages. They’re JPEGs with white corners on one side and white gradients on the other.

header images

I could make them PNGs to get transparency, but the file size would shoot up—they’re photographic images (with a little bit of scan-line treatment) so JPEGs (or WEBPs) are the better format. Then I realised I could use CSS to recreate the two effects:

  1. For the cut-out triangle in the top corner, there’s clip-path.
  2. For the gradient, there’s …gradients!
background-image: linear-gradient(
  to right,
  transparent 50%,
  var(--background-color) 100%
);

Oh, and I noticed that when I applied the clip-path for the corners, it had no effect in Safari. It turns out that after half a decade of support, it still only exists with the -webkit- prefix. That’s just ridiculous. At this point we should be burning vendor prefixes with fire. I can’t believe that Apple still ships standardised CSS properties that only work with a prefix.
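
In practice, that means doubling up the declaration. Something along these lines will do it (a sketch, with a made-up class name and made-up polygon values rather than my actual styles):

/* Cut a diagonal notch out of the top right corner */
.header-image {
  -webkit-clip-path: polygon(0 0, calc(100% - 2em) 0, 100% 2em, 100% 100%, 0 100%);
  clip-path: polygon(0 0, calc(100% - 2em) 0, 100% 2em, 100% 100%, 0 100%);
}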

In order to apply the CSS clip-path and gradient, I needed to save out the images again, this time without the effects baked in. I found the original Photoshop file I used to export the images. But I don’t have a copy of Photoshop any more. I haven’t had a copy of Photoshop since Adobe switched to their Mafia model of pricing. A quick bit of searching turned up Photopea, which is pretty much an entire recreation of Photoshop in the browser. I was able to open my old PSD file and re-export my images.

Header images: LEGO clone trooper, Brighton bandstand, scaffolding, Tokyo, Florence

Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop. I could probably do those raster scan lines with CSS if I were smart enough.

dark mode

This is what I demo’d at the end of Indie Web Camp Amsterdam, and I was pleased with the results. But fate had an extra bit of good timing in store for me.

The very next day at the View Source conference, Melanie Richards gave a fantastic talk called The Tailored Web: Effectively Honoring Visual Preferences (seriously, conference organisers, you want this talk on your line-up). It was packed with great insights and advice on implementing dark mode, like this little gem for adjusting images:

@media (prefers-color-scheme: dark) {
  img {
    filter: brightness(.8) contrast(1.2);
  }
}

Melanie also pointed out that you can indicate the presence of dark mode styles to browsers, although the mechanism is yet to shake out. You can do it in CSS:

:root {
  color-scheme: light dark;
}

But you can also do it in HTML, with a meta element, something along these lines:
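
<!-- the exact name of this meta element was still being standardised -->
<meta name="supported-color-schemes" content="light dark">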

That allows browsers to swap out replaced content: interface elements like form fields and dropdowns.

Oh, and one other thing I added after the fact was swapping out map imagery by using the picture element to point to darker map tiles:

<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.mapbox.com/styles/v1/mapbox/dark-v10/static...">
<img src="https://api.mapbox.com/styles/v1/mapbox/outdoors-v10/static..." alt="map">
</picture>

The light map and the dark map

So now I’ve got a dark mode for my website. Admittedly, it’s for just one of the eight style sheets. I’ve decided that, while I’ll update my default styles at every opportunity, I’m going to preserve the other skins as they are, like the historical museum pieces they are.

If you’re on the latest version of iOS, go ahead and toggle the light and dark options in your system preferences to flip between this site’s colour schemes.

Drag’n’drop revisited

I got a message from a screen-reader user of The Session recently, letting me know of a problem they were having. I love getting any kind of feedback around accessibility, so this was like gold dust to me.

They pointed out that the drag’n’drop interface for rearranging the order of tunes in a set was inaccessible.

Drag and drop

Of course! I slapped my forehead. How could I have missed this?

It had been a while since I had implemented that functionality, so before even looking at the existing code, I started to think about how I could improve the situation. Maybe I could capture keystroke events from the arrow keys and announce changes via ARIA values? That sounded a bit heavy-handed though: mess with people’s native keyboard functionality at your peril.

Then I looked at the code. That was when I realised that the fix was going to be much, much easier than I thought.

I documented my process of adding the drag’n’drop functionality back in 2016. Past me had his progressive enhancement hat on:

One of the interfaces needed for this feature was a form to re-order items in a list. So I thought to myself, “what’s the simplest technology to enable this functionality?” I came up with a series of select elements within a form.

Reordering

The problem was in my feature detection:

There’s a little bit of mustard-cutting going on: does the dragula object exist, and does the browser understand querySelector? If so, the select elements are hidden and the drag’n’drop is enabled.

The logic was fine, but the execution was flawed. I was being lazy and hiding the select elements with display: none. That hides them visually, but it also hides them from screen readers. I swapped out that style declaration for one that visually hides the elements, but keeps them accessible and focusable.
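
The replacement is a variation on the well-known “visually hidden” pattern. A sketch (the exact declarations on The Session may differ):

.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}

Elements hidden this way stay in the accessibility tree and can still receive focus; display: none removes them from both.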

It was a very quick fix. I had the odd sensation of wanting to thank Past Me for making things easy for Present Me. But I don’t want to talk about time travel because if we start talking about it then we’re going to be here all day talking about it, making diagrams with straws.

I pushed the fix, told the screen-reader user who originally contacted me, and got a reply back saying that everything was working great now. Success!

CSS custom properties in generated content

Cassie posted a neat tiny lesson that she’s written a reduced test case for.

Here’s the situation…

CSS custom properties are fantastic. You can drop them in just about anywhere that a property takes a value.

Here’s an example of defining a custom property for a length:

:root {
    --my-value: 1em;
}

Then I can use that anywhere I’d normally give something a length:

.my-element {
    margin-bottom: var(--my-value);
}

I went a bit overboard with custom properties on the new Patterns Day site. I used them for colour values, font stacks, and spacing. Design tokens, I guess. They really come into their own when you combine them with media queries: you can update the values of the custom properties based on screen size …without having to redefine where those properties are applied. Also, they can be updated via JavaScript so they make for a great common language between CSS and JavaScript: you can define where they’re used in your CSS and then update their values in JavaScript, perhaps in response to user interaction.
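
For example, assuming a custom property named --brand-color (made up for illustration), the JavaScript side is a one-liner:

// Update the custom property; every rule using var(--brand-color) follows suit
document.documentElement.style.setProperty('--brand-color', '#b52');

// Reading the current value back
const brandColor = getComputedStyle(document.documentElement)
  .getPropertyValue('--brand-color');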

But there are a few places where you can’t use custom properties. You can’t, for example, use them as part of a media query. This won’t work:

@media all and (min-width: var(--my-value)) {
    ...
}

You also can’t use them in generated content if the value is a number. This won’t work:

:root {
    --number-value: 15;
}
.my-element::before {
    content: var(--number-value);
}

Fair enough. Generated content in CSS is kind of a strange beast. Eric delivered an entire hour-long talk at An Event Apart in Seattle on generated content.

But Cassie found a workaround if the value you want to put into that content property is numeric. The CSS counter value is a kind of generated content—the numbers that appear in front of ordered list items. And you can control the value of those numbers from CSS.

CSS counters work kind of like variables. You name them and assign values to them using the counter-reset property:

.my-element {
    counter-reset: mycounter 15;
}

You can then reference the value of mycounter in a content property using the counter value:

.my-element::before {
    content: counter(mycounter);
}

Cassie realised that even though you can’t pass in a custom property directly to generated content, you can pass in a custom property to the counter-reset property. So you can do this:

:root {
    --number-value: 15;
}
.my-element::before {
    counter-reset: mycounter var(--number-value);
    content: counter(mycounter);
}

In a roundabout way, this allows you to use a custom property for generated content!

I realise that the use cases are pretty narrow, but I can’t help but be impressed with the thinking behind this. Personally, I would’ve just read that generated content doesn’t accept custom properties and moved on. I would’ve given up quickly. But Cassie took a step back and found a creative pass-the-parcel solution to the problem.

I feel like this is a hack in the best sense of the word: a creatively improvised solution to a problem or limitation.

I was trying to display the numeric value stored in a CSS variable inside generated content… Turns out you can’t do that. But you can do this… codepen.io/cassie-codes/p… (not saying you should, but you could)

Handing back control

An Event Apart Seattle was most excellent. This year, the AEA team are trying something different and making each event three days long. That’s a lot of mindblowing content!

What always fascinates me at events like these is the way that some themes seem to emerge, without any prior collusion between the speakers. This time, I felt that there was a strong thread of giving control directly to users:

Sarah and Margot both touched on this when talking about authenticity in brand messaging.

Margot described this in terms of vulnerability for the brand, but the kind of vulnerability that leads to trust.

Sarah talked about it in terms of respect—respecting the privacy of users, and respecting the way that they want to use your services. Call it compassion, call it empathy, or call it just good business sense, but providing these kinds of controls in an interface is an excellent long-term strategy.

In Val’s animation talk, she did a deep dive into prefers-reduced-motion, a media query that deliberately hands control back to the user.
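
It works just like prefers-color-scheme. A minimal sketch, with a made-up class name:

@media (prefers-reduced-motion: reduce) {
  .animated-thing {
    animation: none;
    transition: none;
  }
}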

Even in a CSS-heavy talk like Jen’s, she took the time to explain why starting with meaningful markup is so important—it’s because you can’t control how the user will access your content. They may use tools like reader modes, or Pocket, or have web pages read aloud to them. The user has the final say, and rightly so.

In his CSS talk, Eric reminded us that a style sheet is a list of strong suggestions, not instructions.

Beth’s talk was probably the most explicit on the theme of returning control to users. She drew on examples from beyond the world of the web—from architecture, urban planning, and more—to show that the most successful systems are not imposed from the top down, but involve everyone, especially those most marginalised.

And even in my own talk on service workers, I raved about the design pattern of allowing users to save pages offline to read later. Instead of trying to guess what the user wants, give them the means to take control.

I was really encouraged to see this theme emerge. Mind you, when I look at the reality of most web products, it’s easy to get discouraged. Far from providing their users with controls over their own content, Instagram won’t even let their customers have a chronological feed. And Matt recently wrote about how both Twitter and Quora are heading further and further away from giving control to their users in his piece called Optimizing for outrage.

Still, I came away from An Event Apart Seattle with a renewed determination to do my part in giving people more control over the products and services we design and develop.

I spent the first two days of the conference trying to liveblog as much as I could. I find it really focuses my attention, although it’s also quite knackering. I didn’t do too badly; I managed to cover eleven of the talks (out of the conference’s total of seventeen):

  1. Slow Design for an Anxious World by Jeffrey Zeldman
  2. Designing for Trust in an Uncertain World by Margot Bloomstein
  3. Designing for Personalities by Sarah Parmenter
  4. Generation Style by Eric Meyer
  5. Making Things Better: Redefining the Technical Possibilities of CSS by Rachel Andrew
  6. Designing Intrinsic Layouts by Jen Simmons
  7. How to Think Like a Front-End Developer by Chris Coyier
  8. From Ideation to Iteration: Design Thinking for Work and for Life by Una Kravets
  9. Move Fast and Don’t Break Things by Scott Jehl
  10. Mobile Planet by Luke Wroblewski
  11. Unsolved Problems by Beth Dean

Move Fast and Don’t Break Things by Scott Jehl

Scott Jehl is speaking at An Event Apart in Seattle—yay! His talk is called Move Fast and Don’t Break Things:

Performance is a high priority for any site of scale today, but it can be easier to make a site fast than to keep it that way. As a site’s features and design evolves, its performance is often threatened for a number of reasons, making it hard to ensure fast, resilient access to services. In this session, Scott will draw from real-world examples where business goals and other priorities have conflicted with page performance, and share some strategies and practices that have helped major sites overcome those challenges to defend their speed without compromises.

The title is a riff on the “move fast and break things” motto, which comes from a more naive time on the web. But Scott finds part of it relatable. Things break. We want to move fast without breaking things.

This is a performance talk, which is another kind of moving fast. Scott starts with a brief history of not breaking websites. He’s been chipping away at websites for 20 years now. Remember Positioning Is Everything? How about Quirksmode? That one's still around.

In the early days, building a website that was "not broken" was difficult, but it was difficult for different reasons. We were focused on consistency. We had to deal with differences between browsers. There were two ways of dealing with browsers: browser detection and feature detection.

The feature-based approach was more sustainable but harder. It fits nicely with the practice of progressive enhancement. It’s a good mindset for dealing with the explosion of devices that kicked off later. Touch screens made us rethink our mouse-centric and hover-centric assumptions. That made us realise how much keyboard-driven access mattered all along.

Browsers exploded too. And our data networks changed. With this explosion of considerations, it was clear that our early ideas of “not broken” didn’t work. Our notion of what constituted “not broken” was itself broken. Consistency just doesn’t cut it.

But there was a comforting part to this too. It turned out that progressive enhancement was there to help …even though we didn’t know what new devices were going to appear. This is a recurring theme throughout Scott’s career. So given all these benefits of progressive enhancement, it shouldn’t be surprising that it turns out to be really good for performance too. If you practice progressive enhancement, you’re kind of a performance expert already.

People started talking about new performance metrics that we should care about. We’ve got new tools, like Page Speed Insights. It gives tangible advice on how to test things. Web Page Test is another great tool. Once you prove you’re a human, Web Page Test will give you loads of details on how a page loaded. And you get this great visual timeline.

This is where we can start to discuss the metrics we want to focus on. Traditionally, we focused on file size, which still matters. But for goal-setting, we want to focus on user-perceived metrics.

First Meaningful Content. It’s about how soon the page appears to be useful to a user. Progressive enhancement is a perfect match for this! When you first make a request to a website, it’s usually for a web page. But to render that page, it might need to request more files like CSS or JavaScript. All of this adds up. From a user perspective, if the HTML is downloaded but the browser can’t render it, that’s broken.

The average time for this on the web right now is around six seconds. That’s broken. The render blockers are the problem here.

Consider assets like scripts. Can you get the browser to load them without holding up the rendering of the page? If you can add async or defer to a script element in the head, you should do that. Sometimes that’s not an option though.
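
When it is an option, it’s a one-attribute change (the filename here is a placeholder):

<!-- Download in parallel, execute after the document has been parsed -->
<script src="site.js" defer></script>

<!-- Download in parallel, execute as soon as it arrives -->
<script src="site.js" async></script>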

For CSS, it’s tricky. We’ve delivered the HTML that we need but we’ve got to wait for the CSS before rendering it. So what can you bundle into that initial payload?

You can use server push. This is a new technology that comes with HTTP/2. H2, as it’s called, is very performance-focused. Just turning on H2 will probably make your site faster. Server push allows the server to send files to the browser before the browser has even asked for them. You can do this with directives in Apache, for example. You could push CSS whenever an HTML file is requested. But we need to be careful not to go too far. You don’t want to send too much.

Server push is great in moderation. But it is new, and it may not even be supported by your server.

Another option is to inline CSS (well, actually Scott, this is technically embedding CSS). It’s great for first render, but isn’t it wasteful for caching? Scott has a clever pattern that uses the Cache API to grab the contents of the inlined CSS and put a copy into the cache. Then it’s ready to be served up by a service worker.
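
A sketch of that idea (not Scott’s actual code; the cache and file names are made up):

// Copy the inlined styles into a cache, as though they were an external file
const styles = document.querySelector('style').textContent;
if ('caches' in window) {
  caches.open('static-v1').then(cache => {
    cache.put('/css/site.css', new Response(styles, {
      headers: { 'Content-Type': 'text/css' }
    }));
  });
}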

By the way, this isn’t just for CSS. You could grab the contents of inlined SVGs and create cached versions for later use.

So inlining CSS is good, but again, in moderation. You don’t want to embed anything bigger than 15 or 20 kilobytes. You might want separate out the critical CSS and only embed that on first render. You don’t need to go through your CSS by hand to figure out what’s critical—there are tools that to do this that integrate with your build process. Embed that critical CSS into the head of your document, and also start preloading the full CSS. Here’s a clever technique that turns a preload link into a stylesheet link:

<link rel="preload" href="site.css" as="style" onload="this.rel='stylesheet'">

Also include this:

<noscript><link rel="stylesheet" href="site.css"></noscript>

You can also optimise for return visits. It’s all about the cache.

In the past, we might’ve used a cookie to distinguish a returning visitor from a first-time visitor. But cookies kind of suck. Here’s something that Scott has been thinking about: service workers can intercept outgoing requests. A service worker could send a header that matches the current build of CSS. On the server, we can check for this header. If it’s not the latest CSS, we can server push the latest version, or inline it.
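
A sketch of how that could look inside the service worker (the header name and version string are made up):

self.addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    // Tell the server which version of the CSS is already cached
    event.respondWith(fetch(event.request.url, {
      headers: { 'X-CSS-Version': 'abc123' }
    }));
  }
});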

The neat thing about service workers is that they have to install before they take over. Scott makes use of the install event to put the important assets into a cache. Only once that is done do we start adding that extra header to requests.

Watch out for an article on the Filament Group blog on this technique!

With performance, more weight doesn’t have to mean more wait. You can have a heavy page that still appears to load quickly by altering the prioritisation of what loads first.

Web pages are very heavy now. There’s a real cost to every byte. Tim’s WhatDoesMySiteCost.com shows that the CNN home page costs almost fifty cents to load for someone in America!

Time to interactive. This is the time before a user can use what’s on the screen. The issue is almost always with JavaScript. The page looks usable, but you can’t use it yet.

Addy Osmani suggests we should get to interactive in under five seconds on a 3G network on a median mobile device. Your iPhone is not a median mobile device. A typical phone takes six seconds to process a megabyte of JavaScript after it has downloaded. So even if the network is fast, the time to interactive can still be very long.

This all comes down to our industry’s increasing reliance on JavaScript just to render content. There seem to be pendulum shifts between client-side and server-side rendering. It’s been great to see libraries like Vue and Ember embrace server-side rendering.

But even with server-side rendering, there’s still usually a rehydration step where all the JavaScript gets parsed and that really affects time to interaction.

Code splitting can help. Webpack can do this. That helps with first-party JavaScript, but what about third-party JavaScript?

Scott believes it’s easier to make a fast website than to keep a website fast. And that’s down to all the third-party scripts that people throw in: analytics, ads, tracking. They can wreak havoc on all your hard work.

These scripts apparently contribute to the business model, so it can be hard for us to make the case for removing them. Tools like SpeedCurve can help people stay informed on the impact of these scripts. It allows you to set up performance budgets and it shows you when pages go over budget. When that happens, we have leverage to step in and push back.

Assuming you lose that battle, what else can we do?

These days, lots of A/B testing and personalisation happens on the client side. The tooling is easy to use. But they are costly!

A typical problematic pattern is this: the server sends one version of the page, and once the page is loaded, the whole page gets replaced with a different layout targeted at the user. This leads to a terrifying new metric that Scott calls Second Meaningful Content.

Assuming we can’t remove the madness, what can we do? We could at least not do this for first-time visits. We could load the scripts asynchronously. We can preload the scripts at the top of the page. But ideally we want to move these things to the server. Server-side A/B testing and personalisation have existed for a while now.

Scott has been experimenting with a middleware solution. There’s this idea of server workers that Cloudflare is offering. You can manipulate the page that gets sent from the server to the browser—all the things you would do for an A/B test. Scott is doing this by using comments in the HTML to demarcate which portions of the page should be filtered for testing. The server worker then deletes a block for some users, and deletes a different block for other users. Scott has written about this approach.
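
A sketch of that general approach (the comment syntax and variant logic here are invented for illustration, not taken from Scott’s implementation):

addEventListener('fetch', event => {
  event.respondWith(handle(event.request));
});

async function handle(request) {
  const response = await fetch(request);
  let html = await response.text();
  // Delete the block this user shouldn't see (a real test would pin
  // the variant with a cookie rather than re-rolling on every request)
  const unwanted = Math.random() < 0.5 ? 'variant-b' : 'variant-a';
  html = html.replace(
    new RegExp('<!-- start ' + unwanted + ' -->[\\s\\S]*?<!-- end ' + unwanted + ' -->', 'g'),
    ''
  );
  return new Response(html, {
    status: response.status,
    headers: response.headers
  });
}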

The point here isn’t about using Cloudflare. The broader point is that it’s much faster to do these things on the server. We need to defend our users’ time.

Another issue, other than third-party scripts, is the page weight on home pages and landing pages. Marketing teams love to fill these things with enticing rich imagery and carousels. They’re really difficult to keep performant because they change all the time. Sometimes we’re not even in control of the source code of these pages.

We can advocate for new best practices like responsive images. The srcset attribute on the img element; the picture element for when you need more control. These are great tools. What’s not so great is writing the markup. It’s confusing! Ideally we’d have a CMS drive this, but a lot of the time, landing pages fall outside of the purview of the CMS.
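
For reference, this is the kind of markup that’s confusing to write by hand (the URLs, widths, and sizes here are placeholders):

<img
  src="photo-800.jpg"
  srcset="photo-400.jpg 400w,
          photo-800.jpg 800w,
          photo-1600.jpg 1600w"
  sizes="(min-width: 50em) 50vw, 100vw"
  alt="A description of the photo">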

Scott has been using Vue.js to make a responsive image builder—a form that people can paste their URLs into, which spits out the markup to use. Anything we can do by creating tools like these really helps to defend the performance of a site.

Another thing we can do is lazy loading. Focus on the assets. The BBC homepage uses some lazy loading for images—they blink into view as you scroll down the page. They use LazySizes, which you can find on GitHub. You use data- attributes to list your image sources. Scott realises that LazySizes is not progressive enhancement. He wouldn’t recommend using it on all images, just some images further down the page.
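
The LazySizes markup looks something like this (the image URL is a placeholder); note that without JavaScript, an image marked up this way never loads at all:

<!-- LazySizes promotes data-src to src as the image nears the viewport -->
<img data-src="photo.jpg" class="lazyload" alt="A description of the photo">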

But thankfully, we won’t need these workarounds soon. Soon we’ll have lazy loading in browsers. There’s a lazyload attribute that we’ll be able to set on img and iframe elements:

<img src=".." alt="..." lazyload="on">

It’s not implemented yet, but it’s coming in Chrome. It might be that this behaviour even becomes the default way of loading images in browsers.

If you dig under the hood of the implementation coming in Chrome, it actually loads all the images, but the ones being lazyloaded are only partially fetched at first, with a 206 (Partial Content) response. That gives enough information for the browser to lay out the page without loading the whole image initially.

To wrap up, Scott takes comfort from the fact that there are resilient patterns out there to help us. And remember, it is our job to defend the user’s experience.

An nth-letter selector in CSS

Variable fonts are a very exciting and powerful new addition to the toolbox of web design. They were very much at the centre of discussion at this year’s Ampersand conference.

A lot of the demonstrations of the power of variable fonts are showing how it can be used to make letter-by-letter adjustments. The Ampersand website itself does this with the logo. See also: the brilliant demos by Mandy. It’s getting to the point where logotypes can be sculpted and adjusted just-so using CSS and raw text—no images required.

I find this to be thrilling, but there’s a fly in the ointment. In order to style something in CSS, you need a selector to target it. If you’re going to style individual letters, you need to wrap each one in an HTML element so that you can then select it in CSS.

For the Ampersand logo, we had to wrap each letter in a span (and then, because that might cause each letter to be read out individually instead of all of them as a single word, we applied some ARIA shenanigans to the containing element). There’s even a JavaScript library—Splitting.js—that will do this for you.

But if the whole point of using HTML is that the content is accessible, copyable, and pastable, isn’t it a bit of a shame that we then compromise the markup—and the accessibility—by wrapping individual letters in presentational tags?

What if there were an ::nth-letter selector in CSS?
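
Something like this, say (imaginary syntax that no browser supports):

/* Hypothetical: embolden just the third letter of the logo */
.logo::nth-letter(3) {
  font-variation-settings: "wght" 700;
}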

There’s some prior art here. We’ve already got ::first-letter (and now the initial-letter property or whatever it ends up being called). If we can target the first letter in a piece of text, why not the second, or third, or nth?

It raises some questions. What constitutes a letter? Would it be better if we talked about ::first-character, ::initial-character, ::nth-character, and so on?

Even then, there are some tricksy things to figure out. What’s the third character in this piece of markup?

<p>AB<span>CD</span>EF</p>

Is it “C”, because that’s the third character regardless of nesting? Or is it “E”, because technically that’s the third character token that’s a direct child of the parent element?

I imagine that implementing ::nth-letter (or ::nth-character) would be quite complex so there would probably be very little appetite for it from browser makers. But it doesn’t seem as problematic as some selectors we’ve already got.

Take ::first-line, for example. That runs into one of the biggest issues with adding new CSS selectors: it’s a selector that depends on layout.

Think about it. The browser has to first calculate how many characters are in the first line of an element (like, say, a paragraph). Having figured that out, the browser can then apply the styles declared in the ::first-line selector. But those styles may involve font sizing updates that change the number of characters in the first line. Paradox!

(Technically, only a subset of CSS properties can be applied to ::first-line, but that subset includes font-size, so the paradox remains.)

I checked to see if ::first-line was included in one of my favourite documents: Incomplete List of Mistakes in the Design of CSS. It isn’t.

So compared to the logic-bending paradoxes of ::first-line, an ::nth-letter selector would be relatively straightforward. But that in itself isn’t a good enough reason for it to exist. As the CSS Working Group FAQs say:

The fact that we’ve made one mistake isn’t an argument for repeating the mistake.

A new selector needs to solve specific use cases. I would argue that all the letter-by-letter uses of variable fonts that we’re seeing demonstrate the use cases, and the number of these examples is only going to increase. The very fact that JavaScript libraries exist to solve this problem shows that there’s a need here (and we’ve seen the pattern of common JavaScript use-cases ending up in CSS before—rollovers, animation, etc.).

Now, I know that browser makers would like us to figure out how proposed CSS features should work by polyfilling a solution with Houdini. But would that work for a selector? I don’t know much about Houdini so I asked Una. She pointed me to a proposal by Greg and Tab for a full-on parser in Houdini. But that’s a loooong way off. Until then, we must petition our case to the browser gods.

This is not a new suggestion.

Anne Van Kesteren proposed ::nth-letter way back in 2003:

While I’m talking about CSS, I would also like to have ::nth-line(n), ::nth-letter(n) and ::nth-word(n), any thoughts?

Trent called for ::nth-letter in January 2011:

I think this would be the ideal solution from a web designer’s perspective. No javascript would be required, and 100% of the styling would be handled right where it should be—in the CSS.

Chris repeated the call in October of 2011:

Of all of these “new” selectors, ::nth-letter is likely the most useful.

In 2012, Bram linked to a blog post (now unavailable) from Adobe indicating that they were working on ::nth-letter for Webkit. That was the last anyone’s seen of this elusive pseudo-element.

In 2013, Chris (again) included ::nth-letter in his wishlist for CSS. So say we all.

A framework for web performance

Here at Clearleft, we’ve recently been doing some front-end consultancy. That prompted me to jot down thoughts on design principles and performance:

We continued with some more performance work this week. Having already covered some of the nitty-gritty performance tactics like font-loading, image optimisation, etc., we wanted to take a step back and formulate an ongoing strategy for performance.

When it comes to web performance, the eternal question is “What should we measure?” The answer to that question will determine where you then concentrate your efforts—whatever it is you’re measuring, that’s what you’ll be looking to improve.

I started by drawing a distinction between measurements of quantities and measurements of time. Quantities are quite easy to measure. You can measure these quantities using nothing more than browser dev tools:

  • overall file size (page weight + assets), and
  • number of requests.

I think it’s good to measure these quantities, and I think it’s good to have a performance budget for them. But I also think they’re table stakes. They don’t actually tell you much about the impact that performance is having on the user experience. For that, we need to enumerate moments in time:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, and
  • time to first meaningful interaction.

There’s one more moment in time, which is the time until DOM content is loaded. But I’m not sure that has a direct effect on how performance is perceived, so it feels like it belongs more in the category of quantities than time.

Next, we listed out all the factors that could affect each of the moments in time. For example, the time to first byte depends on the speed of the network that the user is on. It also depends on how speedily your server (or Content Delivery Network) can return a response. Meanwhile, time to first render is affected by the speed of the user’s network, but it’s also affected by how many blocking elements are on the critical path.

By listing all the factors out, we can draw a distinction between the factors that are outside of our control, and the factors that we can do something about. So while we might not be able to do anything about the speed of the user’s network, we might well be able to optimise the speed at which our server returns a response, or we might be able to defer some assets that are currently blocking the critical path.

Factors
1st byte
  • server speed
  • network speed
1st render
  • network speed
  • critical path assets
1st meaningful paint
  • network speed
  • font-loading strategy
  • image optimisation
1st meaningful interaction
  • network speed
  • device processing power
  • JavaScript size

So far, everything in our list of performance-affecting factors is related to the first visit. It’s worth drawing up a second list to document all the factors for subsequent visits. This will look the same as the list for first visits, but with the crucial difference that caching now becomes a factor.

First visit factors and repeat visit factors
1st byte
  • First visit: server speed, network speed
  • Repeat visit: server speed, network speed, caching
1st render
  • First visit: network speed, critical path assets
  • Repeat visit: network speed, critical path assets, caching
1st meaningful paint
  • First visit: network speed, font-loading strategy, image optimisation
  • Repeat visit: network speed, font-loading strategy, image optimisation, caching
1st meaningful interaction
  • First visit: network speed, device processing power, JavaScript size
  • Repeat visit: network speed, device processing power, JavaScript size, caching

Alright. Now it’s time to get some numbers for each of the four moments in time. I use Web Page Test for this. Choose a realistic setting, like 3G on an Android from the East coast of the USA. Under advanced settings, be sure to select “First View and Repeat View” so that you can put those numbers in two different columns.

Here are some numbers for adactio.com:

                             First visit time   Repeat visit time
1st byte                     1.476 seconds      1.215 seconds
1st render                   2.633 seconds      1.930 seconds
1st meaningful paint         2.633 seconds      1.930 seconds
1st meaningful interaction   2.868 seconds      2.083 seconds

I’m getting the same numbers for first render as first meaningful paint. That tells me that there’s no point in trying to optimise my font-loading, for example …which makes total sense, because adactio.com isn’t using any web fonts. But on a different site, you might see a big gap between those numbers.

I am seeing a gap between time to first byte and time to first render. That tells me that I might be able to get some blocking requests off the critical path. Sure enough, I’m currently referencing an external stylesheet in the head of adactio.com—if I were to inline critical styles and defer the loading of that stylesheet, I should be able to narrow that gap.

A straightforward site like adactio.com isn’t going to have much to worry about when it comes to the time to first meaningful interaction, but on other sites, this can be a significant bottleneck. If you’re sending UI elements in the initial HTML, but then waiting for JavaScript to “hydrate” those elements into working, the user can end up in an uncanny valley of tapping on page elements that look fine, but aren’t ready yet.

My point is, you’re going to see very different distributions of numbers depending on the kind of site you’re testing. There’s no one-size-fits-all metric to focus on.

Now that you’ve got numbers for how your site is currently performing, you can create two new columns: one of those is a list of first-visit targets, the other is a list of repeat-visit targets for each moment in time. Try to keep them realistic.

For example, if I could reduce the time to first render on adactio.com by 0.5 seconds, my goals would look like this:

                             First visit goal   Repeat visit goal
1st byte                     1.476 seconds      1.215 seconds
1st render                   2.133 seconds      1.430 seconds
1st meaningful paint         2.133 seconds      1.430 seconds
1st meaningful interaction   2.368 seconds      1.583 seconds

See how the 0.5 seconds saving cascades down into the other numbers?

Alright! Now I’ve got something to aim for. It might also be worth having an extra column to record which of the moments in time are high priority, which are medium priority, and which are low priority.

Priority
1st byte Medium
1st render High
1st meaningful paint Low
1st meaningful interaction Low

Your goals and priorities may be quite different.

I think this is a fairly useful framework for figuring out where to focus when it comes to web performance. If you’d like to give it a go, I’ve made a web performance chart for you to print out and fill in. Here’s a PDF version if that’s easier for printing. Or you can download the HTML version if you want to edit it.

I have to say, I’m really enjoying the front-end consultancy work we’ve been doing at Clearleft around performance and related technologies, like offline functionality. I’d like to do more of it. If you’d like some help in prioritising performance at your company, please get in touch. Let’s make the web faster together.

Components and concerns

We tend to like false dichotomies in the world of web design and web development. I’ve noticed one recently that keeps coming up in the realm of design systems and components.

It’s about separation of concerns. The web has a long history of separating structure, presentation, and behaviour through HTML, CSS, and JavaScript. It has served us very well. If you build in that order, ensuring that something works (to some extent) before adding the next layer, the result will be robust and resilient.

But in this age of components, many people are pointing out that it makes sense to separate things according to their function. Here’s Diana Mounter in her excellent article about design systems at GitHub:

Rather than separating concerns by languages (such as HTML, CSS, and JavaScript), we’re working towards a model of separating concerns at the component level.

This echoes a point made previously in a slidedeck by Cristiano Rastelli.

Separating interfaces according to the purpose of each component makes total sense …but that doesn’t mean we have to stop separating structure, presentation, and behaviour! Why not do both?

There’s nothing in the “traditional” separation of concerns on the web (HTML/CSS/JavaScript) that restricts it only to pages. In fact, I would say it works best when it’s applied on smaller scales.

In her article, Pattern Library First: An Approach For Managing CSS, Rachel advises starting every component with good markup:

Your starting point should always be well-structured markup.

This ensures that your content is accessible at a very basic level, but it also means you can take advantage of normal flow.

That’s basically an application of starting with the rule of least power.

In chapter 6 of Resilient Web Design, I outline the three-step process I use to build on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

That chapter is filled with examples of applying those steps at the level of an entire site or product, but it doesn’t need to end there:

We can apply the three‐step process at the scale of individual components within a page. “What is the core functionality of this component? How can I make that functionality available using the simplest possible technology? Now how can I enhance it?”

There’s another shared benefit to separating concerns when building pages and building components. In the case of pages, asking “what is the core functionality?” will help you come up with a good URL. With components, asking “what is the core functionality?” will help you come up with a good name …something that’s at the heart of a good design system. In her brilliant Design Systems book, Alla advocates asking “what is its purpose?” in order to get a good shared language for components.

My point is this:

  • Separating structure, presentation, and behaviour is a good idea.
  • Separating an interface into components is a good idea.

Those two good ideas are not in conflict. Presenting them as though they were binary choices is like saying “I used to eat Italian food, but now I drink Italian wine.” They work best when they’re done in combination.

Acknowledgements

It feels a little strange to refer to Going Offline as “my” book. I may have written most of the words in it, but it was only thanks to the work of others that they ended up being the right words in the right order in the right format.

I’ve included acknowledgements in the book, but I thought it would be good to reproduce them here in the form of hypertext…

Everyone should experience the joy of working with Katel LeDû and Lisa Maria Martin. From the first discussions right up until the final last-minute tweaks, they were unflaggingly fun to collaborate with. Thank you, Katel, for turning my idea into reality. Thank you, Lisa Maria, for turning my initial mush of words into a far more coherent mush of words.

Jake Archibald and Amber Wilson were the best of technical editors. Jake literally wrote the spec on service workers so I knew I could rely on him to let me know whenever I made any factual missteps. Meanwhile Amber kept me on the straight and narrow, letting me know whenever the writing was becoming unclear. Thank you both for being so generous with your time.

Thanks to my fellow Clearlefty Danielle Huntrods for giving me feedback as the book developed.

Finally, I want to express my heartfelt thanks to everyone who has ever taken the time to write on their website about their experiences with service workers. Lyza Gardner, Ire Aderinokun, Una Kravets, Mariko Kosaka, Jason Grigsby, Ethan Marcotte, Mike Riethmuller, and others inspired me with their generosity. Thank you to everyone who’s making the web better through such kind acts of openness. To quote the original motto of the World Wide Web project, let’s share what we know.

Understandable excitement

An Event Apart Seattle just wrapped. It was a three-day special edition and it was really rather good. Lots of the speakers (myself included) were unveiling brand new talks, so there was a real frisson of excitement.

It was interesting to see repeating, overlapping themes. From a purely technical perspective, three technologies that were front and centre were:

  • CSS grid,
  • variable fonts, and
  • service workers.

From listening to other attendees, the overwhelming message received was “These technologies are here—they’ve arrived.” Now, depending on your mindset, that understanding can be expressed as “Oh shit! These technologies are here!” or “Yay! Finally! These technologies are here!”

My reaction is very firmly the latter. That in itself is an interesting data-point, because (as discussed in my talk) my reaction towards new technological advances isn’t always one of excitement—quite often it’s one of apprehension, even fear.

I’ve been trying to self-analyse to figure out which kinds of technologies trigger which kind of reaction. I don’t have any firm answers yet, but it’s interesting to note that the three technologies mentioned above (CSS grid, variable fonts, and service workers) are all additions to the core languages of the web—the materials we use to build the web. Frameworks, libraries, build tools, and other such technologies are more like tools than materials. I tend to get less excited about advances in those areas. Sometimes advances in those areas not only fail to trigger excitement, they make me feel overwhelmed and worried about falling behind.

Since figuring out this split between materials and tools, it has helped me come to terms with my gut emotional reaction to the latest technological advances on the web. I think it’s okay that I don’t get excited about everything. And given the choice, I think maybe it’s healthier to be more excited about advances in the materials—HTML, CSS, and JavaScript APIs—than advances in tooling …although, it is, of course, perfectly possible to get equally excited about both (that’s just not something I seem to be able to do).

Another split I’ve noticed is between technologies that directly benefit users, and technologies that directly benefit developers. I think there was a bit of a meta-thread running through the talks at An Event Apart about CSS grid, variable fonts, and service workers: all of those advances allow us developers to accomplish more with less. They’re good for performance, in other words. I get much more nervous about CSS frameworks and JavaScript libraries that allow us to accomplish more, but require the user to download the framework or library first. It feels different when something is baked into browsers—support for CSS features, or JavaScript APIs. Then it feels like much more of a win-win situation for users and developers. If anything, the onus is on developers to take the time and do the work and get to grips with these browser-native technologies. I’m okay with that.

Anyway, all of this helps me understand my feelings at the end of An Event Apart Seattle. I’m fired up and eager to make something with CSS grid, variable fonts, and—of course—service workers.

Navigating Team Friction by Lara Hogan

It’s day two of An Event Apart Seattle (Special Edition). Lara is here to tell us about Navigating Team Friction. These are my notes…

Lara started as a developer, and then moved into management. Now she consults with other organisations. So she’s worked with teams of all sizes, and her conclusion is that humans are amazing. She has seen teams bring a site down; she has seen teams ship amazing features; she has seen teams fall apart because they had to move desks. But it’s magical that people can come together and build something.

Bruce Tuckman carried out research into the theory of group dynamics. He published stages of group development. The four common stages are:

  1. Forming. The group is coming together. There is excitement.
  2. Storming. This is when we start to see some friction. This is necessary.
  3. Norming. Things start to iron themselves out.
  4. Performing. Now you’re in the flow state and you’re shipping.

So if your team is storming (experiencing friction), that’s absolutely normal. It might be because of disagreement about processes. But you need to move past the friction. Team friction impacts your co-workers, company, and users.

An example. Two engineers passive-aggressively commenting on each other’s code reviews; they feign surprise at the other’s technology choices; one rewrites the other’s code; one ships to production without code review; a senior team member or manager has to step in. It costs a surprising amount of time and energy before a manager even notices and steps in.

Brains

The Hulk gets angry. This is human. We transform into different versions of ourselves when we are overcome by our emotions.

Lara has learned a lot about management by reading about how our brains work. We have a rational part of our brain, the pre-frontal cortex. It’s very different to our amygdala, a much more primal part of our brain. It categorises input into either threat or reward. If a threat is dangerous enough, the amygdala takes over. The pre-frontal cortex is too slow to handle dangerous situations. So when you have a Hulk moment, that was probably an amygdala hijack.

We have six core needs that are open to being threatened (leading to an amygdala hijacking):

  1. Belonging. Community, connection; the need to belong to a tribe. From an evolutionary perspective, this makes sense—we are social animals.
  2. Improvement/Progress. Progress towards purpose, improving the lives of others. We need to feel that what we do matters, and that we are learning.
  3. Choice. Flexibility, autonomy, decision-making. The power to make decisions over your own work.
  4. Equality/Fairness. Access to resources and information; equal reciprocity. We have an inherent desire for fairness.
  5. Predictability. Resources, time, direction, future challenges. We don’t like too many surprises …but we don’t like too much routine either. We want a balance.
  6. Significance. Status, visibility, recognition. We want to feel important. Being assigned to a project you think is useless feels awful.

Those core needs are B.I.C.E.P.S. Thinking back to your own Hulk moment, which of those needs was threatened?

We value those needs differently. Knowing your core needs is valuable.

Desk Moves

Lara has seen the largest displays of human emotion during something as small as moving desks. When you’re asked to move your desk, your core need of “Belonging” may be threatened. Or it may be a surprise that disrupts the core need of “Improvement/Progress.” If a desk move is dictated to you, it feels like “Choice” is threatened. The move may feel like it favours some people over others, threatening “Equality/Fairness.” The “Predictability” core need may be threatened by an unexpected desk move. If the desk move feels like a demotion, your core need of “Significance” will be threatened.

We are not mind readers, so we can’t see when someone’s amygdala takes over. But we can look out for the signs. Forms of resistance can be interpreted as data. The most common responses when a threat is detected are:

  1. Doubt. People double-down on the status quo; they question the decision.
  2. Avoidance. Avoiding the problem; too busy to help with the situation.
  3. Fighting. People create arguments against the decision. They’ll use any logic they can. Or they simply refuse.
  4. Bonding. Finding someone else who is also threatened and grouping together.
  5. Escape-route. Avoiding the threat by leaving the company.

All of these signals are data. Rather than getting frustrated with these behaviours, use them as valuable data. Try not to feel threatened yourself by any of these behaviours.

Open questions are a powerful tool in your toolbox. Asked from a place of genuine honesty and curiosity, open questions help people feel less threatened. Closed questions are questions that can be answered with “yes” or “no”. When you spot resistance, get some one-on-one time and try to ask open questions:

  • What do you think folks are liking or disliking about this so far?
  • I wanted to get your take on X. What might go wrong? What do you think might be good about it?
  • What feels most upsetting about this?

You can use open questions like these to map resistance to threatened core needs. Then you can address those core needs.

This is a good time to loop in your manager. It can be very helpful to bounce your data off someone else and get their help. De-escalating resistance is a team effort.

Communication ✨

Listen with compassion, kindness, and awareness.

  • Reflect on the dynamics in the room. Maybe somebody thinks a topic is very important to them. Be aware of your medium. Your body language; your tone of voice; being efficient with words could be interpreted as a threat. Consider the room’s power dynamics. Be aware of how influential your words could be. Is this person in a position to take the action I’m suggesting?
  • Elevate the conversation. Meet transparency with responsibility.
  • Assume best intentions. Remember the prime directive. Practice empathy. Ask yourself what else is going on for this person in their life.
  • Listen to learn. Stay genuinely curious. This is really hard. Remember your goal is to understand, not make judgement. Prepare to be surprised when you walk into a room. Operate under the assumption that you don’t have the whole story. Be willing to have your mind changed …no, be excited to have your mind changed!

These tips are part of mindful communication. amy.tech has some great advice for mindful communication in code reviews.

Feedback

Mindful communication won’t solve all your problems. There are times when you’ll have to give actionable feedback. The problem is that humans are bad at giving feedback, and we’re really bad at receiving feedback. We actively avoid feedback. Sometimes we try to give constructive feedback in a compliment sandwich—don’t do that.

We can get better at giving and receiving feedback.

Ever had someone say, “Hey, you’re doing a great job!” It feels good for a few minutes, but what we crave is feedback that addresses our core needs.

Feedback can be mapped along two axes: general versus specific and actionable, and positive versus negative.

The feedback equation starts with an observation (“Your emails are often short”)—it’s not about how you feel about the behaviour. Next, describe the impact of the behaviour (“The terseness of your emails makes me confused”). Then pose a question or request (“Can you explain why you write your emails that way?”).

observation + impact + question/request

Ask people about their preferred feedback medium. Some people prefer to receive feedback right away. Others prefer to digest it. Ask people if it’s a good time to give them feedback. Pro tip: when you give feedback, ask people how they’d like to receive feedback in the future.

Prepare your brain to receive feedback. It takes six seconds for your amygdala to chill out. Take six seconds before responding. If you can’t de-escalate your amygdala, ask the person giving feedback to come back later.

Think about one piece of feedback you’ll ask for back at work. Write it down. When you’re back at work, ask about it.

You’ll start to notice when your amygdala or pre-frontal cortex is taking over.

Prevention

Talking one-on-one is the best way to avoid team friction.

Retrospectives are a great way of normalising talking about Hard Things and team friction.

It can be helpful to have a living document that states team processes and expectations (how code reviews are done; how much time is expected for mentoring). Having it written down makes it a North star you can reference.

Mapping out roles and responsibilities is helpful. There will be overlaps in that Venn diagram. The edges will be fuzzy.

What if you disagree with what management says? The absence of trust is at the centre of most friction.

                Disagree                 Agree
Commit          Mature and transparent   Easiest
Don’t commit    Acceptable but tough     Bad things

Practice finding other ways to address B.I.C.E.P.S. You might not be able to fix the problem directly—the desk move still has to happen.

But no matter how empathic or mindful you are, sometimes it will be necessary to bring in leadership or HR. Loop them in. Restate the observation + impact. State what’s been tried, and what you think could help now. Throughout this process, take care of yourself.

Remember, storming is natural. You are now well-equipped to weather that storm.


Announcing Going Offline from A Book Apart

I decided that I wanted a new mug.

I already have one very nice mug. It was sent to me by A Book Apart because I wrote the book HTML5 For Web Designers back in 2010. If I wanted another nice mug, it was clear what I had to do. I had to write another book.

So I’ve written a book. It’s called Going Offline and it’s available to pre-order now. It will start shipping a few weeks from now.

I think you will enjoy this book. Here’s why…

You have a website or you make websites for other people. You’re comfortable with HTML and CSS, but maybe you’re a bit apprehensive about JavaScript (like me). You keep hearing lots of talk about service workers and progressive web apps. You’re intrigued. But you’re put off by the resources out there. They all assume a certain level of JavaScript knowledge. What you need is a step-by-step guide to help you make your website work offline …a guide that won’t assume you’re already comfortable with code.

Does that sound like you? Then Going Offline is for you.

Thinking about it, a more accurate title for the book would’ve been Service Workers For Web Designers …although even that would assume too much existing knowledge (like, what the heck a service worker is in the first place).

Pre-order Going Offline today and it will be in your hands in just a few weeks.

Alas, I have no idea when my new mug will be ready.

A workshop on building for resilience

In February, I tried out a new workshop two times—once at Webstock in New Zealand, and once in Hong Kong.

The workshop is called The Progressive Web: Building for Resilience. Here’s an excerpt from the blurb:

This workshop will show you how to think in a progressive way that works with the grain of the web. Together we’ll peel back the layers of the web and build upwards, creating experiences that work for everyone while making the best of cutting-edge browser technologies. From URL design to Progressive Web Apps, this journey will cover each stage of technological advancement.

Basically, it’s the workshop version of Resilient Web Design. If that book is the theory, this workshop is the practice.

Tim recently posted his tips for running workshops and there’s a lot in there that resonates with me. Like Tim, I’ve become less and less reliant on slides. In fact, this workshop—like my workshop on evaluating technology—has no slides. Instead it’s all about the exercises and going with the flow.

After starting with a warm-up, I canvass the room to see if there are any specific topics, tools or technologies that people are particularly interested in covering. I’ll note those (on post-its slapped on the wall) for reference throughout the day, to try to make sure that those particular things are touched on at some point. Then I start with a thought experiment…

First of all, I get everyone to call out websites, services and apps that they use almost every day: Twitter, Facebook, Gmail, Slack, Google Docs, and so on. Those all get documented on the wall. Then it’s time to ask of each product, “What is the core functionality?” The idea here is to get beneath the surface-level verbs like swiping, tapping and dragging to get to the real purpose of a service: buying, selling, sharing, reading, writing, collaborating, and so on.

At this point I inform the attendees that the year is 1995. And now we’re going to build these services using the technology of this time. This is a playful way of getting answers to the question “What’s the simplest technology to enable the core functionality?” It’s mostly forms, links, and lots of heavy lifting on the server.

Then the real fun begins. “Enhance!” Moving forward in time, we get to add styles, we add interactivity with JavaScript, then Ajax, and then we get to really have fun with technologies like web sockets, geolocation, local storage, right the way up to service workers, notifications, and background sync. And the beauty of it all is that, if any of those technologies aren’t supported in a particular browser or device, the core functionality is still available.

Next, we apply this layered mindset to a new service. I split the attendees into groups, and each of them gets a procedurally-generated startup idea …generated by shuffling some cards. This is an exercise I first tried when I was teaching in Porto:

I made five cards with types of sites on them: news, social network, shopping, travel, and learning. Another five cards had subjects: books, music, food, pets, and cars. And another five cards had audiences: students, parents, the elderly, commuters, and teachers. Everyone was dealt a random card from each deck, resulting in briefs like “a travel site about food for the elderly” or “a social network about music for commuters.”

The first few exercises are good creative fun: come up with a name, then a logo, then a business model. Then it’s time to build. It starts with URL design. Then it’s content prioritisation (for a representative URL). Then it’s layout (sketching!). The enhancements have begun. “How might this URL benefit from Ajax?” “How might this URL benefit from geolocation?” “How might this URL benefit from offline storage?” “How might this URL benefit from a service worker?”

(Photos: workshop teams 1–4.)

At this point, we’ve applied the layered, progressive approach at the scale of an entire service, and at the scale of an individual URL. Finally, we apply the same approach at the level of a component. It might be a navigation, or a carousel, or an interactive widget. In each case, the same process applies: “What’s the core functionality? What’s the simplest technology to enable that functionality? Enhance!”

Along the way, there are plenty of rabbit holes we can go down. Whether it’s accessibility, or progressive web apps, or pattern libraries, I go along with whatever people are curious about. But all of it ties back to the progressive, layered mindset I’m hoping to foster.

By the end of the day, I’m hoping that an attendee has one of two reactions:

  1. “What a waste of time! Everything in that workshop was blindingly obvious!” (in which case, excellent!—they’re already thinking in a progressive way), or
  2. “That workshop has completely changed the way I think about building on the web!” (I’m being hyperbolic here, but at the very least I’m hoping to impart a new perspective).

Having given the workshop a few times, I’m really pleased with how it went (and more important, I’m pleased that people enjoyed it). If this sounds like something that your company or team would enjoy, get in touch and we can take it from there.

Offline itineraries with service workers

The Trivago website is a progressive web app. That means it

  1. is served over HTTPS,
  2. has a web app manifest JSON file, and it
  3. has a service worker script.
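
Getting that third ingredient in place requires just a little bit of code on each page to register the service worker script. A minimal sketch (the file name is just an example):

if ('serviceWorker' in navigator) {
  // '/service-worker.js' is whatever you’ve named your script
  navigator.serviceWorker.register('/service-worker.js');
}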

The service worker provides an opportunity for a nice bit of fun branding—if you lose your internet connection, the site provides a neat little maze game you can play. Cute!

That’s a fairly simple example of how service workers can enhance the user experience when the dreaded offline situation arises. But it strikes me that the travel industry is the perfect place to imagine other opportunities for offline enhancements.

Travel sites often provide itineraries—think airlines, trains, or hotels. The itineraries consist of places, times, and contact information. This is exactly the kind of information that you might find yourself trying to retrieve in an emergency, like in a cab on the way to the airport or train station. Perhaps you’re stuck in traffic, in a tunnel. Or maybe you don’t have a data plan for the country you’re currently in. Either way, wouldn’t it be great if you could hit the website for your airline or hotel and get your itinerary, even if you’re offline?

Alright, let’s think this through…

Let’s assume that an individual itinerary has its own URL. That URL is a web page of information, mostly text, with perhaps an image or two (like a map). Now when you make your booking, let’s have the service worker cache that URL (and its assets) for offline access.

Hmm …but there’s a good chance that the device you make the booking on is not the same device that you’d have with you out and about. Because caches are local to the browser, that’s a problem.

Okay, but most of these kinds of sites have some kind of log-in mechanism. So we could update the log-in flow a bit: when a user logs in, check to see if they have any itineraries assigned to them, and if they do, fire off an event to the service worker (using postMessage) to cache the URLs of the itineraries.
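
A rough sketch of how that messaging could work. The message format, the cache name, and the itineraryURLs variable are all made up for illustration:

// In the page, once the user has logged in:
navigator.serviceWorker.ready.then(function (registration) {
  registration.active.postMessage({
    type: 'cacheItineraries',
    urls: itineraryURLs // whatever itinerary URLs the site knows about
  });
});

// In the service worker:
addEventListener('message', function (event) {
  if (event.data.type === 'cacheItineraries') {
    event.waitUntil(
      caches.open('itineraries').then(function (cache) {
        return cache.addAll(event.data.urls);
      })
    );
  }
});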

Now that the itineraries are cached, the final step is to create a custom offline page. As well as the usual “Sorry, the internet’s down” message, we can say “Sorry, the internet’s down …but here are your itineraries”. (This is kind of like the pattern you see on blogs like mine, Ethan’s, or Mike’s—a custom offline page that lists cached URLs of articles you’ve previously visited).
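
Here’s a sketch of the corresponding fetch handler in the service worker, assuming the offline page has already been cached at /offline: try the network first, fall back to the cache, and failing that, show the offline page.

addEventListener('fetch', function (event) {
  if (event.request.method !== 'GET') { return; }
  event.respondWith(
    fetch(event.request).catch(function () {
      // The network has failed; look for a cached copy
      return caches.match(event.request).then(function (cached) {
        return cached || caches.match('/offline');
      });
    })
  );
});

The offline page itself can then open the itineraries cache, call cache.keys(), and write out a link for each request it finds.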

That’s just one pattern off the top of my head. It’s fun to imagine the different ways that service workers could be used to enhance the experience of just about any site, but they seem particularly relevant to travel sites—dodgy internet connections and travelling go hand-in-hand. At Clearleft, we’ve been working with quite a few travel-related clients lately so that’s why these scenarios are on my mind: booking holidays, flights, and so on. But, as I’ve said before and I’ll say again, every website can benefit from becoming a progressive web app.

Ubiquity and consistency

I keep thinking about this post from Baldur Bjarnason, Over-engineering is under-engineering. It took me a while to get my head around what he was saying, but now that (I think) I understand it, I find it to be very astute.

Let’s take a single interface element, say, a dropdown menu. This is the example Laura uses in her article for 24 Ways called Accessibility Through Semantic HTML. You’ve got two choices, broadly speaking:

  1. Use the HTML select element.
  2. Create your own dropdown widget using JavaScript (working with divs and spans).

The advantage of the first choice is that it’s lightweight, it works everywhere, and the browser does all the hard work for you.

But…

You don’t get complete control. Because the browser is doing the heavy lifting, you can’t craft the details of the dropdown to look identical on different browser/OS combinations.

That’s where the second option comes in. By scripting your own dropdown, you get complete control over the appearance and behaviour of the widget. The disadvantage is that, because you’re now doing all the work instead of the browser, it’s up to you to do all the work—that means lots of JavaScript, thinking about edge cases, and making the whole thing accessible.

This is the point that Baldur makes: no matter how much you over-engineer your own custom solution, there’ll always be something that falls between the cracks. So, ironically, the over-engineered solution—when compared to the simple under-engineered native browser solution—ends up being under-engineered.

Is it worth it? Rian Rietveld asks:

It is impossible to style select option. But is that really necessary? Is it worth abandoning the native browser behavior for a complete rewrite in JavaScript of the functionality?

The answer, as ever, is it depends. It depends on your priorities. If your priority is having consistent control over the details, then foregoing native browser functionality in favour of scripting everything yourself aligns with your goals.

But I’m reminded of something that Eric often says:

The web does not value consistency. The web values ubiquity.

Ubiquity; universality; accessibility—however you want to label it, it’s what lies at the heart of the World Wide Web. It’s the idea that anyone should be able to access a resource, regardless of technical or personal constraints. It’s an admirable goal, and what’s even more admirable is that the web succeeds in this goal! But sometimes something’s gotta give, and that something is control. Rian again:

The days that a website must be pixel perfect and must look the same in every browser are over. There are so many devices these days, that an identical design for all is not doable. Or we must take a huge effort for custom form elements design.

So far I’ve only been looking at the micro scale of a single interface element, but this tension between ubiquity and consistency plays out at larger scales too. Take page navigations. That’s literally what browsers do. Click on a link, and the browser fetches that URL, displaying progress as it goes. The alternative, as exemplified by single page apps, is to do all of that for yourself using JavaScript: figure out the routing, show some kind of progress, load some JSON, parse it, convert it into HTML, and update the DOM.

Personally, I tend to go for the first option. Partly that’s because I like to apply the rule of least power, but mostly it’s because I’m very lazy (I also have qualms about sending a whole lotta JavaScript down the wire just so the end user gets to do something that their browser would do for them anyway). But I get it. I understand why others might wish for greater control, even if it comes with a price tag of fragility.

I think Jake’s navigation transitions proposal is fascinating. What if there were a browser-native way to get more control over how page navigations happen? I reckon that would cover the justification of 90% of single page apps.

That’s a great way of examining these kinds of decisions and questioning how this tension could be resolved. If people are frustrated by the lack of control in browser-native navigations, let’s figure out a way to give them more control. If people are frustrated by the lack of styling for select elements, maybe we should figure out a way of giving them more control over styling.

Hang on though. I feel like I’ve painted a divisive picture, like you have to make a choice between ubiquity or consistency. But the rather wonderful truth is that, on the web, you can have your cake and eat it. That’s what I was getting at with the three-step approach I describe in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Like, say…

  1. The user needs to select an item from a list of options.
  2. Use a select element.
  3. Use JavaScript to replace that native element with a widget of your own devising.
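
To make that third step concrete, here’s a deliberately bare-bones sketch (fancyDropdown is a made-up name). Notice how much of what the browser gives you for free, like keyboard support and ARIA semantics, you’d still have to rebuild:

function fancyDropdown(select) {
  // Build a scripted widget out of the native element's options
  var list = document.createElement('ul');
  Array.prototype.forEach.call(select.options, function (option) {
    var item = document.createElement('li');
    item.textContent = option.text;
    item.addEventListener('click', function () {
      select.value = option.value; // keep the real select in sync
    });
    list.appendChild(item);
  });
  // Hide (but keep) the native element so the form still submits
  select.setAttribute('hidden', '');
  select.insertAdjacentElement('afterend', list);
}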

Or…

  1. The user needs to navigate to another page.
  2. Use an a element with an href attribute.
  3. Use JavaScript to intercept that click, add a nice transition, and pull in the content using Ajax.
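
And a sketch of that enhancement; transitionTo is a hypothetical helper that animates in the new content:

document.querySelectorAll('a[href^="/"]').forEach(function (link) {
  link.addEventListener('click', function (event) {
    event.preventDefault();
    fetch(link.href)
      .then(function (response) { return response.text(); })
      .then(function (html) {
        transitionTo(html); // hypothetical helper: swap in the new content
        history.pushState(null, '', link.href);
      })
      .catch(function () {
        window.location = link.href; // fall back to a normal page load
      });
  });
});

If any of that fails, or if the script never runs, the link still works the old-fashioned way.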

The pushback I get from people in the control/consistency camp is that this sounds like more work. It kinda is. But honestly, in my experience, it’s not that much more work. Also, and I realise I’m contradicting the part where I said I’m lazy, but that’s why it’s called work. This is our job. It’s not about what we prefer; it’s about serving the needs of the people who use what we build.

Anyway, if I were to rephrase my three-step process in terms of under-engineering and over-engineering, it might look something like this:

  1. Start with user needs.
  2. Build an under-engineered solution—one that might not offer you much control, but that works for everyone.
  3. Layer on a more over-engineered solution—one that might not work for everyone, but that offers you more control.

Ubiquity, then consistency.

An associative trail

Every now and then, I like to revisit Vannevar Bush’s classic article from the July 1945 edition of the Atlantic Monthly called As We May Think in which he describes a theoretical machine called the memex.

A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. Otherwise it looks like an ordinary desk.

1945! Apart from its analogue rather than digital nature, it’s a remarkably prescient vision. In particular, there’s the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Many decades later, Anne Washington ponders what a legal memex might look like:

My legal Memex builds a network of the people and laws available in the public records of politicians and organizations. The infrastructure for this vision relies on open data, free access to law, and instantaneous availability.

As John Sheridan from the UK’s National Archives points out, hypertext is the perfect medium for laws:

Despite the drafter’s best efforts to create a narrative structure that tells a story through the flow of provisions, legislation is intrinsically non-linear content. It positively lends itself to a hypertext based approach. The need for legislation to escape the confines of the printed form predates all the major innovators and innovations in hypertext, from Vannevar Bush’s vision in “As We May Think”, to Ted Nelson’s coining of the term “hypertext”, through to Berners-Lee’s breakthrough world wide web. I like to think that Nelson’s concept of transclusion was foreshadowed several decades earlier by the textual amendment (where one Act explicitly alters – inserts, omits or amends – the text of another Act, an approach introduced to UK legislation at the beginning of the 20th century).

That’s from a piece called Deeply Intertwingled Laws. The verb “to intertwingle” was another one of Ted Nelson’s neologisms.

There’s an associative trail from Vannevar Bush to Ted Nelson that takes some other interesting turns…

Picture a new American naval recruit in 1945, getting ready to ship out to the Pacific to fight against the Japanese. Just as the ship is leaving the harbour, word comes through that the war is over. And so instead of fighting across the islands of the Pacific, this young man finds himself in a hut in the Philippines, reading whatever is to hand. There’s a copy of The Atlantic Monthly, the one with an article called As We May Think. The sailor was Douglas Engelbart, and a few years later, when he was deciding how he wanted to spend the rest of his life, that article led him to pursue the goal of augmenting human intellect. He gave the mother of all demos, featuring NLS, a working hypermedia system.

Later, thanks to Bill Atkinson, we’d get another system called HyperCard. It was promoted with the motto Freedom to Associate, in an advertising campaign that directly referenced Vannevar Bush.

And now I’m using the World Wide Web, a hypermedia system that takes in the whole planet, to create an associative trail. In this post, I’m linking (without asking anyone for permission) to six different sources, and in doing so, I’m creating a unique associative trail. And because this post has a URL (that won’t change), you are free to take it and make it part of your own associative trail on your digital memex.

Teaching in Porto, day four

Day one covered HTML (amongst other things), day two covered CSS, and day three covered JavaScript. Each one of those days involved a certain amount of hands-on coding, with the students getting their hands dirty with angle brackets, curly braces, and semi-colons.

Day four was a deliberate step away from all that. No more laptops, just paper. Whereas the previous days had focused on collaboratively working on a single document, today I wanted everyone to work on a separate site.

The sites were generated randomly. I made five cards with types of sites on them: news, social network, shopping, travel, and learning. Another five cards had subjects: books, music, food, pets, and cars. And another five cards had audiences: students, parents, the elderly, commuters, and teachers. Everyone was dealt a random card from each deck, resulting in briefs like “a travel site about food for the elderly” or “a social network about music for commuters.”

For a bit of fun, the first brainstorming exercise (run as a 6-up) was to come up with potential names for this service—4 minutes for 6 ideas. Then we went around the table, shared the ideas, got feedback, and settled on the names.

Now I asked everyone to come up with a one-sentence mission statement for their newly-named service. This was a good way of teasing out the most important verbs and nouns, which led nicely into the next task: answering the question “what is the core functionality?”

If that sounds familiar, it’s because it’s the first part of the three-step process I outlined in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

We did some URL design, figuring out what structures would make sense for straightforward GET requests, like:

  • /things
  • /things/ID

Then, once it was clear what the primary “thing” was (a car, a book, etc.), I asked them to write down all the pieces that might appear on such a page; one post-it note per item e.g. “title”, “description”, “img”, “rating”, etc.

The next step involved prioritisation. They took those post-it notes and put them on the wall, but they had to put them in a vertical line from top to bottom in decreasing order of importance. This can be a challenge, but it’s better to solve these problems now rather than later.

Okay. I now asked them to “mark up” those vertical lists of post-it notes: writing HTML tag names by each one. Doing this before any visual design meant they were thinking about the meaning of the content first.

After that, we did a good ol’ fashioned classic 6-up sketching exercise, followed by critique (including a “designated dissenter” for each round). At this point, I was encouraging them to go crazy with ideas—they already had the core functionality figured out (with plain ol’ client/server requests and responses) so they could add all the bells and whistles they wanted on top of that.

We finished up with a discussion of some of those bells and whistles, and how they could be used to improve the user experience: Ajax, geolocation, service workers, notifications, background sync …the sky’s the limit.

It was a whirlwind tour for just one day but I think it helped emphasise the importance of thinking about the fundamentals before adding enhancements.

This marked the end of the structured masterclass lessons. Tomorrow I’m around to answer any miscellaneous questions (if I can) and chat to the students individually while they work on their term projects.

Resilience retires

I spoke at the GOTO conference in Berlin this week. It was the final outing of a talk I’ve been giving for about a year now called Resilience.

Looking back over my speaking engagements, I reckon I must have given this talk—in one form or another—about sixteen times. If by some statistical fluke or through skilled avoidance strategies you managed not to see the talk, you can still have it rammed down your throat by reading a transcript of the presentation.

That particular outing is from Beyond Tellerrand earlier this year in Düsseldorf. That’s one of the events that recorded a video of the talk. Here are all the videos of it I could find:

Or, if you prefer, here’s an audio file. And here are the slides but they won’t make much sense by themselves.

Resilience is a mixture of history lesson and design strategy. The history lesson is about the origins of the internet and the World Wide Web. The design strategy is a three-pronged approach:

  1. Identify core functionality.
  2. Make that functionality available using the simplest technology.
  3. Enhance!

And if you like that tweet-sized strategy, you can get it on a poster. Oh, and check this out: Belgian student Sébastian Seghers published a school project on the talk.

Now, you might be thinking that the three-headed strategy sounds an awful lot like progressive enhancement, and you’d be right. I think every talk I’ve ever given has been about progressive enhancement to some degree. But with this presentation I set myself a challenge: to talk about progressive enhancement without ever using the phrase “progressive enhancement”. This is something I wrote about last year—if the term “progressive enhancement” is commonly misunderstood by the very people who would benefit from hearing this message, maybe it’s best to not mention that term and talk about the benefits of progressive enhancement instead: robustness, resilience, and technical credit. I think that little semantic experiment was pretty successful.

While the time has definitely come to retire the presentation, I’m pretty pleased with it, and I feel like it got better with time as I adjusted the material. The most common format for the talk was 40 to 45 minutes long, but there was an extended hour-long “director’s cut” that only appeared at An Event Apart. That included an entire subplot about Arthur C. Clarke and the invention of the telegraph (I’m still pretty pleased with the segue I found to weave those particular threads together).

Anyway, with the Resilience talk behind me, my mind is now occupied with the sequel: Evaluating Technology. I recently shared my research material for this one and, as you may have gathered, it takes me a loooong time to put a presentation like this together (which, by the same token, is one of the reasons why I end up giving the same talk multiple times within a year).

This new talk had its debut at An Event Apart in San Francisco two weeks ago. Jeffrey wrote about it and I’m happy to say he liked it. This bodes well—I’m already booked in for An Event Apart Seattle in April. I’ll also be giving an abridged version of this new talk at next year’s Render conference.

But that’s it for my speaking schedule for now. 2016 is all done and dusted, and 2017 is looking wide open. I hope I’ll get some more opportunities to refine and adjust the Evaluating Technology talk at some more events. If you’re a conference organiser and it sounds like something you’d be interested in, get in touch.

In the meantime, it’s time for me to pack away the Resilience talk, and wheel it down into the archives, just like the closing scene of Raiders Of The Lost Ark. The music swells. The credits roll. The image fades to black.

Choice

Laurie Voss has written a thoughtful article called Web development has two flavors of graceful degradation in response to Nolan Lawson’s recent article. But I’m afraid I don’t agree with Laurie’s central premise:

…web app development and web site development are so different now that they probably shouldn’t be called the same thing anymore.

This is an idea I keep returning to, and each time I do, I find that it just isn’t that simple. There are very few web thangs that are purely interactive without any content, and there are also very few web thangs that are purely passive without any interaction. Instead, it’s a spectrum. Quite often, the position on that spectrum changes according to the needs of the user at any particular time—are Twitter and Flickr web sites while I’m viewing text and images, but then transmogrify into web apps the moment I want to add, update, or delete a piece of text or an image?

In any case, the more interesting question than “is something a web site or a web app?” is the question “why?” Why does it matter? In my experience, the answer to that question generally comes down to the kind of architectural approach that a developer will take.

That’s exactly what Laurie dives into in his post. For web apps, use one architectural approach—for web sites, use a different architectural approach. To summarise:

  • in a web app, front-load everything and rely on client-side JavaScript for all subsequent interaction,
  • in a web site, optimise for many page loads, and make sure you don’t rely on client-side JavaScript.

I’m oversimplifying here, but the general idea is:

  • build web apps with the single page app architecture,
  • build web sites with progressive enhancement.

That’s sensible advice, but I’m worried that it could lead to a tautological definition of what constitutes a web app:

  1. This is a web app so it’s built as a single page app.
  2. Why do you define it as a web app?
  3. Because it’s built as a single page app.

The underlying question of what makes something a web app is bypassed by the architectural considerations …but the architectural considerations should be based on that underlying question. Laurie says:

If you are developing an app, the user ideally loads the app exactly once — whether it’s over a slow connection or not.

And similarly:

But if you are developing a web site consisting of many discrete pages, the act of loading goes from a single event to the most common event.

I completely agree that the architectural approach of single page apps is better suited to some kinds of web thangs more than others. It’s a poor architectural choice for a content-based site like nasa.gov, for example. Progressive enhancement would make more sense there.

But I don’t think that the architectural choices need to be in opposition. It’s entirely possible to reconcile the two. It’s not always easy—and the further along that spectrum you are, the tougher it gets—but it’s doable. You can begin with progressive enhancement, and then build up to a single page app architecture for more capable browsers.

I think that’s going to get easier as frameworks adopt a more mixed approach. Almost all the major libraries are working on server-side rendering as a default. Ember is leading the way with FastBoot, and Angular Universal is following. Neither of them are doing it for reasons of progressive enhancement—they’re doing it for performance and SEO—but the upshot is that you can more easily build a web app that simultaneously uses progressive enhancement and a single-page app model.

I guess my point is that I don’t think we should get too locked into the idea of web apps and web sites requiring fundamentally different approaches, especially given the changes in the technologies we use to build them.

We’ve made the mistake in the past of framing problems as “either/or”, when in fact, the correct solution was “both!”:

  • you can either have a desktop site or a mobile site,
  • you can either have rich interactivity or accessibility,
  • you can either have a single page app or progressive enhancement.

We don’t have to choose. It might take more work, but we can have our web cake and eat it.

The false dichotomy that I’m most concerned about is the pernicious idea that offline functionality is somehow in opposition to progressive enhancement. Given the design of service workers, I find this proposition baffling.

This remark by Tom is the very definition of a false dichotomy:

People who say your site should work without JavaScript are actually hurting the people they think they’re helping.

He was also linking to Nolan’s article, which could indeed be read as saying that you should build for offline instead of building with progressive enhancement. But I don’t think that’s what Nolan is saying (at least, I sincerely hope not). I think that Nolan is saying that we should prioritise the offline scenario over scenarios where JavaScript fails or isn’t available. That’s a completely reasonable thing to say. But the idea that we should build for the offline scenario instead of scenarios where JavaScript fails is absurdly reductionist. We don’t have to choose!

But I can certainly understand how developers might come to believe that building a progressive web app is at odds with progressive enhancement. Having made a bunch of progressive web apps (Huffduffer, The Session, this site), I can testify that service workers work superbly as a layer on top of an existing site, but all the messaging around progressive web apps seems fixated on the idea of the app-shell model (a small tweak to the single page app model, where a little bit of interface is available on the initial page load instead of requiring JavaScript for absolutely everything). Again, it’s entirely possible to reconcile the app-shell approach with server rendering and progressive enhancement, but nobody seems to be talking about that. Instead, all of the examples and demos are built with an assumption about JavaScript availability.

Assumptions are the problem. Whether it’s assumptions about screen size, assumptions about being able-bodied, assumptions about network connectivity, or assumptions about browser capabilities, I don’t think any assumptions are a safe bet. Now you might quite reasonably say that we have to make some assumptions when we’re building on the web, and you’d be right. But I think we should still aim to keep them to a minimum.

Tom’s tweet included a screenshot of this part of Nolan’s article:

As Benedict Evans has noted, the next billion people who are poised to come online will be using the internet almost exclusively through smartphones. And if Google’s plans with Android One are any indication, then we have a fairly good idea of what kind of devices the “next billion” will be using:

  • They’ll mostly be running Android.
  • They’ll have decent specs (1GB RAM, quad-core processors).
  • They’ll have an evergreen browser and WebView (Android 5+).
  • What they won’t have, however, is a reliable internet connection.

Those seem like a reasonable set of assumptions. But even there, things aren’t so simple. Will people really be using “an evergreen browser and WebView”? Millions of people use proxy browsers like Opera Mini, which means you can’t guarantee JavaScript availability beyond the initial page load. UC Browser—which can also run in proxy mode—is now the second most popular mobile browser in the world.

That’s just one nit-picky example, but what I’m getting at here is that it really isn’t safe to make any assumptions. When we must make assumptions, let’s try to make them a last resort.

And just to be clear here: I’m not saying that, because we can’t make assumptions about devices or browsers, we can’t build rich interactive web apps that work offline. I’m saying that we can build rich interactive web apps that work offline and also work when JavaScript fails or isn’t supported.

You don’t have to choose between progressive enhancement and a single page app/progressive web app/app shell/other things with the word “app”.

Progressive enhancement is an architectural approach to building on the web. You don’t have to use it, but please try to remember that it is your choice to make. You can choose to build a web app using progressive enhancement or not—there is nothing inherent in the nature of the thing you’re building that precludes progressive enhancement.

Personally, I find progressive enhancement a sensible way to counteract any assumptions I might inadvertently make. Progressive enhancement increases the chances that the web site (or web app) I’m building is resilient to the kind of scenarios that I never would’ve predicted or anticipated.

That’s why I choose to use progressive enhancement …and build progressive web apps.

Extensible web components

Adam Onishi has written up his thoughts on web components and progressive enhancement, following on from a discussion we were having on Slack. He shares a lot of the same frustrations as I do.

Two years ago, I said:

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous.

I still feel that way. In theory, web components are very exciting. In practice, web components are very worrying. The worrying aspect comes from the treatment of backwards compatibility.

It all comes down to the way custom elements work. When you make up a custom element, it’s basically a span.

<fancy-select></fancy-select>

Then, using JavaScript with shadow DOM, templates, and the other specs that together make up the web components ecosystem, you turn that inert span-like element into something all-singing, all-dancing. That’s great if the browser supports those technologies, and the JavaScript executes successfully. But if either of those conditions aren’t met, what you’re left with is basically a span.
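
For example, a minimal sketch of a from-scratch custom element might look like this:

class FancySelect extends HTMLElement {
  constructor() {
    super();
    // All the appearance and behaviour lives in the shadow DOM; without
    // this script, <fancy-select> is nothing more than an inert element
    var shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = '<button>Choose…</button>'; // the widget guts go here
  }
}
customElements.define('fancy-select', FancySelect);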

One of the proposed ways around this was to allow custom elements to extend existing elements (not just spans). The proposed syntax for this was an is attribute.

<select is="fancy-select">...</select>

Browser makers responded to this by saying “Nah, that’s too hard.”

To be honest, I had pretty much given up on the is functionality ever seeing the light of day, but Monica has rekindled my hope:

Still, I’m not holding my breath for this kind of declarative extensibility landing in browsers any time soon. Instead, a JavaScript-based way of extending existing elements is currently the only way of piggybacking on all the accessible behavioural goodies you get with native elements.

class FancySelect extends HTMLSelectElement { /* enhanced behaviour goes here */ }
customElements.define('fancy-select', FancySelect, { extends: 'select' });

But this imperative approach fails completely if custom elements aren’t supported, or if the JavaScript fails to execute. Now you’re back to having spans.

The presentation on web components at the Progressive Web Apps Dev Summit referred to this JavaScript-based extensibility as “progressively enhancing what’s already available”, which is a bit of a stretch, given how completely it falls apart in older browsers. It was kind of a weird talk, to be honest. After fifteen minutes of talking about creating elements entirely from scratch, there was a minute or two devoted to the is attribute and extending existing elements …before carrying on as though those two minutes never happened.

But even without any means of extending existing elements, it should still be possible to define custom elements that have some kind of fallback in non-supporting browsers:

<fancy-select>
 <select>...</select>
</fancy-select>

In that situation, you at least get a regular ol’ select element in older browsers (or in modern browsers before the JavaScript kicks in and uplifts the custom element).
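
Here’s a sketch of how that uplift could work. When (and only when) the script runs, the custom element hides its inner select and builds the fancy version on top of it; otherwise the native element carries on working:

customElements.define('fancy-select', class extends HTMLElement {
  connectedCallback() {
    var select = this.querySelector('select');
    if (!select) { return; }
    select.hidden = true; // keep it around so the form still works
    // ...build the scripted widget from select.options here...
  }
});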

Adam has a great example of this in his post:

I’ve been thinking of a gallery component lately, where you’d have a custom element, say <o-gallery> for want of a better example, and simply populate it with images you want to display, with custom elements and shadow DOM you can add all the rest, controls/layout etc. Markup would be something like:

<o-gallery>
 <img src="">
 <img src="">
 <img src="">
</o-gallery>

If none of the extra stuff loads, what do we get? Well you get 3 images on the page. You still get the content, but just none of the fancy interactivity.

Yes! This, in my opinion, is how we should be approaching the design of web components. This is what gets me excited about web components.

Then I look at pretty much all the examples of web components out there and my nervousness kicks in. Hardly any of them spare a thought for backwards-compatibility. Take a look, for example, at the entire contents of the body element for the Polymer Shop demo site:

<shop-app unresolved="">SHOP</shop-app>

This seems really odd to me, because I don’t think it’s a good way to “sell” a technology.

Compare service workers to web components.

First of all, ask the question “who benefits from this technology?” In the case of service workers, it’s the end users. They get faster websites that handle network failure better. In the case of web components, there are no direct end-user benefits. Web components exist to make developers’ lives easier. That’s absolutely fine, but any developer convenience gained by the use of web components can’t come at the expense of the user—that price is too high.

The next question we usually ask when we’re evaluating a technology is “how well does it work?” Personally, I think it’s just as important to ask “how well does it fail?”

Service workers work well and fail well. If a browser supports service workers, the user gets all the benefits. If a browser doesn’t support service workers, the user gets the same experience they would have always had.

Web components (will) work well, but fail badly. If a browser supports web components, the user gets the experience that the developer has crafted using these new technologies. If a browser doesn’t support web components, the user gets …probably nothing. It depends on how the web components have been designed.

It’s so much easier to get excited about implementing service workers. You’ve literally got nothing to lose and everything to gain. That’s not the case with web components. Or at least not with the way they are currently being sold.

See, this is why I think it’s so important to put some effort into designing web components that have some kind of fallback. Those web components will work well and fail well.

Look at the way new elements are designed for HTML. Think of complex additions like canvas, audio, video, and picture. Each one has been designed with backwards-compatibility in mind—there’s always a way to provide fallback content.

Web components give us developers the same power that, up until now, only belonged to browser makers. Web components also give us developers the same responsibilities as browser makers. We should take that responsibility seriously.

Web components are supposed to be the poster child for The Extensible Web Manifesto. I’m all for an extensible web. But the way that web components are currently being built looks more like an endorsement of The Replaceable Web Manifesto. I’m not okay with a replaceable web.

Here’s hoping that my concerns won’t be dismissed as “piffle and tosh” again by the very people who should be thinking about these issues.