Programming CSS

There’s a worrying tendency for “real” programmers to look down their noses at CSS. It’s just a declarative language, they point out, not a fully-featured programming language. Heck, it isn’t even a scripting language.

That may be true, but that doesn’t mean that CSS isn’t powerful. It’s just powerful in different ways to traditional languages.

Take CSS selectors, for example. At the most basic level, they work like conditional statements. Here’s a standard if statement:

if (condition) {
// code here
}

The condition needs to evaluate to true in order for the code in the curly braces to be executed. Sound familiar?

condition {
// styles here
}

That’s a very simple mapping, but what if the conditional statement is more complicated?

if (condition1 && condition2) {
// code here
}

Well, that’s what the descendant selector does:

condition1 condition2 {
// styles here
}

In fact, we can get even more specific than that by using the child combinator, the sibling combinator, and the adjacent sibling combinator:

  • condition1 > condition2
  • condition1 ~ condition2
  • condition1 + condition2

AND is just one part of Boolean logic. There’s also OR:

if (condition1 || condition2) {
// code here
}

In CSS, we use commas:

condition1, condition2 {
// styles here
}

We’ve even got the :not() pseudo-class to complete the set of Boolean possibilities. Once you add quantity queries into the mix, made possible by :nth-child and its ilk, CSS starts to look Turing complete. I’ve seen people build state machines using the adjacent sibling combinator and the :checked pseudo-class.
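
For example, here’s a quick sketch of NOT and a quantity query in action (the selectors are generic, just to illustrate the idea):

/* NOT: every list item except the last one gets some spacing */
li:not(:last-child) {
  margin-bottom: 1em;
}

/* a quantity query: if the list contains six or more items,
   every item gets styled (via :nth-last-child and the sibling combinator) */
li:nth-last-child(n+6),
li:nth-last-child(n+6) ~ li {
  font-size: 0.8em;
}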

Anyway, my point here is that CSS selectors are really powerful. And yet, quite often we deliberately choose not to use that power. The entire raison d’être for OOCSS, BEM, and SMACSS is to deliberately limit the power of selectors, restricting them to class selectors only.

On the face of it, this might seem like an odd choice. After all, we wouldn’t deliberately limit ourselves to a subset of a programming language, would we?

We would and we do. That’s what templating languages are for. Whether it’s PHP’s Smarty or Twig, or JavaScript’s Mustache, Nunjucks, or Handlebars, they all work by providing a deliberately small subset of features. Some pride themselves on being logic-less. If you find yourself trying to do something that the templating language doesn’t provide, that’s a good sign that you shouldn’t be trying to do it in the template at all; it should be in the controller.

So templating languages exist to enforce simplicity and ensure that the complexity happens somewhere else. It’s a similar story with BEM et al. If you find you can’t select something in the CSS, that’s a sign that you probably need to add another class name to the HTML. The complexity is confined to the markup in order to keep the CSS more straightforward, modular, and maintainable.

But let’s not forget that that’s a choice. It’s not that CSS is inherently incapable of executing complex conditions. Quite the opposite. It’s precisely because CSS selectors (and the cascade) are so powerful that we choose to put guard rails in place.

Prototypes and production

When we do front-end development at Clearleft, we’re usually delivering production code, often in the form of a component library. That means our priorities are performance, accessibility, robustness, and other markers of quality when it comes to web development.

But every so often, we use the materials of front-end development—HTML, CSS, and JavaScript—to produce something that isn’t intended for production. I’m talking about prototyping.

There are plenty of non-code prototyping tools out there, and our designers often reach for them to communicate subtleties like motion design. But when it comes to testing a prototype with real users, it’s hard to beat the flexibility of HTML, CSS, and JavaScript. Load it up in a browser and away you go.

We do a lot of design sprints, where time is of the essence. The prototype we produce on the penultimate day of the sprint definitely won’t be production quality, but it will be good enough to test.

What’s interesting is that—when it comes to prototyping—our usual front-end priorities can and should go out the window. The priority now is speed. If that means sacrificing semantics or performance, then so be it. If I’m building a prototype and I find myself thinking “now, what’s the right class name for this component?”, then I know I’m in the wrong mindset. That question might be valid for production code, but it’s a waste of time for prototypes.

So these two kinds of work require very different attitudes. For production work, quality is key. For prototyping, making something quickly is what matters.

Whereas I would think long and hard about the performance impacts of third-party libraries and frameworks on a public project, I won’t give it a second thought when it comes to a prototype. Throw all the JavaScript frameworks and CSS libraries you want at it (although I would argue that in-browser technologies like CSS Grid have made CSS libraries like Bootstrap less necessary, even for prototyping).

Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.

When a prototype is successful, works great, and tests well, there’s a real temptation to use the prototype code as the basis for the final product. Don’t do this! I’ve made that mistake in the past and it always ends badly. I ended up spending far more time trying to wrangle prototype code to a production level than if I had just started from a clean slate.

Build prototypes to test ideas, designs, interactions, and interfaces …and then throw the code away. The value of a prototype is in answering questions and testing hypotheses. Don’t fall for the sunk cost fallacy when it’s time to switch over into production mode.

Of course it should go without saying that you should never, ever release prototype code into production.

And yet…

More and more live sites seem to be built with a prototyping mindset. Weighty JavaScript frameworks are used regardless of appropriateness. Accessibility, if it’s even considered at all, is relegated to an afterthought. Fragile architectures are employed that rely on first loading and then executing JavaScript in order to render basic content. Developer experience is prioritised over user experience.

Heydon recently highlighted an article that offered this tip for aspiring web developers:

As for HTML, there’s not much to learn right away and you can kind of learn as you go, but before making your first templates, know the difference between in-line elements like span and how they differ from block ones like div.

That’s perfectly reasonable advice …if you’re building a prototype. But if you’re building something for public consumption, you have a duty of care to the end users.

Food and music

Going from Iceland to Greece in a day gave me a mild bit of currency exchange culture shock. Iceland is crazy expensive, especially given the self-immolation of the pound right now. Greece is remarkably cheap. You can eat like a king for unreasonably reasonable prices.

For me, food is one of the great pleasures in life. Trying new kinds of food is one of my primary motivators for travelling. It’s fascinating to me to see the differences—and similarities—across cultures. In many ways, food is like a universal language, but a language that we all speak in different dialects.

Herring. A feast of lamb.

It’s a similar story with music. There’s a fundamental universality in music across cultures, but there’s also a vast gulf of differences.

On my first night in Reykjavik, I wound up at an Irish music session. I know, I know—I sound like such a cliché, going to a foreign country and immediately seeking out something familiar. But I had been invited along by a kind soul who got in touch through The Session after I posted my travel plans there. Luckily for me, there was a brand new session starting that very evening. I didn’t have an instrument, but someone very kindly lent me their banjo and I had a thoroughly enjoyable time playing along with the jigs and reels.

As an added bonus—and you really don’t get to hear this at most trad sessions—there was even a bit of Icelandic singing courtesy of Bára Grímsdóttir. I snatched a little sample of it.

A few nights later I was in a quiet, somewhat smokey tavern in Thessaloniki. There was no Irish music to be found, but the rembetika music played on gorgeous bouzoukis and baglamas was in full flow.

Conferencing

I just wrapped up my last speaking gig of the year. It came at the end of a streak of attending European conferences without speaking at any of them—quite a nice feeling!

I already mentioned that I was in Berlin for the (excellent) Indie Web Camp. That was immediately followed by a one-day Accessibility Club conference. It was really, really good.

I have to say, I was initially apprehensive when I saw the sheer number of speakers on the schedule. I was worried that my attention couldn’t handle it all. But the talks were a mixture of shorter 20 minute presentations, and a few longer 40 minute presentations. That worked really well—the day fairly zipped by. And just in case you think it would be hard to have an entire day devoted to accessibility, the breadth of talks was remarkably diverse. Hats off to a well-organised and well-executed event!

The next day was Beyond Tellerrand. This has my favourite conference format: two days; one track; curated; a mix of design and development (see also An Event Apart and Smashing Conference). Marc’s love and care shines through every pore of the event. I thoroughly enjoyed the talks, and the hanging out with lovely people.

Alas, I had to miss the final afternoon of Beyond Tellerrand to head home to Brighton. I needed to get back for FF Conf. It was excellent, as always. Remy and Julie really give it their all. Remy even stepped in to give a (great) talk himself this year, when a speaker couldn’t make it.

A week later, I went to Iceland for Material. I really enjoyed last year’s inaugural event, and if anything, this year’s topped it. I just love how eclectic and different the talks are, and yet it all weirdly hangs together in a thoughtfully curated way. (Oh, and Remy, when you start to put together the line-up for next year’s FF Conf, be sure to check out Charlotte Dann—her talk at Material was the perfect mix of code and creativity.)

As well as sharing an organiser with Accessibility Club, Material had a similar format—keynote talks from invited presenters, interspersed with shorter talks by locals. The mix was great. I won’t even try to describe the range of topics. I’m not sure I could explain how a conference podium morphed into a bar at the end of one of the talks. I think the best description of Material would be to say it’s like the inside of Brian’s head. In a good way.

I was supposed to be back in Brighton for one night after Material, but the stormy weather kept myself and Jessica in Reykjavik for an extra night. Thanks to Brian’s hospitality, we had a bed for the night.

There followed a long travel day as we made our way from Reykjavik to Gatwick, and then straight on to Thessaloniki, where we spent five days even though we only had the clothes we packed for the brief trip to Iceland. (Yes, we went shopping.)

I was there to speak at Voxxed Days. These events happen in various locations around the world, and just a few weeks ago, I spoke at the one in Bristol. It was …different.

After experiencing so many lovingly crafted events—Accessibility Club, Beyond Tellerrand, FF Conf, and Material—I’m afraid that Voxxed Days Thessaloniki was quite a comedown. It’s not that it was corporate per se—I believe it’s organised by developers for developers—but it felt like it was for people who worked in corporate environments. There were multiple tracks (I’m really not a fan of that), and some great speakers on the line-up like Stephanie and Simona, but the atmosphere felt kind of grim in a David Brentian sort of way. It probably wasn’t helped by the cheeky chappie of an MC who referred to one of the speakers as “darling.”

Anyway, I spoke first thing on the first day and I didn’t end up sticking around long. Normally I don’t speak and run, but I didn’t fancy the vibe of the exhibitor hall with its booth-babesque sales teams. Voxxed Days doesn’t pay its speakers so I didn’t feel any great obligation to hang around. The magnificent food and rembetika music of Thessaloniki was calling.

I just got back from Greece, and that wraps up my conference attending (and speaking) for 2018. I’ve already got a couple of events lined up for 2019. I’m delighted to be speaking at the return of Colly’s New Adventures conference. I’m less delighted about preparing a brand new talk I promised—I’m really feeling the pressure to deliver the goods at such an auspicious event with an intimidatingly superb line-up of speakers.

I’m also going to be preparing a different all-new talk for An Event Apart Seattle in March. For once, I’m going to try to make it somewhat practical and talk about service workers. If you know of any other events that might want a presentation like that in 2019, drop me a line.

Perhaps I will see you in Nottingham or in Seattle. If you’re planning on going to New Adventures, use the discount code ADACTIO10 to get 10% off the price of the conference or workshop ticket. If you’re planning on going to An Event Apart, use the discount code AEAKEITH for $100 off.

Optimise without a face

I’ve been playing around with the newly-released Squoosh, the spiritual successor to Jake’s SVGOMG. You can drag images into the browser window, and eyeball the changes that any optimisations might make.

On a project that Cassie is working on, it worked really well for optimising some JPEGs. But there were a few images that would require a bit more fine-grained control of the optimisations. Specifically, pictures with human faces in them.

I’ve written about this before. If there’s a human face in an image, I open that image in a graphics editing tool like Photoshop, select everything but the face, and add a bit of blur. Because humans are hard-wired to focus on faces, we’ll notice any jaggy artifacts on a face, but we’re far less likely to notice jagginess in background imagery: walls, materials, clothing, etc.

On the face of it (hah!), a browser-based tool like Squoosh wouldn’t be able to optimise for faces, but then Cassie pointed out something really interesting…

When we were both at FFConf on Friday, there was a great talk by Eleanor Haproff on machine learning with JavaScript. It turns out there are plenty of smart toolkits out there, and one of them is facial recognition. So I wonder if it’s possible to build an in-browser tool with this workflow:

  • Drag or upload an image into the browser window,
  • A facial recognition algorithm finds any faces in the image,
  • Those portions of the image remain crisp,
  • The rest of the image gets a slight blur,
  • Download the optimised image.

Maybe the selecting/blurring part would need canvas? I don’t know.
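
Purely as a sketch of the idea—using the experimental Shape Detection API that’s currently behind a flag in Chromium browsers, so very much not production-ready—it might go something like this:

// assumes FaceDetector is available (Shape Detection API, experimental)
async function blurAroundFaces(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext('2d');

  // draw the whole image with a slight blur…
  ctx.filter = 'blur(2px)';
  ctx.drawImage(img, 0, 0);

  // …then redraw the detected face regions crisply on top
  const faces = await new FaceDetector().detect(img);
  ctx.filter = 'none';
  for (const face of faces) {
    const {x, y, width, height} = face.boundingBox;
    ctx.drawImage(img, x, y, width, height, x, y, width, height);
  }

  // hand the result back as a blob, ready for downloading
  return new Promise(resolve => canvas.toBlob(resolve, 'image/jpeg', 0.8));
}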

Anyway, I thought this was a brilliant bit of synthesis from Cassie, and now I’ve got two questions:

  1. Does this exist yet? And, if not,
  2. Does anyone want to try building it?

Push without notifications

On the first day of Indie Web Camp Berlin, I led a session on going offline with service workers. This covered all the usual use-cases: pre-caching; custom offline pages; saving pages for offline reading.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

The Push API is what makes push notifications possible on the web. There are a lot of moving parts—browser, server, service worker—and, frankly, it’s way over my head. But I’m familiar with the general gist of how it works. Here’s a typical flow:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker creates a notification linking to the URL, interrupting the user, and generally adding to the weight of information overload.

Here’s what Sebastiaan wanted to investigate: what if that last step weren’t so intrusive? Here’s the alternate flow he wanted to test:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker fetches the contents of the URL provided in the push message and caches the page. Silently.

It worked.
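
Here’s a rough sketch of the kind of handler involved (assuming the push payload is JSON with a url property—the details of Sebastiaan’s actual code may differ):

addEventListener('push', pushEvent => {
  const data = pushEvent.data.json();
  pushEvent.waitUntil(
    caches.open('pages')
    .then( cache => cache.add(data.url) )
    // no call to showNotification()—nothing interrupts the user
  );
});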

I think this could be a real game-changer. I don’t know about you, but I’m very, very wary of granting websites the ability to send me push notifications. In fact, I don’t think I’ve ever given a website permission to interrupt me with push notifications.

You’ve seen the annoying permission dialogues, right?

In Firefox, it looks like this:

Will you allow name-of-website to send notifications?

[Not Now] [Allow Notifications]

In Chrome, it’s:

name-of-website wants to

Show notifications

[Block] [Allow]

But in actual fact, these dialogues are asking for permission to do two things:

  1. Receive messages pushed from the server.
  2. Display notifications based on those messages.

There’s no way to ask for permission just to do the first part. That’s a shame. While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.

Think of the use cases:

  • I grant push permission to a magazine. When the magazine publishes a new article, it’s cached on my device.
  • I grant push permission to a podcast. Whenever a new episode is published, it’s cached on my device.
  • I grant push permission to a blog. When there’s a new blog post, it’s cached on my device.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

There’s plenty of opportunity for abuse—the cache could get filled with content the user never asked for. But websites can already do that, and they don’t need to be granted any permissions to do so; just by visiting a website, it can add multiple files to a cache.

So it seems that the reason for the permissions dialogue is all about displaying notifications …not so much about receiving push messages from the server.

I wish there were a way to implement this background-caching pattern without requiring the user to grant permission to a dialogue that contains the word “notification.”

I wonder if the act of adding a site to the home screen could implicitly grant permission to allow use of the Push API without notifications?

In the meantime, the proposal for periodic synchronisation (using background sync) could achieve similar results, but in a less elegant way; periodically polling for new content instead of receiving a push message when new content is published. Also, it requires permission. But at least in this case, the permission dialogue should be more specific, and wouldn’t include the word “notification” anywhere.

Webmentions at Indie Web Camp Berlin

I was in Berlin for most of last week, and every day was packed with activity.

By the time I got back to Brighton, my brain was full …just in time for FF Conf.

All of the events were very different, but equally enjoyable. It was also quite nice to just attend events without speaking at them.

Indie Web Camp Berlin was terrific. There was an excellent turnout, and once again, I found that the format was just right: a day of discussions (BarCamp style) followed by a day of doing (coding, designing, hacking). I got very inspired on the first day, so I was raring to go on the second.

What I like to do on the second day is try to complete two tasks; one that’s fairly straightforward, and one that’s a bit tougher. That way, when it comes time to demo at the end of the day, even if I haven’t managed to complete the tougher one, I’ll still be able to demo the simpler one.

In this case, the tougher one was also tricky to demo. It involved a lot of invisible behind-the-scenes plumbing. I was tweaking my webmention endpoint (stop sniggering—tweaking your endpoint is no laughing matter).

Up until now, I could handle straightforward webmentions, and I could handle updates (if I receive more than one webmention from the same link, I check it each time). But I needed to also handle deletions.

The spec is quite clear on this. A 404 isn’t enough to trigger a deletion—that might be a temporary state. But a status of 410 Gone indicates that a resource was once here but has since been deliberately removed. In that situation, any stored webmentions for that link should also be removed.
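
In other words, the endpoint needs to check the status code when it re-fetches the source URL. Something along these lines (just a sketch—deleteWebmentionsFor is a stand-in for whatever your storage layer provides, not my actual endpoint code):

fetch(sourceURL)
.then( response => {
  if (response.status === 410) {
    // deliberately removed: delete any stored webmentions for this source
    return deleteWebmentionsFor(sourceURL);
  }
  if (response.status === 404) {
    // might only be temporary: leave the stored webmentions alone
    return;
  }
  // otherwise, re-verify that the source still links to the target…
});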

Anyway, I think I got it working, but it’s tricky to test and even trickier to demo. “Not to worry”, I thought, “I’ve always got my simpler task.”

For that, I chose to add a little map to my homepage showing the last location I published something from. I’ve been geotagging all my content for years (journal entries, notes, links, articles), but not really doing anything with that data. This is a first step to doing something interesting with many years of location data.

I’ve got it working now, but the demo gods really weren’t with me at Indie Web Camp. Both of my demos failed. The webmention demo failed quite embarrassingly.

As well as handling deletions, I also wanted to handle updates where a URL that once linked to a post of mine no longer does. Just to be clear, the URL still exists—it’s not 404 or 410—but it has been updated to remove the original link back to one of my posts. I know this sounds like another very theoretical situation, but I’ve actually got an example of it on my very first webmention test post from five years ago. Believe it or not, there’s an escort agency in Nottingham that’s using webmention as a vector for spam. They post something that does link to my test post, send a webmention, and then remove the link to my test post. I almost admire their dedication.

Still, I wanted to foil this particular situation so I thought I had updated my code to handle it. Alas, when it came time to demo this, I was using someone else’s computer, and in my attempt to right-click and copy the URL of the spam link …I accidentally triggered it. In front of a room full of people. It was mildly NSFW, but more worryingly, a potential Code Of Conduct violation. I’m very sorry about that.

Apart from the humiliating demo, I thoroughly enjoyed Indie Web Camp, and I’m going to keep adjusting my webmention endpoint. There was a terrific discussion around the ethical implications of storing webmentions, led by Sebastian, based on his epic post from earlier this year.

We established early in the discussion that we weren’t going to try to solve legal questions—like GDPR “compliance”, which varies depending on which lawyer you talk to—but rather try to figure out what the right thing to do is.

Earlier that day, during the introductions, I quite happily showed webmentions in action on my site. I pointed out that my last blog post had received a response from another site, and because that response was marked up as an h-entry, I displayed it in full on my site. I thought this was all hunky-dory, but now this discussion around privacy made me question some inferences I was making:

  1. By receiving a webmention in the first place, I was inferring a willingness for the link to be made public. That’s not necessarily true, as someone pointed out: a CMS could be automatically sending webmentions, which the author might be unaware of.
  2. If the linking post is marked up in h-entry, I was inferring a willingness for the content to be republished. Again, not necessarily true.

That second inference of mine—that publishing in a particular format somehow grants permissions—actually has an interesting precedent: Google AMP. Simply by including the Google AMP script on a web page, you are implicitly giving Google permission to store a complete copy of that page and serve it from their servers instead of sending people to your site. No terms and conditions. No checkbox ticked. No “I agree” button pressed.

Just sayin’.

Anyway, when it comes to my own processing of webmentions, I’m going to take some of the suggestions from the discussion on board. There are certain signals I could be looking for in the linking post:

  • Does it include a link to a licence?
  • Is there a restrictive robots.txt file?
  • Are there meta declarations that say noindex?

Each one of these could help me decide whether or not I should be publishing a webmention. I quickly realised that what we’re talking about here is an algorithm.

Despite its current usage to mean “magic”, an algorithm is a recipe. It’s a series of steps that contribute to a decision point. The problem is that, in the case of silos like Facebook or Instagram, the algorithms are secret (which probably contributes to their aura of magical thinking). If I’m going to write an algorithm that handles other people’s information, I don’t want to make that mistake. Whatever steps I end up codifying in my webmention endpoint, I’ll be sure to document them publicly.
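
To give an idea of what those codified steps might look like, here’s a rough sketch (the function name and its checks are illustrative, not my actual endpoint code, and the robots.txt parsing is deliberately crude):

async function shouldDisplayWebmention(sourceURL, sourceHTML) {
  const origin = new URL(sourceURL).origin;
  // a restrictive robots.txt is a signal not to republish
  const robots = await fetch(origin + '/robots.txt')
    .then( response => response.ok ? response.text() : '' );
  if (/^Disallow:\s*\/\s*$/m.test(robots)) {
    return false;
  }
  // so is a noindex meta declaration in the linking post
  if (/<meta[^>]+noindex/i.test(sourceHTML)) {
    return false;
  }
  // a link to a licence could also tip the decision one way or the other
  return true;
}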

Service workers and videos in Safari

Alright, so I’ve already talked about some gotchas when debugging service worker issues. But what if you don’t even realise the problem has anything to do with your service worker?

This is not a hypothetical situation. I encountered this very thing myself. Gather ‘round the campfire, children…

One of the latest case studies on the Clearleft site is a nice write-up by Luke of designing a mobile app for Virgin Holidays. The case study includes a lovely video that demonstrates the log-in flow. I implemented that using a video element (with a poster image). Nice and straightforward. Super easy. All good.

But I hadn’t done my due diligence in browser testing (I guess I didn’t even think of it in this case). Hana informed me that the video wasn’t working at all in Safari. The poster image appeared just fine, but when you clicked on it, the video didn’t load.

I ducked, ducked, and went, uncovering what appeared to be the root of the problem. It seems that Safari is fussy about having servers support something called “byte-range requests”.

I had put the video in question on an Amazon S3 server. I came to the conclusion that S3 mustn’t support these kinds of headers correctly, or something.

Now I had a diagnosis. The next step was figuring out a solution. I thought I might have to move the video off of S3 and onto a server that I could configure a bit more.

Luckily, I never got ‘round to even starting that process. That’s good. Because it turns out that my diagnosis was completely wrong.

I came across a recent post by Phil Nash called Service workers: beware Safari’s range request. The title immediately grabbed my attention. Safari: yes! Video: yes! But service workers …wait a minute!

There’s a section in Phil’s post entitled “Diagnosing the problem”, in which he says:

I first thought it could have something to do with the CDN I’m using. There were some false positives regarding streaming video through a CDN that resulted in some extra research that was ultimately fruitless.

That described my situation exactly. Except Phil went further and nailed down the real cause of the problem:

Nginx was serving correct responses to Range requests. So was the CDN. The only other problem? The service worker. And this broke the video in Safari.

Doh! I hadn’t even thought about service workers!

Phil came up with a solution, and he has kindly shared his code.

I decided to go for a dumber solution:

if ( request.url.match(/\.(mp4)$/) ) {
  return;
}

That tells the service worker to just step out of the way when it comes to video requests. Now the video plays just fine in Safari. It’s a bit of a shame, because I’m kind of penalising all browsers for Safari’s bug, but the Clearleft site isn’t using much video at all, and in any case, it might be good not to fill up the cache with large video files.
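
For context, that check sits right at the top of the fetch handler—something like this simplified sketch (not the actual Clearleft service worker):

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  // step aside for video: let the browser make its own range requests
  if ( request.url.match(/\.(mp4)$/) ) {
    return;
  }
  fetchEvent.respondWith(
    // …the usual caching strategy goes here…
    fetch(request)
  );
});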

But what’s more important than any particular solution is correctly identifying the problem. I’m quite sure I never would’ve been able to fix this issue if Phil hadn’t gone to the trouble of sharing his experience. I’m very, very grateful that he did.

That’s the bigger lesson here: if you solve a problem—even if you think it’s hardly worth mentioning—please, please share your solution. It could make all the difference for someone out there.

Service workers and browser extensions

I quite enjoy a good bug hunt. Just yesterday, myself and Cassie were doing some bugfixing together. As always, the first step was to try to reproduce the problem and then isolate it. Which reminds me…

There’ve been a few occasions when I’ve been trying to debug service worker issues. The problem is rarely in reproducing the issue—it’s isolating the cause that can be frustrating. I try changing a bit of code here, and a bit of code there, in an attempt to zero in on the problem, but with no luck. Before long, I’m tearing my hair out staring at code that appears to have nothing wrong with it.

And that’s when I remember: browser extensions.

I’m currently using Firefox as my browser, and I have extensions installed to stop tracking and surveillance (these technologies are usually referred to as “ad blockers”, but that’s a bit of a misnomer—the issue isn’t with the ads; it’s with the invasive tracking).

If you think about how a service worker does its magic, it’s as if it’s sitting in the browser, waiting to intercept any requests to a particular domain. It’s like the service worker is the first port of call for any requests the browser makes. But then you add a browser extension. The browser extension is also waiting to intercept certain network requests. Now the extension is the first port of call, and the service worker is relegated to be next in line.

This, apparently, can cause issues (presumably depending on how the browser extension has been coded). In some situations, network requests that should work just fine start to fail, executing the catch clauses of fetch statements in your service worker.

So if you’ve been trying to debug a service worker issue, and you can’t seem to figure out what the problem might be, it’s not necessarily an issue with your code, or even an issue with the browser.

From now on when I’m troubleshooting service worker quirks, I’m going to introduce a step zero, before I even start reproducing or isolating the bug. I’m going to ask myself, “Are there any browser extensions installed?”

I realise that sounds as basic as asking “Are you sure the computer is switched on?” but there’s nothing wrong with having a checklist of basic questions to ask before moving on to the more complicated task of debugging.

I’m going to make a checklist. Then I’m going to use it …every time.

An nth-letter selector in CSS

Variable fonts are a very exciting and powerful new addition to the toolbox of web design. They were very much at the centre of discussion at this year’s Ampersand conference.

A lot of the demonstrations of the power of variable fonts are showing how they can be used to make letter-by-letter adjustments. The Ampersand website itself does this with the logo. See also: the brilliant demos by Mandy. It’s getting to the point where logotypes can be sculpted and adjusted just-so using CSS and raw text—no images required.

I find this to be thrilling, but there’s a fly in the ointment. In order to style something in CSS, you need a selector to target it. If you’re going to style individual letters, you need to wrap each one in an HTML element so that you can then select it in CSS.

For the Ampersand logo, we had to wrap each letter in a span (and then, because that might cause each letter to be read out individually instead of all of them as a single word, we applied some ARIA shenanigans to the containing element). There’s even a JavaScript library—Splitting.js—that will do this for you.
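
Here’s roughly the kind of thing that span-wrapping involves (a simplified sketch, not the production Ampersand code):

const word = document.querySelector('.logo');
const letters = word.textContent;
// announce the whole word to assistive technology…
word.setAttribute('aria-label', letters);
// …and hide the individual, presentational spans from it
word.innerHTML = [...letters]
  .map( letter => `<span aria-hidden="true">${letter}</span>` )
  .join('');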

But if the whole point of using HTML is that the content is accessible, copyable, and pastable, isn’t it a bit of a shame that we then compromise the markup—and the accessibility—by wrapping individual letters in presentational tags?

What if there were an ::nth-letter selector in CSS?

There’s some prior art here. We’ve already got ::first-letter (and now the initial-letter property or whatever it ends up being called). If we can target the first letter in a piece of text, why not the second, or third, or nth?
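
To be clear, this is hypothetical syntax—nothing like it exists today—but it might look something like this:

/* hypothetical: adjust the weight of the third letter of a logo */
.logo::nth-letter(3) {
  font-variation-settings: 'wght' 700;
}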

It raises some questions. What constitutes a letter? Would it be better if we talked about ::first-character, ::initial-character, ::nth-character, and so on?

Even then, there are some tricksy things to figure out. What’s the third character in this piece of markup?

<p>AB<span>CD</span>EF</p>

Is it “C”, because that’s the third character regardless of nesting? Or is it “E”, because technically that’s the third character token that’s a direct child of the parent element?

I imagine that implementing ::nth-letter (or ::nth-character) would be quite complex so there would probably be very little appetite for it from browser makers. But it doesn’t seem as problematic as some selectors we’ve already got.

Take ::first-line, for example. It runs into one of the biggest problems with adding new CSS selectors: it’s a selector that depends on layout.

Think about it. The browser has to first calculate how many characters are in the first line of an element (like, say, a paragraph). Having figured that out, the browser can then apply the styles declared in the ::first-line selector. But those styles may involve font sizing updates that change the number of characters in the first line. Paradox!

(Technically, only a subset of CSS properties can be applied to ::first-line, but that subset includes font-size so the paradox remains.)

I checked to see if ::first-line was included in one of my favourite documents: Incomplete List of Mistakes in the Design of CSS. It isn’t.

So compared to the logic-bending paradoxes of ::first-line, an ::nth-letter selector would be relatively straightforward. But that in itself isn’t a good enough reason for it to exist. As the CSS Working Group FAQs say:

The fact that we’ve made one mistake isn’t an argument for repeating the mistake.

A new selector needs to solve specific use cases. I would argue that all the letter-by-letter uses of variable fonts that we’re seeing demonstrate the use cases, and the number of these examples is only going to increase. The very fact that JavaScript libraries exist to solve this problem shows that there’s a need here (and we’ve seen the pattern of common JavaScript use-cases ending up in CSS before—rollovers, animation, etc.).

Now, I know that browser makers would like us to figure out how proposed CSS features should work by polyfilling a solution with Houdini. But would that work for a selector? I don’t know much about Houdini so I asked Una. She pointed me to a proposal by Greg and Tab for a full-on parser in Houdini. But that’s a loooong way off. Until then, we must petition our case to the browser gods.

This is not a new suggestion.

Anne van Kesteren proposed ::nth-letter way back in 2003:

While I’m talking about CSS, I would also like to have ::nth-line(n), ::nth-letter(n) and ::nth-word(n), any thoughts?

Trent called for ::nth-letter in January 2011:

I think this would be the ideal solution from a web designer’s perspective. No javascript would be required, and 100% of the styling would be handled right where it should be—in the CSS.

Chris repeated the call in October of 2011:

Of all of these “new” selectors, ::nth-letter is likely the most useful.

In 2012, Bram linked to a blog post (now unavailable) from Adobe indicating that they were working on ::nth-letter for Webkit. That was the last anyone’s seen of this elusive pseudo-element.

In 2013, Chris (again) included ::nth-letter in his wishlist for CSS. So say we all.

Declaration

I like the robustness that comes with declarative languages. I also like the power that comes with imperative languages. Best of all, I like having the choice.

Take the video and audio elements, for example. If you want, you can embed a video or audio file into a web page using a straightforward declaration in HTML.

<audio src="..." controls><!-- fallback goes here --></audio>

Straightaway, that covers 80%-90% of use cases. But if you need to do more—like, provide your own custom controls—there’s a corresponding API that’s exposed in JavaScript. Using that API, you can do everything that you can do with the HTML element, and a whole lot more besides.
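
A minimal sketch of that imperative alternative—wiring a custom play/pause button to the media API (the element IDs are illustrative):

const audio = document.querySelector('#player');
const button = document.querySelector('#play-pause');
button.addEventListener('click', () => {
  // the same capability as the controls attribute, but under your control
  if (audio.paused) {
    audio.play();
  } else {
    audio.pause();
  }
});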

It’s a similar story with animation. CSS provides plenty of animation power, but it’s limited in the events that can trigger the animations. That’s okay. There’s a corresponding JavaScript API that gives you more power. Again, the CSS declarations cover 80%-90% of use cases, but for anyone in that 10%-20%, the web animation API is there to help.
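
For instance, here’s a sketch of the imperative equivalent of a CSS animation, triggered by an arbitrary event:

const box = document.querySelector('.box');
box.addEventListener('click', () => {
  // the same kind of keyframes you’d write in CSS, but triggered imperatively
  box.animate(
    [
      { transform: 'translateX(0)' },
      { transform: 'translateX(100px)' }
    ],
    { duration: 500, easing: 'ease-out', fill: 'forwards' }
  );
});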

Client-side form validation is another good example. For most of us, the HTML attributes—required, type, etc.—are probably enough most of the time.

<input type="email" required />

When we need more fine-grained control, there’s a validation API available in JavaScript (yes, yes, I know that the API itself is problematic, but you get the point).
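
A sketch of what that finer-grained control looks like (the custom message here is just an example):

const email = document.querySelector('input[type="email"]');
email.addEventListener('input', () => {
  if (email.validity.typeMismatch) {
    email.setCustomValidity('Please enter a valid email address.');
  } else {
    email.setCustomValidity(''); // clears the error, allowing submission
  }
});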

I really like this design pattern. Cover 80% of the use cases with a declarative solution in HTML, but also provide an imperative alternative in JavaScript that gives more power. HTML5 has plenty of examples of this pattern. But I feel like the history of web standards has a few missed opportunities too.

Geolocation is a good example of an unbalanced feature. If you want to use it, you must use JavaScript. There is no declarative alternative. This doesn’t exist:

<input type="geolocation" />

That’s a shame. Anyone writing a form that asks for the user’s location—in order to submit that information to a server for processing—must write some JavaScript. That’s okay, I guess, but it’s always going to be that bit more fragile and error-prone compared to markup.
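
Here’s a sketch of the JavaScript you end up writing instead—populating hidden fields so the form can still submit the data (the field names are illustrative):

navigator.geolocation.getCurrentPosition(
  position => {
    document.querySelector('[name="latitude"]').value = position.coords.latitude;
    document.querySelector('[name="longitude"]').value = position.coords.longitude;
  },
  error => {
    // no declarative fallback to lean on—you have to handle failure yourself
    console.log(error.message);
  }
);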

(And just in case you’re thinking of the fallback—which would be for the input element to be rendered as though its type value were text—and you think it’s ludicrous to expect users with non-supporting browsers to enter latitude and longitude coordinates by hand, I direct your attention to input type="color": in non-supporting browsers, it’s rendered as input type="text" and users are expected to enter colour values by hand.)

Geolocation is an interesting use case because it only works on HTTPS. There are quite a few JavaScript APIs that quite rightly require a secure context—like service workers—but I can’t think of a single HTML element or attribute that requires HTTPS (although that will soon change if we don’t act to stop plans to create a two-tier web). But that can’t have been the thinking behind geolocation being JavaScript only; when geolocation first shipped, it was available over HTTP connections too.

Anyway, that’s just one example. Like I said, it’s not that I’m in favour of declarative solutions instead of imperative ones; I strongly favour the choice offered by providing declarative solutions as well as imperative ones.

In recent years there’s been a push to expose low-level browser features to developers. They’re inevitably exposed as JavaScript APIs. In most cases, that makes total sense. I can’t really imagine a declarative way of accessing the fetch or cache APIs, for example. But I think we should be careful that it doesn’t become the only way of exposing new browser features. I think that, wherever possible, the design pattern of exposing new features declaratively and imperatively offers the best of both worlds—ease of use for the simple use cases, and power for the more complex use cases.

Previously, it was up to browser makers to think about these things. But now, with the advent of web components, we developers are gaining that same level of power and responsibility. So if you’re making a web component that you’re hoping other people will also use, maybe it’s worth keeping this design pattern in mind: allow authors to configure the functionality of the component using HTML attributes and JavaScript methods.

Preparing a conference talk

I gave a talk at the State Of The Browser conference in London earlier this month. It was a 25 minute spiel called The Web Is Agreement.

While I was putting the talk together, I posted some pictures of my talk preparation process. People seemed to be quite interested in that peek behind the curtain, so I thought I’d jot down the process I used.

There are two aspects to preparing a talk: the content and the presentation. I like to keep the preparation of those two parts separate. It’s kind of like writing: instead of writing and editing at the same time, it’s more productive to write any old crap first (to get it out of your head) and then go back and edit—“write drunk and edit sober”. Separating out those two mindsets allows you to concentrate on the task at hand.

So, to begin with, I’m not thinking about how I’m going to present the material at all. I’m only concerned with what I want to say.

When it comes to preparing the subject matter for a talk, step number zero is to banish your inner critic. You know who I mean. That little asshole with the sneering voice that says things like “you’re not qualified to talk about this” or “everything has already been said.” Find a way to realise that this demon is a) speaking from inside your head and b) not real. Maybe try drawing your inner critic. Ridiculous? Yes. Yes, it is.

Alright, time to start. There’s nothing more intimidating than a blank slidedeck, except maybe a blank Photoshop file, or a blank word processing document. In each of those cases, I find that starting with software is rarely a good idea. Paper is your friend.

I get a piece of A3 paper and start scribbling out a mind map. “Mind map” is a somewhat grandiose term for what is effectively a lo-fi crazy wall.

Mind mapping.
Step 1

The idea here is to get everything out of my head. Don’t self-censor. At this stage, there are no bad ideas. This is a “yes, and…” exercise, not a “no, but…” exercise. Divergent, not convergent.

Not everything will make it into the final talk. That’s okay. In fact, I often find that there’s one thing that I’m really attached to, that I’m certain will be in the talk, that doesn’t make the cut. Kill your darlings.

I used to do this mind-mapping step by opening a text file and dumping my thoughts into it. I told myself that they were in no particular order, but because a text file reads left to right and top to bottom, they are in an order, whether I intended it or not. By using a big sheet of paper, I can genuinely get things down in a disconnected way (and later, I can literally start drawing connections).

For this particular talk, I knew that the subject matter would be something to do with web standards. I brain-dumped every vague thought I had about standards in general.

The next step is what I call chunking. I start to group related ideas together. Then I give a label to each of these chunks. Personally, I like to use a post-it note per chunk. I put one word or phrase on the post-it note, but it could just as easily be a doodle. The important thing is that you know what the word or doodle represents. Each chunk should represent a self-contained little topic that you might talk about for 3 to 5 minutes.

Chunking.
Step 2

At this point, I can start thinking about the structure of the talk: “Should I start with this topic? Or should I put that in the middle?” The cost of changing my mind is minimal—I’m just moving post-it notes around.

With topics broken down into chunks like this, I can flesh out each one. The nice thing about this is that I’ve taken one big overwhelming task—prepare a presentation!—and I’ve broken it down into smaller, more manageable tasks. I can take a random post-it note and set myself, say, ten or fifteen minutes to jot down an explanation of it.

The explanation could just be bullet points. For this particular talk, I decided to write full sentences.

Expanding.
Step 3

Even though, in this case, I was writing out my thoughts word for word, I still kept the topics in separate files. That way, I can still move things around easily.

Crafting the narrative structure of a talk is the part I find most challenging …but it’s also the most rewarding. By having the content chunked up like this, I can experiment with different structures. I like to try out different narrative techniques from books and films, like say, flashback: find the most exciting part of the talk; start with that, and then give the backstory that led up to it. That’s just one example. There are many possible narrative structures.

What I definitely don’t do is enact the advice that everyone is given for their college presentations:

  1. say what you’re going to say,
  2. say it, and
  3. recap what you’ve said.

To me, that’s the equivalent of showing an audience the trailer for a film right before watching the film …and then reading a review of the film right after watching it. Just play the film! Give the audience some credit—assume the audience has no knowledge but infinite intelligence.

Oh, and there’s one easy solution to cracking the narrative problem: make a list. If you’ve got 7 chunks, you can always give a talk on “Seven things about whatever” …but it’s a bit of a cop-out. I mean, most films have a three-act structure, but they don’t start the film by telling the audience that, and they don’t point out when one act finishes and another begins. I think it’s much more satisfying—albeit more challenging—to find a way to segue from chunk to chunk.

Finding the narrative thread is tricky work, but at least, by this point, it’s its own separate task: if I had tried to figure out the narrative thread at the start of the process, or even when I was chunking things out, it would’ve been overwhelming. Now it’s just the next task in my to-do list.

I suppose, at this point, I might as well make some slides.

Slides.
Step 4

I’m not trying to be dismissive of slides—I think having nice slides is a very good thing. But I do think that they can become the “busy work” of preparing a presentation. If I start on the slides too soon, I find they’ll take up all my time. I think it’s far more important to devote time to the content and structure of the talk. The slides should illustrate the talk …but the slides are not the talk.

If you don’t think of the slides as being subservient to the talk, there’s a danger that you’ll end up with a slidedeck that’s more for the speaker’s benefit than the audience’s.

It’s all too easy to use the slides as a defence mechanism. You’re in a room full of people looking towards you. It’s perfectly reasonable for your brain to scream, “Don’t look at me! Look at the slides!” But taken too far, that can be interpreted as “Don’t listen to me!”

For this particular talk, there were moments when I wanted to make sure the audience was focused on some key point I was making. I decided that having no slide at all was the best way of driving home my point.

But slidedeck style is quite a personal thing, so use whatever works for you.

It’s a similar story with presentation style. Apart from some general good advice—like, speak clearly—the way you present should be as unique as you are. I have just one piece of advice, and it’s this: read Demystifying Public Speaking by Lara Hogan—it’s really, really good!

(I had to apologise to Lara last time I saw her, because I really wanted her to sign my copy of her book …but I didn’t have it, because it’s easily the book I’ve loaned out to other people the most.)

I did a good few run-throughs of my talk. There were a few sentences that sounded fine written down, but were really clumsy to say out loud. It reminded me of what Harrison Ford told George Lucas during the filming of Star Wars: “You can type this shit, George, but you can’t say it.”

I gave a final run-through at work to some of my Clearleft colleagues. To be honest, I find that more nerve-wracking than speaking on a stage in front of a big room full of strangers. I think it’s something to do with the presentation of self.

Finally, the day of the conference rolled around, and I was feeling pretty comfortable with my material. I’m pretty happy with how it turned out. You can read The Web Is Agreement, and you can look at the slides, but as with any conference talk, you kinda had to be there.

Service workers in Samsung Internet browser

I was getting reports of some odd behaviour with the service worker on thesession.org, the Irish music website I run. Someone emailed me to say that they kept getting the offline page, even when their internet connection was perfectly fine and the site was up and running.

They didn’t mind answering my pestering follow-on questions to isolate the problem. They told me that they were using the Samsung Internet browser on Android. After a little searching, I found this message on a Github thread about using waitUntil. It’s from someone who works on the Samsung Internet team:

Sadly, the asynchronos waitUntil() is not implemented yet in our browser. Yes, we will implement it but our release cycle is so far. So, for a long time, we might not resolve the issue.

A-ha! That explains the problem. See, here’s the pattern I was using:

  1. When someone requests a file,
  2. fetch that file from the network,
  3. create a copy of the file and cache it,
  4. return the contents.

Step 1 is the event listener:

// 1. When someone requests a file
addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  fetchEvent.respondWith(
    // …steps 2, 3, and 4 go in here…
  );
});

Steps 2, 3, and 4 are inside that respondWith:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  caches.open(cacheName)
  .then( cache => {
    cache.put(request, copy);
  })
  // 4. return the contents.
  return responseFromFetch;
})

Step 4 might well complete while step 3 is still running (remember, everything in a service worker script is asynchronous so even though I’ve written out the steps sequentially, you never know what order the steps will finish in). That’s why I’m wrapping that third step inside fetchEvent.waitUntil:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  fetchEvent.waitUntil(
    caches.open(cacheName)
    .then( cache => {
      cache.put(request, copy);
    })
  );
  // 4. return the contents.
  return responseFromFetch;
})

If a browser (like Samsung Internet) doesn’t understand the bit where I say fetchEvent.waitUntil, then it will throw an error and execute the catch clause. That’s where I have my fifth and final step: “try looking in the cache instead, but if that fails, show the offline page”:

.catch( fetchError => {
  console.log(fetchError);
  return caches.match(request)
  .then( responseFromCache => {
    return responseFromCache || caches.match('/offline');
  });
})

Normally in this kind of situation, I’d use feature detection to check whether a browser understands a particular API method. But it’s a bit tricky to test for support for asynchronous waitUntil. That’s okay. I can use a try/catch statement instead. Here’s what my revised code looks like:

fetch(request)
.then( responseFromFetch => {
  let copy = responseFromFetch.clone();
  try {
    fetchEvent.waitUntil(
      caches.open(cacheName)
      .then( cache => {
        cache.put(request, copy);
      })
    );
  } catch (error) {
    console.log(error);
  }
  return responseFromFetch;
})

Now I’ve managed to localise the error. If a browser doesn’t understand the bit where I say fetchEvent.waitUntil, it will execute the code in the catch clause, and then carry on as usual. (I realise it’s a bit confusing that there are two different kinds of catch clauses going on here: on the outside there’s a .then()/.catch() combination; inside is a try{}/catch{} combination.)

At some point, when support for async waitUntil statements is universal, this precautionary measure won’t be needed, but for now wrapping them inside try doesn’t do any harm.

There are a few places in chapter five of Going Offline—the chapter about service worker strategies—where I show examples using async waitUntil. There’s nothing wrong with the code in those examples, but if you want to play it safe (especially while Samsung Internet doesn’t support async waitUntil), feel free to wrap those examples in try/catch statements. But I’m not going to make those changes part of the errata for the book. In this case, the issue isn’t with the code itself, but with browser support.

A framework for web performance

Here at Clearleft, we’ve recently been doing some front-end consultancy. That prompted me to jot down thoughts on design principles and performance:

We continued with some more performance work this week. Having already covered some of the nitty-gritty performance tactics like font-loading, image optimisation, etc., we wanted to take a step back and formulate an ongoing strategy for performance.

When it comes to web performance, the eternal question is “What should we measure?” The answer to that question will determine where you then concentrate your efforts—whatever it is you’re measuring, that’s what you’ll be looking to improve.

I started by drawing a distinction between measurements of quantities and measurements of time. Quantities are quite easy to measure. You can measure these quantities using nothing more than browser dev tools:

  • overall file size (page weight + assets), and
  • number of requests.

I think it’s good to measure these quantities, and I think it’s good to have a performance budget for them. But I also think they’re table stakes. They don’t actually tell you much about the impact that performance is having on the user experience. For that, we need to enumerate moments in time:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, and
  • time to first meaningful interaction.

There’s one more moment in time, which is the time until DOM content is loaded. But I’m not sure that has a direct effect on how performance is perceived, so it feels like it belongs more in the category of quantities than time.

Next, we listed out all the factors that could affect each of the moments in time. For example, the time to first byte depends on the speed of the network that the user is on. It also depends on how speedily your server (or Content Delivery Network) can return a response. Meanwhile, time to first render is affected by the speed of the user’s network, but it’s also affected by how many blocking elements are on the critical path.

By listing all the factors out, we can draw a distinction between the factors that are outside of our control, and the factors that we can do something about. So while we might not be able to do anything about the speed of the user’s network, we might well be able to optimise the speed at which our server returns a response, or we might be able to defer some assets that are currently blocking the critical path.

Factors
1st byte
  • server speed
  • network speed
1st render
  • network speed
  • critical path assets
1st meaningful paint
  • network speed
  • font-loading strategy
  • image optimisation
1st meaningful interaction
  • network speed
  • device processing power
  • JavaScript size

So far, everything in our list of performance-affecting factors is related to the first visit. It’s worth drawing up a second list to document all the factors for subsequent visits. This will look the same as the list for first visits, but with the crucial difference that caching now becomes a factor.

First visit and repeat visit factors:
1st byte
  • first visit: server speed, network speed
  • repeat visit: server speed, network speed, caching
1st render
  • first visit: network speed, critical path assets
  • repeat visit: network speed, critical path assets, caching
1st meaningful paint
  • first visit: network speed, font-loading strategy, image optimisation
  • repeat visit: network speed, font-loading strategy, image optimisation, caching
1st meaningful interaction
  • first visit: network speed, device processing power, JavaScript size
  • repeat visit: network speed, device processing power, JavaScript size, caching

Alright. Now it’s time to get some numbers for each of the four moments in time. I use Web Page Test for this. Choose a realistic setting, like 3G on an Android from the East coast of the USA. Under advanced settings, be sure to select “First View and Repeat View” so that you can put those numbers in two different columns.

Here are some numbers for adactio.com:

  • 1st byte: 1.476 seconds (first visit), 1.215 seconds (repeat visit)
  • 1st render: 2.633 seconds (first visit), 1.930 seconds (repeat visit)
  • 1st meaningful paint: 2.633 seconds (first visit), 1.930 seconds (repeat visit)
  • 1st meaningful interaction: 2.868 seconds (first visit), 2.083 seconds (repeat visit)

I’m getting the same numbers for first render as first meaningful paint. That tells me that there’s no point in trying to optimise my font-loading, for example …which makes total sense, because adactio.com isn’t using any web fonts. But on a different site, you might see a big gap between those numbers.

I am seeing a gap between time to first byte and time to first render. That tells me that I might be able to get some blocking requests off the critical path. Sure enough, I’m currently referencing an external stylesheet in the head of adactio.com—if I were to inline critical styles and defer the loading of that stylesheet, I should be able to narrow that gap.
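
As a sketch of what that might look like (assuming the critical styles are already inlined in the head, and using a made-up path for the stylesheet), a few lines of script at the end of the document can load the full stylesheet without blocking the first render:

// Placed near the end of the body, after the critical styles have been inlined in the head.
const stylesheet = document.createElement('link');
stylesheet.rel = 'stylesheet';
stylesheet.href = '/css/global.css'; // hypothetical path
document.head.appendChild(stylesheet);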

A straightforward site like adactio.com isn’t going to have much to worry about when it comes to the time to first meaningful interaction, but on other sites, this can be a significant bottleneck. If you’re sending UI elements in the initial HTML, but then waiting for JavaScript to “hydrate” those elements into working, the user can end up in an uncanny valley of tapping on page elements that look fine, but aren’t ready yet.

My point is, you’re going to see very different distributions of numbers depending on the kind of site you’re testing. There’s no one-size-fits-all metric to focus on.

Now that you’ve got numbers for how your site is currently performing, you can create two new columns: one of those is a list of first-visit targets, the other is a list of repeat-visit targets for each moment in time. Try to keep them realistic.

For example, if I could reduce the time to first render on adactio.com by 0.5 seconds, my goals would look like this:

  • 1st byte: 1.476 seconds (first visit goal), 1.215 seconds (repeat visit goal)
  • 1st render: 2.133 seconds (first visit goal), 1.430 seconds (repeat visit goal)
  • 1st meaningful paint: 2.133 seconds (first visit goal), 1.430 seconds (repeat visit goal)
  • 1st meaningful interaction: 2.368 seconds (first visit goal), 1.583 seconds (repeat visit goal)

See how the 0.5 seconds saving cascades down into the other numbers?

Alright! Now I’ve got something to aim for. It might also be worth having an extra column to record which of the moments in time are high priority, which are medium priority, and which are low priority.

Priority
  • 1st byte: medium
  • 1st render: high
  • 1st meaningful paint: low
  • 1st meaningful interaction: low

Your goals and priorities may be quite different.

I think this is a fairly useful framework for figuring out where to focus when it comes to web performance. If you’d like to give it a go, I’ve made a web performance chart for you to print out and fill in. Here’s a PDF version if that’s easier for printing. Or you can download the HTML version if you want to edit it.

I have to say, I’m really enjoying the front-end consultancy work we’ve been doing at Clearleft around performance and related technologies, like offline functionality. I’d like to do more of it. If you’d like some help in prioritising performance at your company, please get in touch. Let’s make the web faster together.

Several people are writing

Anne Gibson writes:

It sounds easy to make writing a habit, but like every other habit that doesn’t involve addictive substances like nicotine or dopamine it’s hard to start and easy to quit.

Alice Bartlett writes:

Anyway, here we are, on my blog, or in your RSS reader. I think I’ll do weaknotes. Some collections of notes. Sometimes. Not very well written probably. Generally written with the urgency of someone who is waiting for a baby to wake up.

Patrick Rhone writes:

Bottom line; please place any idea worth more than 280 characters and the value Twitter places on them (which is zero) on a blog that you own and/or can easily take your important/valuable/life-changing ideas with you and make them easy for others to read and share.

Sara Soueidan writes:

What you write might help someone understand a concept that you may think has been covered enough before. We each have our own unique perspectives and writing styles. One writing style might be more approachable to some, and can therefore help and benefit a large (or even small) number of people in ways you might not expect.

Just write.

Even if only one person learns something from your article, you’ll feel great, and that you’ve contributed — even if just a little bit — to this amazing community that we’re all constantly learning from. And if no one reads your article, then that’s also okay. That voice telling you that people are just sitting somewhere watching our every step and judging us based on the popularity of our writing is a big fat pathetic attention-needing liar.

Laura Kalbag writes:

The web can be used to find common connections with folks you find interesting, and who don’t make you feel like so much of a weirdo. It’d be nice to be able to do this in a safe space that is not being surveilled.

Owning your own content, and publishing to a space you own can break through some of these barriers. Sharing your own weird scraps on your own site makes you easier to find by like-minded folks.

Brendan Dawes writes:

At times I think “will anyone reads this, does anyone care?”, but I always publish it anyway — and that’s for two reasons. First it’s a place for me to find stuff I may have forgotten how to do. Secondly, whilst some of this stuff is seemingly super-niche, if one person finds it helpful out there on the web, then that’s good enough for me. After all I’ve lost count of how many times I’ve read similar posts that have helped me out.

Robin Rendle writes:

My advice after learning from so many helpful people this weekend is this: if you’re thinking of writing something that explains a weird thing you struggled with on the Internet, do it! Don’t worry about the views and likes and Internet hugs. If you’ve struggled with figuring out this thing then be sure to jot it down, even if it’s unedited and it uses too many commas and you don’t like the tone of it.

Khoi Vinh writes:

Maybe you feel more comfortable writing in short, concise bullets than at protracted, grandiose length. Or maybe you feel more at ease with sarcasm and dry wit than with sober, exhaustive argumentation. Or perhaps you prefer to knock out a solitary first draft and never look back rather than polishing and tweaking endlessly. Whatever the approach, if you can do the work to find a genuine passion for writing, what a powerful tool you’ll have.

Amber Wilson writes:

I want to finally begin writing about psychology. A friend of mine shared his opinion that writing about this is probably best left to experts. I tried to tell him I think that people should write about whatever they want. He argued that whatever he could write about psychology has probably already been written about a thousand times. I told him that I’m going to be writer number 1001, and I’m going to write something great that nobody has written before.

Austin Kleon writes:

Maybe I’m weird, but it just feels good. It feels good to reclaim my turf. It feels good to have a spot to think out loud in public where people aren’t spitting and shitting all over the place.

Tim Kadlec writes:

I write to understand and remember. Sometimes that will be interesting to others, often it won’t be.

But it’s going to happen. Here, on my own site.

You write…

The top four web performance challenges

Danielle and I have been doing some front-end consultancy for a local client recently.

We’ve both been enjoying it a lot—it’s exhausting but rewarding work. So if you’d like us to come in and spend a few days with your company’s dev team, please get in touch.

I’ve certainly enjoyed the opportunity to watch Danielle in action, leading a workshop on refactoring React components in a pattern library. She’s incredibly knowledgeable in that area.

I’m clueless when it comes to React, but I really enjoy getting down to the nitty-gritty of browser features—HTML, CSS, and JavaScript APIs. Our skillsets complement one another nicely.

This recent work was what prompted my thoughts around the principles of robustness and least power. We spent a day evaluating a continuum of related front-end concerns: semantics, accessibility, performance, and SEO.

When it came to performance, a lot of the work was around figuring out the most suitable metric to prioritise:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, or
  • time to first meaningful interaction.

And that doesn’t even cover the more easily-measurable numbers like:

  • overall file size,
  • number of requests, or
  • PageSpeed Insights score.

One outcome was to realise that there’s a tendency (in performance, accessibility, or SEO) to focus on what’s easily measurable, not because it’s necessarily what matters, but precisely because it is easy to measure.

Then we got down to some nuts’n’bolts technology decisions. I took a step back and looked at the state of performance across the web. I thought it would be fun to rank the most troublesome technologies in order of tricksiness. I came up with a top four list.

Here we go, counting down from four to the number one spot…

4. Web fonts

Coming in at number four, it’s web fonts. Sometimes it’s the combined weight of multiple font files that’s the problem, but more often than not, it’s the perceived performance that suffers (mostly because of when the web fonts appear).

Fortunately there’s a straightforward question to ask in this situation: WWZD—What Would Zach Do?
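
To give a flavour of the kind of strategy Zach writes about (this is a generic sketch rather than his exact recipe, and the font name is made up), you can load the web font with the CSS Font Loading API and only opt in to it once it’s ready:

if ('fonts' in document) {
    document.fonts.load('1em ExampleSerif') // hypothetical web font
    .then( () => {
        // The CSS can switch font-family when this class is present,
        // so text stays visible in a fallback font until the web font arrives.
        document.documentElement.classList.add('fonts-loaded');
    });
}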

3. Images

At the number three spot, it’s images. There are more of them and they just seem to be getting bigger all the time. And yet, we have more tools at our disposal than ever—better file formats, and excellent browser support for responsive images. Heck, we’re even getting the ability to lazy load images in HTML now.

So, as with web fonts, it feels like the impact of images on performance can be handled, as long as you give them some time and attention.

2. Our JavaScript

Just missing out on making the top spot is the JavaScript that we send down the pipe to our long-suffering users. There’s nothing wrong with the code itself—I’m sure it’s very good. There’s just too damn much of it. And that’s a real performance bottleneck, especially on mobile.

So stop sending so much JavaScript—a solution as simple as Monty Python’s instructions for playing the flute.

1. Other people’s JavaScript

At number one with a bullet, it’s all the crap that someone else tells us to put on our websites. Analytics. Ads. Trackers. Beacons. “It’s just one little script”, they say. And then that one little script calls in another, and another, and another.

It’s so disheartening when you’ve devoted your time and energy to your web font loading strategy, and optimising your images, and unbundling your JavaScript …only to have someone else’s JavaScript just shit all over your nice performance budget.

Here’s the really annoying thing: when I go to performance conferences, or participate in performance discussions, you know who’s nowhere to be found? The people making those third-party scripts.

The narrative around front-end performance is that it’s up to us developers to take responsibility for how our websites perform. But by far the biggest performance impact comes from third-party scripts.

There is a solution to this, but it’s not a technical one. We could refuse to add overweight (and in many cases, unethical) third-party scripts to the sites we build.

I have many, many issues with Google’s AMP project, but I completely acknowledge that it solves a political problem:

No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts. This is A-okay with me.

The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web.

But how can we take that lesson from AMP and apply it to all our web pages? If we simply refuse to be the one to add those third-party scripts, we get fired, and somebody else comes in who is willing to poison web pages with third-party scripts. There’s nothing to stop companies doing that.

Unless…

Suppose we were to all make a pact that we would stand in solidarity with any of our fellow developers in that sort of situation. A sort of joining-together. A union, if you will.

There is power in a factory, power in the land, power in the hands of the worker, but it all amounts to nothing if together we don’t stand.

There is power in a union.

Robustness and least power

There’s a great article by Steven Garrity over on A List Apart called Design with Difficult Data. It runs through the advantages of using unusual content to stress-test interfaces, referencing Postel’s Law, AKA the robustness principle:

Be conservative in what you send, be liberal in what you accept.

Even though the robustness principle was formulated for packet-switching, I see it at work in all sorts of disciplines, including design. A good example is in best practices for designing forms:

Every field you ask users to fill out requires some effort. The more effort is needed to fill out a form, the less likely users will complete the form. That’s why the foundational rule of form design is shorter is better — get rid of all inessential fields.

In other words, be conservative in the number of form fields you send to users. But then, when it comes to users filling in those fields:

It’s very common for a few variations of an answer to a question to be possible; for example, when a form asks users to provide information about their state, and a user responds by typing their state’s abbreviation instead of the full name (for example, CA instead of California). The form should accept both formats, and it’s the developer job to convert the data into a consistent format.

In other words, be liberal in what you accept from users.
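
As a small illustration of that liberal acceptance (the mapping here is hypothetical and truncated), the conversion to a consistent format can happen in a little normalising function instead of being pushed back onto the user:

// Accept either 'CA' or 'California' (in any casing) and store the abbreviation.
const states = {
    'california': 'CA',
    'ca': 'CA',
    'new york': 'NY',
    'ny': 'NY'
    // …and so on for the rest.
};

function normaliseState(input) {
    const key = input.trim().toLowerCase();
    return states[key] || null; // null means "ask the user to check their answer"
}

normaliseState(' California '); // 'CA'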

I find the robustness principle to be an immensely powerful way of figuring out how to approach many design problems. When it comes to figuring out what specific tools or technologies to use, there’s an equally useful principle: the rule of least power:

Choose the least powerful language suitable for a given purpose.

On the face of it, this sounds counter-intuitive; why forego a powerful technology in favour of something less powerful?

Well, power comes with a price. Powerful technologies tend to be more complex, which means they can be trickier to use and trickier to swap out later.

Take the front-end stack, for example: HTML, CSS, and JavaScript. HTML and CSS are declarative, so you don’t get as much precise control as you get with an imperative language like JavaScript. But JavaScript comes with a steeper learning curve, and it’s also more fragile: browsers will happily skip over any HTML or CSS they don’t understand, whereas a single JavaScript error can stop a script from running at all.

As a general rule, it’s always worth asking if you can accomplish something with a less powerful technology:

In the web front-end stack — HTML, CSS, JS, and ARIA — if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

  • Instead of using JavaScript to do animation, see if you can do it in CSS instead.
  • Instead of using JavaScript to do simple client-side form validation, try to use HTML input types and attributes like required.
  • Instead of using ARIA to give a certain role value to a div or span, try to use a more suitable HTML element instead.

It sounds a lot like the KISS principle: Keep It Simple, Stupid. But whereas the KISS principle can be applied within a specific technology—like keeping your CSS manageable—the rule of least power is all about evaluating technology; choosing the most appropriate technology for the task at hand.

There are some associated principles, like YAGNI: You Ain’t Gonna Need It. That helps you avoid picking a technology that’s too powerful for your current needs, but which might be suitable in the future: premature optimisation. Or, as Rachel put it, stop solving problems you don’t yet have:

So make sure every bit of code added to your project is there for a reason you can explain, not just because it is part of some standard toolkit or boilerplate.

There’s no shortage of principles, laws, and rules out there, and I find many of them very useful, but if I had to pick just two that are particularly applicable to my work, they would be the robustness principle and the rule of least power.

After all, if they’re good enough for Tim Berners-Lee…

Sapiens

I finally got around to reading Sapiens by Yuval Noah Harari. It’s one of those books that I kept hearing about from smart people whose opinions I respect. But I have to say, my reaction to the book reminded me of when I read Matt Ridley’s The Rational Optimist:

It was an exasperating read.

At first, I found the book to be a rollicking good read. It told the sweep of history in an engaging way, backed up with footnotes and references to prime sources. But then the author transitions from relaying facts to taking flights of fancy without making any distinction between the two (the only “tell” is that the references dry up).

Just as Matt Ridley had personal bugbears that interrupted the flow of The Rational Optimist, Yuval Noah Harari has fixated on some ideas that make a mess of the narrative arc of Sapiens. In particular, he believes that the agricultural revolution was, as he describes it, “history’s biggest fraud.” In the absence of any recorded evidence for this, he instead provides idyllic descriptions of the hunter-gatherer lifestyle that have as much foundation in reality as the paleo diet.

When the book avoids that particular historical conspiracy theory, it fares better. But even then, the author seems to think he’s providing genuinely new insights into matters of religion, economics, and purpose, when in fact, he’s repeating the kind of “college thoughts” that have been voiced by anyone who’s ever smoked a spliff.

I know I’m making it sound terrible, and it’s not terrible. It’s just …generally not that great. And when it is great, it only makes the other parts all the more frustrating. There’s a really good book in Sapiens, but unfortunately it’s interspersed with some pretty bad editorialising. I have to agree with Galen Strawson’s review:

Much of Sapiens is extremely interesting, and it is often well expressed. As one reads on, however, the attractive features of the book are overwhelmed by carelessness, exaggeration and sensationalism.

Towards the end of Sapiens, Yuval Noah Harari casts his eye on our present-day world and starts to speculate on the future. This is the point when I almost gave myself an injury with the amount of eye-rolling I was doing. His ideas on technology, computers, and even science fiction are embarrassingly childish and incomplete. And the bad news is that his subsequent books—Homo Deus and 21 Lessons For The 21st Century—are entirely speculations about humanity and technology. I won’t be touching those with all the ten foot barge poles in the world.

In short, although there is much to enjoy in Sapiens, particularly in the first few chapters, I can’t recommend it.

If you’re looking for a really good book on the fascinating history of our species, read A Brief History of Everyone Who Ever Lived by Adam Rutherford. That’s one I can recommend without reservation.

Flexibility

Over on A List Apart, you can read the first chapter from Tim’s new book, Flexible Typesetting.

I was lucky enough to get an advance preview copy and this book is ticking all my boxes. I mean, I knew I would love all the type nerdery in the book, but there’s a bigger picture too. In chapter two, Tim makes this provocative statement:

Typography is now optional. That means it’s okay for people to opt out.

That’s an uncomfortable truth for designers and developers, but it gets to the heart of what makes the web so great:

Of course typography is valuable. Typography may now be optional, but that doesn’t mean it’s worthless. Typographic choices contribute to a text’s meaning. But on the web, text itself (including its structural markup) matters most, and presentational instructions like typography take a back seat. Text loads first; typography comes later. Readers are free to ignore typographic suggestions, and often prefer to. Services like Instapaper, Pocket, and Safari’s Reader View are popular partly because readers like their text the way they like it.

What Tim describes there isn’t a cause for frustration or despair—it’s a cause for celebration. When we try to treat the web as a fixed medium where we can dictate the terms that people must abide by, we’re doing them (and the web) a disservice. Instead of treating web design as a pre-made contract drawn up by the designer and presented to the user as a fait accompli, it is more materially honest to treat web design as a conversation between designer and user. Both parties should have a say.

Or as Tim so perfectly puts it in Flexible Typesetting:

Readers are typographers, too.

Console methods

Whenever I write a fetch event handler inside a service worker, my code roughly follows the same pattern. There’s a then clause which gets executed if the fetch is successful, and a catch clause in case anything goes wrong:

fetch( request)
.then( fetchResponse => {
    // Yay! It worked.
})
.catch( fetchError => {
    // Boo! It failed.
});

In my book—Going Offline—I’m at pains to point out that those arguments being passed into each clause are yours to name. In this example I’ve called them fetchResponse and fetchError but you can call them anything you want.

I always do something with the fetchResponse inside the then clause—either I want to return the response or put it in a cache.

But I rarely do anything with fetchError. Because of that, I’ve sometimes made the mistake of leaving it out completely:

fetch( request)
.then( fetchResponse => {
    // Yay! It worked.
})
.catch( () => {
    // Boo! It failed.
});

Don’t do that. I think there’s some talk of making the error argument optional, but for now, some browsers will get upset if it’s not there.

So always include that argument, whether you call it fetchError or anything else. And seeing as it’s an error, this might be a legitimate case for outputting it to the browser’s console, even in production code.

And yes, you can output to the console from a service worker. Even though a service worker can’t access anything relating to the window or document objects, the console object is still there in the worker’s global scope, so you can use it just as you would in a regular script.
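
For example, a one-liner in a service worker’s install event will show up in the console just fine:

addEventListener('install', installEvent => {
    console.log('Service worker installing…');
});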

My muscle memory when it comes to sending something to the console is to use console.log:

fetch( request)
.then( fetchResponse => {
    return fetchResponse;
})
.catch( fetchError => {
    console.log(fetchError);
});

But in this case, the console.error method is more appropriate:

fetch( request)
.then( fetchResponse => {
    return fetchResponse;
})
.catch( fetchError => {
    console.error(fetchError);
});

Now when there’s a connectivity problem, anyone with a console window open will see the error displayed bold and red.

If that seems a bit strident to you, there’s always console.warn which will still make the output stand out, but without being quite so alarmist:

fetch( request)
.then( fetchResponse => {
    return fetchResponse;
})
.catch( fetchError => {
    console.warn(fetchError);
});

That said, in this case, console.error feels like the right choice. After all, it is technically an error.