Speaking online

I really, really missed speaking at conferences in 2020. I managed to squeeze in just one meatspace presentation before everything shut down. That was in Nottingham, where Remy and I reprised our double-bill talk, How We Built The World Wide Web In Five Days.

That was pretty much all the travelling I did in 2020, apart from a joyous jaunt to Galway to celebrate my birthday shortly before the Nottingham trip. It’s kind of hilarious to look at a map of the entirety of my travel in 2020 compared to previous years.

Mind you, one of my goals for 2020 was to reduce my carbon footprint. Mission well and truly accomplished there.

But even when travel was out of the question, conference speaking wasn’t entirely off the table. I gave a brand new talk at An Event Apart Online Together: Front-End Focus in August. It was called Design Principles For The Web and I’ve just published a transcript of the presentation. I’m really pleased with how it turned out and I think it works okay as an article as well as a talk. Have a read and see what you think (or you can listen to the audio if you prefer).

Giving a talk online is …weird. It’s very different from public speaking. The public is theoretically there but you feel like you’re just talking at your computer screen. If anything, it’s more like recording a podcast than giving a talk.

Luckily for me, I like recording podcasts. So I’m going to be doing a new online talk this year. It will be at An Event Apart’s Spring Summit which runs from April 19th to 21st. Tickets are available now.

I have a pretty good idea what I’m going to talk about. Web stuff, obviously, but maybe a big picture overview this time: the past, present, and future of the web.

Time to prepare a conference talk.

2020

In 2020, I didn’t have the honour and privilege of speaking at An Event Apart in places like Seattle, Boston, and Minneapolis. I didn’t experience that rush that comes from sharing ideas with a roomful of people, getting them excited, making them laugh, sparking thoughts. I didn’t enjoy the wonderful and stimulating conversations with my peers that happen in the corridors, or over lunch, or at an after-party. I didn’t have a blast catching up with old friends or making new ones.

But the States wasn’t the only country I didn’t travel to. Closer to home, I didn’t have the opportunity to take the Eurostar and connecting trains to cities like Cologne, Lisbon, and Stockholm. I didn’t sample the food and drink of different countries.

In the summer, I didn’t travel to the west coast of Ireland for the second year in a row for the annual Willie Clancy festival of traditional Irish music. I didn’t spend each day completely surrounded by music. I didn’t play in some great sessions. I didn’t hear some fantastic and inspiring musicians.

Back here in Brighton, I didn’t go to the session in The Jolly Brewer every Wednesday evening and get lost in the tunes. I didn’t experience that wonderful feeling of making music together and having a pint or two. And every second Sunday afternoon, I didn’t pop along to The Bugle for more jigs and reels.

I didn’t walk into work most days, arrive at the Clearleft studio, and make a nice cup of coffee while chit-chatting with my co-workers. I didn’t get pulled into fascinating conversations about design and development that spontaneously bubble up when you’re in the same space as talented folks.

Every few months, I didn’t get a haircut.

Throughout the year, I didn’t make any weekend trips back to Ireland to visit my mother.

2020 gave me a lot of free time. I used that time to not write a book. And with all that extra time on my hands, I read fewer books than I had read in 2019. Oh, and on the side, I didn’t learn a new programming language. I didn’t discover an enthusiasm for exercise. I didn’t get out of the house and go for a brisk walk on most days. I didn’t start each day prepping my sourdough.

But I did stay at home, thereby slowing the spread of a deadly infectious disease. I’m proud of that.

I did play mandolin. I did talk to my co-workers through a screen. I did eat very well—and very local and seasonal. I did watch lots of television programmes and films. I got by. Sometimes I even took pleasure in this newly-enforced lifestyle.

I made it through 2020. And so did you. That’s an achievement worth celebrating—congratulations!

Let’s see what 2021 doesn’t bring.

Books I read in 2020

I only read twenty books this year. Considering the ample amount of free time I had, that’s not great. But I’m not going to beat myself up about it. Yes, I may have spent more time watching television than reading, but I’m cutting myself some slack. It was 2020, for crying out loud.

Anyway, here’s my annual round-up with reviews. Anything with three stars is good. Four stars is really good. Five stars is practically unheard of. As usual, I tried to get an equal balance of fiction and non-fiction.

Raven Stratagem by Yoon Ha Lee

★★★☆☆

An enjoyable sequel to Ninefox Gambit. There are some convoluted politics but that all seems positively straightforward after the brain-bending calendrical warfare introduced in the first book.

The Human Use Of Human Beings: Cybernetics And Society by Norbert Wiener

★★★☆☆

The ur-text on systems and feedback. Reading it now is like reading a historical artifact but many of the ideas are timeless. It’s a bit dense in parts and it tries to cover life, the universe and everything, but when you remember that it was written in 1950, it’s clearly visionary.

The Word For World Is Forest by Ursula K. Le Guin

★★★☆☆

Simultaneously a ripping yarn and a spiritual meditation. It’s Vietnam and the environmental movement rolled into one (like what Avatar attempted, but this actually works).

Abolish Silicon Valley by Wendy Liu

★★★★☆

Here’s my full review.

A Short History Of Irish Traditional Music by Gearóid Ó hAllmhuráin

★★☆☆☆

A perfectly fine and accurate history of the music, but it’s a bit like reading Wikipedia. Still, it was quite the ego boost to see The Session listed in the appendix.

Machines Like Me by Ian McEwan

★★★☆☆

McEwan’s first foray into science fiction is a good tale but a little clumsily told. It’s like he really wants to show how much research he put into his alternative history. There are moments when characters practically turn to the camera to say, “Imagine how the world would’ve turned out if…” It’s far from McEwan’s best but even when he’s not on top form, his writing is damn good.

The Fabric Of Reality by David Deutsch

★★★☆☆

I’ve attempted to read this before. I may have even read it all before and had everything just leak out of my head. The problem is with me, not David Deutsch who does a fine job of making complex ideas approachable. This is like a unified theory of everything.

Helliconia Winter by Brian Aldiss

★★★☆☆

The third and final part of Aldiss’s epic is just as enjoyable as the previous two. The characters aren’t the main attraction here. It’s all about the planetary ballet.

Uncanny Valley by Anna Wiener

★★★★☆

A terrific memoir. It’s open and honest, and just snarky enough when it needs to be.

A Wizard Of Earthsea, The Tombs Of Atuan, and The Farthest Shore by Ursula K. Le Guin

★★★★☆

There’s a real pleasure in finally reading books that you should’ve read years ago. I can only imagine how wonderful it would’ve been to read these as a teenager. It’s an immersive world but there’s something melancholy about the writing that makes the experience of reading less escapist and more haunting.

Superior: The Return of Race Science by Angela Saini

★★★★★

Absolutely superb! I liked Angela Saini’s previous book, Inferior, but I loved this. It’s a harrowing read at times, but written with incredible clarity and empathy. I can’t recommend this highly enough.

Purple People by Kate Bulpitt

★★★★☆

Full disclosure: Kate is a friend of mine, so I probably can’t evaluate her book in a disinterested way. That said, I enjoyed the heck out of this and I think you will too. It’s very hard to classify and I think that’s what makes it so enjoyable. Technically, it’s sci-fi I suppose—an alternative history tale, probably—but it doesn’t feel like it. It’s all about the characters, and they’re all vividly realised. Honestly, I’m not sure how best to describe it—other than it being like the inside of Kate’s head—but the description of it being “a jolly dystopia” comes close. Take a chance and give it a go.

How to Argue With a Racist: History, Science, Race and Reality by Adam Rutherford

★★★☆☆

Good stuff from Adam Rutherford, though not his best. If I hadn’t already read Angela Saini’s Superior I might’ve rated this higher, but it pales somewhat by comparison. Still, it was interesting to see the same subject matter tackled in two different ways.

Agency by William Gibson

★★☆☆☆

There’s nothing particularly wrong with Agency, but there’s nothing particularly great about it either. It’s just there. Maybe I’m being overly harsh because the first book, The Peripheral, was absolutely brilliant. This reminded me of reading Gibson’s Spook Country, which left me equally unimpressed. That book was sandwiched between the brilliant Pattern Recognition and the equally brilliant Zero History. That bodes well for the forthcoming third book in this series. This second book just feels like filler.

Last Night’s Fun: In And Out Of Time With Irish Music by Ciaran Carson

★★★☆☆

It’s hard to describe this book. Memoir? Meditation? Blog? I kind of like that about it, but I can see how it divides opinion. Some people love it. Some people hate it. I thought it was enjoyable enough. But it doesn’t matter what I think. This book is doing its own thing.

Revenant Gun by Yoon Ha Lee

★★★☆☆

The third book in the Machineries of Empire series has much less befuddlement. It’s even downright humorous in places. If you liked Ninefox Gambit and Raven Stratagem, you’ll enjoy this too.

A Paradise Built in Hell: The Extraordinary Communities That Arise in Disaster by Rebecca Solnit

★★★☆☆

The central thesis of this book is refuting the Hobbesian view of humanity as being one crisis away from breakdown. I feel like that argument was made more strongly in Critical Mass: How One Thing Leads to Another by Philip Ball. But where this book shines is in its vivid description of past catastrophes and their aftermaths: the San Francisco fire; the Halifax explosion; the Mexico City earthquake; and the culmination with Katrina hitting New Orleans. I was less keen on the more blog-like personal musings but overall, this is well worth reading.

Blindsight by Peter Watts

★★☆☆☆

I like a good tale of first contact, and I had heard that this one had a good twist on the Fermi paradox. But it felt a bit like a short story stretched to the length of a novel. It would make for a good Twilight Zone episode but it didn’t sustain my interest.

This is How You Lose the Time War by Amal El-Mohtar and Max Gladstone

I’m still reading this Hugo-winning novella and enjoying it so far.


Alright, time to wrap up this look back at the books I read in 2020 and pick my favourites: one fiction and one non-fiction.

My favourite non-fiction book of the year was easily Superior by Angela Saini. Read it. It’s superb.

What about fiction? Hmm …this is tricky.

You know what? I’m going to go for Purple People by Kate Bulpitt. Yes, she’s a friend (“it’s a fix!”) but it genuinely made an impression on me: it was an enjoyable romp while I was reading it, and it stayed with me afterwards too.

Head on over to Bookshop and pick up a copy.

SVGs in dark mode

I added a dark mode to my site last year. Since then I’ve been idly toying with the addition of a dark mode to The Session too.

As with this site, the key to adding a dark mode was switching to custom properties for color and background-color declarations. But my plans kept getting derailed by the sheet music on the site. The sheet music is delivered as SVG generated by ABCJS which hard-codes the colour in stroke and fill attributes:

fill="#000000" stroke="#000000"

When I was describing CSS recently I mentioned the high specificity of inline styles:

Whereas external CSS and embedded CSS don’t have any effect on specificity, inline styles are positively radioactive with specificity.

Given that harsh fact of life, I figured it would be nigh-on impossible to over-ride the colour of the sheet music. But then I realised I was an idiot.

The stroke and fill attributes in SVG are presentational but they aren’t inline styles. They’re attributes. They have no effect on specificity. I can easily over-ride them in an external style sheet.

In fact, if I had actually remembered what I wrote when I was adding a dark mode to adactio.com, I could’ve saved myself some time:

I have SVGs of sparklines on my homepage. The SVG has a hard-coded colour value in the stroke attribute of the path element that draws the sparkline. Fortunately, this can be over-ridden in the style sheet:

svg.activity-sparkline path {
  stroke: var(--text-color);
}

I was able to do something similar on The Session. I used the handy currentColor keyword in CSS so that the sheet music matched the colour of the text:

svg path {
  fill: currentColor;
}
svg path:not([stroke="none"]) {
  stroke: currentColor;
}

Et voila! I now had light-on-dark sheet music for The Session’s dark mode all wrapped up in a prefers-color-scheme: dark media query.
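
Roughly speaking, the custom-property set-up looks something like this (the colour values here are just illustrative, not the actual ones on The Session):

:root {
  --background-color: #ffffff;
  --text-color: #333333;
}
@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #222222;
    --text-color: #eeeeee;
  }
}
body {
  background-color: var(--background-color);
  color: var(--text-color);
}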

I pushed out the new feature and started getting feedback. It could be best summarised as “Thanks. I hate it.”

It turns out that while people were perfectly fine with a dark mode that inverts the colours of text, it felt really weird and icky to do the same with sheet music.

On the one hand, this seems odd. After all, sheet music is a writing system like any other. If you’re fine with light text on a dark background, why doesn’t that hold for light sheet music on a dark background?

But on the other hand, sheet music is also like an image. And we don’t invert the colours of our images when we add a dark mode to our CSS.

With that in mind, I went back to the drawing board and this time treated the sheet music SVGs as being intrinsically dark-on-light, rather than a stylistic choice. It meant a few more CSS rules, but I’m happy with the final result. You can see it in action by visiting a tune page and toggling your device’s “appearance” settings between light and dark.
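
Concretely, that meant rules along these lines inside the dark mode media query (the .notation class is just a stand-in for however the sheet music’s container is marked up):

@media (prefers-color-scheme: dark) {
  .notation {
    /* keep a light “paper” background behind the sheet music */
    background-color: #ffffff;
  }
  .notation svg path {
    fill: #000000;
  }
  .notation svg path:not([stroke="none"]) {
    stroke: #000000;
  }
}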

If you’re a member of The Session, I also added a toggle switch to your member profile so you can choose dark or light mode regardless of your device settings.

Web Audio API weirdness on iOS

I told you about how I’m using the Web Audio API on The Session to generate synthesised audio of each tune setting. I also said:

Except for some weirdness on iOS that I had to fix.

Here’s that weirdness…

Let me start by saying that this isn’t anything to do with requiring a user interaction (the Web Audio API insists on some kind of user interaction to prevent developers from having auto-playing sound on websites). All of my code related to the Web Audio API is inside a click event handler. This is a different kind of weirdness.

First of all, I noticed that if you pressed play on the audio player when your iOS device is on mute, then you don’t hear any audio. Seems logical, right? Except that if, using the same device still set to mute, you press play on a video or audio element, the sound plays just fine. You can confirm this by going to Huffduffer and pressing play on any of the audio elements there, even when your iOS device is set on mute.

So it seems that iOS has different criteria for the Web Audio API than it does for audio or video. Except it isn’t quite that straightforward.

On some pages of The Session, as well as the audio player for tunes (using the Web Audio API) there are also embedded YouTube videos (using the video element). Press play on the audio player; no sound. Press play on the YouTube video; you get sound. Now go back to the audio player and suddenly you do get sound!

It’s almost like playing a video or audio element “kicks” the browser into realising it should be playing the sound from the Web Audio API too.

This was happening on iOS devices set to mute, but I was also getting reports of it happening on devices with the sound on. But it’s that annoyingly intermittent kind of bug that’s really hard to reproduce consistently. Sometimes the sound doesn’t play. Sometimes it does.

Following my theory that the browser needs a “kick” to get into the right frame of mind for the Web Audio API, I resorted to a messy little hack.

In the event handler for the audio player, I generate the “kick” by playing a second of silence using the JavaScript equivalent of the audio element:

var audio = new Audio('1-second-of-silence.mp3');
audio.play();
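
For context, that snippet sits right at the top of the click handler, before any of the Web Audio code runs (a sketch; the playButton element is just a stand-in):

playButton.addEventListener('click', function () {
  // the "kick": an audio element playing a second of silence…
  var audio = new Audio('1-second-of-silence.mp3');
  audio.play();
  // …then the Web Audio API code runs as normal
});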

I’m not proud of that. It’s so hacky that I’ve even wrapped the code in some user-agent sniffing on the server, and I never do user-agent sniffing!

Still, if you ever find yourself getting weird but inconsistent behaviour on iOS using the Web Audio API, this nasty little hack could help.

Audio

I spent the last couple of weekends rolling out a new feature on The Session. It involves playing audio in a web page. No big deal these days, right? But the history involves some old file formats…

The first venerable format is ABC notation. File extension: .abc, mime type: text/vnd.abc. It’s an ingenious text format for musical notation using ASCII. The metadata of the piece of music is defined in JSON-like key/value pairs. Then the contents are encoded with letters: A, B, C, etc. Uppercase and lowercase denote different octaves. Numbers can be used for note lengths.
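
For example, a made-up jig might look something like this in ABC (the lines before the tune itself are the key/value metadata: reference number, title, rhythm, metre, default note length, and key):

X: 1
T: Example Jig
R: jig
M: 6/8
L: 1/8
K: D
DFA AFA|Bcd AFD|DFA AFA|BdB AFD|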

The format was created by Chris Walshaw in 1997 when dial-up was the norm. With ABC, people were able to swap tunes on email lists or bulletin boards without transferring weighty image or sound files. If you had ABC software on your computer, you could convert that lightweight text file into sheet music …or audio.

That brings me to the second old format: midi files. File extension: .mid, mime-type: audio/midi. Like ABC, it’s a lightweight format for encoding the instructions for music instead of the music itself.

Think of it like SVG: instead of storing the final pixels of an image, SVG stores the instructions for drawing the image instead. The instructions in a midi file are like “play this note for this long on this instrument.” Again, as with ABC, you need some software to turn the instructions into sound.

There was a time when lots of software could play midi files. QuickTime on the Mac, for example. You could even embed midi files in web pages. I mean literally embed them …with the embed element. No Geocities page was complete without an autoplaying midi file.

On The Session, people submit tunes in ABC format. Then, using the amazing ABCJS JavaScript library, the ABC is turned into SVG on the fly! For years I’ve also offered midi files, generated on the server from the ABC notation.

But times have changed. These days it’s hard to find software that plays midi files. QuickTime doesn’t do it anymore. And you’d need to go to the app store on iOS to find a midi file player. It’s time to phase out the midi files on The Session.

I still want to provide automatically-generated audio though. Fortunately ABCJS gives me a way to do this. But instead of using the old technology of midi files, it uses a more modern browser feature: the Web Audio API.

The end result sounds like a midi file, but the underlying technique is more like a synthesiser. There’s a separate mp3 file for each note. The JavaScript figures out how long each “sample” needs to be played for, strings them all together, and outputs them with Web Audio. So you’ve got cutting-edge browser technology recreating a much older file format. Paul Rosen—the creator of ABCJS—has a presentation explaining how it all works under the hood.
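
Here’s a rough sketch of the underlying idea (not ABCJS’s actual code, and the soundfont URL is made up): fetch the mp3 sample for a single note, decode it with the Web Audio API, and schedule it for playback.

var context = new (window.AudioContext || window.webkitAudioContext)();

function playNote(url, startTime, duration) {
  // fetch and decode the mp3 sample for one note…
  return fetch(url)
    .then(function (response) { return response.arrayBuffer(); })
    .then(function (data) { return context.decodeAudioData(data); })
    .then(function (buffer) {
      // …then schedule it to start and stop at the right moments
      var source = context.createBufferSource();
      source.buffer = buffer;
      source.connect(context.destination);
      source.start(startTime);
      source.stop(startTime + duration);
    });
}

// play a hypothetical A4 sample for half a second, starting now
playNote('/soundfonts/A4.mp3', context.currentTime, 0.5);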

Not only is there a separate short mp3 file for each note in seven octaves, but if you want the sound of a different instrument, you need samples for all seven octaves in that instrument. They’re called soundfonts.

Paul provides soundfonts for ABCJS. It’s a repo that was forked from this repo from Benjamin Gleitzman. And here’s where it gets small worldy…

The reason why Benjamin has a repo of soundfonts is because he needed to create midi-like audio in the browser. He wanted to do this for a project on September 28th and 29th, 2013 …at Science Hack Day San Francisco!

I was there too—working on my own audio-related hack—and I remember the excellent (and winning) hack that Benjamin worked on. It was called Symphony of Satellites and it’s still online along with the promo video. Here’s Benjamin’s post-hackday write-up from seven years ago.

It’s rare that the worlds of the web and Irish music cross over. When I got to meet Paul—creator of ABCJS—at a web conference a couple of years ago it kind of blew my mind. Last weekend when I set out to dabble with a feature on The Session, I certainly didn’t expect to stumble on a connection to Science Hack Day! (Aside: the first Science Hack Day was ten years ago—yowzers!)

Anyway, I was able to get that audio playback working on The Session. Except for some weirdness on iOS that I had to fix. But that’s a hack for another day.

Ampvisory

I was very inspired by something Terence Eden wrote on his blog last year. A report from the AMP Advisory Committee Meeting:

I don’t like AMP. I think that Google’s Accelerated Mobile Pages are a bad idea, poorly executed, and almost-certainly anti-competitive.

So, I decided to join the AC (Advisory Committee) for AMP.

Like Terence, I’m not a fan of Google AMP—my initially positive reaction to it soured over time as it became clear that Google were blackmailing publishers by privileging AMP pages in Google Search. But all I ever did was bitch and moan about it on my website. Terence actually did something.

So this year I put myself forward as a candidate for the AMP advisory committee. I have no idea how the election process works (or who does the voting) but thanks to whoever voted for me. I’m now a member of the AMP advisory committee. If you look at that blog post announcing the election results, you’ll see the brief blurb from everyone who was voted in. Most of them are positively bullish on AMP. Mine is not:

Jeremy Keith is a writer and web developer dedicated to an open web. He is concerned that AMP is being unfairly privileged by Google’s search engine instead of competing on its own merits.

The good news is that my main beef with AMP is already being dealt with. I wanted exactly what Terence said:

My recommendation is that Google stop requiring that organisations use Google’s proprietary mark-up in order to benefit from Google’s promotion.

That’s happening as of May of this year. Just as well—the AMP advisory committee have absolutely zero influence on Google search. I’m not sure how much influence we have at all really.

This is an interesting time for AMP …whatever AMP is.

See, that’s been a problem with Google AMP from the start. There are multiple definitions of what AMP is. At the outset, it seemed pretty straightforward. AMP is a format. It has a doctype and rules that you have to meet in order to be “valid” AMP. Part of that ruleset involved eschewing HTML elements like img and video in favour of web components like amp-img and amp-video.

That messaging changed over time. We were told that AMP is the collection of web components. If that’s the case, then I have no problem at all with AMP. People are free to use the components or not. And if the project produces performant accessible web components, then that’s great!

But right now it’s not at all clear which AMP people are talking about, even in the advisory committee. When we discuss improving AMP, do we mean the individual components or the set of rules that qualify an AMP page being “valid”?

The use-case for AMP-the-format (as opposed to AMP-the-library-of-components) was pretty clear. If you were a publisher and you wanted to appear in the top stories carousel in Google search, you had to publish using AMP. Just using the components wasn’t enough. Your pages had to be validated as AMP-the-format.

That’s no longer the case. From May, pages that are fast enough will qualify for the top stories carousel. What will publishers do then? Will they still maintain separate AMP-the-format pages? Time will tell.

I suspect publishers will ditch AMP-the-format, although it probably won’t happen overnight. I don’t think anyone likes being blackmailed by a search engine:

An engineer at a major news publication who asked not to be named because the publisher had not authorized an interview said Google’s size is what led publishers to use AMP.

The pre-rendering (along with the lightning bolt) that happens for AMP pages in Google search might be a reason for publishers to maintain their separate AMP-the-format pages. But I suspect publishers don’t actually think the benefits of pre-rendering outweigh the costs: pre-rendered AMP-the-format pages are served from Google’s servers with a Google URL. If anything, I think that publishers will look forward to having the best of both worlds—having their pages appear in the top stories carousel, but not having their pages hijacked by Google’s so-called-cache.

Does AMP-the-format even have a future without Google search propping it up? I hope not. I think it would make everything much clearer if AMP-the-format went away, leaving AMP-the-collection-of-components. We’d finally see these components being evaluated on their own merits—usefulness, performance, accessibility—without unfair interference.

So my role on the advisory committee so far has been to push for clarification on what we’re supposed to be advising on.

I think it’s good that I’m on the advisory committee, although I imagine my opinions could easily be dismissed given my public record of dissent. I may well be fooling myself though, like those people who go to work at Facebook and try to justify it by saying they can accomplish more from inside than outside (or whatever else they tell themselves to sleep at night).

The topic I’ve volunteered to help with is somewhat existential in nature: what even is AMP? I’m happy to spend some time on that. I think it’ll be good for everyone to try to get that sorted, regardless of how you feel about the AMP project.

I have no intention of giving any of my unpaid labour towards the actual components themselves. I know AMP is theoretically open source now, but let’s face it, it’ll always be perceived as a Google-led project so Google can pay people to work on it.

That said, I’ve also recently joined a web components community group that Lea instigated. Remember she wrote that great blog post recently about the failed promise of web components? I’m not sure how much I can contribute to the group (maybe some meta-advice on the nature of good design principles?) but at the very least I can serve as a bridge between the community group and the AMP advisory committee.

After all, AMP is a collection of web components. Maybe.

Cascading Style Sheets

There are three ways—that I know of—to associate styles with markup.

External CSS

This is probably the most common. Using a link element with a rel value of “stylesheet”, you point to a URL using the href attribute. That URL is a style sheet that is applied to the current document (“the relationship of the linked resource is that it is a ‘stylesheet’ for the current document”).

<link rel="stylesheet" href="/path/to/styles.css">

In theory you could associate a style sheet with a document using an HTTP header, but I don’t think many browsers support this in practice.
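
For the record, that header would look something like this:

Link: </path/to/styles.css>; rel="stylesheet"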

You can also pull in external style sheets using the @import declaration in CSS itself, as long as the @import rule is declared at the start, before any other styles.

@import url('/path/to/more-styles.css');

When you use link rel="stylesheet" to apply styles, it’s a blocking request: the browser will fetch the style sheet before rendering the HTML. It needs to know how the HTML elements will be painted to the screen so there’s no point rendering the HTML until the CSS is parsed.

Embedded CSS

You can also place CSS rules inside a style element directly in the document. This is usually in the head of the document.

<style>
element {
    property: value;
}
</style>

When you embed CSS in the head of a document like this, there is no network request like there would be with external style sheets so there’s no render-blocking behaviour.

You can put any CSS inside the style element, which means that you could use embedded CSS to load external CSS using an @import statement (as long as that @import statement appears right at the start).

<style>
@import url('/path/to/more-styles.css');
element {
    property: value;
}
</style>

But then you’re back to having a network request.

Inline CSS

Using the style attribute you can apply CSS rules directly to an element. This is a universal attribute. It can be used on any HTML element. That doesn’t necessarily mean that the styles will work, but your markup is never invalidated by the presence of the style attribute.

<element style="property: value">
</element>

Whereas external CSS and embedded CSS don’t have any effect on specificity, inline styles are positively radioactive with specificity. Any styles applied this way are almost certain to over-ride any external or embedded styles.

You can also apply styles using JavaScript and the Document Object Model.

element.style.property = 'value';

Using the DOM style object this way is equivalent to inline styles. The radioactive specificity applies here too.

Style declarations specified in external style sheets or embedded CSS follow the rules of the cascade. Values can be over-ridden depending on the order they appear in. Combined with the separate-but-related rules for specificity, this can be very powerful. But if you don’t understand how the cascade and specificity work then the results can be unexpected, leading to frustration. In that situation, inline styles look very appealing—there’s no cascade and everything has equal specificity. But using inline styles means foregoing a lot of power—you’d be ditching the C in CSS.
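
A quick illustration of both at work:

/* same specificity: the later rule wins, so paragraphs end up red */
p { color: blue; }
p { color: red; }

/* a more specific selector beats source order: <p class="intro"> ends up green */
p.intro { color: green; }

/* and style="color: purple" on the element itself would trump all of the above */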

A common technique for web performance is to favour embedded CSS over external CSS in order to avoid the extra network request (at least for the first visit—there are clever techniques for caching an external style sheet once the HTML has already loaded). This is commonly referred to as inlining your CSS. But really it should be called embedding your CSS.

This language mix-up is not a hill I’m going to die on (that hill would be referring to blog posts as blogs) but I thought it was worth pointing out.

Lists

We often have brown bag lunchtime presentations at Clearleft. In the Before Times, this would involve a trip to Pret or Itsu to get a lunch order in, which we would then proceed to eat in front of whoever was giving the presentation. Often it’s someone from Clearleft demoing something or playing back a project, but whenever possible we’d rope in other people to swing by and share what they’re up to.

We’ve continued this tradition since making the switch to working remotely. Now the brown bag presentations happen over Zoom. This has two advantages. Firstly, if you don’t want the presenter watching you eat your lunch, you can switch your camera off. Secondly, because the presenter doesn’t have to be in Brighton, there’s no geographical limit on who could present.

Our most recent brown bag was truly excellent. I asked Léonie if she’d be up for it, and she very kindly agreed. As well as giving us a whirlwind tour of how assistive technology works on the web, she then invited us to observe her interacting with websites using a screen reader.

I’ve seen Léonie do this before and it’s always struck me as a very open and vulnerable thing to do. Think about it: the audience has more information than the presenter. We can see the website at the same time as we’re listening to Léonie and her screen reader.

We got to nominate which websites to visit. One of them—a client’s current site that we haven’t yet redesigned—was a textbook example of how important form controls are. There was a form where almost everything was hunky-dory: form fields, labels, it was all fine. But one of the inputs was a combo box. Instead of using a native select with a datalist, this was made with JavaScript. Because it was lacking the requisite ARIA additions to make it accessible, it was pretty much unusable to Léonie.

And that’s why you use the right HTML element wherever possible, kids!

The other site Léonie visited was Clearleft’s own. That was all fine. Léonie demonstrated how she’d form a mental model of a page by getting the screen reader to read out the headings. Interestingly, the nesting of headings on the Clearleft site is technically wrong—there’s a jump from an h1 to an h3—probably a result of the component-driven architecture where you don’t quite know where in the page a heading will appear. But this didn’t seem to be an issue. The fact that headings are being used at all was the more important fact. As Léonie said, there’s a lot of incorrect HTML out there so it’s no wonder that screen readers aren’t necessarily sticklers for nesting.

I’ve said it before and I’ll say it again: if you’re using headings, labelling form fields, and providing alternative text for images, you’re already doing a better job than most websites.

Headings weren’t the only way that Léonie got a feel for the page architecture. Landmark roles—like header and nav—really helped too. Inside the nav element, she also heard how many items there were. That’s because the navigation was marked up as a list: “List: six items.”

And that reminded me of the Webkit issue. On Webkit browsers like Safari, the list on the Clearleft site would not be announced as a list. That’s because the list’s bullets have been removed using CSS.

Now this isn’t the only time that screen readers pay attention to styling. If you use display: none to hide an element from sight, it will also be unavailable to screen reader users. Makes sense. But removing the semantic meaning of lists based on CSS? That seems a bit much.

There are good reasons for it though. Here’s a thread from James Craig on where this decision came from (James, by the way, is an absolute unsung hero of accessibility). It turns out that developers went overboard with lists a while back and that’s why we can’t have nice things. In over-compensating for divitis, developers ended up creating listitis, marking up anything vaguely list-like as an unordered list with styling adjusted. That was very annoying for screen reader users trying to figure out what was actually a list.

And James also asks:

If a sighted user doesn’t need to know it’s a list, why would a screen reader user need to know or want to know? Stated another way, if the visible list markers (bullets, image markers, etc.) are deemed by the designers to be visually burdensome or redundant for sighted users, why burden screen reader users with those semantics?

That’s a fair point, but the thing is …bullets maketh not the list. There are many ways of styling something that is genuinely a list that doesn’t involve bullets or image markers. White space, borders, keylines—these can all indicate visually that something is a list of items.

If you look at, say, the tunes page on The Session, you can see that there are numerous lists—newest tunes, latest comments, etc. In this case, as a sighted visitor, you would be at an advantage over a screen reader user in that you can, at a glance, see that there’s a list of five items here, a list of ten items there.

So I’m not disagreeing with the thinking behind the Webkit decision, but I do think the heuristics probably aren’t going to be quite good enough to make the call on whether something is truly a list or not.

Still, while I used to be kind of upset about the Webkit behaviour, I’ve become more equanimous about it over time. There are two reasons for this.

Firstly, there’s something that Eric said:

We have come so far to agree that websites don’t need to look the same in every browser mostly due to bugs in their rendering engines or preferences of the user.

I think the same is true for screen readers and other assistive technology: Websites don’t need to sound the same in every screen reader.

That’s a really good point. If we agree that “pixel perfection” isn’t attainable—or desirable—in a fluid, user-centred medium like the web, why demand the aural equivalent?

The second reason why I’m not storming the barricades about this is something that James said:

Of course, heuristics are imperfect, so authors have the ability to explicitly override the heuristically determined role by adding role="list".

That means more work for me as a developer, and that’s …absolutely fine. If I can take something that might be a problem for a user, and turn into something that’s a problem for me, I’ll choose to make it my problem every time.

I don’t have to petition Webkit to change their stance or update their heuristics. If I feel strongly that a list styled without bullets should still be announced as a list, I can specify that in the markup.

It does feel very redundant to write ul role="list". The whole point of having HTML elements with built-in semantics is that you don’t need to add any ARIA roles. But we did it for a while when new structural elements were introduced in HTML5—main role="main", nav role="navigation", etc. So I’m okay with a little bit of redundancy. I think the important thing is that you really stop and think about whether something should be announced as a list or not, regardless of styling. There isn’t a one-size-fits-all answer (hence why it’s nigh-on impossible to get the heuristics right). Each list needs to be marked up on a case-by-case basis.
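
In practice, that little bit of redundancy looks like this:

<ul role="list">
  <li>…</li>
  <li>…</li>
</ul>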

And I wouldn’t advise spending too much time thinking about this either. There are other, more important areas to consider. Like I said, headings, forms, and images really matter. I’d prioritise those elements above thinking about lists. And it’s worth pointing out that Webkit doesn’t remove all semantic meaning from styled lists—it updates the role value from list to group. That seems sensible to me.

In the case of that page on The Session, I don’t think I’m guilty of listitis. Yes, there are seven lists on that page (two for navigation, five for content) but I’m reasonably confident that they all look like lists even without bullets or markers. So I’ve added role="list" to some ul elements.

As with so many things related to accessibility—and the web in general—this is a situation where the only answer I can confidently come up with is …it depends.

Clean advertising

Imagine if you were told that fossil fuels were the only way of extracting energy. It would be an absurd claim. Not only are other energy sources available—solar, wind, geothermal, nuclear—fossil fuels aren’t even the most efficient source of energy. To say that you can’t have energy without burning fossil fuels would be pitifully incorrect.

And yet when it comes to online advertising, we seem to have meekly accepted that you can’t have effective advertising without invasive tracking. But nothing could be further from the truth. Invasive tracking is to online advertising as fossil fuels are to energy production—an outmoded inefficient means of getting substandard results.

Before the onslaught of third party cookies and scripts, online advertising was contextual. If I searched for property insurance, I was likely to see an advertisement for property insurance. If I was reading an article about pet food, I was likely to be served an advertisement for pet food.

Simply put, contextual advertising ensured that the advertising that accompanied content could be relevant and timely. There was no big mystery about it: advertisers just needed to know what the content was about and they could serve up the appropriate advertisement. Nice and straightforward.

Too straightforward.

What if, instead of matching the advertisement to the content, we could match the advertisement to the person? Regardless of what they were searching for or reading, they’d be served advertisements that were relevant to them not just in that moment, but relevant to their lifestyles, thoughts and beliefs? Of course that would require building up dossiers of information about each person so that their profiles could be targeted and constantly updated. That’s where cross-site tracking comes in, with third-party cookies and scripts.

This is behavioural advertising. It has all but eliminated contextual advertising. It has become so pervasive that online advertising and behavioural advertising have become synonymous. Contextual advertising is seen as laughably primitive compared with the clairvoyant powers of behavioural advertising.

But there’s a problem with behavioural advertising. A big problem.

It doesn’t work.

First of all, it relies on mind-reading powers by the advertising brokers—Facebook, Google, and the other middlemen of ad tech. For all the apocryphal folk tales of spooky second-guessing in online advertising, it mostly remains rubbish.

Forget privacy: you’re terrible at targeting anyway:

None of this works. They are still trying to sell me car insurance for my subway ride.

Have you actually paid attention to what advertisements you’re served? Maciej did:

I saw a lot of ads for GEICO, a brand of car insurance that I already own.

I saw multiple ads for Red Lobster, a seafood restaurant chain in America. Red Lobster doesn’t have any branches in San Francisco, where I live.

Finally, I saw a ton of ads for Zipcar, which is a car sharing service. These really pissed me off, not because I have a problem with Zipcar, but because they showed me the algorithm wasn’t even trying. It’s one thing to get the targeting wrong, but the ad engine can’t even decide if I have a car or not! You just showed me five ads for car insurance.

And yet in the twisted logic of ad tech, all of this would be seen as evidence that they need to gather even more data with even more invasive tracking and surveillance.

It turns out that bizarre logic is at the very heart of behavioural advertising. I highly recommend reading the in-depth report from The Correspondent called The new dot com bubble is here: it’s called online advertising:

It’s about a market of a quarter of a trillion dollars governed by irrationality.

The benchmarks that advertising companies use – intended to measure the number of clicks, sales and downloads that occur after an ad is viewed – are fundamentally misleading. None of these benchmarks distinguish between the selection effect (clicks, purchases and downloads that are happening anyway) and the advertising effect (clicks, purchases and downloads that would not have happened without ads).

Suppose someone told you that they keep tigers out of their garden by turning on their kitchen light every evening. You might think their logic is flawed, but they’ve been turning on the kitchen light every evening for years and there hasn’t been a single tiger in the garden the whole time. That’s the logic used by ad tech companies to justify trackers.

Tracker-driven behavioural advertising is bad for users. The advertisements are irrelevant most of the time, and on the few occasions where the advertising hits the mark, it just feels creepy.

Tracker-driven behavioural advertising is bad for advertisers. They spend their hard-earned money on invasive ad tech that results in no more sales or brand recognition than if they had relied on good ol’ contextual advertising.

Tracker-driven behavioural advertising is very bad for the web. Megabytes of third-party JavaScript are injected at exactly the wrong moment to make for the worst possible performance. And if that doesn’t ruin the user experience enough, there are still invasive overlays and consent forms to click through (which, ironically, gets people mad at the legislation—like GDPR—instead of the underlying reason for these annoying overlays: unnecessary surveillance and tracking by the site you’re visiting).

Tracker-driven behavioural advertising is good for the middlemen doing the tracking. Facebook and Google are two of the biggest players here. But that doesn’t mean that their business models need to be permanently anchored to surveillance. The very monopolies that make them kings of behavioural advertising—the biggest social network and the biggest search engine—would also make them titans of contextual advertising. They could pivot from an invasive behavioural model of advertising to a privacy-respecting contextual advertising model.

The incumbents will almost certainly resist changing something so fundamental. It would be like expecting an energy company to change their focus from fossil fuels to renewables. It won’t happen quickly. But I think that it may eventually happen …if we demand it.

In the meantime, we can all play our part. Just as we can do our bit for the environment at an individual level by sorting our recycling and making green choices in our day to day lives, we can all do our bit for the web too.

The least we can do is block third-party cookies. Some browsers are now doing this by default. That’s good.

Blocking third-party JavaScript is a bit trickier. That requires a browser extension. Most of these extensions to block third-party tracking are called ad blockers. That’s a shame. The issue is not with advertising. The issue is with tracking.

Alas, because this software is labelled under ad blocking, it has led to the ludicrous situation of an ethical argument being made to allow surveillance and tracking! It goes like this: websites need advertising to survive; if you block the ads, then you are denying these sites revenue. That argument would make sense if we were talking about contextual advertising. But it makes no sense when it comes to behavioural advertising …unless you genuinely believe that online advertising has to be behavioural, which means that online advertising has to track you to be effective. Such a belief would be completely wrong. But that doesn’t stop it being widely held.

To argue that there is a moral argument against blocking trackers is ridiculous. If anything, there’s a moral argument to be made for installing anti-tracking software for yourself, your friends, and your family. Otherwise we are collectively giving up our privacy for a business model that doesn’t even work.

It’s a shame that advertisers will lose out if tracking-blocking software prevents their ads from loading. But that’s only going to happen in the case of behavioural advertising. Contextual advertising won’t be blocked. Contextual advertising is also more lightweight than behavioural advertising. Contextual advertising is far less creepy than behavioural advertising. And crucially, contextual advertising works.

That shouldn’t be a controversial claim: the idea that people would be interested in adverts that are related to the content they’re currently looking at. The greatest trick the ad tech industry has pulled is convincing the world that contextual relevance is somehow less effective than some secret algorithm fed with all our data that’s supposed to be able to practically read our minds and know us better than we know ourselves.

Y’know, if this mind-control ray really could give me timely relevant adverts, I might possibly consider paying the price with my privacy. But as it is, YouTube still hasn’t figured out that I’m not interested in Top Gear or football.

The next time someone is talking about the necessity of advertising on the web as a business model, ask for details. Do they mean contextual or behavioural advertising? They’ll probably laugh at you and say that behavioural advertising is the only thing that works. They’ll be wrong.

I know it’s hard to imagine a future without tracker-driven behavioural advertising. But there are no good business reasons for it to continue. It was once hard to imagine a future without oil or coal. But through collective action, legislation, and smart business decisions, we can make a cleaner future.

Insecure …again

Back in March, I wrote about a dilemma I was facing. I could make the certificates on The Session more secure. But if I did that, people using older Android and iOS devices could no longer access the site:

As a site owner, I can either make security my top priority, which means you’ll no longer be able to access my site. Or I can provide you access, which makes my site less secure for everyone.

In the end, I decided in favour of access. But now this issue has risen from the dead. And this time, it doesn’t matter what I think.

Let’s Encrypt are changing the way their certificates work and once again, it’s people with older devices who are going to suffer:

Most notably, this includes versions of Android prior to 7.1.1. That means those older versions of Android will no longer trust certificates issued by Let’s Encrypt.

This makes me sad. It’s another instance of people being forced to buy new devices. Last time ‘round, my dilemma was choosing between security and access. This time, access isn’t an option. It’s a choice between security and the environment (assuming that people are even in a position to get new devices—not an assumption I’m willing to make).

But this time it’s out of my hands. Let’s Encrypt certificates will stop working on older devices and a whole lotta websites are suddenly going to be inaccessible.

I could look at using a different certificate authority, one I’d have to pay for. It feels a bit galling to have to go back to the scammy world of paying for security—something that Let’s Encrypt has taught us should quite rightly be free. But accessing a website should also be free. It shouldn’t come with the price tag of getting a new device.

The Correct Material

I’ve been watching The Right Stuff on Disney Plus. It’s a modern remake of the ’80s film of the ’70s Tom Wolfe book of ’60s events.

It’s okay. The main challenge, as a viewer, is keeping track of which of the seven homogenous white guys is which. It’s like Merry, Pippin, Ant, Dec, and then some.

It’s kind of fun watching it after watching For All Mankind which has some of the same characters following a different counterfactual history.

The story being told is interesting enough (although Tom has pointed out that removing the Chuck Yeager angle really diminishes the narrative). But ultimately the tension is manufactured around a single event—the launch of Freedom 7—that was very much in the shadow of Gagarin’s historic Vostok 1 flight.

There are juicier stories to be told, but those stories come from Russia.

Some of these stories have been told in film. The Spacewalker told the amazing story of Alexei Leonov’s mission, though it messes with the truth about what happened with the landing and recovery—a real shame, considering that the true story is remarkable enough.

Imagine an alternative to The Right Stuff that relayed the drama of Soyuz 1—it’s got everything: friendship, rivalries, politics, tragedy…

I’d watch the heck out of that.

Caching and storing

When I was speaking at conferences last year about service workers, I’d introduce the Cache API. I wanted some way of explaining the difference between caching and other kinds of storage.

The way I explained it was that, while you might store stuff for a long time, you’d only cache stuff that you knew you were going to need again. So according to that definition, when you make a backup of your hard drive, that’s not caching …because you hope you’ll never need to use the backup.

But that explanation never sat well with me. Then more recently, I was chatting with Amber about caching. Once again, we were trying to define the difference between, say, the Cache API and things like LocalStorage and IndexedDB. At some point, we realised the fundamental difference: caches are for copies.

Think about it. If you store something in LocalStorage or IndexedDB, that’s the canonical home for that data. But anything you put into a cache must be a copy of something that exists elsewhere. That’s true of the Cache API, the browser cache, and caches on the server. An item in one of those caches is never the original—it’s always a copy of something that has a canonical home elsewhere.
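
To put that in concrete terms, here’s a minimal sketch of the difference (the key, value, cache name, and URL are all made up):

// storing: localStorage is the canonical home for this value
localStorage.setItem('theme', 'dark');

// caching: the Cache API only ever holds a copy of a response
// whose canonical home is on the server
caches.open('pages').then(function (cache) {
  cache.add('/about/'); // fetches /about/ and stores a copy of the response
});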

By that definition, backing up your hard drive definitely is caching.

Anyway, I was glad to finally have a working definition to differentiate between caching and storing.

Upgrades and polyfills

I started getting some emails recently from people having issues using The Session. The issues sounded similar—an interactive component that wasn’t, well …interacting.

When I asked what device or browser they were using, the answer came back the same: Safari on iPad. But not a new iPad. These were older iPads running older operating systems.

Now, remember, even if I wanted to recommend that they use a different browser, that’s not an option:

Safari is the only browser on iOS devices.

I don’t mean it’s the only browser that ships with iOS devices. I mean it’s the only browser that can be installed on iOS devices.

You can install something called Chrome. You can install something called Firefox. Those aren’t different web browsers. Under the hood they’re using Safari’s rendering engine. They have to.

It gets worse. Not only is there no choice when it comes to rendering engines on iOS, but the rendering engine is also tied to the operating system.

If you’re on an old Apple laptop, you can at least install an up-to-date version of Firefox or Chrome. But you can’t install an up-to-date version of Safari. An up-to-date version of Safari requires an up-to-date version of the operating system.

It’s the same on iOS devices—you can’t install a newer version of Safari without installing a newer version of iOS. But unlike the laptop scenario, you can’t install any version of Firefox or Chrome.

It’s disgraceful.

It’s particularly frustrating when an older device can’t upgrade its operating system. Operating system upgrades generally have some hardware requirements. If your device doesn’t meet those requirements, you can’t upgrade your operating system. That wouldn’t matter so much except for the Safari issue. Without an upgraded operating system, your web browsing experience stagnates unnecessarily.

For want of a nail

  • A website feature isn’t working so
  • you need to upgrade your browser which means
  • you need to upgrade your operating system but
  • you can’t upgrade your operating system so
  • you need to buy a new device.

Apple doesn’t allow other browsers to be installed on iOS devices so people have to buy new devices if they want to use the web. Handy for Apple. Bad for users. Really bad for the planet.

It’s particularly galling when it comes to iPads. Those are exactly the kind of casual-use devices that shouldn’t need to be caught in the wasteful cycle of being used for a while before getting thrown away. I mean, I get why you might want to have a relatively modern phone—a device that’s constantly with you that you use all the time—but an iPad is the perfect device to just have lying around. You shouldn’t feel pressured to have the latest model if the older version still does the job:

An older tablet makes a great tableside companion in your living room, an effective e-book reader, or a light-duty device for reading mail or checking your favorite websites.

Hang on, though. There’s another angle to this. Why should a website demand an up-to-date browser? If the website has been built using the tried and tested approach of progressive enhancement, then everyone should be able to achieve their goals regardless of what browser or device or operating system they’re using.

On The Session, I’m using progressive enhancement and feature detection everywhere I can. If, for example, I’ve got some JavaScript that’s going to use querySelectorAll and addEventListener, I’ll first test that those methods are available.

if (!document.querySelectorAll || !window.addEventListener) {
  // doesn't cut the mustard.
  return;
}

I try not to assume that anything is supported. So why was I getting emails from people with older iPads describing an interaction that wasn’t working? A JavaScript error was being thrown somewhere and—because of JavaScript’s brittle error-handling—that was causing all the subsequent JavaScript to fail.

I tracked the problem down to a function that was using some DOM methods—matches and closest—as well as the relatively recent JavaScript forEach method. But I had polyfills in place for all of those. Here’s the polyfill I’m using for matches and closest. And here’s the polyfill I’m using for forEach.

Then I spotted the problem. I was using forEach to loop through the results of querySelectorAll. But the polyfill works on arrays. Technically, the output of querySelectorAll isn’t an array. It looks like an array, it quacks like an array, but it’s actually a node list.

So I added this polyfill from Chris Ferdinandi.
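
I haven’t copied Chris’s code verbatim here, but the gist of a NodeList forEach polyfill is tiny; it just borrows the existing array method:

if (window.NodeList && !NodeList.prototype.forEach) {
  // NodeList is array-like, so the array method works as-is
  NodeList.prototype.forEach = Array.prototype.forEach;
}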

That did the trick. I checked with the people with those older iPads and everything is now working just fine.

For the record, here’s the small collection of polyfills I’m using. Polyfills are supposed to be temporary. At some stage, as everyone upgrades their browsers, I should be able to remove them. But as long as some people are stuck with using an older browser, I have to keep those polyfills around.

I wish that Apple would allow other rendering engines to be installed on iOS devices. But if that’s a hell-freezing-over prospect, I wish that Safari updates weren’t tied to operating system updates.

Apple may argue that their browser rendering engine and their operating system are deeply intertwingled. That line of defence worked out great for Microsoft in the ‘90s.

aria-live

I wrote a little something recently about using ARIA attributes as selectors in CSS. For me, one of the advantages is that because ARIA attributes are generally added via JavaScript, the corresponding CSS rules won’t kick in if something goes wrong with the JavaScript:

Generally, ARIA attributes—like aria-hidden—are added by JavaScript at runtime (rather than being hard-coded in the HTML).

But there’s one instance where I actually put the ARIA attribute directly in the HTML that gets sent from the server: aria-live.

If you’re not familiar with it, aria-live is extremely useful if you’ve got any dynamic updates on your page—via Ajax, for example. Let’s say you’ve got a bit of your site where filtered results will show up. Slap an aria-live attribute on there with a value of “polite”:

<div aria-live="polite">
...dynamic content gets inserted here
</div>

You could instead provide a value of “assertive”, but you almost certainly don’t want to do that—it can be quite rude.
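
Once that container is in the markup, any JavaScript that later changes its contents will be announced. As a rough sketch, assuming the markup above is on the page:

var results = document.querySelector('[aria-live="polite"]');
// Changing the contents of the live region is what triggers the announcement
results.textContent = 'Showing 12 results';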

Anyway, on the face of it, this looks like exactly the kind of ARIA attribute that should be added with JavaScript. After all, if there’s no JavaScript, there’ll be no dynamic updates.

But I picked up a handy lesson from Ire’s excellent post on using aria-live:

Assistive technology will initially scan the document for instances of the aria-live attribute and keep track of elements that include it. This means that, if we want to notify users of a change within an element, we need to include the attribute in the original markup.

Good to know!

Portals and giant carousels

I posted something recently that I think might be categorised as a “shitpost”:

Most single page apps are just giant carousels.

Extreme, yes, but perhaps there’s a nugget of truth to it. And it seemed to resonate:

I’ve never actually seen anybody justify SPA transitions with actual business data. They generally don’t seem to increase sales, conversion, or retention.

For some reason, for SPAs, managers are all of a sudden allowed to make purely emotional arguments: “it feels snappier”

If businesses were run rationally, when somebody asks for an order of magnitude increase in project complexity, the onus would be on them to prove that it proportionally improves business results.

But I’ve never actually seen that happen in a software business.

A single page app architecture makes a lot of sense for interaction-heavy sites with lots of state to maintain, like twitter.com. But I’ve seen plenty of sites built as single page apps even though there’s little to no interactivity or state management. For some people, it’s the default way of building anything on the web, even a brochureware site.

It seems like there’s a consensus that single page apps may have long initial loading times, but then they have quick transitions between “pages” …just like a carousel really. But I don’t know if that consensus is based on reality. Whether you’re loading a page of HTML or loading a chunk of JSON, you’re still making a network request that will take time to resolve.

The argument for loading a chunk of JSON is that you don’t have to make any requests for the associated CSS and JavaScript—they’re already loaded. Whereas if you request a page of HTML, that HTML will also request CSS and JavaScript.

Leaving aside the fact that this is literally what the browser cache takes care of, I’ve seen some circular reasoning around this:

  1. We need to create a single page app because our assets, like our JavaScript dependencies, are so large.
  2. Why are the JavaScript dependencies so large?
  3. We need all that JavaScript to create the single page app functionality.

To be fair, in the past, the experience of going from page to page used to feel a little herky-jerky, even if the response times were quick. You’d get a flash of a white blank page between navigations. But that’s no longer the case. Browsers now perform something called “paint holding”, which eliminates the herky-jerkiness.

So now if your pages are a reasonable size, there’s no practical difference in user experience between full page refreshes and single page app updates. Navigate around The Session if you want to see paint holding in action. Switching to a single page app architecture wouldn’t improve the user experience one jot.

Except…

If I were controlling everything with JavaScript, then I’d also have control over how to transition between the “pages” (or carousel items, if you prefer). There’s currently no way to do that with full page changes.

This is the problem that Jake set out to address in his proposal for navigation transitions a few years back:

Having to reimplement navigation for a simple transition is a bit much, often leading developers to use large frameworks where they could otherwise be avoided. This proposal provides a low-level way to create transitions while maintaining regular browser navigation.

I love this proposal. It focuses on user needs. It also asks why people reach for JavaScript frameworks instead of using what browsers provide. People reach for JavaScript frameworks because browsers don’t yet provide some functionality: components like tabs or accordions; DOM diffing; control over styling complex form elements; navigation transitions. The JavaScript frameworks solving those problems today should be seen as the R&D departments for the web standards of tomorrow. (And conversely, I strongly believe that the aim of any good JavaScript framework should be to make itself redundant.)

I linked to Jake’s excellent proposal in my shitpost saying:

bucketloads of JavaScript wouldn’t be needed if navigation transitions were available in browsers

But then I added—and I almost didn’t—this:

(not portals)

Now you might be asking yourself what Paul said out loud:

Excuse my ignorance but… WTF are portals!?

I replied with a link to the portals proposal and what I thought was an example use case:

Portals are a proposal from Google that would help their AMP use case (it would allow a web page to be pre-rendered, kind of like an iframe).

That was based on my reading of the proposal:

…show another page as an inset, and then activate it to perform a seamless transition to a new state, where the formerly-inset page becomes the top-level document.

It sounded like Google’s top stories carousel. And the proposal goes into a lot of detail around managing cross-origin requests. Again, that strikes me as something that would be more useful for a search engine than a single page app.

But Jake was not happy with my description. I didn’t intend to besmirch portals by mentioning Google AMP in the same sentence, but I can see how the transitive property of ickiness would apply. Because Google AMP is a nasty monopolistic project that harms the web and is an embarrassment to many open web advocates within Google, drawing any kind of comparison to AMP is kind of like Godwin’s Law for web stuff. I know that makes it sound like I’m comparing Google AMP to Hitler, and just to be clear, I’m not (though I have myself been called a fascist by one of the lead engineers on AMP).

Clearly, emotions run high when Google AMP is involved. I regret summoning its demonic presence.

After chatting with Jake some more, I tried to find a better use case to describe portals. Reading the proposal, portals sound a lot like “spicy iframes”. So here’s a different use case that I ran past Jake: say you’re on a website that has an iframe embedded in it—like a YouTube video, for example. With portals, you’d have the ability to transition the iframe to a fully-fledged page smoothly.

But Jake told me that even though the proposal talks a lot about iframes and cross-origin security, portals are conceptually more like using rel="prerender" …but then having scripting control over how the pre-rendered page becomes the current page.
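
If you haven’t come across it, rel="prerender" is a link hint that asks the browser to fetch and render a page in the background ahead of time. The URL here is just a placeholder:

<link rel="prerender" href="https://example.com/next-page/">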

Put like that, portals sound more like Jake’s original navigation transitions proposal. But I have to say, I never would’ve understood that use case just from reading the portals proposal. I get that the proposal is aimed more at implementers than authors, but in its current form, it doesn’t seem to address the use case of single page apps.

Kenji said:

we haven’t seen interest from SPA folks in portals so far.

I’m not surprised! He goes on:

Maybe, they are happy / benefits aren’t clear yet.

From my own reading of the portals proposal, I think the benefits are definitely not clear. It’s almost like the opposite of Jake’s original proposal for navigation transitions. Whereas that was grounded in user needs and real-world examples, the portals proposal seems to have jumped to the intricacies of implementation without covering the user needs.

Don’t get me wrong: if portals somehow end up leading to a solution more like Jake’s navigation transitions proposals, then I’m all for that. That’s the end result I care about. I’d love it if people had a lightweight option for getting the perceived benefits of single page apps without the costly overhead in performance that comes with JavaScripting all the things.

I guess the web I want includes giant carousels.

ARIA in CSS

Sara tweeted something recently that resonated with me:

Also, Pro Tip: Using ARIA attributes as CSS hooks ensures your component will only look (and/or function) properly if said attributes are used in the HTML, which, in turn, ensures that they will always be added (otherwise, the component will obv. be broken)

Yes! I didn’t mention it when I wrote about accessible interactions, but this is my preferred way of hooking up CSS and JavaScript interactions. Here’s an old Codepen where you can see it in action:

[aria-hidden='true'] {
  display: none;
}

In order for the functionality to work for everyone—screen reader users or not—I have to make sure that I’m toggling the value of aria-hidden in my JavaScript.
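
The toggling itself doesn’t need to be fancy. Something along these lines, where the class name is purely illustrative:

var content = document.querySelector('.additional-content');

function toggleContent() {
  // Flipping this attribute is what triggers the CSS to show or hide the content
  var isHidden = content.getAttribute('aria-hidden') === 'true';
  content.setAttribute('aria-hidden', String(!isHidden));
}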

There’s another advantage to this technique. Generally, ARIA attributes—like aria-hidden—are added by JavaScript at runtime (rather than being hard-coded in the HTML). If something goes wrong with the JavaScript, the aria-hidden value isn’t set to “true”, which means that the CSS never kicks in. So the default state is for content to be displayed. There’s no assumption that the JavaScript has to work in order for the CSS to make sense.

It’s almost as though accessibility and progressive enhancement are connected somehow…

Accessible interactions

Accessibility on the web is easy. Accessibility on the web is also hard.

I think it’s one of those 80/20 situations. The most common accessibility problems turn out to be very low-hanging fruit. Take, for example, Holly Tuke’s list of the 5 most annoying website features she faces as a blind person every single day:

  • Unlabelled links and buttons
  • No image descriptions
  • Poor use of headings
  • Inaccessible web forms
  • Auto-playing audio and video

None of those problems are hard to fix. That’s what I mean when I say that accessibility on the web is easy. As long as you’re providing a logical page structure with sensible headings, associating form fields with labels, and providing alt text for images, you’re at least 80% of the way there (you’re also doing way better than the majority of websites, sadly).

Ah, but that last 20% or so—that’s where things get tricky. Instead of easy-to-follow rules (“Always provide alt text”, “Always label form fields”, “Use sensible heading levels”), you enter an area of uncertainty and doubt where there are no clear answers. Different combinations of screen readers, browsers, and operating systems might yield very different results.

This is the domain of interaction design. Here be dragons. ARIA can help you …but if you overuse its power, it may cause more harm than good.

When I start to feel overwhelmed by this, I find it’s helpful to take a step back. Instead of trying to imagine all the possible permutations of screen readers and browsers, I start with a more straightforward use case: keyboard users. Screen reader users are (usually) a subset of keyboard users.

The pattern that comes up the most is to do with toggling content. I suppose you could categorise this as progressive disclosure, but I’m talking about quite a wide range of patterns:

  • accordions,
  • menus (including mega menu monstrosities),
  • modal dialogs,
  • tabs.

In each case, there’s some kind of “trigger” that toggles the appearance of a “target”—some chunk of content.

The first question I ask myself is whether the trigger should be a button or a link (at the very least you can narrow it down to that shortlist—you can discount divs, spans, and most other elements immediately; use a trigger that’s focusable and interactive by default).

As is so often the case, the answer is “it depends”, but generally you can’t go wrong with a button. It’s an element designed for general-purpose interactivity. It carries the expectation that when it’s activated, something somewhere happens. That’s certainly true in all the examples I’ve listed above.

That said, I think that links can also make sense in certain situations. It’s related to the second question I ask myself: should the target automatically receive focus?

Again, the answer is “it depends”, but here’s the litmus test I give myself: how far away from each other are the trigger and the target?

If the target content is right after the trigger in the DOM, then a button is almost certainly the right element to use for the trigger. And you probably don’t need to automatically focus the target when the trigger is activated: the content already flows nicely.

<button>Trigger Text</button>
<div id="target">
<p>Target content.</p>
</div>

But if the target is far away from the trigger in the DOM, I often find myself using a good old-fashioned hyperlink with a fragment identifier.

<a href="#target">Trigger Text</a>
…
<div id="target">
<p>Target content.</p>
</div>

Let’s say I’ve got a “log in” link in the main navigation. But it doesn’t go to a separate page. The design shows it popping open a modal window. In this case, the markup for the log-in form might be right at the bottom of the page. This is when I think there’s a reasonable argument for using a link. If, for any reason, the JavaScript fails, the link still works. But if the JavaScript executes, then I can hijack that link and show the form in a modal window. I’ll almost certainly want to automatically focus the form when it appears.
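
As a rough sketch, that enhancement might look something like this (the IDs and the is-modal class name are made up for illustration):

var loginLink = document.querySelector('a[href="#login"]');
var loginForm = document.getElementById('login');

if (loginLink && loginForm && loginForm.querySelector('input')) {
  loginLink.addEventListener('click', function (event) {
    // JavaScript is available, so show the form in place instead of jumping to it
    event.preventDefault();
    loginForm.classList.add('is-modal');
    loginForm.querySelector('input').focus();
  });
}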

The expectation with links (as opposed to buttons) is that you will be taken somewhere. Let’s face it, modal dialogs are like fake web pages so following through on that expectation makes sense in this context.

So I can answer my first two questions:

  • “Should the trigger be a link or button?” and
  • “Should the target be automatically focused?”

…by answering a different question:

  • “How far away from each other are the trigger and the target?”

It’s not a hard and fast rule, but it helps me out when I’m unsure.

At this point I can write some JavaScript to make sure that both keyboard and mouse users can interact with the interactive component. There’ll certainly be an addEventListener(), some tabindex action, and maybe a focus() method.

Now I can start to think about making sure screen reader users aren’t getting left out. At the very least, I can toggle an aria-expanded attribute on the trigger that corresponds to whether the target is being shown or not. I can also toggle an aria-hidden attribute on the target.

When the target isn’t being shown:

  • the trigger has aria-expanded="false",
  • the target has aria-hidden="true".

When the target is shown:

  • the trigger has aria-expanded="true",
  • the target has aria-hidden="false".

There’s also an aria-controls attribute that allows me to explicitly associate the trigger and the target:

<button aria-controls="target">Trigger Text</button>
<div id="target">
<p>Target content.</p>
</div>

But don’t assume that’s going to help you. As Heydon put it, aria-controls is poop. Still, Léonie points out that you can still go ahead and use it. Personally, I find it a useful “hook” to use in my JavaScript so I know which target is controlled by which trigger.
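
Putting those pieces together, here’s a stripped-down sketch of the pattern, assuming the markup above:

var trigger = document.querySelector('[aria-controls]');
var target = document.getElementById(trigger.getAttribute('aria-controls'));

// Start with the target hidden
trigger.setAttribute('aria-expanded', 'false');
target.setAttribute('aria-hidden', 'true');

trigger.addEventListener('click', function () {
  var expanded = trigger.getAttribute('aria-expanded') === 'true';
  trigger.setAttribute('aria-expanded', String(!expanded));
  target.setAttribute('aria-hidden', String(expanded));
});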

Here’s some example code I wrote a while back. And here are some old Codepens I made that use this pattern: one with a button and one with a link. See the difference? In the example with a link, the target automatically receives focus. But in this situation, I’d choose the example with a button because the trigger and target are close to each other in the DOM.

At this point, I’ve probably reached the limits of what can be abstracted into a single trigger/target pattern. Depending on the specific component, there might be much more work to do. If it’s a modal dialog, for example, you’ve got to figure out where to put the focus, how to trap the focus, and where the focus should return to when the modal dialog is closed.

I’ve mostly been talking about websites that have some interactive components. If you’re building a single page app, then pretty much every single interaction needs to be made accessible. Good luck with that. (Pro tip: consider not building a single page app—let the browser do what it has been designed to do.)

Anyway, I hope this little stroll through my thought process is useful. If nothing else, it shows how I attempt to cope with an accessibility landscape that looks daunting and ever-changing. Remember though, the fact that you’re even considering this stuff means you care more than most web developers. And you are not alone. There are smart people out there sharing what they learn. The A11y Project is a great hub for finding resources.

And when it comes to interactive patterns like the trigger/target examples I’ve been talking about, there’s one more question I ask myself: what would Heydon do?

Standards processing

I’ve been like a dog with a bone the way I’ve been pushing for a declarative option for the Web Share API in the shape of button type="share". It’s been an interesting window into the world of web standards.

The story so far…

That’s the situation currently. The general consensus seems to be that it’s probably too soon to be talking about implementation at this stage—the Web Share API itself is still pretty new—but gathering data to inform future work is good.

In planning for the next TPAC meeting (the big web standards gathering), Marcos summarised the situation like this:

Not blocking: but a proposal was made by @adactio to come up with a declarative solution, but at least two implementers have said that now is not the appropriate time to add such a thing to the spec (we need more implementation experience + and also to see how devs use the API) - but it would be great to see a proposal incubated at the WICG.

Now this is where things can get a little confusing, because it used to be that if you wanted to incubate a proposal, you’d have to do it on Discourse, which is a steaming pile of crap that requires JavaScript in order to put text on a screen. But Šime pointed out that proposals can now be submitted on GitHub.

So that’s where I’ve submitted my proposal, linking through to the explainer document.

Like I said, I’m not expecting anything to happen anytime soon, but it would be really good to gather as much data as possible around existing usage of the Web Share API. If you’re using it, or you know anyone who’s using it, please, please, please take a moment to provide a quick description. And if you could help spread the word to get that issue in front of as many devs as possible, I’d be very grateful.

(Many thanks to everyone who’s already contributed to that issue—much appreciated!)

Continuous partial browser support

Vendor prefixes didn’t work. The theory was sound. It was a way of marking CSS and JavaScript features as being experimental. Developers could use the prefixed properties as long as they understood that those features weren’t to be relied upon.

That’s not what happened though. Developers used vendor-prefixed properties as though they were stable. Tutorials were published that basically said “Go ahead and use these vendor-prefixed properties and ship it!” There were even tools that would add the prefixes for you so you didn’t have to type them out for yourself.

Browsers weren’t completely blameless either. Long after features were standardised, they would only be supported in their prefixed form. Apple was and is the worst for this. To this day, if you want to use the clip-path property in your CSS, you’ll need to duplicate your declaration with -webkit-clip-path if you want to support Safari. It’s been like that for seven years and counting.
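
In practice that means writing the declaration twice, with the prefixed version first so that the standard one wins wherever it’s supported (the selector here is just an example):

.profile-image {
  -webkit-clip-path: circle(50%);
  clip-path: circle(50%);
}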

Like capitalism, vendor prefixes were one of those ideas that sounded great in theory but ended up being unworkable in practice.

Still, developers need some way to get their hands on experimental features. But we don’t want browsers to ship experimental features without some kind of safety mechanism.

The current thinking involves something called origin trials. Here’s the explainer from Microsoft Edge and here’s Google Chrome’s explainer:

  • Developers are able to register for an experimental feature to be enabled on their origin for a fixed period of time measured in months. In exchange, they provide us their email address and agree to give feedback once the experiment ends.
  • Usage of these experiments is constrained to remain below Chrome’s deprecation threshold (< 0.5% of all Chrome page loads) by a system which automatically disables the experiment on all origins if this threshold is exceeded.

I think it works pretty well. If you’re really interested in kicking the tyres on an experimental feature, you can opt in to the origin trial. But it’s very clear that you wouldn’t want to ship it to production.
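
For what it’s worth, opting in involves registering your origin, getting a token back, and then including that token in your pages, either in a meta element or an Origin-Trial HTTP header. Something like this, with an obviously made-up token:

<meta http-equiv="origin-trial" content="YOUR_TOKEN_GOES_HERE">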

That said…

You could ship something that’s behind an origin trial, but you’d have to make sure you’re putting safeguards in place. At the very least, you’d need to do feature detection. You certainly couldn’t use an experimental feature for anything mission critical …but you could use it as an enhancement.

And that is a pretty great way to think about all web features, experimental or otherwise. Don’t assume the feature will be supported. Use feature detection (or @supports in the case of CSS). Try to use the feature as an enhancement rather than a dependency.
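
In CSS, that defensive approach might look something like this (the class name is purely illustrative):

/* Baseline that works everywhere */
.gallery {
  display: block;
}

/* Enhancement for browsers that understand grid */
@supports (display: grid) {
  .gallery {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(10em, 1fr));
  }
}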

If you treat all browser features as though they’re behind an origin trial, then suddenly the landscape of browser support becomes more navigable. Instead of looking at the support table for something on caniuse.com and thinking, “I wish more browsers supported this feature so that I could use it!”, you can instead think “I’m going to use this feature today, but treat it as an experimental feature.”

You can also do it for well-established features like querySelector, addEventListener, and geolocation. Instead of assuming that browser support is universal, it doesn’t hurt to take a more defensive approach. Assume nothing. Acknowledge and embrace unpredictability.
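
Even for features as well-established as those, the defensive version costs next to nothing. A sketch, with a hypothetical showNearbyResults function standing in for whatever the enhancement might be:

if ('geolocation' in navigator) {
  // Only offer the location-based enhancement if the API actually exists
  navigator.geolocation.getCurrentPosition(function (position) {
    showNearbyResults(position.coords); // hypothetical function for the enhancement
  });
}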

The debacle with vendor prefixes shows what happens if we treat experimental features as though they’re stable. So let’s flip that around. Let’s treat stable features as though they’re experimental. If you cultivate that mindset, your websites will be more robust and resilient.