Journal

Monday, November 30th, 2020

Clean advertising

Imagine if you were told that fossil fuels were the only way of extracting energy. It would be an absurd claim. Not only are other energy sources available—solar, wind, geothermal, nuclear—fossil fuels aren’t even the most efficient source of energy. To say that you can’t have energy without burning fossil fuels would be pitifully incorrect.

And yet when it comes to online advertising, we seem to have meekly accepted that you can’t have effective advertising without invasive tracking. But nothing could be further from the truth. Invasive tracking is to online advertising as fossil fuels are to energy production—an outmoded inefficient means of getting substandard results.

Before the onslaught of third party cookies and scripts, online advertising was contextual. If I searched for property insurance, I was likely to see an advertisement for property insurance. If I was reading an article about pet food, I was likely to be served an advertisement for pet food.

Simply put, contextual advertising ensured that the advertising that accompanied content could be relevant and timely. There was no big mystery about it: advertisers just needed to know what the content was about and they could serve up the appropriate advertisement. Nice and straightforward.

Too straightforward.

What if, instead of matching the advertisement to the content, we could match the advertisement to the person? Regardless of what they were searching for or reading, they’d be served advertisements that were relevant to them not just in that moment, but relevant to their lifestyles, thoughts and beliefs? Of course that would require building up dossiers of information about each person so that their profiles could be targeted and constantly updated. That’s where cross-site tracking comes in, with third-party cookies and scripts.

This is behavioural advertising. It has all but eliminated contextual advertising. It has become so pervasive that online advertising and behavioural advertising have become synonymous. Contextual advertising is seen as laughably primitive compared with the clairvoyant powers of behavioural advertising.

But there’s a problem with behavioural advertising. A big problem.

It doesn’t work.

First of all, it relies on mind-reading powers by the advertising brokers—Facebook, Google, and the other middlemen of ad tech. For all the apocryphal folk tales of spooky second-guessing in online advertising, it mostly remains rubbish.

Forget privacy: you’re terrible at targeting anyway:

None of this works. They are still trying to sell me car insurance for my subway ride.

Have you actually paid attention to what advertisements you’re served? Maciej did:

I saw a lot of ads for GEICO, a brand of car insurance that I already own.

I saw multiple ads for Red Lobster, a seafood restaurant chain in America. Red Lobster doesn’t have any branches in San Francisco, where I live.

Finally, I saw a ton of ads for Zipcar, which is a car sharing service. These really pissed me off, not because I have a problem with Zipcar, but because they showed me the algorithm wasn’t even trying. It’s one thing to get the targeting wrong, but the ad engine can’t even decide if I have a car or not! You just showed me five ads for car insurance.

And yet in the twisted logic of ad tech, all of this would be seen as evidence that they need to gather even more data with even more invasive tracking and surveillance.

It turns out that bizarre logic is at the very heart of behavioural advertising. I highly recommend reading the in-depth report from The Correspondent called The new dot com bubble is here: it’s called online advertising:

It’s about a market of a quarter of a trillion dollars governed by irrationality.

The benchmarks that advertising companies use – intended to measure the number of clicks, sales and downloads that occur after an ad is viewed – are fundamentally misleading. None of these benchmarks distinguish between the selection effect (clicks, purchases and downloads that are happening anyway) and the advertising effect (clicks, purchases and downloads that would not have happened without ads).

Suppose someone told you that they keep tigers out of their garden by turning on their kitchen light every evening. You might think their logic is flawed, but they’ve been turning on the kitchen light every evening for years and there hasn’t been a single tiger in the garden the whole time. That’s the logic used by ad tech companies to justify trackers.

Tracker-driven behavioural advertising is bad for users. The advertisements are irrelevant most of the time, and on the few occasions where the advertising hits the mark, it just feels creepy.

Tracker-driven behavioural advertising is bad for advertisers. They spend their hard-earned money on invasive ad tech that results in no more sales or brand recognition than if they had relied on good ol’ contextual advertising.

Tracker-driven behavioural advertising is very bad for the web. Megabytes of third-party JavaScript are injected at exactly the wrong moment to make for the worst possible performance. And if that doesn’t ruin the user experience enough, there are still invasive overlays and consent forms to click through (which, ironically, gets people mad at the legislation—like GDPR—instead of the underlying reason for these annoying overlays: unnecessary surveillance and tracking by the site you’re visiting).

Tracker-driven behavioural advertising is good for the middlemen doing the tracking. Facebook and Google are two of the biggest players here. But that doesn’t mean that their business models need to be permanently anchored to surveillance. The very monopolies that make them kings of behavioural advertising—the biggest social network and the biggest search engine—would also make them titans of contextual advertising. They could pivot from an invasive behavioural model of advertising to a privacy-respecting contextual advertising model.

The incumbents will almost certainly resist changing something so fundamental. It would be like expecting an energy company to change their focus from fossil fuels to renewables. It won’t happen quickly. But I think that it may eventually happen …if we demand it.

In the meantime, we can all play our part. Just as we can do our bit for the environment at an individual level by sorting our recycling and making green choices in our day to day lives, we can all do our bit for the web too.

The least we can do is block third-party cookies. Some browsers are now doing this by default. That’s good.

Blocking third-party JavaScript is a bit trickier. That requires a browser extension. Most of these extensions to block third-party tracking are called ad blockers. That’s a shame. The issue is not with advertising. The issue is with tracking.

Alas, because this software is labelled under ad blocking, it has led to the ludicrous situation of an ethical argument being made to allow surveillance and tracking! It goes like this: websites need advertising to survive; if you block the ads, then you are denying these sites revenue. That argument would make sense if we were talking about contextual advertising. But it makes no sense when it comes to behavioural advertising …unless you genuinely believe that online advertising has to be behavioural, which means that online advertising has to track you to be effective. Such a belief would be completely wrong. But that doesn’t stop it being widely held.

To argue that there is a moral argument against blocking trackers is ridiculous. If anything, there’s a moral argument to be made for installing anti-tracking software for yourself, your friends, and your family. Otherwise we are collectively giving up our privacy for a business model that doesn’t even work.

It’s a shame that advertisers will lose out if tracking-blocking software prevents their ads from loading. But that’s only going to happen in the case of behavioural advertising. Contextual advertising won’t be blocked. Contextual advertising is also more lightweight than behavioural advertising. Contextual advertising is far less creepy than behavioural advertising. And crucially, contextual advertising works.

That shouldn’t be a controversial claim: the idea that people would be interested in adverts that are related to the content they’re currently looking at. The greatest trick the ad tech industry has pulled is convincing the world that contextual relevance is somehow less effective than some secret algorithm fed with all our data that’s supposed to be able to practically read our minds and know us better than we know ourselves.

Y’know, if this mind-control ray really could give me timely relevant adverts, I might possibly consider paying the price with my privacy. But as it is, YouTube still hasn’t figured out that I’m not interested in Top Gear or football.

The next time someone is talking about the necessity of advertising on the web as a business model, ask for details. Do they mean contextual or behavioural advertising? They’ll probably laugh at you and say that behavioural advertising is the only thing that works. They’ll be wrong.

I know it’s hard to imagine a future without tracker-driven behavioural advertising. But there are no good business reasons for it to continue. It was once hard to imagine a future without oil or coal. But through collective action, legislation, and smart business decisions, we can make a cleaner future.

Tuesday, November 17th, 2020

Insecure …again

Back in March, I wrote about a dilemma I was facing. I could make the certificates on The Session more secure. But if I did that, people using older Android and iOS devices could no longer access the site:

As a site owner, I can either make security my top priority, which means you’ll no longer be able to access my site. Or I can provide you access, which makes my site less secure for everyone.

In the end, I decided in favour of access. But now this issue has risen from the dead. And this time, it doesn’t matter what I think.

Let’s Encrypt are changing the way their certificates work and once again, it’s people with older devices who are going to suffer:

Most notably, this includes versions of Android prior to 7.1.1. That means those older versions of Android will no longer trust certificates issued by Let’s Encrypt.

This makes me sad. It’s another instance of people being forced to buy new devices. Last time ’round, my dilemma was choosing between security and access. This time, access isn’t an option. It’s a choice between security and the environment (assuming that people are even in a position to get new devices—not an assumption I’m willing to make).

But this time it’s out of my hands. Let’s Encrypt certificates will stop working on older devices and a whole lotta websites are suddenly going to be inaccessible.

I could look at using a different certificate authority, one I’d have to pay for. It feels a bit galling to have to go back to the scammy world of paying for security—something that Let’s Encrypt has taught us should quite rightly be free. But accessing a website should also be free. It shouldn’t come with the price tag of getting a new device.

Saturday, November 14th, 2020

The Correct Material

I’ve been watching The Right Stuff on Disney Plus. It’s a modern remake of the ’80s film of the ’70s Tom Wolfe book of ’60s events.

It’s okay. The main challenge, as a viewer, is keeping track of which of the seven homogenous white guys is which. It’s like Merry, Pippin, Ant, Dec, and then some.

It’s kind of fun watching it after watching For All Mankind which has some of the same characters following a different counterfactual history.

The story being told is interesting enough (although Tom has pointed out that removing the Chuck Yeager angle really diminishes the narrative). But ultimately the tension is manufactured around a single event—the launch of Freedom 7—that was very much in the shadow of Gagarin’s historic Vostok 1 flight.

There are juicier stories to be told, but those stories come from Russia.

Some of these stories have been told in film. The Spacewalker told the amazing story of Alexei Leonov’s mission, though it messes with the truth about what happened with the landing and recovery—a real shame, considering that the true story is remarkable enough.

Imagine an alternative to The Right Stuff that relayed the drama of Soyuz 1—it’s got everything: friendship, rivalries, politics, tragedy…

I’d watch the heck out of that.

Thursday, November 12th, 2020

Caching and storing

When I was speaking at conferences last year about service workers, I’d introduce the Cache API. I wanted some way of explaining the difference between caching and other kinds of storage.

The way I explained was that, while you might store stuff for a long time, you’d only cache stuff that you knew you were going to need again. So according to that definition, when you make a backup of your hard drive, that’s not caching …because you hope you’ll never need to use the backup.

But that explanation never sat well with me. Then more recently, I was chatting with Amber about caching. Once again, we were trying to define the difference between, say, the Cache API and things like LocalStorage and IndexedDB. At some point, we realised the fundamental difference: caches are for copies.

Think about it. If you store something in LocalStorage or IndexedDB, that’s the canonical home for that data. But anything you put into a cache must be a copy of something that exists elsewhere. That’s true of the Cache API, the browser cache, and caches on the server. An item in one of those caches is never the original—it’s always a copy of something that has a canonical home elsewhere.

By that definition, backing up your hard drive definitely is caching.
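
To make the distinction concrete, here’s a minimal sketch. The key, the cache name, and the URL are just placeholders for illustration:

// Storing: this data has no other home. If it’s lost, it’s gone.
localStorage.setItem('draft', JSON.stringify({ title: 'Untitled', body: '' }));

// Caching: this is only ever a copy. The canonical version
// still lives on the server and can be fetched again.
async function cacheCopyOfPage(url) {
  const cache = await caches.open('pages-v1');
  const response = await fetch(url);
  await cache.put(url, response);
}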

Anyway, I was glad to finally have a working definition to differentiate between caching and storing.

Wednesday, November 11th, 2020

Upgrades and polyfills

I started getting some emails recently from people having issues using The Session. The issues sounded similar—an interactive component that wasn’t, well …interacting.

When I asked what device or browser they were using, the answer came back the same: Safari on iPad. But not a new iPad. These were older iPads running older operating systems.

Now, remember, even if I wanted to recommend that they use a different browser, that’s not an option:

Safari is the only browser on iOS devices.

I don’t mean it’s the only browser that ships with iOS devices. I mean it’s the only browser that can be installed on iOS devices.

You can install something called Chrome. You can install something called Firefox. Those aren’t different web browsers. Under the hood they’re using Safari’s rendering engine. They have to.

It gets worse. Not only is there no choice when it comes to rendering engines on iOS, but the rendering engine is also tied to the operating system.

If you’re on an old Apple laptop, you can at least install an up-to-date version of Firefox or Chrome. But you can’t install an up-to-date version of Safari. An up-to-date version of Safari requires an up-to-date version of the operating system.

It’s the same on iOS devices—you can’t install a newer version of Safari without installing a newer version of iOS. But unlike the laptop scenario, you can’t install any version of Firefox or Chrome.

It’s disgraceful.

It’s particularly frustrating when an older device can’t upgrade its operating system. Operating system upgrades generally have some hardware requirements. If your device doesn’t meet those requirements, you can’t upgrade your operating system. That wouldn’t matter so much except for the Safari issue. Without an upgraded operating system, your web browsing experience stagnates unnecessarily.

For want of a nail

  • A website feature isn’t working so
  • you need to upgrade your browser which means
  • you need to upgrade your operating system but
  • you can’t upgrade your operating system so
  • you need to buy a new device.

Apple doesn’t allow other browsers to be installed on iOS devices so people have to buy new devices if they want to use the web. Handy for Apple. Bad for users. Really bad for the planet.

It’s particularly galling when it comes to iPads. Those are exactly the kind of casual-use devices that shouldn’t need to be caught in the wasteful cycle of being used for a while before getting thrown away. I mean, I get why you might want to have a relatively modern phone—a device that’s constantly with you that you use all the time—but an iPad is the perfect device to just have lying around. You shouldn’t feel pressured to have the latest model if the older version still does the job:

An older tablet makes a great tableside companion in your living room, an effective e-book reader, or a light-duty device for reading mail or checking your favorite websites.

Hang on, though. There’s another angle to this. Why should a website demand an up-to-date browser? If the website has been built using the tried and tested approach of progressive enhancement, then everyone should be able to achieve their goals regardless of what browser or device or operating system they’re using.

On The Session, I’m using progressive enhancement and feature detection everywhere I can. If, for example, I’ve got some JavaScript that’s going to use querySelectorAll and addEventListener, I’ll first test that those methods are available.

if (!document.querySelectorAll || !window.addEventListener) {
  // doesn't cut the mustard.
  return;
}

I try not to assume that anything is supported. So why was I getting emails from people with older iPads describing an interaction that wasn’t working? A JavaScript error was being thrown somewhere and—because of JavaScript’s brittle error-handling—that was causing all the subsequent JavaScript to fail.

I tracked the problem down to a function that was using some DOM methods—matches and closest—as well as the relatively recent JavaScript forEach method. But I had polyfills in place for all of those. Here’s the polyfill I’m using for matches and closest. And here’s the polyfill I’m using for forEach.

Then I spotted the problem. I was using forEach to loop through the results of querySelectorAll. But the polyfill works on arrays. Technically, the output of querySelectorAll isn’t an array. It looks like an array, it quacks like an array, but it’s actually a node list.

So I added this polyfill from Chris Ferdinandi.
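
The general shape of that kind of polyfill is tiny. Something like this sketch of the approach (not necessarily Chris’s exact code):

// Give NodeList the same forEach that arrays already have.
if (window.NodeList && !NodeList.prototype.forEach) {
  NodeList.prototype.forEach = Array.prototype.forEach;
}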

That did the trick. I checked with the people with those older iPads and everything is now working just fine.

For the record, here’s the small collection of polyfills I’m using. Polyfills are supposed to be temporary. At some stage, as everyone upgrades their browsers, I should be able to remove them. But as long as some people are stuck with using an older browser, I have to keep those polyfills around.

I wish that Apple would allow other rendering engines to be installed on iOS devices. But if that’s a hell-freezing-over prospect, I wish that Safari updates weren’t tied to operating system updates.

Apple may argue that their browser rendering engine and their operating system are deeply intertwingled. That line of defence worked out great for Microsoft in the ’90s.

Tuesday, November 10th, 2020

aria-live

I wrote a little something recently about using ARIA attributes as selectors in CSS. For me, one of the advantages is that because ARIA attributes are generally added via JavaScript, the corresponding CSS rules won’t kick in if something goes wrong with the JavaScript:

Generally, ARIA attributes—like aria-hidden—are added by JavaScript at runtime (rather than being hard-coded in the HTML).

But there’s one instance where I actually put the ARIA attribute directly in the HTML that gets sent from the server: aria-live.

If you’re not familiar with it, aria-live is extremely useful if you’ve got any dynamic updates on your page—via Ajax, for example. Let’s say you’ve got a bit of your site where filtered results will show up. Slap an aria-live attribute on there with a value of “polite”:

<div aria-live="polite">
...dynamic content gets inserted here
</div>

You could instead provide a value of “assertive”, but you almost certainly don’t want to do that—it can be quite rude.
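
Once that live region is in the markup, the updates themselves are just regular DOM manipulation. Here’s a hypothetical sketch (the selector and the endpoint are made up for illustration):

const results = document.querySelector('[aria-live="polite"]');

async function showFilteredResults(query) {
  const response = await fetch('/results?filter=' + encodeURIComponent(query));
  // Screen readers will announce this change because the
  // aria-live attribute was there from the start.
  results.innerHTML = await response.text();
}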

Anyway, on the face of it, this looks like exactly the kind of ARIA attribute that should be added with JavaScript. After all, if there’s no JavaScript, there’ll be no dynamic updates.

But I picked up a handy lesson from Ire’s excellent post on using aria-live:

Assistive technology will initially scan the document for instances of the aria-live attribute and keep track of elements that include it. This means that, if we want to notify users of a change within an element, we need to include the attribute in the original markup.

Good to know!

Monday, November 9th, 2020

Bookshop

Back at the start of the (first) lockdown, I wrote about using my website as an outlet:

While you’re stuck inside, your website is not just a place you can go to, it’s a place you can control, a place you can maintain, a place you can tidy up, a place you can expand. Most of all, it’s a place you can lose yourself in, even if it’s just for a little while.

Last week was eventful and stressful. For everyone. I found myself once again taking refuge in my website, tinkering with its inner workings in the way that someone else would potter about in their shed or take to their garage to strip down the engine of some automotive device.

Colly drew my attention to Bookshop.org, newly launched in the UK. It’s an umbrella website for independent bookshops to sell through. It’s also got an affiliate scheme, much like Amazon. I set up a Bookshop page for myself.

I’ve been tracking the books I’m reading for the past three years here on my own website. I set about reproducing that list on Bookshop.

It was exactly the kind of not-exactly-mindless but definitely-not-challenging task that was perfect for the state of my brain last week. Search for a book; find the ISBN number; paste that number into a form. It’s the kind of task that a real programmer would immediately set about automating but one that I embraced as a kind of menial task to keep me occupied.

I wasn’t able to get a one-to-one match between the list on my site and my reading list on Bookshop. Some titles aren’t available in the online catalogue. For example, the book I’m reading right now—A Paradise Built in Hell by Rebecca Solnit—is nowhere to be found, which seems like an odd omission.

But most of the books I’ve read are there on Bookshop.org, complete with pretty book covers. Then I decided to reverse the process of my menial task. I took all of the ISBN numbers from Bookshop and added them as machine tags to my reading notes here on my own website. Book cover images on Bookshop have predictable URLs that use the ISBN number (well, technically the EAN number, or ISBN-13, but let’s not go down a 927 rabbit hole here). So now I’m using that metadata to pull in images from Bookshop.org to illustrate my reading notes here on adactio.com.

I’m linking to the corresponding book on Bookshop.org using this URL structure:

https://uk.bookshop.org/a/{{ affiliate code }}/{{ ISBN number }}

I realised that I could also link to the corresponding entry on Open Library using this URL structure:

https://openlibrary.org/isbn/{{ ISBN number }}

Here, for example, is my note for The Raven Tower by Ann Leckie. That entry has a tag:

book:ean=9780356506999

With that information I can illustrate my note with this image:

https://images-eu.bookshop.org/product-images/images/9780356506999.jpg

I’m linking off to this URL on Bookshop.org:

https://uk.bookshop.org/a/980/9780356506999

And this URL on Open Library:

https://openlibrary.org/isbn/9780356506999
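
Gluing those URL structures together is just string templating. Here’s a rough sketch of the kind of helper I could use (the function name is mine; the affiliate code is the one from the example above):

// Given an ISBN-13 (EAN), build the image and link URLs.
function bookLinks(isbn) {
  return {
    cover: 'https://images-eu.bookshop.org/product-images/images/' + isbn + '.jpg',
    bookshop: 'https://uk.bookshop.org/a/980/' + isbn,
    openLibrary: 'https://openlibrary.org/isbn/' + isbn
  };
}

bookLinks('9780356506999');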

The end result is that my reading list now has more links and pretty pictures.

Oh, I also set up a couple of shorter lists on Bookshop.org.

The books listed in those are drawn from my end of the year round-ups when I try to pick one favourite non-fiction book and one favourite work of fiction (almost always speculative fiction). The books in those two lists are the ones that get two hearty thumbs up from me. If you click through to buy one of them, the price might not be as cheap as on Amazon, but you’ll be supporting an independent bookshop.

Wednesday, October 28th, 2020

Portals and giant carousels

I posted something recently that I think might be categorised as a “shitpost”:

Most single page apps are just giant carousels.

Extreme, yes, but perhaps there’s a nugget of truth to it. And it seemed to resonate:

I’ve never actually seen anybody justify SPA transitions with actual business data. They generally don’t seem to increase sales, conversion, or retention.

For some reason, for SPAs, managers are all of a sudden allowed to make purely emotional arguments: “it feels snappier”

If businesses were run rationally, when somebody asks for an order of magnitude increase in project complexity, the onus would be on them to prove that it proportionally improves business results.

But I’ve never actually seen that happen in a software business.

A single page app architecture makes a lot of sense for interaction-heavy sites with lots of state to maintain, like twitter.com. But I’ve seen plenty of sites built as single page apps even though there’s little to no interactivity or state management. For some people, it’s the default way of building anything on the web, even a brochureware site.

It seems like there’s a consensus that single page apps may have long initial loading times, but then they have quick transitions between “pages” …just like a carousel really. But I don’t know if that consensus is based on reality. Whether you’re loading a page of HTML or loading a chunk of JSON, you’re still making a network request that will take time to resolve.

The argument for loading a chunk of JSON is that you don’t have to make any requests for the associated CSS and JavaScript—they’re already loaded. Whereas if you request a page of HTML, that HTML will also request CSS and JavaScript.

Leaving aside the fact that this is literally what the browser cache takes care of, I’ve seen some circular reasoning around this:

  1. We need to create a single page app because our assets, like our JavaScript dependencies, are so large.
  2. Why are the JavaScript dependencies so large?
  3. We need all that JavaScript to create the single page app functionality.

To be fair, in the past, the experience of going from page to page used to feel a little herky-jerky, even if the response times were quick. You’d get a flash of a white blank page between navigations. But that’s no longer the case. Browsers now perform something called “paint holding” which eliminates the herky-jerkiness.

So now if your pages are a reasonable size, there’s no practical difference in user experience between full page refreshes and single page app updates. Navigate around The Session if you want to see paint holding in action. Switching to a single page app architecture wouldn’t improve the user experience one jot.

Except…

If I were controlling everything with JavaScript, then I’d also have control over how to transition between the “pages” (or carousel items, if you prefer). There’s currently no way to do that with full page changes.

This is the problem that Jake set out to address in his proposal for navigation transitions a few years back:

Having to reimplement navigation for a simple transition is a bit much, often leading developers to use large frameworks where they could otherwise be avoided. This proposal provides a low-level way to create transitions while maintaining regular browser navigation.

I love this proposal. It focuses on user needs. It also asks why people reach for JavaScript frameworks instead of using what browsers provide. People reach for JavaScript frameworks because browsers don’t yet provide some functionality: components like tabs or accordions; DOM diffing; control over styling complex form elements; navigation transitions. The problems that JavaScript frameworks are solving today should be seen as the R&D departments for web standards of tomorrow. (And conversely, I strongly believe that the aim of any good JavaScript framework should be to make itself redundant.)

I linked to Jake’s excellent proposal in my shitpost saying:

bucketloads of JavaScript wouldn’t be needed if navigation transitions were available in browsers

But then I added—and I almost didn’t—this:

(not portals)

Now you might be asking yourself what Paul said out loud:

Excuse my ignorance but… WTF are portals!?

I replied with a link to the portals proposal and what I thought was an example use case:

Portals are a proposal from Google that would help their AMP use case (it would allow a web page to be pre-rendered, kind of like an iframe).

That was based on my reading of the proposal:

…show another page as an inset, and then activate it to perform a seamless transition to a new state, where the formerly-inset page becomes the top-level document.

It sounded like Google’s top stories carousel. And the proposal goes into a lot of detail around managing cross-origin requests. Again, that strikes me as something that would be more useful for a search engine than a single page app.

But Jake was not happy with my description. I didn’t intend to besmirch portals by mentioning Google AMP in the same sentence, but I can see how the transitive property of ickiness would apply. Because Google AMP is a nasty monopolistic project that harms the web and is an embarrassment to many open web advocates within Google, drawing any kind of comparison to AMP is kind of like Godwin’s Law for web stuff. I know that makes it sound like I’m comparing Google AMP to Hitler, and just to be clear, I’m not (though I have myself been called a fascist by one of the lead engineers on AMP).

Clearly, emotions run high when Google AMP is involved. I regret summoning its demonic presence.

After chatting with Jake some more, I tried to find a better use case to describe portals. Reading the proposal, portals sound a lot like “spicy iframes”. So here’s a different use case that I ran past Jake: say you’re on a website that has an iframe embedded in it—like a YouTube video, for example. With portals, you’d have the ability to transition the iframe to a fully-fledged page smoothly.

But Jake told me that even though the proposal talks a lot about iframes and cross-origin security, portals are conceptually more like using rel="prerender" …but then having scripting control over how the pre-rendered page becomes the current page.

Put like that, portals sound more like Jake’s original navigation transitions proposal. But I have to say, I never would’ve understood that use case just from reading the portals proposal. I get that the proposal is aimed more at implementators than authors, but in its current form, it doesn’t seem to address the use case of single page apps.

Kenji said:

we haven’t seen interest from SPA folks in portals so far.

I’m not surprised! He goes on:

Maybe, they are happy / benefits aren’t clear yet.

From my own reading of the portals proposal, I think the benefits are definitely not clear. It’s almost like the opposite of Jake’s original proposal for navigation transitions. Whereas that was grounded in user needs and real-world examples, the portals proposal seems to have jumped to the intricacies of implementation without covering the user needs.

Don’t get me wrong: if portals somehow end up leading to a solution more like Jake’s navigation transitions proposals, then I’m all for that. That’s the end result I care about. I’d love it if people had a lightweight option for getting the perceived benefits of single page apps without the costly overhead in performance that comes with JavaScripting all the things.

I guess the web I want includes giant carousels.

Tuesday, October 27th, 2020

ARIA in CSS

Sara tweeted something recently that resonated with me:

Also, Pro Tip: Using ARIA attributes as CSS hooks ensures your component will only look (and/or function) properly if said attributes are used in the HTML, which, in turn, ensures that they will always be added (otherwise, the component will obv. be broken)

Yes! I didn’t mention it when I wrote about accessible interactions but this is my preferred way of hooking up CSS and JavaScript interactions. Here’s an old Codepen where you can see it in action:

[aria-hidden='true'] {
  display: none;
}

In order for the functionality to work for everyone—screen reader users or not—I have to make sure that I’m toggling the value of aria-hidden in my JavaScript.
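
That toggling is straightforward. A minimal sketch, assuming a made-up class name for the element being toggled:

const panel = document.querySelector('.menu-panel');

function togglePanel() {
  // Flip the attribute; the CSS rule above takes care of showing or hiding.
  const hidden = panel.getAttribute('aria-hidden') === 'true';
  panel.setAttribute('aria-hidden', String(!hidden));
}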

There’s another advantage to this technique. Generally, ARIA attributes—like aria-hidden—are added by JavaScript at runtime (rather than being hard-coded in the HTML). If something goes wrong with the JavaScript, the aria-hidden value isn’t set to “true”, which means that the CSS never kicks in. So the default state is for content to be displayed. There’s no assumption that the JavaScript has to work in order for the CSS to make sense.

It’s almost as though accessibility and progressive enhancement are connected somehow…

Wednesday, October 21st, 2020

Accessible interactions

Accessibility on the web is easy. Accessibility on the web is also hard.

I think it’s one of those 80/20 situations. The most common accessibility problems turn out to be very low-hanging fruit. Take, for example, Holly Tuke’s list of the 5 most annoying website features she faces as a blind person every single day:

  • Unlabelled links and buttons
  • No image descriptions
  • Poor use of headings
  • Inaccessible web forms
  • Auto-playing audio and video

None of those problems are hard to fix. That’s what I mean when I say that accessibility on the web is easy. As long as you’re providing a logical page structure with sensible headings, associating form fields with labels, and providing alt text for images, you’re at least 80% of the way there (you’re also doing way better than the majority of websites, sadly).

Ah, but that last 20% or so—that’s where things get tricky. Instead of easy-to-follow rules (“Always provide alt text”, “Always label form fields”, “Use sensible heading levels”), you enter an area of uncertainty and doubt where there are no clear answers. Different combinations of screen readers, browsers, and operating systems might yield very different results.

This is the domain of interaction design. Here be dragons. ARIA can help you …but if you overuse its power, it may cause more harm than good.

When I start to feel overwhelmed by this, I find it’s helpful to take a step back. Instead of trying to imagine all the possible permutations of screen readers and browsers, I start with a more straightforward use case: keyboard users. Screen reader users are (usually) keyboard users too.

The pattern that comes up the most is to do with toggling content. I suppose you could categorise this as progressive disclosure, but I’m talking about quite a wide range of patterns:

  • accordions,
  • menus (including mega menu monstrosities),
  • modal dialogs,
  • tabs.

In each case, there’s some kind of “trigger” that toggles the appearance of a “target”—some chunk of content.

The first question I ask myself is whether the trigger should be a button or a link (at the very least you can narrow it down to that shortlist—you can discount divs, spans, and most other elements immediately; use a trigger that’s focusable and interactive by default).

As is so often the case, the answer is “it depends”, but generally you can’t go wrong with a button. It’s an element designed for general-purpose interactivity. It carries the expectation that when it’s activated, something somewhere happens. That’s certainly true in all the examples I’ve listed above.

That said, I think that links can also make sense in certain situations. It’s related to the second question I ask myself: should the target automatically receive focus?

Again, the answer is “it depends”, but here’s the litmus test I give myself: how far away from each other are the trigger and the target?

If the target content is right after the trigger in the DOM, then a button is almost certainly the right element to use for the trigger. And you probably don’t need to automatically focus the target when the trigger is activated: the content already flows nicely.

<button>Trigger Text</button>
<div id="target">
<p>Target content.</p>
</div>

But if the target is far away from the trigger in the DOM, I often find myself using a good old-fashioned hyperlink with a fragment identifier.

<a href="#target">Trigger Text</a>
…
<div id="target">
<p>Target content.</p>
</div>

Let’s say I’ve got a “log in” link in the main navigation. But it doesn’t go to a separate page. The design shows it popping open a modal window. In this case, the markup for the log-in form might be right at the bottom of the page. This is when I think there’s a reasonable argument for using a link. If, for any reason, the JavaScript fails, the link still works. But if the JavaScript executes, then I can hijack that link and show the form in a modal window. I’ll almost certainly want to automatically focus the form when it appears.

The expectation with links (as opposed to buttons) is that you will be taken somewhere. Let’s face it, modal dialogs are like fake web pages so following through on that expectation makes sense in this context.

So I can answer my first two questions:

  • “Should the trigger be a link or button?” and
  • “Should the target be automatically focused?”

…by answering a different question:

  • “How far away from each other are the trigger and the target?”

It’s not a hard and fast rule, but it helps me out when I’m unsure.

At this point I can write some JavaScript to make sure that both keyboard and mouse users can interact with the interactive component. There’ll certainly be an addEventListener(), some tabindex action, and maybe a focus() method.

Now I can start to think about making sure screen reader users aren’t getting left out. At the very least, I can toggle an aria-expanded attribute on the trigger that corresponds to whether the target is being shown or not. I can also toggle an aria-hidden attribute on the target.

When the target isn’t being shown:

  • the trigger has aria-expanded="false",
  • the target has aria-hidden="true".

When the target is shown:

  • the trigger has aria-expanded="true",
  • the target has aria-hidden="false".

There’s also an aria-controls attribute that allows me to explicitly associate the trigger and the target:

<button aria-controls="target">Trigger Text</button>
<div id="target">
<p>Target content.</p>
</div>

But don’t assume that’s going to help you. As Heydon put it, aria-controls is poop. Still, Léonie points out that you can go ahead and use it. Personally, I find it a useful “hook” to use in my JavaScript so I know which target is controlled by which trigger.
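
Putting those pieces together, the wiring ends up looking something like this. It’s a rough sketch of the pattern rather than my exact code:

document.querySelectorAll('button[aria-controls]').forEach(function (trigger) {
  const target = document.getElementById(trigger.getAttribute('aria-controls'));
  trigger.addEventListener('click', function () {
    const expanded = trigger.getAttribute('aria-expanded') === 'true';
    trigger.setAttribute('aria-expanded', String(!expanded));
    target.setAttribute('aria-hidden', String(expanded));
    // If the trigger were a link to a far-away target, this is where
    // I’d also move focus: give the target tabindex="-1" and call
    // target.focus().
  });
});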

Here’s some example code I wrote a while back. And here are some old Codepens I made that use this pattern: one with a button and one with a link. See the difference? In the example with a link, the target automatically receives focus. But in this situation, I’d choose the example with a button because the trigger and target are close to each other in the DOM.

At this point, I’ve probably reached the limits of what can be abstracted into a single trigger/target pattern. Depending on the specific component, there might be much more work to do. If it’s a modal dialog, for example, you’ve got to figure out where to put the focus, how to trap the focus, and figure out where the focus should return to when the modal dialog is closed.

I’ve mostly been talking about websites that have some interactive components. If you’re building a single page app, then pretty much every single interaction needs to be made accessible. Good luck with that. (Pro tip: consider not building a single page app—let the browser do what it has been designed to do.)

Anyway, I hope this little stroll through my thought process is useful. If nothing else, it shows how I attempt to cope with an accessibility landscape that looks daunting and ever-changing. Remember though, the fact that you’re even considering this stuff means you care more than most web developers. And you are not alone. There are smart people out there sharing what they learn. The A11y Project is a great hub for finding resources.

And when it comes to interactive patterns like the trigger/target examples I’ve been talking about, there’s one more question I ask myself: what would Heydon do?