Journal tags: access


The principle of most availability

I’ve been thinking some more about the technical experience of booking a vaccination appointment and how much joy it brought me.

I’ve written before about how I’ve got a blind spot for the web so it’s no surprise that I was praising the use of a well marked-up form, styled clearly, and unencumbered by unnecessary JavaScript. But other technologies were in play too: Short Message Service (SMS) and email.

All of those technologies are platform-agnostic.

No matter what operating system I’m using, or what email software I’ve chosen, email works. It gets more complicated when you introduce HTML email. My response to that is the same as the old joke; you know the one: “Doctor, it hurts when I do this.” (“Well, don’t do that.”)

No matter what operating system my phone is using, SMS works. It gets more complicated when you introduce read receipts, memoji, or other additions. See my response to HTML email.

Then there’s the web. No matter what operating system I’m using on a device that could be a phone or a tablet or a laptop or desktop tower, and no matter what browser I’ve chosen to use, the World Wide Web works.

I originally said:

It feels like the principle of least power in action.

But another way of rephrasing “least power” is “most availability.” Technologies that are old, simple, and boring tend to be more widely available.

I remember when software used to come packaged in boxes and displayed on shelves. The packaging always had a list on the side. It looked like the nutritional information on a food product, but this was a list of “system requirements”: operating system, graphics card, sound card, CPU. I never liked the idea of system requirements. It felt so …exclusionary. And for me, the promise of technology was liberation and freedom to act on my own terms.

Hence my soft spot for the boring and basic technologies like email, SMS, and yes, web pages. The difference with web pages is that you can choose to layer added extras on top. As long as the fundamental functionality is using universally-supported technology, you’re free to enhance with all the latest CSS and JavaScript. If any of it fails, that’s okay: it falls back to a nice solid base.

Alas, many developers don’t build with this mindset. I mean, I understand why: it means thinking about users with the most boring, least powerful technology. It’s simpler and more exciting to assume that everyone’s got a shared baseline of newer technology. But by doing that, you’re missing out on one of the web’s superpowers: that something served up at the same URL with the same underlying code can simultaneously serve people with older technology and also provide a whizz-bang experience to people with the latest and greatest technology.

Anyway, I’ve been thinking about the kind of communication technologies that are as universal as email, SMS, and the web.

QR codes are kind of heading in that direction, although I still have qualms because of their proprietary history. But there’s something nice and lo-fi about them. They’re like print stylesheets in reverse (and I love print stylesheets). A funky little bridge between the physical and the digital. I just wish they weren’t so opaque: you never know if scanning that QR code will actually take you to the promised resource, or if you’re about to rickroll yourself.

Telephone numbers kind of fall into the same category as SMS, but with the added option of voice. I’ve always found the prospect of doing something with, say, Twilio’s API more interesting than building something inside a walled garden like Facebook Messenger or Alexa.

I know very little about chat apps or voice apps, but I don’t think there’s a cross-platform format that works with different products, right? I imagine it’s like the situation with native apps which require a different codebase for each app store and operating system. And so there’s a constant stream of technologies that try to fulfil the dream of writing once and running everywhere: React Native, Flutter.

They’re trying to solve a very clear and obvious problem: writing the same app more than once is really wasteful. But that’s the nature of the game when it comes to runtime-specific apps. The only alternative is to either deliberately limit your audience …or apply the principle of least power/most availability.

The wastefulness of having to write the same app for multiple platforms isn’t the only thing that puts me off making native apps. The exclusivity works in two directions. There’s the exclusive nature of the runtime that requires a bespoke codebase. There’s also the exclusive nature of the app store. It feels like a return to shelves of packaged software with strict system requirements. You can’t just walk in and put your software on the shelf. That’s the shopkeeper’s job.

There is no shopkeeper for the World Wide Web.

Accessibility on the Clearleft podcast

We’re halfway through the second season of the Clearleft podcast already!

The latest episode is on a topic close to my heart: accessibility. But I get out of the way early on and let much smarter folks do the talking. In this case, it’s a power trio of Laura, Cassie, and Léonie. It even features a screen-reader demo by Léonie.

I edited the episode pretty tightly so it comes in at just under 15 minutes. I’m sure you can find 15 minutes of your busy day to set aside for a listen.

If you like what you hear, please spread the word about the Clearleft podcast and pop that RSS feed into your podcast player of choice.

Lists

We often have brown bag lunchtime presentations at Clearleft. In the Before Times, this would involve a trip to Pret or Itsu to get a lunch order in, which we would then proceed to eat in front of whoever was giving the presentation. Often it’s someone from Clearleft demoing something or playing back a project, but whenever possible we’d rope in other people to swing by and share what they’re up to.

We’ve continued this tradition since making the switch to working remotely. Now the brown bag presentations happen over Zoom. This has two advantages. Firstly, if you don’t want the presenter watching you eat your lunch, you can switch your camera off. Secondly, because the presenter doesn’t have to be in Brighton, there’s no geographical limit on who could present.

Our most recent brown bag was truly excellent. I asked Léonie if she’d be up for it, and she very kindly agreed. As well as giving us a whirlwind tour of how assistive technology works on the web, she then invited us to observe her interacting with websites using a screen reader.

I’ve seen Léonie do this before and it’s always struck me as a very open and vulnerable thing to do. Think about it: the audience has more information than the presenter. We can see the website at the same time as we’re listening to Léonie and her screen reader.

We got to nominate which websites to visit. One of them—a client’s current site that we haven’t yet redesigned—was a textbook example of how important form controls are. There was a form where almost everything was hunky-dory: form fields, labels, it was all fine. But one of the inputs was a combo box. Instead of using a native select with a datalist, this was made with JavaScript. Because it was lacking the requisite ARIA additions to make it accessible, it was pretty much unusable to Léonie.

And that’s why you use the right HTML element wherever possible, kids!

The other site Léonie visited was Clearleft’s own. That was all fine. Léonie demonstrated how she’d form a mental model of a page by getting the screen reader to read out the headings. Interestingly, the nesting of headings on the Clearleft site is technically wrong—there’s a jump from an h1 to an h3—probably a result of the component-driven architecture where you don’t quite know where in the page a heading will appear. But this didn’t seem to be an issue. The fact that headings are being used at all was the more important fact. As Léonie said, there’s a lot of incorrect HTML out there so it’s no wonder that screen readers aren’t necessarily sticklers for nesting.

I’ve said it before and I’ll say it again: if you’re using headings, labelling form fields, and providing alternative text for images, you’re already doing a better job than most websites.

Headings weren’t the only way that Léonie got a feel for the page architecture. Landmark roles—like header and nav—really helped too. Inside the nav element, she also heard how many items there were. That’s because the navigation was marked up as a list: “List: six items.”

And that reminded me of the Webkit issue. On Webkit browsers like Safari, the list on the Clearleft site would not be announced as a list. That’s because the list’s bullets have been removed using CSS.
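For illustration, the styling involved is nothing exotic. Something like this is enough to trigger that behaviour (a sketch; the class name is just for illustration):

/* Removing the bullets like this is what prompts Webkit
   to stop exposing the element as a list */
.link-list {
  list-style: none;
  margin: 0;
  padding: 0;
}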

Now this isn’t the only time that screen readers pay attention to styling. If you use display: none to hide an element from sight, it will also be unavailable to screen reader users. Makes sense. But removing the semantic meaning of lists based on CSS? That seems a bit much.

There are good reasons for it though. Here’s a thread from James Craig on where this decision came from (James, by the way, is an absolute unsung hero of accessibility). It turns out that developers went overboard with lists a while back and that’s why we can’t have nice things. In over-compensating for divitis, developers ended up creating listitis, marking up anything vaguely list-like as an unordered list with styling adjusted. That was very annoying for screen reader users trying to figure out what was actually a list.

And James also asks:

If a sighted user doesn’t need to know it’s a list, why would a screen reader user need to know or want to know? Stated another way, if the visible list markers (bullets, image markers, etc.) are deemed by the designers to be visually burdensome or redundant for sighted users, why burden screen reader users with those semantics?

That’s a fair point, but the thing is …bullets maketh not the list. There are many ways of styling something that is genuinely a list that doesn’t involve bullets or image markers. White space, borders, keylines—these can all indicate visually that something is a list of items.

If you look at, say, the tunes page on The Session, you can see that there are numerous lists—newest tunes, latest comments, etc. In this case, as a sighted visitor, you would be at an advantage over a screen reader user in that you can, at a glance, see that there’s a list of five items here, a list of ten items there.

So I’m not disagreeing with the thinking behind the Webkit decision, but I do think the heuristics probably aren’t going to be quite good enough to make the call on whether something is truly a list or not.

Still, while I used to be kind of upset about the Webkit behaviour, I’ve become more equanimous about it over time. There are two reasons for this.

Firstly, there’s something that Eric said:

We have come so far to agree that websites don’t need to look the same in every browser mostly due to bugs in their rendering engines or preferences of the user.

I think the same is true for screen readers and other assistive technology: Websites don’t need to sound the same in every screen reader.

That’s a really good point. If we agree that “pixel perfection” isn’t attainable—or desirable—in a fluid, user-centred medium like the web, why demand the aural equivalent?

The second reason why I’m not storming the barricades about this is something that James said:

Of course, heuristics are imperfect, so authors have the ability to explicitly override the heuristically determined role by adding role="list".

That means more work for me as a developer, and that’s …absolutely fine. If I can take something that might be a problem for a user, and turn it into something that’s a problem for me, I’ll choose to make it my problem every time.

I don’t have to petition Webkit to change their stance or update their heuristics. If I feel strongly that a list styled without bullets should still be announced as a list, I can specify that in the markup.

It does feel very redundant to write ul role="list". The whole point of having HTML elements with built-in semantics is that you don’t need to add any ARIA roles. But we did it for a while when new structural elements were introduced in HTML5—main role="main", nav role="navigation", etc. So I’m okay with a little bit of redundancy. I think the important thing is that you really stop and think about whether something should be announced as a list or not, regardless of styling. There isn’t a one-size-fits-all answer (hence why it’s nigh-on impossible to get the heuristics right). Each list needs to be marked up on a case-by-case basis.
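In practice, that means markup along these lines (a minimal sketch with placeholder links; the class name is only there as a styling hook):

<!-- role="list" looks redundant, but it restores list semantics
     in browsers that drop them when the bullets are styled away -->
<ul role="list" class="link-list">
  <li><a href="/one">Item one</a></li>
  <li><a href="/two">Item two</a></li>
  <li><a href="/three">Item three</a></li>
</ul>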

And I wouldn’t advise spending too much time thinking about this either. There are other, more important areas to consider. Like I said, headings, forms, and images really matter. I’d prioritise those elements above thinking about lists. And it’s worth pointing out that Webkit doesn’t remove all semantic meaning from styled lists—it updates the role value from list to group. That seems sensible to me.

In the case of that page on The Session, I don’t think I’m guilty of listitis. Yes, there are seven lists on that page (two for navigation, five for content) but I’m reasonably confident that they all look like lists even without bullets or markers. So I’ve added role="list" to some ul elements.

As with so many things related to accessibility—and the web in general—this is a situation where the only answer I can confidently come up with is …it depends.

aria-live

I wrote a little something recently about using ARIA attributes as selectors in CSS. For me, one of the advantages is that because ARIA attributes are generally added via JavaScript, the corresponding CSS rules won’t kick in if something goes wrong with the JavaScript:

Generally, ARIA attributes—like aria-hidden—are added by JavaScript at runtime (rather than being hard-coded in the HTML).

But there’s one instance where I actually put the ARIA attribute directly in the HTML that gets sent from the server: aria-live.

If you’re not familiar with it, aria-live is extremely useful if you’ve got any dynamic updates on your page—via Ajax, for example. Let’s say you’ve got a bit of your site where filtered results will show up. Slap an aria-live attribute on there with a value of “polite”:

<div aria-live="polite">
...dynamic content gets inserted here
</div>
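Because the region is already in the server-rendered markup, the JavaScript only has to worry about injecting the new content. Here’s a rough sketch of what that might look like (the selector and URL are made up for illustration):

const results = document.querySelector('[aria-live="polite"]');

fetch('/filter?type=reels') // illustrative URL
  .then(response => response.text())
  .then(html => {
    // The update gets announced politely, i.e. once the screen reader
    // has finished whatever it's currently reading
    results.innerHTML = html;
  });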

You could instead provide a value of “assertive”, but you almost certainly don’t want to do that—it can be quite rude.

Anyway, on the face of it, this looks like exactly the kind of ARIA attribute that should be added with JavaScript. After all, if there’s no JavaScript, there’ll be no dynamic updates.

But I picked up a handy lesson from Ire’s excellent post on using aria-live:

Assistive technology will initially scan the document for instances of the aria-live attribute and keep track of elements that include it. This means that, if we want to notify users of a change within an element, we need to include the attribute in the original markup.

Good to know!

ARIA in CSS

Sara tweeted something recently that resonated with me:

Also, Pro Tip: Using ARIA attributes as CSS hooks ensures your component will only look (and/or function) properly if said attributes are used in the HTML, which, in turn, ensures that they will always be added (otherwise, the component will obv. be broken)

Yes! I didn’t mention it when I wrote about accessible interactions but this is my preferred way of hooking up CSS and JavaScript interactions. Here’s an old Codepen where you can see it in action:

[aria-hidden='true'] {
  display: none;
}

In order for the functionality to work for everyone—screen reader users or not—I have to make sure that I’m toggling the value of aria-hidden in my JavaScript.
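Here’s roughly what that toggling looks like (a sketch; the selectors are just for illustration):

const trigger = document.querySelector('button.toggle'); // illustrative selector
const target = document.querySelector('#target');

trigger.addEventListener('click', () => {
  // Flip the ARIA attribute; the CSS rule above takes care of showing and hiding
  const hidden = target.getAttribute('aria-hidden') === 'true';
  target.setAttribute('aria-hidden', String(!hidden));
});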

There’s another advantage to this technique. Generally, ARIA attributes—like aria-hidden—are added by JavaScript at runtime (rather than being hard-coded in the HTML). If something goes wrong with the JavaScript, the aria-hidden value isn’t set to “true”, which means that the CSS never kicks in. So the default state is for content to be displayed. There’s no assumption that the JavaScript has to work in order for the CSS to make sense.

It’s almost as though accessibility and progressive enhancement are connected somehow…

Accessible interactions

Accessibility on the web is easy. Accessibility on the web is also hard.

I think it’s one of those 80/20 situations. The most common accessibility problems turn out to be very low-hanging fruit. Take, for example, Holly Tuke’s list of the 5 most annoying website features she faces as a blind person every single day:

  • Unlabelled links and buttons
  • No image descriptions
  • Poor use of headings
  • Inaccessible web forms
  • Auto-playing audio and video

None of those problems are hard to fix. That’s what I mean when I say that accessibility on the web is easy. As long as you’re providing a logical page structure with sensible headings, associating form fields with labels, and providing alt text for images, you’re at least 80% of the way there (you’re also doing way better than the majority of websites, sadly).

Ah, but that last 20% or so—that’s where things get tricky. Instead of easy-to-follow rules (“Always provide alt text”, “Always label form fields”, “Use sensible heading levels”), you enter an area of uncertainty and doubt where there are no clear answers. Different combinations of screen readers, browsers, and operating systems might yield very different results.

This is the domain of interaction design. Here be dragons. ARIA can help you …but if you overuse its power, it may cause more harm than good.

When I start to feel overwhelmed by this, I find it’s helpful to take a step back. Instead of trying to imagine all the possible permutations of screen readers and browsers, I start with a more straightforward use case: keyboard users. Screen reader users are (usually) a subset of keyboard users.

The pattern that comes up the most is to do with toggling content. I suppose you could categorise this as progressive disclosure, but I’m talking about quite a wide range of patterns:

  • accordions,
  • menus (including mega menu monstrosities),
  • modal dialogs,
  • tabs.

In each case, there’s some kind of “trigger” that toggles the appearance of a “target”—some chunk of content.

The first question I ask myself is whether the trigger should be a button or a link (at the very least you can narrow it down to that shortlist—you can discount divs, spans, and most other elements immediately; use a trigger that’s focusable and interactive by default).

As is so often the case, the answer is “it depends”, but generally you can’t go wrong with a button. It’s an element designed for general-purpose interactivity. It carries the expectation that when it’s activated, something somewhere happens. That’s certainly true in all the examples I’ve listed above.

That said, I think that links can also make sense in certain situations. It’s related to the second question I ask myself: should the target automatically receive focus?

Again, the answer is “it depends”, but here’s the litmus test I give myself: how far away from each other are the trigger and the target?

If the target content is right after the trigger in the DOM, then a button is almost certainly the right element to use for the trigger. And you probably don’t need to automatically focus the target when the trigger is activated: the content already flows nicely.

<button>Trigger Text</button>
<div id="target">
<p>Target content.</p>
</div>

But if the target is far away from the trigger in the DOM, I often find myself using a good old-fashioned hyperlink with a fragment identifier.

<a href="#target">Trigger Text</a>
…
<div id="target">
<p>Target content.</p>
</div>

Let’s say I’ve got a “log in” link in the main navigation. But it doesn’t go to a separate page. The design shows it popping open a modal window. In this case, the markup for the log-in form might be right at the bottom of the page. This is when I think there’s a reasonable argument for using a link. If, for any reason, the JavaScript fails, the link still works. But if the JavaScript executes, then I can hijack that link and show the form in a modal window. I’ll almost certainly want to automatically focus the form when it appears.

The expectation with links (as opposed to buttons) is that you will be taken somewhere. Let’s face it, modal dialogs are like fake web pages so following through on that expectation makes sense in this context.

So I can answer my first two questions:

  • “Should the trigger be a link or button?” and
  • “Should the target be automatically focused?”

…by answering a different question:

  • “How far away from each other are the trigger and the target?”

It’s not a hard and fast rule, but it helps me out when I’m unsure.

At this point I can write some JavaScript to make sure that both keyboard and mouse users can interact with the interactive component. There’ll certainly be an addEventListener(), some tabindex action, and maybe a focus() method.

Now I can start to think about making sure screen reader users aren’t getting left out. At the very least, I can toggle an aria-expanded attribute on the trigger that corresponds to whether the target is being shown or not. I can also toggle an aria-hidden attribute on the target.

When the target isn’t being shown:

  • the trigger has aria-expanded="false",
  • the target has aria-hidden="true".

When the target is shown:

  • the trigger has aria-expanded="true",
  • the target has aria-hidden="false".

There’s also an aria-controls attribute that allows me to explicitly associate the trigger and the target:

<button aria-controls="target">Trigger Text</button>
<div id="target">
<p>Target content.</p>
</div>

But don’t assume that’s going to help you. As Heydon put it, aria-controls is poop. Still, Léonie points out that you can still go ahead and use it. Personally, I find it a useful “hook” to use in my JavaScript so I know which target is controlled by which trigger.
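Putting those pieces together, the script might end up looking something like this (a sketch with illustrative selectors; the details will vary from component to component):

const trigger = document.querySelector('[aria-controls]');
// Use aria-controls as the hook to find the corresponding target
const target = document.getElementById(trigger.getAttribute('aria-controls'));

// Initial state: the target is hidden
trigger.setAttribute('aria-expanded', 'false');
target.setAttribute('aria-hidden', 'true');

trigger.addEventListener('click', (event) => {
  event.preventDefault(); // in case the trigger is a link
  const expanded = trigger.getAttribute('aria-expanded') === 'true';
  trigger.setAttribute('aria-expanded', String(!expanded));
  target.setAttribute('aria-hidden', String(expanded));
  if (!expanded) {
    // Only do this if the target should receive focus; it needs
    // tabindex="-1" if it isn't focusable by default
    target.focus();
  }
});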

Here’s some example code I wrote a while back. And here are some old Codepens I made that use this pattern: one with a button and one with a link. See the difference? In the example with a link, the target automatically receives focus. But in this situation, I’d choose the example with a button because the trigger and target are close to each other in the DOM.

At this point, I’ve probably reached the limits of what can be abstracted into a single trigger/target pattern. Depending on the specific component, there might be much more work to do. If it’s a modal dialog, for example, you’ve got to figure out where to put the focus, how to trap the focus, and figure out where the focus should return to when the modal dialog is closed.

I’ve mostly been talking about websites that have some interactive components. If you’re building a single page app, then pretty much every single interaction needs to be made accessible. Good luck with that. (Pro tip: consider not building a single page app—let the browser do what it has been designed to do.)

Anyway, I hope this little stroll through my thought process is useful. If nothing else, it shows how I attempt to cope with an accessibility landscape that looks daunting and ever-changing. Remember though, the fact that you’re even considering this stuff means you care more than most web developers. And you are not alone. There are smart people out there sharing what they learn. The A11y Project is a great hub for finding resources.

And when it comes to interactive patterns like the trigger/target examples I’ve been talking about, there’s one more question I ask myself: what would Heydon do?

Accessibility

There’s a new project from Igalia called Open Prioritization:

An experiment in crowd-funding prioritization of new feature implementations for web browsers.

There is some precedent for this. There was a crowd-funding campaign for Yoav Weiss to implement responsive images in Blink a while back. The difference with the Open Prioritization initiative is that it’s also a kind of marketplace for which web standards will get the funding.

Examples include implementing the CSS lab() colour function in Firefox or implementing the :not() pseudo-class in Chrome. There are also some accessibility features like the :focus-visible pseudo-class and the inert HTML attribute.

I must admit, it makes me queasy to see accessibility features go head to head with other web standards. I don’t think a marketplace is the right arena for prioritising accessibility.

I get a similar feeling of discomfort when a presentation or article on accessibility spends a fair bit of time describing the money that can be made by ensuring your website is accessible. I mean, I get it: you’re literally leaving money on the table if you turn people away. But that’s not the reason to ensure your website is accessible. The reason to ensure that your website is accessible is that it’s the right thing to do.

I know that people are uncomfortable with moral arguments, but in this case, I believe it’s important that we keep sight of that.

I understand how it’s useful to have the stats and numbers to hand should you need to convince a sociopath in your organisation, but when numbers are used as the justification, you’re playing the numbers game from then on. You’ll probably have to field questions like “Well, how many screen reader users are visiting our site anyway?” (To which the correct answer is “I don’t know and I don’t care”—even if the number is 1, the website should still be accessible because it’s the right thing to do.)

It reminds me of when I was having a discussion with a god-bothering friend of mine about the existence or not of a deity. They made the mistake of trying to argue the case for God based on logic and reason. Those arguments didn’t hold up. But had they made their case based on the real reason for their belief—which is faith—then their position would have been unassailable. I literally couldn’t argue against faith. But instead, by engaging in the rules of logic and reason, they were applying the wrong justification to their stance.

Okay, that’s a bit abstract. How about this…

In a similar vein to talks or articles about accessibility, talks or articles about diversity often begin by pointing out the monetary gain to be had. It’s true. The data shows that companies that are more diverse are also more profitable. But again, that’s not the reason for having a diverse group of people in your company. The reason for having a diverse group of people in your company is that it’s the right thing to do. If you tie the justification for diversity to data, then what happens should the data change? If a new study showed that diverse companies were less profitable, is that a reason to abandon diversity? Absolutely not! If your justification isn’t tied to numbers, then it hardly matters what the numbers say (though it does admittedly feel good to have your stance backed up).

By the way, this is also why I don’t think it’s a good idea to “sell” design systems on the basis of efficiency and cost-savings if the real reason you’re building one is to foster better collaboration and creativity. The fundamental purpose of a design system needs to be shared, not swapped out based on who’s doing the talking.

Anyway, back to accessibility…

A marketplace, to me, feels like exactly the wrong kind of place for accessibility to defend its existence. By its nature, accessibility isn’t a mainstream issue. I mean, think about it: it’s good that accessibility issues affect a minority of people. The fewer, the better. But even if the number of people affected by accessibility were to trend downwards and dwindle, the importance of accessibility should remain unchanged. Accessibility is important regardless of the numbers.

Look, if I make a website for a client, I don’t offer accessibility as a line item with a price tag attached. I build in accessibility by default because it’s the right thing to do. The only way to ensure that accessibility doesn’t get negotiated away is to make sure it’s not up for negotiation.

So that’s why I feel uncomfortable seeing accessibility features in a popularity contest.

I think that markets are great. I think competition is great. But I don’t think it works for everything (like, could you imagine applying marketplace economics to healthcare or prisons? Nightmare!). I concur with Iain M. Banks:

The market is a good example of evolution in action; the try-everything-and-see-what-works approach. This might provide a perfectly morally satisfactory resource-management system so long as there was absolutely no question of any sentient creature ever being treated purely as one of those resources.

If Igalia or Mozilla or Google or Apple implement an accessibility feature because they believe that accessibility is important and deserves prioritisation, that’s good. If they implement the same feature just because it received a lot of votes …that doesn’t strike me as a good thing.

I guess it doesn’t matter what the reason is as long as the end result is the same, right? But I suspect that what we’ll see is that the accessibility features up for bidding on Open Prioritization won’t be the winners.

Insecure

Universal access is at the heart of the World Wide Web. It’s also something I value when I’m building anything on the web. Whatever I’m building, I want you to be able to visit using whatever browser or device that you choose.

Just to be clear, that doesn’t mean that you’re going to have the same experience in an old browser as you are in the latest version of Firefox or Chrome. Far from it. Not only is that not feasible, I don’t believe it’s desirable either. But if you’re using an old browser, while you might not get to enjoy the newest CSS or JavaScript, you should still be able to access a website.

Applying the principle of progressive enhancement makes this eminently doable. As long as I build in a layered way, everyone gets access to the barebones HTML, even if they can’t experience newer features. Crucially, as long as I’m doing some feature detection, those newer features don’t harm older browsers.

But there’s one area where maintaining backward compatibility might well have an adverse effect on modern browsers: security.

I don’t just mean whether or not you’re serving sites over HTTPS. Even if you’re using TLS—Transport Layer Security—not all security is created equal.

Take a look at Mozilla’s very handy SSL Configuration Generator. You get to choose from three options:

  1. Modern. Services with clients that support TLS 1.3 and don’t need backward compatibility.
  2. Intermediate. General-purpose servers with a variety of clients, recommended for almost all systems.
  3. Old. Compatible with a number of very old clients, and should be used only as a last resort.

Because I value universal access, I should really go for the “old” setting. That ensures my site is accessible all the way back to Android 2.3 and Safari 1. But if I do that, I will be supporting TLS 1.0. That’s not good. My site is potentially vulnerable.

Alright then, I’ll go for “intermediate”—that’s the recommended level anyway. Now I’m no longer providing TLS 1.0 support. But that means some older browsers can no longer access my site.

This is exactly the situation I found myself in with The Session. I had a score of A+ from SSL Labs. I was feeling downright smug. Then I got emails from actual users. One had picked up an old Samsung tablet second hand. Another was using an older version of Safari. Neither could access the site.

Sure enough, if you cut off TLS 1.0, you cut off Safari below version six.

Alright, then. Can’t they just upgrade? Well …no. Apple has tied Safari to OS X. If you can’t upgrade your operating system, you can’t upgrade your browser. So if you’re using OS X Mountain Lion, you’re stuck with an insecure version of Safari.

Fortunately, you can use a different browser. It’s possible to install, say, Firefox 37 which supports TLS 1.2.

On desktop, that is. If you’re using an older iPhone or iPad and you can’t upgrade to a recent version of iOS, you’re screwed.

This isn’t an edge case. This is exactly the kind of usage that iPads excel at: you got the device a few years back just to do some web browsing and not much else. It still seems to work fine, and you have no incentive to buy a brand new iPad. And nor should you have to.

In that situation, you’re stuck using an insecure browser.

As a site owner, I can either make security my top priority, which means you’ll no longer be able to access my site. Or I can provide you access, which makes my site less secure for everyone. (That’s what I’ve done on The Session and now my score is capped at B.)

What I can’t do is tell you to install a different browser, because you literally can’t. Sure, technically you can install something called Firefox from the App Store, or you can install something called Chrome. But neither have anything to do with their desktop counterparts. They’re differently skinned versions of Safari.

Apple refuses to allow browsers with any other rendering engine to be installed. Their reasoning?

Security.

Accessibility on The Session revisited

Earlier this year, I wrote about an accessibility issue I was having on The Session. Specifically, it was an issue with Ajax and pagination. But I managed to sort it out, and the lesson was very clear:

As is so often the case, the issue was with me trying to be too clever with ARIA, and the solution was to ease up on adding so many ARIA attributes.

Well, fast forward to the past few weeks, when I was contacted by one of the screen-reader users on The Session. There was, once again, a problem with the Ajax pagination, specifically with VoiceOver on iOS. The first page of results was read out just fine, but subsequent pages were not only never announced, the content was completely unavailable. The first page of results would’ve been included in the initial HTML, but the subsequent pages of results are injected with JavaScript (if JavaScript is available—otherwise it’s regular full-page refreshes all the way).

This pagination pattern shows up all over the site: lists of what’s new, search results, and more. I turned on VoiceOver and I was able to reproduce the problem straight away.

I started pulling apart my JavaScript looking for the problem. Was it something to do with how I was handling focus? I just couldn’t figure it out. And other parts of the site that used Ajax didn’t seem to be having the same problem. I was mystified.

Finally, I tracked down the problem, and it wasn’t in the JavaScript at all.

Wherever the pagination pattern appears, there are “previous” and “next” links, marked up with the appropriate rel="prev" and rel="next" attributes. Well, apparently past me thought it would be clever to add some ARIA attributes in there too. My thinking must’ve been something like this:

  • Those links control the area of the page with the search results.
  • That area of the page has an ID of “results”.
  • I should add aria-controls="results" to those links.

That was the problem …which is kind of weird, because VoiceOver isn’t supposed to have any support for aria-controls. Anyway, once I removed that attribute from the links, everything worked just fine.
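For the record, the change amounted to something like this (a reconstruction with placeholder link text and URLs, not the exact markup):

<!-- Before: the aria-controls attribute that tripped up VoiceOver -->
<a href="/tunes/search?page=2" rel="next" aria-controls="results">Next</a>

<!-- After: just the link with its rel attribute -->
<a href="/tunes/search?page=2" rel="next">Next</a>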

Just as the solution last time was to remove the aria-atomic attribute on the updated area, the solution this time was to remove the aria-controls attribute on the links that trigger the update. Maybe this time I’ll learn my lesson: don’t mess with ARIA attributes you don’t understand.

The Weight of the WWWorld is Up to Us by Patty Toland

It’s Patty Toland’s first time at An Event Apart! She’s from the fantabulous Filament Group. They’re dedicated to making the web work for everyone.

A few years ago, a good friend of Patty’s had a medical diagnosis that required everyone to pull together. Another friend shared an article about how not to say the wrong thing. This is ring theory. In a moment of crisis, the person involved is in the centre. You need to understand where you are in this ring structure, and only ever help and comfort inwards and dump concerns and problems outwards.

At the same time, Patty spent time with her family at the beach. Everyone read the same books together. There was a book about a platoon leader in Vietnam. 80% of the story was literally a litany of stuff—what everyone was carrying. This was peppered with the psychic and emotional loads that they were carrying.

A month later there was a lot of coverage of Syrian refugees arriving in Europe. People were outraged to see refugees carrying smartphones as though that somehow showed they weren’t in a desperate situation. But smartphones are absolutely a necessity in that situation, and most of the phones were less expensive, lower-end devices. Refugeeinfo.eu was a useful site for people in crisis, but the navigation was designed to require JavaScript.

When people think about mobile, they think about freedom and mobility. But with that JavaScript decision, the developers piled baggage on to the users.

There was a common assertion that slow networks were a third-world challenge. Remember Facebook’s network challenges? They always talked about new markets in India and Africa. The implication is that this isn’t our problem in, say, Omaha or New York.

Pew Research provided a lot of data back then that showed that this thinking was wrong. Use of cell phones, especially smartphones and tablets, escalated dramatically in the United States. There was a trend towards mobile-only usage. This was in low-income households—about one third of the population. Among 5,400 panelists, 15% did not have a JavaScript-enabled device.

Pew Research provided updated data this year. The research shows an increase in those trends. Half of the population access the web primarily on mobile. The cost of a broadband subscription is too expensive for many people. Sometimes broadband access simply isn’t available.

There’s a term called “the homework gap.” Two thirds of teachers assign broadband-dependent homework, while one third of students have no access to broadband.

At most 37% of people have unlimited data. Most people run out of data on a frequent basis.

Speed also varies wildly. 4G doesn’t really mean anything. The data is all over the place.

This shows that network issues are definitely not just a third world challenge.

On the 25th anniversary of the web, Tim Berners-Lee said the web’s potential was only just beginning to be glimpsed. Everyone has a role to play to ensure that the web serves all of humanity. In his contract for the web, Tim outlined what governments, companies, and users need to do. This reminded Patty of ring theory. The user is at the centre. Designers and developers are in the next circle out. Then there’s the circle of companies. Then there are platforms, browsers, and frameworks. Finally there’s the outer circle of governments.

Are we helping in or dumping in? If you look at the data for the average web page size (2 megabytes), we are definitely dumping in. The size of third-party JavaScript has octupled.

There’s no way for a user to know before clicking a link how big and bloated the page is going to be. Even if they abandon the page load, they’ve still used (and wasted) a lot of data.

Third party scripts—like ads—are really bad at dumping in (to use the ring theory model). The best practices for ads suggest that up to 100 additional HTTP requests is totally acceptable. Unbelievable! It doesn’t matter how performant you’ve made a site when this crap gets piled on top of it.

In 2018, the internet’s data centres alone may already have had the same carbon footprint as all global air travel. This will probably triple in the next seven years. The amount of carbon it takes to train a single AI algorithm is more than the entire life cycle of a car. Then there’s fucking Bitcoin. A single Bitcoin transaction could power 21 US households. It is designed to use—specifically, waste—more and more energy over time.

What should we be doing?

Accessibility should be at the heart of what we build. Plan, test, educate, and advocate. If advocacy doesn’t work, fear can be a motivator. There’s an increase in accessibility lawsuits.

Our websites should be as light as possible. Ask, measure, monitor, and optimise. RequestMap is a great tool for visualising requests. You can see the size and scale of third-party requests. You can also see when images are far, far bigger than they need to be.

Take a critical eye to everything and pare it down. Set performance budgets—file size budgets, for example. Optimise images, subset custom fonts, lazyload images and videos, get third-party tools out of the critical path (or out completely), and seek out lighter frameworks.

Test on real devices that real people are using. See Alex Russell’s data on the differences between the kind of devices we use and typical low-end devices. We literally need to stop drowning people in JavaScript.

Push the boundaries. See the amazing work that Adrian Holovaty did with Soundslice. He had to make on-the-fly sheet music generation work on old iPads that musicians like to use. He recommends keeping old devices around to see how poorly your product works on them.

If you have some power, then your job is to empower somebody else.

—Toni Morrison

Drag’n’drop revisited

I got a message from a screen-reader user of The Session recently, letting me know of a problem they were having. I love getting any kind of feedback around accessibility, so this was like gold dust to me.

They pointed out that the drag’n’drop interface for rearranging the order of tunes in a set was inaccessible.

Drag and drop

Of course! I slapped my forehead. How could I have missed this?

It had been a while since I had implemented that functionality, so before even looking at the existing code, I started to think about how I could improve the situation. Maybe I could capture keystroke events from the arrow keys and announce changes via ARIA values? That sounded a bit heavy-handed though: mess with people’s native keyboard functionality at your peril.

Then I looked at the code. That was when I realised that the fix was going to be much, much easier than I thought.

I documented my process of adding the drag’n’drop functionality back in 2016. Past me had his progressive enhancement hat on:

One of the interfaces needed for this feature was a form to re-order items in a list. So I thought to myself, “what’s the simplest technology to enable this functionality?” I came up with a series of select elements within a form.

Reordering

The problem was in my feature detection:

There’s a little bit of mustard-cutting going on: does the dragula object exist, and does the browser understand querySelector? If so, the select elements are hidden and the drag’n’drop is enabled.

The logic was fine, but the execution was flawed. I was being lazy and hiding the select elements with display: none. That hides them visually, but it also hides them from screen readers. I swapped out that style declaration for one that visually hides the elements, but keeps them accessible and focusable.
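The usual pattern for that kind of accessible hiding goes something like this (a common sketch, not necessarily the exact declarations I ended up with):

/* Visually hidden, but still in the accessibility tree
   and still focusable, unlike display: none */
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  margin: -1px;
  padding: 0;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}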

It was a very quick fix. I had the odd sensation of wanting to thank Past Me for making things easy for Present Me. But I don’t want to talk about time travel because if we start talking about it then we’re going to be here all day talking about it, making diagrams with straws.

I pushed the fix, told the screen-reader user who originally contacted me, and got a reply back saying that everything was working great now. Success!

Accessibility on The Session

I spent some time this weekend working on an accessibility issue over on The Session. Someone using VoiceOver on iOS was having a hard time with some multi-step forms.

These forms have been enhanced with some Ajax to add some motion design: instead of refreshing the whole page, the next form is grabbed from the server while the previous one swooshes off the screen.

You can see similar functionality—without the animation—wherever there’s pagination on the site.

The pagination is using Ajax to enhance regular prev/next links—here’s the code.

The multi-step forms are using Ajax to enhance regular form submissions—here’s the code for that.

Both of those are using a wrapper I wrote for XMLHttpRequest.

That wrapper also adds some ARIA attributes. The region of the page that will be updated gets an aria-live value of polite. Then, whenever new content is being injected, the same region gets an aria-busy value of true. Once the update is done, the aria-busy value gets changed back to false.
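In outline, the choreography goes something like this (a simplified sketch; the real wrapper handles more than this, and the ID is just for illustration):

const region = document.getElementById('results'); // illustrative ID
region.setAttribute('aria-live', 'polite');

function update(url) {
  // Flag the region as busy while the new content is on its way
  region.setAttribute('aria-busy', 'true');
  const request = new XMLHttpRequest();
  request.open('GET', url);
  request.onload = () => {
    region.innerHTML = request.responseText;
    // All done: assistive technology can announce the update
    region.setAttribute('aria-busy', 'false');
  };
  request.send();
}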

That all seems to work fine, but I was also giving the same region of the page an aria-atomic value of true. My thinking was that, because the whole region was going to be updated with new content from the server, it was safe to treat it as one self-contained unit. But it looks like this is what was causing the problem, especially when I was also adding and removing class values on the region in order to trigger animations. VoiceOver seemed to be getting a bit confused and overly verbose.

I’ve removed the aria-atomic attribute now. True to its name, I’m guessing it’s better suited to small areas of a document rather than big chunks. (If anyone has a good primer on when to use and when to avoid aria-atomic, I’m all ears).

I was glad I was able to find a fix—hopefully one that doesn’t negatively impact the experience in other screen readers. As is so often the case, the issue was with me trying to be too clever with ARIA, and the solution was to ease up on adding so many ARIA attributes.

It also led to a nice discussion with some of the screen-reader users on The Session.

For me, all of this really highlights the beauty of the web, when everyone is able to contribute to a community like The Session, regardless of what kind of software they may be using. In the tunes section, that’s really helped by the use of ABC notation, as I wrote five years ago:

One of those screen-reader users got in touch with me shortly after joining to ask me to explain what ABC was all about. I pointed them at some explanatory links. Once the format “clicked” with them, they got quite enthused. They pointed out that if the sheet music were only available as an image, it would mean very little to them. But by providing the ABC notation alongside the sheet music, they could read the music note-for-note.

That’s when it struck me that ABC notation is effectively alt text for sheet music!

Then, for those of us who can read sheet music, the text of the ABC notation is automatically turned into an SVG image using the brilliant abcjs. It’s like an enhancement that’s applied, I dunno, what’s the word …progressively.

Prototypes and production

When we do front-end development at Clearleft, we’re usually delivering production code, often in the form of a component library. That means our priorities are performance, accessibility, robustness, and other markers of quality when it comes to web development.

But every so often, we use the materials of front-end development—HTML, CSS, and JavaScript—to produce something that isn’t intended for production. I’m talking about prototyping.

There are plenty of non-code prototyping tools out there, and our designers often reach for them to communicate subtleties like motion design. But when it comes to testing a prototype with real users, it’s hard to beat the flexibility of HTML, CSS, and JavaScript. Load it up in a browser and away you go.

We do a lot of design sprints, where time is of the essence. The prototype we produce on the penultimate day of the sprint definitely won’t be production quality, but it will be good enough to test.

What’s interesting is that—when it comes to prototyping—our usual front-end priorities can and should go out the window. The priority now is speed. If that means sacrificing semantics or performance, then so be it. If I’m building a prototype and I find myself thinking “now, what’s the right class name for this component?”, then I know I’m in the wrong mindset. That question might be valid for production code, but it’s a waste of time for prototypes.

So these two kinds of work require very different attitudes. For production work, quality is key. For prototyping, making something quickly is what matters.

Whereas I would think long and hard about the performance impacts of third-party libraries and frameworks on a public project, I won’t give it a second thought when it comes to a prototype. Throw all the JavaScript frameworks and CSS libraries you want at it (although I would argue that in-browser technologies like CSS Grid have made CSS libraries like Bootstrap less necessary, even for prototyping).

Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.

When a prototype is successful, works great, and tests well, there’s a real temptation to use the prototype code as the basis for the final product. Don’t do this! I’ve made that mistake in the past and it always ends badly. I ended up spending far more time trying to wrangle prototype code to a production level than if I had just started from a clean slate.

Build prototypes to test ideas, designs, interactions, and interfaces …and then throw the code away. The value of a prototype is in answering questions and testing hypotheses. Don’t fall for the sunk cost fallacy when it’s time to switch over into production mode.

Of course it should go without saying that you should never, ever release prototype code into production.

And yet…

More and more live sites seem to be built with a prototyping mindset. Weighty JavaScript frameworks are used regardless of appropriateness. Accessibility, if it’s even considered at all, is relegated to an afterthought. Fragile architectures are employed that rely on first loading and then executing JavaScript in order to render basic content. Developer experience is prioritised over user experience.

Heydon recently highlighted an article that offered this tip for aspiring web developers:

As for HTML, there’s not much to learn right away and you can kind of learn as you go, but before making your first templates, know the difference between in-line elements like span and how they differ from block ones like div.

That’s perfectly reasonable advice …if you’re building a prototype. But if you’re building something for public consumption, you have a duty of care to the end users.

An nth-letter selector in CSS

Variable fonts are a very exciting and powerful new addition to the toolbox of web design. They were very much at the centre of discussion at this year’s Ampersand conference.

A lot of the demonstrations of the power of variable fonts are showing how it can be used to make letter-by-letter adjustments. The Ampersand website itself does this with the logo. See also: the brilliant demos by Mandy. It’s getting to the point where logotypes can be sculpted and adjusted just-so using CSS and raw text—no images required.

I find this to be thrilling, but there’s a fly in the ointment. In order to style something in CSS, you need a selector to target it. If you’re going to style individual letters, you need to wrap each one in an HTML element so that you can then select it in CSS.

For the Ampersand logo, we had to wrap each letter in a span (and then, because that might cause each letter to be read out individually instead of all of them as a single word, we applied some ARIA shenanigans to the containing element). There’s even a JavaScript library—Splitting.js—that will do this for you.
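The markup ends up looking something like this (a sketch of the general approach rather than the exact Ampersand markup):

<!-- The wrapper carries the readable word; the letter spans are hidden
     from assistive technology so they aren't announced one character at a time -->
<span class="logo" aria-label="Ampersand">
  <span aria-hidden="true">A</span>
  <span aria-hidden="true">m</span>
  <span aria-hidden="true">p</span>
  <!-- and so on for each letter -->
</span>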

But if the whole point of using HTML is that the content is accessible, copyable, and pastable, isn’t it a bit of a shame that we then compromise the markup—and the accessibility—by wrapping individual letters in presentational tags?

What if there were an ::nth-letter selector in CSS?

There’s some prior art here. We’ve already got ::first-letter (and now the initial-letter property or whatever it ends up being called). If we can target the first letter in a piece of text, why not the second, or third, or nth?
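Something along these lines, say (entirely hypothetical syntax; no browser supports this):

/* Hypothetical: give the second and third letters their own variable font settings */
h1::nth-letter(2) {
  font-variation-settings: "wght" 700, "wdth" 125;
}
h1::nth-letter(3) {
  font-variation-settings: "wght" 300, "wdth" 75;
}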

It raises some questions. What constitutes a letter? Would it be better if we talked about ::first-character, ::initial-character, ::nth-character, and so on?

Even then, there are some tricksy things to figure out. What’s the third character in this piece of markup?

<p>AB<span>CD</span>EF</p>

Is it “C”, because that’s the third character regardless of nesting? Or is it “E”, because technically that’s the third character token that’s a direct child of the parent element?

I imagine that implementing ::nth-letter (or ::nth-character) would be quite complex so there would probably be very little appetite for it from browser makers. But it doesn’t seem as problematic as some selectors we’ve already got.

Take ::first-line, for example. That runs into one of the biggest issues in adding new CSS selectors: it’s a selector that depends on layout.

Think about it. The browser has to first calculate how many characters are in the first line of an element (like, say, a paragraph). Having figured that out, the browser can then apply the styles declared in the ::first-line selector. But those styles may involve font sizing updates that change the number of characters in the first line. Paradox!

(Technically, only a subset of CSS properties can be applied to ::first-line, but that subset includes font-size so the paradox remains.)

I checked to see if ::first-line was included in one of my favourite documents: Incomplete List of Mistakes in the Design of CSS. It isn’t.

So compared to the logic-bending paradoxes of ::first-line, an ::nth-letter selector would be relatively straightforward. But that in itself isn’t a good enough reason for it to exist. As the CSS Working Group FAQs say:

The fact that we’ve made one mistake isn’t an argument for repeating the mistake.

A new selector needs to solve specific use cases. I would argue that all the letter-by-letter uses of variable fonts that we’re seeing demonstrate the use cases, and the number of these examples is only going to increase. The very fact that JavaScript libraries exist to solve this problem shows that there’s a need here (and we’ve seen the pattern of common JavaScript use-cases ending up in CSS before—rollovers, animation, etc.).

Now, I know that browser makers would like us to figure out how proposed CSS features should work by polyfilling a solution with Houdini. But would that work for a selector? I don’t know much about Houdini so I asked Una. She pointed me to a proposal by Greg and Tab for a full-on parser in Houdini. But that’s a loooong way off. Until then, we must petition our case to the browser gods.

This is not a new suggestion.

Anne Van Kesteren proposed ::nth-letter way back in 2003:

While I’m talking about CSS, I would also like to have ::nth-line(n), ::nth-letter(n) and ::nth-word(n), any thoughts?

Trent called for ::nth-letter in January 2011:

I think this would be the ideal solution from a web designer’s perspective. No javascript would be required, and 100% of the styling would be handled right where it should be—in the CSS.

Chris repeated the call in October of 2011:

Of all of these “new” selectors, ::nth-letter is likely the most useful.

In 2012, Bram linked to a blog post (now unavailable) from Adobe indicating that they were working on ::nth-letter for Webkit. That was the last anyone’s seen of this elusive pseudo-element.

In 2013, Chris (again) included ::nth-letter in his wishlist for CSS. So say we all.

Alternative analytics

Contrary to the current consensual hallucination, there are alternatives to Google Analytics.

I haven’t tried Open Web Analytics. It looks a bit geeky, but the nice thing about it is that you can set it up to work with JavaScript or PHP (sort of like Mint, which I miss).

Also on the geeky end, there’s GoAccess which provides an interface onto your server logs. You can view the data in a browser or on the command line. I gave this a go on adactio.com and it all worked just fine.

Matomo was previously called Piwik, and it’s the closest to Google Analytics. Chris Ruppel wrote about using it as a drop-in replacement. I gave it a go on adactio.com and it did indeed collect analytics very nicely …but then I deleted it, because it still felt creepy to have any kind of analytics script at all (neither Huffduffer nor The Session have any analytics tracking either).

Fathom isn’t out yet, but it looks interesting:

It will track users on a website, the key actions they are taking, and give you a non-nerdy breakdown of their journey. It’ll do so with user-centric rights and privacy, and without selling, sharing or giving away the data you collect.

I don’t think any of these alternatives offer quite the same ease-of-use that you’d get from Google Analytics. But I also don’t think that should be your highest priority. There’s a fundamental difference between doing your own analytics (self-hosted), and outsourcing the job to Google who can then track your site’s visitors across domains.

I was hoping that GDPR would put the squeeze on third-party tracking, but it looks like Google have found a way out. By declaring themselves a data controller (but not a data processor), they can pass the buck to the data processors to obtain consent.

If you have Google Analytics on your site, that’s you, that is.

Ubiquity and consistency

I keep thinking about this post from Baldur Bjarnason, Over-engineering is under-engineering. It took me a while to get my head around what he was saying, but now that (I think) I understand it, I find it to be very astute.

Let’s take a single interface element, say, a dropdown menu. This is the example Laura uses in her article for 24 Ways called Accessibility Through Semantic HTML. You’ve got two choices, broadly speaking:

  1. Use the HTML select element.
  2. Create your own dropdown widget using JavaScript (working with divs and spans).

The advantage of the first choice is that it’s lightweight, it works everywhere, and the browser does all the hard work for you.
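The baseline really is that small. Something like this (the option values here are just placeholders) gets you keyboard support, assistive technology support, and sensible behaviour on every platform without writing a line of script:

<label for="pet">Choose a pet</label>
<select id="pet" name="pet">
  <option>Cat</option>
  <option>Dog</option>
  <option>Hamster</option>
</select>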

But…

You don’t get complete control. Because the browser is doing the heavy lifting, you can’t craft the details of the dropdown to look identical on different browser/OS combinations.

That’s where the second option comes in. By scripting your own dropdown, you get complete control over the appearance and behaviour of the widget. The disadvantage is that the work the browser was doing now falls to you—and that means lots of JavaScript, thinking about edge cases, and making the whole thing accessible.

This is the point that Baldur makes: no matter how much you over-engineer your own custom solution, there’ll always be something that falls between the cracks. So, ironically, the over-engineered solution—when compared to the simple under-engineered native browser solution—ends up being under-engineered.

Is it worth it? Rian Rietveld asks:

It is impossible to style select option. But is that really necessary? Is it worth abandoning the native browser behavior for a complete rewrite in JavaScript of the functionality?

The answer, as ever, is it depends. It depends on your priorities. If your priority is having consistent control over the details, then foregoing native browser functionality in favour of scripting everything yourself aligns with your goals.

But I’m reminded of something that Eric often says:

The web does not value consistency. The web values ubiquity.

Ubiquity; universality; accessibility—however you want to label it, it’s what lies at the heart of the World Wide Web. It’s the idea that anyone should be able to access a resource, regardless of technical or personal constraints. It’s an admirable goal, and what’s even more admirable is that the web succeeds in this goal! But sometimes something’s gotta give, and that something is control. Rian again:

The days that a website must be pixel perfect and must look the same in every browser are over. There are so many devices these days, that an identical design for all is not doable. Or we must take a huge effort for custom form elements design.

So far I’ve only been looking at the micro scale of a single interface element, but this tension between ubiquity and consistency plays out at larger scales too. Take page navigations. That’s literally what browsers do. Click on a link, and the browser fetches that URL, displaying progress as it goes. The alternative, as exemplified by single page apps, is to do all of that for yourself using JavaScript: figure out the routing, show some kind of progress, load some JSON, parse it, convert it into HTML, and update the DOM.

Personally, I tend to go for the first option. Partly that’s because I like to apply the rule of least power, but mostly it’s because I’m very lazy (I also have qualms about sending a whole lotta JavaScript down the wire just so the end user gets to do something that their browser would do for them anyway). But I get it. I understand why others might wish for greater control, even if it comes with a price tag of fragility.

I think Jake’s navigation transitions proposal is fascinating. What if there were a browser-native way to get more control over how page navigations happen? I reckon that would cover the justification for 90% of single page apps.

That’s a great way of examining these kinds of decisions and questioning how this tension could be resolved. If people are frustrated by the lack of control in browser-native navigations, let’s figure out a way to give them more control. If people are frustrated by the lack of styling for select elements, maybe we should figure out a way of giving them more control over styling.

Hang on though. I feel like I’ve painted a divisive picture, like you have to make a choice between ubiquity or consistency. But the rather wonderful truth is that, on the web, you can have your cake and eat it. That’s what I was getting at with the three-step approach I describe in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Like, say…

  1. The user needs to select an item from a list of options.
  2. Use a select element.
  3. Use JavaScript to replace that native element with a widget of your own devising.

Or…

  1. The user needs to navigate to another page.
  2. Use an a element with an href attribute.
  3. Use JavaScript to intercept that click, add a nice transition, and pull in the content using Ajax.
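Here’s a rough sketch of what that third step could look like. This isn’t the code from any particular site, and it assumes every page wraps its content in an element with an id of main:

// A rough sketch: intercept same-origin link clicks, fetch the new page,
// and swap in its #main element. Assumes every page has <main id="main">.
document.addEventListener('click', async (event) => {
  const link = event.target.closest('a[href]');
  if (!link || link.origin !== location.origin) {
    return; // let the browser handle everything else as usual
  }
  event.preventDefault();
  const response = await fetch(link.href);
  const html = await response.text();
  const newDocument = new DOMParser().parseFromString(html, 'text/html');
  document.querySelector('#main').replaceWith(newDocument.querySelector('#main'));
  history.pushState({}, '', link.href); // keep the address bar in sync
});

Even in this stripped-down form you can see where the extra responsibilities creep in: error handling, scroll position, focus management, and the back button are all still on you.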

The pushback I get from people in the control/consistency camp is that this sounds like more work. It kinda is. But honestly, in my experience, it’s not that much more work. Also, and I realise I’m contradicting the part where I said I’m lazy, but that’s why it’s called work. This is our job. It’s not about what we prefer; it’s about serving the needs of the people who use what we build.

Anyway, if I were to rephrase my three-step process in terms of under-engineering and over-engineering, it might look something like this:

  1. Start with user needs.
  2. Build an under-engineered solution—one that might not offer you much control, but that works for everyone.
  3. Layer on a more over-engineered solution—one that might not work for everyone, but that offers you more control.

Ubiquity, then consistency.

Marking up help text in forms

Zoe asked a question on Twitter recently:

‘Sfunny—I had been pondering this exact question. In fact, I threw a CodePen together a couple of weeks ago.

Visually, both examples look the same; there’s a label, then a form field, then some extra text (in this case, a validation message).

The first example puts the validation message in an em element inside the label text itself, so I know it won’t be missed by a screen reader—I think I first learned this technique from Derek many years ago.

<div class="first error example">
 <label for="firstemail">Email
<em class="message">must include the @ symbol</em>
 </label>
 <input type="email" id="firstemail" placeholder="e.g. you@example.com">
</div>

The second example puts the validation message after the form field, but uses aria-describedby to explicitly associate that message with the form field—this means the message should be read after the form field.

<div class="second error example">
 <label for="secondemail">Email</label>
 <input type="email" id="secondemail" placeholder="e.g. you@example.com" aria-describedby="seconderror">
 <em class="message" id="seconderror">must include the @ symbol</em>
</div>

In both cases, the validation message won’t be missed by screen readers, although there’s a slight difference in the order in which things get read out. In the first example we get:

  1. Label text,
  2. Validation message,
  3. Form field.

And in the second example we get:

  1. Label text,
  2. Form field,
  3. Validation message.

In this particular example, the ordering in the second example more closely matches the visual representation, although I’m not sure how much of a factor that should be in choosing between the options.

Anyway, I was wondering whether one of these two options is “better” or “worse” than the other. I suspect that there isn’t a hard and fast answer.

Unlabelled search fields

Adam Silver is writing a book on forms—you may be familiar with his previous book on maintainable CSS. In a recent article (that for some reason isn’t on his blog), he looks at markup patterns for search forms and advocates that we should always use a label. I agree. But for some reason, we keep getting handed designs that show unlabelled search forms. And no, a placeholder is not a label.

I had a discussion with Mark about this the other day. The form he was marking up didn’t have a label, but it did have a button with some text that would work as a label:

<input type="search" placeholder="…">
<button type="submit">
Search
</button>

He was wondering if there was a way of using the button’s text as the label. I think there is. Using aria-labelledby like this, the button’s text should be read out before the input field:

<input aria-labelledby="searchtext" type="search" placeholder="…">
<button type="submit" id="searchtext">
  Search
</button>

Notice that I say “think” and “should.” It’s one thing to figure out a theoretical solution, but only testing will show whether it actually works.

The W3C’s WAI tutorial on labelling content gives an example that uses aria-label instead:

<input type="text" name="search" aria-label="Search">
<button type="submit">Search</button>

It seems a bit of a shame to me that the label text is duplicated in the button and in the aria-label attribute (and being squirrelled away in an attribute, it runs the risk of metacrap rot). But they know what they’re talking about so there may well be very good reasons to prefer duplicating the value with aria-label rather than pointing to the value with aria-labelledby.

I thought it would be interesting to see how other sites are approaching this pattern—unlabelled search forms are all too common. All the markup examples here have been simplified a bit, removing class attributes and the like…

The BBC’s search form does actually have a label:

<label for="orb-search-q">
Search the BBC
</label>
<input id="orb-search-q" placeholder="Search" type="text">
<button>Search the BBC</button>

But that label is then hidden using CSS:

position: absolute;
height: 1px;
width: 1px;
overflow: hidden;
clip: rect(1px, 1px, 1px, 1px);

That CSS—as pioneered by Snook—ensures that the label is visually hidden but remains accessible to assistive technology. Using something like display: none would hide the label for everyone.

Medium wraps the input (and icon) in a label and then gives the label a title attribute. Like aria-label, a title attribute should be read out by screen readers, but it has the added advantage of also being visible as a tooltip on hover:

<label title="Search Medium">
  <span class="svgIcon"><svg></svg></span>
  <input type="search">
</label>

This is also what Google does on what must be the most visited search form on the web. But the W3C’s WAI tutorial warns against using the title attribute like this:

This approach is generally less reliable and not recommended because some screen readers and assistive technologies do not interpret the title attribute as a replacement for the label element, possibly because the title attribute is often used to provide non-essential information.

Twitter follows the BBC’s pattern of having a label but visually hiding it. They also have some descriptive text for the icon, and that text gets visually hidden too:

<label class="visuallyhidden" for="search-query">Search query</label>
<input id="search-query" placeholder="Search Twitter" type="text">
<span class="search-icon>
  <button type="submit" class="Icon" tabindex="-1">
    <span class="visuallyhidden">Search Twitter</span>
  </button>
</span>

Here’s their CSS for hiding those bits of text—it’s very similar to the BBC’s:

.visuallyhidden {
  border: 0;
  clip: rect(0 0 0 0);
  height: 1px;
  margin: -1px;
  overflow: hidden;
  padding: 0;
  position: absolute;
  width: 1px;
}

That’s exactly the CSS recommended in the W3C’s WAI tutorial.

Flickr have gone with the aria-label pattern as recommended in that W3C WAI tutorial:

<input placeholder="Photos, people, or groups" aria-label="Search" type="text">
<input type="submit" value="Search">

Interestingly, neither Twitter nor Flickr is using type="search" on the input elements. I’m guessing this is probably because of frustrations with trying to undo the default styles that some browsers apply to input type="search" fields. Seems a shame though.

Instagram also doesn’t use type="search" and makes no attempt to expose any kind of accessible label:

<input type="text" placeholder="Search">
<span class="coreSpriteSearchIcon"></span>

Same with Tumblr:

<input tabindex="1" type="text" name="q" id="search_query" placeholder="Search Tumblr" autocomplete="off" required="required">

…although the search form itself does have role="search" applied to it. Perhaps that helps to mitigate the lack of a clear label?

After that whistle-stop tour of a few of the web’s unlabelled search forms, it looks like the options are:

  • a visually-hidden label element,
  • an aria-label attribute,
  • a title attribute, or
  • some text associated using aria-labelledby.

But that last one needs some testing.

Update: Emil did some testing. Looks like all screen-reader/browser combinations will read the associated text.

Accessible progressive disclosure revisited

I wrote a little while back about making an accessible progressive disclosure pattern. It’s very basic—just a few ARIA properties and a bit of JavaScript sprinkled onto some basic HTML. The HTML contains a button element that toggles the aria-hidden property on a chunk of markup.

Earlier this week I had a chance to hang out with accessibility experts Derek Featherstone and Devon Persing so I took the opportunity to pepper them with questions about this pattern. My main question was “Should I automatically focus the toggled content?”

Derek’s response was very perceptive. He wanted to know why I was using a button. Good question. When you think about it, what I’m doing is pointing from one element to another. On the web, we point with links.

There are no hard’n’fast rules about this kind of thing, but as Derek put it, it helps to think about whether the action involves controlling something (use a button) or taking the user somewhere (use a link). At first glance, the progressive disclosure pattern seems to be about controlling something—toggling the appearance of another element. But if I’m questioning whether to automatically focus that element, then really I’m asking whether I want to take the user to that place in the document—in other words, linking to it.

I decided to update the markup. Here’s what I had before:

<button aria-controls="content">Reveal</button>
<div id="content"></div>

Here’s what I have now:

<a href="#content" aria-controls="content">Reveal</a>
<div id="content"></div>

The logic in the JavaScript remains exactly the same:

  1. Find any elements that have an aria-controls attribute (these were buttons, now they’re links).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated link (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those links.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.

You can see it in action on CodePen.

See the Pen Accessible toggle (link) by Jeremy Keith (@adactio) on CodePen.
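If it’s easier to read as code than as a numbered list, here’s a stripped-down sketch of that logic—not the exact code in the CodePen. It assumes a CSS rule along the lines of [aria-hidden="true"] { display: none; } is handling the visual hiding:

// A stripped-down sketch of the logic above (not the exact CodePen code).
document.querySelectorAll('a[aria-controls]').forEach((trigger) => {
  const content = document.getElementById(trigger.getAttribute('aria-controls'));
  content.setAttribute('aria-hidden', 'true'); // hidden to begin with
  content.setAttribute('tabindex', '-1'); // focusable from script
  trigger.setAttribute('aria-expanded', 'false');
  trigger.addEventListener('click', (event) => {
    event.preventDefault();
    const reveal = content.getAttribute('aria-hidden') === 'true';
    content.setAttribute('aria-hidden', String(!reveal));
    trigger.setAttribute('aria-expanded', String(reveal));
    if (reveal) {
      content.focus(); // move focus to the newly-revealed content
    }
  });
});

Swap a[aria-controls] for button[aria-controls] and drop the preventDefault call, and the same sketch covers the button-based version described below.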

Accessible progressive disclosure

Over on The Session I have a few instances of a progressive disclosure pattern. It’s just your basic show/hide toggle: click on a button; see some more content. For example, there’s a “download” button for every tune that displays options to download the tune in different formats (ABC and midi).

To begin with, I was using the :checked pseudo-class pattern that Charlotte has documented so well. I really like that pattern. It feels nice and straightforward. But then I got some feedback from someone using the site:

the link for midi files is no longer coming up on the tune pages. I am blind so I rely on the midi’s when finding tunes for my students.

I wrote back saying the link to download midi files was revealed by the “download” option. The response:

Excellent. I have it now, I was just looking for the midi button which wasn’t there. the actual download button doesn’t read as a button under each version of the tune but now I know it’s there I know what I am doing. I am using the JAWS screen reader.

This was just one person …one person who took the time to write to me. What about other screen reader users?

I dabbled around with adding role="button" to the checkbox or the label, but that felt really icky (contradicting the inherent role of those elements) and it didn’t seem to make much difference anyway.

I decided to refactor the progressive disclosure to use JavaScript instead of just CSS. I wanted to make sure that accessibility was built into the functionality, rather than just bolted on. That’s why the code I’ve written doesn’t rely on the buttons having a particular class value; instead the buttons must have an aria-controls attribute that associates the button with the element it toggles (in much the same way that a for attribute associates a label with a form field).

Here’s the logic:

  1. Find any elements that have an aria-controls attribute (these should be buttons).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated button (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those buttons.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.

You can see it in action on CodePen.

I’m still playing around with this. I think the :focus styles are probably far too subtle right now—see this excellent presentation from Laura Palmaro for more on that. I’m also not sure if the revealed content should automatically take focus. I’ll see if I can get some feedback from people on The Session using screen readers—there’s quite a few of them.

Feel free to use my code but you might want to check out Jason’s code to do the same thing—his is bound to be nicer to work with.

Update: In response to this discussion, I’ve decided not to automatically focus the expanded content.