Service workers and videos in Safari

Alright, so I’ve already talked about some gotchas when debugging service worker issues. But what if you don’t even realise the problem has anything to do with your service worker?

This is not a hypothetical situation. I encountered this very thing myself. Gather ‘round the campfire, children…

One of the latest case studies on the Clearleft site is a nice write-up by Luke of designing a mobile app for Virgin Holidays. The case study includes a lovely video that demonstrates the log-in flow. I implemented that using a video element (with a poster image). Nice and straightforward. Super easy. All good.

But I hadn’t done my due diligence in browser testing (I guess I didn’t even think of it in this case). Hana informed me that the video wasn’t working at all in Safari. The poster image appeared just fine, but when you clicked on it, the video didn’t load.

I ducked, ducked, and went, uncovering what appeared to be the root of the problem. It seems that Safari is fussy about having servers support something called “byte-range requests”.

I had put the video in question on an Amazon S3 server. I came to the conclusion that S3 mustn’t support these kinds of headers correctly, or something.

Now I had a diagnosis. The next step was figuring out a solution. I thought I might have to move the video off of S3 and onto a server that I could configure a bit more.

Luckily, I never got ‘round to even starting that process. That’s good. Because it turns out that my diagnosis was completely wrong.

I came across a recent post by Phil Nash called Service workers: beware Safari’s range request. The title immediately grabbed my attention. Safari: yes! Video: yes! But service workers …wait a minute!

There’s a section in Phil’s post entitled “Diagnosing the problem”, in which he says:

I first thought it could have something to do with the CDN I’m using. There were some false positives regarding streaming video through a CDN that resulted in some extra research that was ultimately fruitless.

That described my situation exactly. Except Phil went further and nailed down the real cause of the problem:

Nginx was serving correct responses to Range requests. So was the CDN. The only other problem? The service worker. And this broke the video in Safari.

Doh! I hadn’t even thought about service workers!

Phil came up with a solution, and he has kindly shared his code.

I decided to go for a dumber solution:

if ( request.url.match(/\.(mp4)$/) ) {
  return;
}

That tells the service worker to just step out of the way when it comes to video requests. Now the video plays just fine in Safari. It’s a bit of a shame, because I’m kind of penalising all browsers for Safari’s bug, but the Clearleft site isn’t using much video at all, and in any case, it might be good not to fill up the cache with large video files.
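
For context, here’s roughly where that early return sits in a fetch event handler. This is a minimal sketch; the cache-matching part is generic boilerplate rather than the actual code from the Clearleft site:

addEventListener('fetch', event => {
  const request = event.request;
  // Returning without calling event.respondWith() hands
  // the request straight back to the browser.
  if ( request.url.match(/\.(mp4)$/) ) {
    return;
  }
  // All other requests get the usual treatment.
  event.respondWith(
    caches.match(request)
    .then(cachedResponse => cachedResponse || fetch(request))
  );
});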

But what’s more important than any particular solution is correctly identifying the problem. I’m quite sure I never would’ve been able to fix this issue if Phil hadn’t gone to the trouble of sharing his experience. I’m very, very grateful that he did.

That’s the bigger lesson here: if you solve a problem—even if you think it’s hardly worth mentioning—please, please share your solution. It could make all the difference for someone out there.

An nth-letter selector in CSS

Variable fonts are a very exciting and powerful new addition to the toolbox of web design. They were very much at the centre of discussion at this year’s Ampersand conference.

A lot of the demonstrations of the power of variable fonts show how they can be used to make letter-by-letter adjustments. The Ampersand website itself does this with the logo. See also: the brilliant demos by Mandy. It’s getting to the point where logotypes can be sculpted and adjusted just-so using CSS and raw text—no images required.

I find this to be thrilling, but there’s a fly in the ointment. In order to style something in CSS, you need a selector to target it. If you’re going to style individual letters, you need to wrap each one in an HTML element so that you can then select it in CSS.

For the Ampersand logo, we had to wrap each letter in a span (and then, because that might cause each letter to be read out individually instead of all of them as a single word, we applied some ARIA shenanigans to the containing element). There’s even a JavaScript library—Splitting.js—that will do this for you.
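
The wrapped-up markup ends up looking something like this (a rough sketch rather than the exact code from the Ampersand site):

<span aria-label="Amp">
 <span aria-hidden="true">A</span>
 <span aria-hidden="true">m</span>
 <span aria-hidden="true">p</span>
</span>

The aria-label on the containing element gives assistive technology the whole word, while the aria-hidden attributes stop the individual letters from being announced one by one.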

But if the whole point of using HTML is that the content is accessible, copyable, and pastable, isn’t it a bit of a shame that we then compromise the markup—and the accessibility—by wrapping individual letters in presentational tags?

What if there were an ::nth-letter selector in CSS?

There’s some prior art here. We’ve already got ::first-letter (and now the initial-letter property or whatever it ends up being called). If we can target the first letter in a piece of text, why not the second, or third, or nth?
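
If it existed, the syntax might look something like this (entirely invented, of course, and not supported anywhere):

h1::nth-letter(2) {
  font-variation-settings: 'wght' 700;
}

h1::nth-letter(3) {
  font-variation-settings: 'wght' 200;
}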

It raises some questions. What constitutes a letter? Would it be better if we talked about ::first-character, ::initial-character, ::nth-character, and so on?

Even then, there are some tricksy things to figure out. What’s the third character in this piece of markup?

<p>AB<span>CD</span>EF</p>

Is it “C”, because that’s the third character regardless of nesting? Or is it “E”, because technically that’s the third character token that’s a direct child of the parent element?

I imagine that implementing ::nth-letter (or ::nth-character) would be quite complex, so there would probably be very little appetite for it from browser makers. But it doesn’t seem as problematic as some selectors we’ve already got.

Take ::first-line, for example. It runs up against one of the biggest objections to adding new CSS selectors: it’s a selector that depends on layout.

Think about it. The browser has to first calculate how many characters are in the first line of an element (like, say, a paragraph). Having figured that out, the browser can then apply the styles declared in the ::first-line selector. But those styles may involve font sizing updates that change the number of characters in the first line. Paradox!

(Technically, only a subset of CSS properties can be applied to ::first-line, but that subset includes font-size, so the paradox remains.)

I checked to see if ::first-line was included in one of my favourite documents: Incomplete List of Mistakes in the Design of CSS. It isn’t.

So compared to the logic-bending paradoxes of ::first-line, an ::nth-letter selector would be relatively straightforward. But that in itself isn’t a good enough reason for it to exist. As the CSS Working Group FAQs say:

The fact that we’ve made one mistake isn’t an argument for repeating the mistake.

A new selector needs to solve specific use cases. I would argue that all the letter-by-letter uses of variable fonts that we’re seeing demonstrate the use cases, and the number of these examples is only going to increase. The very fact that JavaScript libraries exist to solve this problem shows that there’s a need here (and we’ve seen the pattern of common JavaScript use-cases ending up in CSS before—rollovers, animation, etc.).

Now, I know that browser makers would like us to figure out how proposed CSS features should work by polyfilling a solution with Houdini. But would that work for a selector? I don’t know much about Houdini so I asked Una. She pointed me to a proposal by Greg and Tab for a full-on parser in Houdini. But that’s a loooong way off. Until then, we must plead our case to the browser gods.

This is not a new suggestion.

Anne van Kesteren proposed ::nth-letter way back in 2003:

While I’m talking about CSS, I would also like to have ::nth-line(n), ::nth-letter(n) and ::nth-word(n), any thoughts?

Trent called for ::nth-letter in January 2011:

I think this would be the ideal solution from a web designer’s perspective. No javascript would be required, and 100% of the styling would be handled right where it should be—in the CSS.

Chris repeated the call in October of 2011:

Of all of these “new” selectors, ::nth-letter is likely the most useful.

In 2012, Bram linked to a blog post (now unavailable) from Adobe indicating that they were working on ::nth-letter for WebKit. That was the last anyone’s seen of this elusive pseudo-element.

In 2013, Chris (again) included ::nth-letter in his wishlist for CSS. So say we all.

Links, tags, and feeds

A little while back, I switched from using Chrome as my day-to-day browser to using Firefox. I could feel myself getting a bit too comfortable with one particular browser, and that’s not good. I reckon it’s good to shake things up a little every now and then. Besides, there really isn’t that much difference once you’ve transferred over bookmarks and cookies.

Unfortunately I’m being bitten by this little bug in Firefox. It causes some of my bookmarklets to fail on certain sites with strict Content Security Policies (and CSPs shouldn’t affect bookmarklets). I might have to switch back to Chrome because of this.

I use bookmarklets throughout the day. There’s the Huffduffer bookmarklet, of course, for whenever I come across a podcast episode or other piece of audio that I want to listen to later. But there’s also my own home-rolled bookmarklet for posting links to my site. It doesn’t do anything clever—it grabs the title and URL of the currently open page and pre-populates a form in a new window, leaving me to add a short description and some tags.
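
For the curious, a bookmarklet like that is just a little bit of JavaScript in the href of a link. Mine is more or less this shape (the URL of the posting form here is a placeholder, not my real endpoint):

javascript:(function() {
  // Pass the current page's address and title to a posting form.
  window.open('https://example.com/links/new' +
    '?url=' + encodeURIComponent(document.location.href) +
    '&title=' + encodeURIComponent(document.title));
})();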

If you’re reading this, then you’re familiar with the “journal” section of adactio.com, but the “links” section is where I post the most. Here, for example, are all the links I posted yesterday. It varies from day to day, but there’s generally a handful.

Should you wish to keep track of everything I’m linking to, there’s a twitterbot you can follow called @adactioLinks. It uses a simple IFTTT recipe to poll my RSS feed of links and send out a tweet whenever there’s a new entry.

Or you can drink straight from the source and subscribe to the RSS feed itself, if you’re still rocking it old-school. But if RSS is your bag, then you might appreciate a way to filter those links…

All my links are tagged. Heavily. This is because all my links are “notes to future self”, and all my future self has to do is ask “what would past me have tagged that link with?” when I’m trying to find something I previously linked to. I end up using my site’s URLs as an interface:

At the front-end gatherings at Clearleft, I usually wrap up with a quick tour of whatever I’ve added that week to:

Well, each one of those tags also has a corresponding RSS feed:

…and so on.

That means you can subscribe to just the links tagged with something you’re interested in. Here’s the full list of tags if you’re interested in seeing the inside of my head.

This also works for my journal entries. If you’re only interested in my blog posts about frontend development, you might want to subscribe to:

Here are all the tags from my journal.

You can even mix them up. For everything I’ve tagged with “typography”—whether it’s links, journal entries, or articles—the URL is:

The corresponding RSS feed is:

You get the idea. Basically, if something on my site is a list of items, chances are there’s a corresponding RSS feed. Sometimes there might even be a JSON feed. Hack some URLs to see.

Meanwhile, I’ll be linking, linking, linking…

Name That Script! by Trent Walton

Trent is about to pop his AEA cherry and give a talk at An Event Apart in Boston. I’m going to attempt to liveblog this:

How many third-party scripts are loading on our web pages these days? How can we objectively measure the value of these (advertising, a/b testing, analytics, etc.) scripts—considering their impact on web performance, user experience, and business goals? We’ve learned to scrutinize content hierarchy, browser support, and page speed as part of the design and development process. Similarly, Trent will share recent experiences and explore ways to evaluate and discuss the inclusion of 3rd-party scripts.

Trent is going to speak about third-party scripts, which is funny, because a year ago, he never would’ve thought he’d be talking about this. But he realised he needed to pay more attention to:

any request made to an external URL.

Or how about this:

A resource included with a web page that the site owner doesn’t explicitly control.

When you include a third-party script, the third party can change the contents of that script.

Here are some uses:

  • advertising,
  • A/B testing,
  • analytics,
  • social media,
  • content delivery networks,
  • customer interaction,
  • comments,
  • tag managers,
  • fonts.

You get data from things like analytics and A/B testing. You get income from ads. You get content from CDNs.

But Trent has concerns. First and foremost, the user experience effects of poor performance. Also, there are the privacy implications.

Why does Trent—a designer—care about third party scripts? Well, over the years, the areas that Trent pays attention to have expanded. He’s progressed from image comps to frontend to performance to accessibility to design systems to the command line and now to third parties. But Trent has no impact on those third-party scripts. That’s very different to all those other areas.

Trent mostly builds prototypes. Those then get handed over for integration. Sometimes that means hooking it up to a CMS. Sometimes it means adding in analytics and ads. It gets really complex when you throw in third-party comments, payment systems, and A/B testing tools. Oftentimes, those third-party scripts can outweigh all the gains made beforehand. It happens with no discussion. And yet we spent half a meeting discussing a border radius value.

Delivering a performant, accessible, responsive, scalable website isn’t enough: I also need to consider the impact of third-party scripts.

Trent has spent the last few months learning about third parties so he can be better equipped to discuss them.

UX, performance and privacy impact

We feel the UX impact every day we browse the web (if we turn off our content blockers). The Food Network site has an interstitial asking you to disable your ad blocker. They promise they won’t spawn any pop-up windows. Trent turned his ad blocker off—the page was now 15 megabytes in size. And to top it off …he got a pop up.

Privacy can be harder to perceive. We brush aside cookie notifications. What if the wording was “accept trackers” instead of “accept cookies”?

Remarketing is that experience when you’re browsing for a spatula and then every website you visit serves you ads for spatulas. That might seem harmless, but allowing access to our browsing history has serious privacy implications.

Web builders are on the front lines. It’s up to us to advocate for data protection and privacy like we do for web standards. Don’t wait to be told.

Categories of third parties

Ghostery categorises third-party providers: advertising, comments, customer interaction, essential, site analytics, social media. You can dive into each layer and see the specific third-party services on the page you’re viewing.

Analyse and itemise third-party scripts

We have “view source” for learning web development. For third parties, you need some tool to export the data. HAR files (HTTP ARchive) are JSON files that you can create from most browsers’ network request panel in dev tools. But what do you do with a .har file? The site har.tech has plenty of resources for you. That’s where Trent found the Mac app, Charles. It can open .har files. Best of all, you can export to CSV so you can share spreadsheets of the data.

You can visualise third-party requests with Simon Hearne’s excellent Request Map. It’s quite impactful for delivering a visceral reaction in a meeting—so much more effective than just saying “hey, we have a lot of third parties.” Request Map can also export to CSV.

Know industry averages

Trent wanted to know what was “normal.” He decided to analyse HAR files for Alexa’s top 50 US websites. The result was a massive spreadsheet of third-party providers. There were 213 third-party domains (which is not even the same as the number of requests). There was an average of 22 unique third-party domains per site. The usual suspects were everywhere—Google, Amazon, Facebook, Adobe—but there were many others. You can find an alphabetical index on better.fyi/trackers. Often the lesser-known domains turn out to be owned by the bigger domains.

News sites and shopping sites have the most third-party scripts, unsurprisingly.

Understand benefits

Trent realised he needed to listen and understand why third-party scripts are being included. He found out what tag managers do. They’re funnels that allow you to cram even more third-party scripts onto your website. Trent worried that this was a Pandora’s box. The tag manager interface is easy to access and use. But he was told that it’s more like a way of organising your third-party scripts under one dashboard. But still, if you get too focused on the dashboard, you could lose sight of the impact on load times. So don’t blame the tool: it’s all about how it’s used.

Take action

Establish a centre of excellence. Put standards in place—in a cross-discipline way—to define how third-party scripts are evaluated. For example:

  1. Determine the value to the business.
  2. Avoid redundant scripts and services.
  3. Fit within the established performance budget.
  4. Comply with the organisational privacy policy.

Document those decisions, maybe even in your design system.

Also, include third-party scripts within your prototypes to get a more accurate feel for the performance implications.

On a live site, you can audit third-party scripts on a regular basis. Check to see if any are redundant or if they’re exceeding the performance budget. You can monitor performance with tools like Calibre and SpeedCurve to cover the time in between audits.

Make your case

Do competitive analysis. Look at other sites in your sector. It’s a compelling way to make a case for change. WPO Stats is very handy for anecdata.

You can gather comparative data with Web Page Test: you can run a full test, and you can run a test with certain third parties blocked. Use the results to kick off a discussion about the impact of those third parties.

Talk it out

Work to maintain an ongoing discussion with the entire team. As Tim Kadlec says:

Everything should have a value, because everything has a cost.

Why Design Systems Fail by Una Kravets

I’m at An Event Apart Boston where Una is about to talk about design systems, following on from Yesenia’s excellent design systems talk. I’m going to attempt to liveblog it…

Una works at the Bustle Digital Group, which publishes a lot of different properties. She used to work at Watson, at Bluemix and at Digital Ocean. They all have something in common (other than having blue in their logos). They all had design systems that failed.

Design systems are so hot right now. They allow us to think in a componentised way, and grow quickly. There are plenty of examples out there, like Polaris from Shopify, the Lightning design system from Salesforce, Garden from Zendesk, Gov.uk, and Code For America. Check out Anna’s excellent styleguides.io for more examples.

What exactly is a design system?

It’s a broad term. It can be a styleguide or visual pattern library. It can be design tooling (like a Sketch file). It can be a component library. It can be documentation of design or development usage. It can be voice and tone guidelines.

Styleguides

When Una was in college, she had a print rebranding job—letterheads, stationery, etc. She also had to provide design guidelines. She put this design guide on the web. It had colours, heading levels, type, logo treatments, and so on. It wasn’t for an application, but it was a design system.

Design tooling

Primer by GitHub is a good example of this. You can download pre-made icons, colours, etc.

Component library

This is where you get the code. Una worked on BUI at Digital Ocean, which described the interaction states of each component and how to customise interactions with JavaScript.

Code usage guidelines

AirBnB has a really good example of this. It’s a consistent code style. You can even include it in your build step with eslint-config-airbnb and stylelint-config-airbnb.

Design usage documentation

Carbon by IBM does a great job of this. It describes the criteria for deciding when to use a pattern. It’s driven by user experience considerations. They also have general guidelines on loading in components—empty states, etc. And they include animation guidelines (separately from Carbon), built on the history of IBM’s magnetic tape machines and typewriters.

Voice and tone guidelines

Of course Mailchimp is the classic example here. They break up voice and tone. Voice is not just what the company is, but what the company is not:

  • Fun but not silly,
  • Confident but not cocky,
  • Smart but not stodgy,
  • and so on.

Voiceandtone.com describes the user’s feelings at different points and how to communicate with them. There are guidelines for app users, and guidelines for readers of the company newsletter, and guidelines for readers of the blog, and so on. They even have examples of when things go wrong. The guidelines provide tips on how to help people effectively.

Why do design systems fail?

Una now asks who in the room has ever started a diet. And who has ever finished a diet? (A lot of hands go down).

Nobody uses it

At Digital Ocean, there was a design system called Buoy version 1. Una helped build a design system called Float. There was also a BUI version 2. Buoy was for product, Float was for the marketing site. Classic example of xkcd 927. Nobody was using them.

Una checked the CSS of the final output and the design system code only accounted for 28% of the codebase. Most of the CSS was overriding the CSS in the design system.

Happy design systems scale good standards, unify component styles and code, and reduce code cruft. Why were people adding on instead of using the existing system? Because everyone was being judged on different metrics. Some teams were judged on shipping features rather than producing clean code. So the advantages of a happy design system don’t apply to them.

Investment

It’s like going to the gym. Small incremental changes make a big difference over the long term. If you just work out for three months and then stop, you’ll lose all your progress. It’s like that with design systems. They have to stay in sync with the live site. If you don’t keep it up to date, people just won’t use it.

It’s really important to have a solid core. Accessibility needs to be built in from the start. And the design system needs ownership and dedicated commitment. That has to come from the organisation.

You have to start somewhere.

Communication

Communication is multidimensional; it’s not one-way. The design system owner (or team) needs to act as a bridge between designers and developers. Nobody likes to be told what to do. People need to be involved, and feel like their needs are being addressed. Make people feel like they have control over the process …even if they don’t; it’s like perceived performance—this is perceived involvement.

Ask. Listen. Make your users feel heard. Incorporate feedback.

Buy-in

Good communication is important for getting buy-in from the people who will use the design system. You also need buy-in from the product owners.

Showing is more powerful than telling. Hackathons are like candy to a budding design system—a chance to demonstrate the benefits of a design system (and get feedback). After a hackathon at Digital Ocean, everyone was talking about the design system. Weeks afterwards, one of the developers replaced Bootstrap with BUI, removing 20,000 lines of code! After seeing the impact of a design system, the developers will tell their co-workers all about it.

Solid architecture

You need to build with composability and change in mind. Primer, by GitHub, has a core package, and then add-ons for, say, marketing or product. That separation of concerns is great. BUI used a similar module-based approach: a core codebase, separate from iconography and grid.

Semantic versioning is another important part of having a solid architecture for your design system. You want to be able to push out minor updates without worrying about breaking changes.

Major.Minor.Patch

Use the same convention in your design files, like Sketch.

What about tech stack choice? Every company has different needs, but one thing Una recommends is: don’t wait to namespace! All your components should have some kind of prefix in the class names so they don’t clash with existing CSS.
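
Namespacing can be as simple as a prefix on every class name, something like this (the bui- prefix is just for illustration):

/* Prefixed so it can't collide with any existing .button styles. */
.bui-button {
  border-radius: 4px;
}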

Una mentions Solid by Buzzfeed, which I personally think is dreadful (count the number of !important declarations—you can call it “immutable” all you want).

AtlasKit by Atlassian goes all in on React. They’re trying to integrate Sketch into it, but design tooling isn’t solved yet (AirBnB are working on this too). We’re still trying to figure out how to merge the worlds of design and code.

Reduce Friction

This is what it’s all about. Using the design system has to be the path of least resistance. If the new design system is harder to use than what people are already doing, they won’t use it.

Provide hooks and tools for the people who will be using the design system. That might be mixins in Sass or it might be a script on a CDN that people can just link to.

Start early, update often. Design systems can be built retrospectively but it’s easier to do it when a new product is being built.

Bugs and cruft always increase over time. You need a mechanism in place to keep on top of it. Not just technical bugs, but visual inconsistencies.

So the five pillars of ensuring a successful design system are:

  1. Investment
  2. Communication
  3. Buy-in
  4. Solid architecture
  5. Reduce friction

When you’re starting, begin with a goal:

We are building a design system because…

Then review what you’ve already got (your existing codebase). For example, if the goal of having a design system is to increase page performance, use Web Page Test to measure how the current site is performing. If the goal is to reduce accessibility problems, use webaim.org to measure the accessibility of your current site (see also: pa11y). If the goal is to reduce the amount of CSS in your codebase, use cssstats.com to test how your current site is doing. Now that you’ve got stats, use them to get buy-in. You can also start by doing an interface inventory. Print out pages and cut them up.

Once you’ve got buy-in and commitment (in writing), then you can make technical decisions.

You can start with your atomic elements. Buttons are like the “Hello world!” of design systems. You’ve got colours, type, and different states.

Then you can compose elements by putting the base elements together.

Do you include layout in the system? That’s a challenge, and it depends on your team. If you do include layout, to what extent?

Regardless of layout, you still need to think about space: the space between base elements within a component.

Bake in accessibility: every hover state should have an equal (not opposite) focus state.

Think about states, like loading states.

Then you can start documenting. Then inform the users of the system. Carbon has a dashboard showing which components are new, which components are deprecated, and which components are being updated.

Keep consistent communication. Design and dev communication has to happen. Continuous iteration, support and communication are the most important factors in the success of a design system. Code is only 10% of a system.

Also, don’t feel like you need to copy other design systems out there. Your needs are probably very different. As Diana says, comparing your design system to the polished public ones is like comparing your life to someone’s Instagram account. To that end, Una says something potentially controversial:

You might not need a design system.

If you’re the only one at your organisation that cares about the benefits of a design system, you won’t get buy-in, and if you don’t get buy-in, the design system will fail. Maybe there’s something more appropriate for your team? After all, not everyone needs to go to the gym to get fit. There are alternatives.

Find what works for you and keep at it.

Detecting image requests in service workers

In Going Offline, I dive into the many different ways you can use a service worker to handle requests. You can filter by the URL, for example; treating requests for pages under /blog or /articles differently from other requests. Or you can filter by file type. That way, you can treat requests for, say, images very differently to requests for HTML pages.

One of the ways to check what kind of request you’re dealing with is to see what’s in the accept header. Here’s how I show the test for HTML pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Handle your page requests here.
}

So, logically enough, I show the same technique for detecting image requests:

if (request.headers.get('Accept').includes('image')) {
    // Handle your image requests here.
}

That should catch any files that have image in the request’s accept header, like image/png or image/jpeg or image/svg+xml and so on.

But there’s a problem. Both Safari and Firefox now use a much broader accept header: */*

My if statement evaluates to false in those browsers. Sebastian Eberlein wrote about his workaround for this issue, which involves looking at file extensions instead:

if (request.url.match(/\.(jpe?g|png|gif|svg)$/)) {
    // Handle your image requests here.
}

So consider this post a patch for chapter five of Going Offline (page 68 specifically). Wherever you see:

if (request.headers.get('Accept').includes('image'))

Swap it out for:

if (request.url.match(/\.(jpe?g|png|gif|svg)$/))

And feel free to add any other image file extensions (like webp) in there too.
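
With webp added, the pattern would look like this (same technique, one more extension):

if (request.url.match(/\.(jpe?g|png|gif|svg|webp)$/)) {
    // Handle your image requests here.
}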

Alternative analytics

Contrary to the current consensual hallucination, there are alternatives to Google Analytics.

I haven’t tried Open Web Analytics. It looks a bit geeky, but the nice thing about it is that you can set it up to work with JavaScript or PHP (sort of like Mint, which I miss).

Also on the geeky end, there’s GoAccess which provides an interface onto your server logs. You can view the data in a browser or on the command line. I gave this a go on adactio.com and it all worked just fine.

Matomo was previously called Piwik, and it’s the closest to Google Analytics. Chris Ruppel wrote about using it as a drop-in replacement. I gave it a go on adactio.com and it did indeed collect analytics very nicely …but then I deleted it, because it still felt creepy to have any kind of analytics script at all (neither Huffduffer nor The Session has any analytics tracking either).

Fathom isn’t out yet, but it looks interesting:

It will track users on a website, the key actions they are taking, and give you a non-nerdy breakdown of their journey. It’ll do so with user-centric rights and privacy, and without selling, sharing or giving away the data you collect.

I don’t think any of these alternatives offer quite the same ease-of-use that you’d get from Google Analytics. But I also don’t think that should be your highest priority. There’s a fundamental difference between doing your own analytics (self-hosted), and outsourcing the job to Google who can then track your site’s visitors across domains.

I was hoping that GDPR would put the squeeze on third-party tracking, but it looks like Google have found a way out. By declaring themselves a data controller (but not a data processor), they can pass the buck to the data processors to obtain consent.

If you have Google Analytics on your site, that’s you, that is.

Ubiquity and consistency

I keep thinking about this post from Baldur Bjarnason, Over-engineering is under-engineering. It took me a while to get my head around what he was saying, but now that (I think) I understand it, I find it to be very astute.

Let’s take a single interface element, say, a dropdown menu. This is the example Laura uses in her article for 24 Ways called Accessibility Through Semantic HTML. You’ve got two choices, broadly speaking:

  1. Use the HTML select element.
  2. Create your own dropdown widget using JavaScript (working with divs and spans).

The advantage of the first choice is that it’s lightweight, it works everywhere, and the browser does all the hard work for you.

But…

You don’t get complete control. Because the browser is doing the heavy lifting, you can’t craft the details of the dropdown to look identical on different browser/OS combinations.

That’s where the second option comes in. By scripting your own dropdown, you get complete control over the appearance and behaviour of the widget. The disadvantage is that, because you’re now doing all the work instead of the browser, it’s up to you to do all the work—that means lots of JavaScript, thinking about edge cases, and making the whole thing accessible.

This is the point that Baldur makes: no matter how much you over-engineer your own custom solution, there’ll always be something that falls between the cracks. So, ironically, the over-engineered solution—when compared to the simple under-engineered native browser solution—ends up being under-engineered.

Is it worth it? Rian Rietveld asks:

It is impossible to style select option. But is that really necessary? Is it worth abandoning the native browser behavior for a complete rewrite in JavaScript of the functionality?

The answer, as ever, is it depends. It depends on your priorities. If your priority is having consistent control over the details, then foregoing native browser functionality in favour of scripting everything yourself aligns with your goals.

But I’m reminded of something that Eric often says:

The web does not value consistency. The web values ubiquity.

Ubiquity; universality; accessibility—however you want to label it, it’s what lies at the heart of the World Wide Web. It’s the idea that anyone should be able to access a resource, regardless of technical or personal constraints. It’s an admirable goal, and what’s even more admirable is that the web succeeds in this goal! But sometimes something’s gotta give, and that something is control. Rian again:

The days that a website must be pixel perfect and must look the same in every browser are over. There are so many devices these days, that an identical design for all is not doable. Or we must take a huge effort for custom form elements design.

So far I’ve only been looking at the micro scale of a single interface element, but this tension between ubiquity and consistency plays out at larger scales too. Take page navigations. That’s literally what browsers do. Click on a link, and the browser fetches that URL, displaying progress as it goes. The alternative, as exemplified by single page apps, is to do all of that for yourself using JavaScript: figure out the routing, show some kind of progress, load some JSON, parse it, convert it into HTML, and update the DOM.

Personally, I tend to go for the first option. Partly that’s because I like to apply the rule of least power, but mostly it’s because I’m very lazy (I also have qualms about sending a whole lotta JavaScript down the wire just so the end user gets to do something that their browser would do for them anyway). But I get it. I understand why others might wish for greater control, even if it comes with a price tag of fragility.

I think Jake’s navigation transitions proposal is fascinating. What if there were a browser-native way to get more control over how page navigations happen? I reckon that would cover the justification of 90% of single page apps.

That’s a great way of examining these kinds of decisions and questioning how this tension could be resolved. If people are frustrated by the lack of control in browser-native navigations, let’s figure out a way to give them more control. If people are frustrated by the lack of styling for select elements, maybe we should figure out a way of giving them more control over styling.

Hang on though. I feel like I’ve painted a divisive picture, like you have to make a choice between ubiquity or consistency. But the rather wonderful truth is that, on the web, you can have your cake and eat it. That’s what I was getting at with the three-step approach I describe in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Like, say…

  1. The user needs to select an item from a list of options.
  2. Use a select element.
  3. Use JavaScript to replace that native element with a widget of your own devising.

Or…

  1. The user needs to navigate to another page.
  2. Use an a element with an href attribute.
  3. Use JavaScript to intercept that click, add a nice transition, and pull in the content using Ajax (see the sketch below).
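
Here’s a minimal sketch of that third step for the navigation example (the .content selector is a stand-in, and the nice transition is left as an exercise):

document.addEventListener('click', event => {
  const link = event.target.closest('a[href]');
  if (!link) return;
  event.preventDefault();
  fetch(link.href)
  .then(response => response.text())
  .then(html => {
    // Parse the fetched page and swap in the relevant fragment.
    const newDocument = new DOMParser().parseFromString(html, 'text/html');
    document.querySelector('.content').replaceWith(newDocument.querySelector('.content'));
    history.pushState({}, '', link.href);
  })
  // If anything goes wrong, fall back to a normal page load.
  .catch(() => { window.location = link.href; });
});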

The pushback I get from people in the control/consistency camp is that this sounds like more work. It kinda is. But honestly, in my experience, it’s not that much more work. Also, and I realise I’m contradicting the part where I said I’m lazy, but that’s why it’s called work. This is our job. It’s not about what we prefer; it’s about serving the needs of the people who use what we build.

Anyway, if I were to rephrase my three-step process in terms of under-engineering and over-engineering, it might look something like this:

  1. Start with user needs.
  2. Build an under-engineered solution—one that might not offer you much control, but that works for everyone.
  3. Layer on a more over-engineered solution—one that might not work for everyone, but that offers you more control.

Ubiquity, then consistency.

A minority report on artificial intelligence

Want to feel old? Steven Spielberg’s Minority Report was released fifteen years ago.

It casts a long shadow. For a decade after the film’s release, it was referenced at least once at every conference relating to human-computer interaction. Unsurprisingly, most of the focus has been on the technology in the film. The hardware and interfaces in Minority Report came out of a think tank assembled in pre-production. It provided plenty of fodder for technologists to mock and praise in subsequent years: gestural interfaces, autonomous cars, miniature drones, airpods, ubiquitous advertising and surveillance.

At the time of the film’s release, a lot of the discussion centred on picking apart the plot. The discussions had the same tone as dissections of time-travel paradoxes, the kind thrown up by films like Looper and Interstellar. But Minority Report isn’t a film about time travel, it’s a film about prediction.

Or rather, the plot is about prediction. The film—like so many great works of cinema—is about seeing. It’s packed with images of eyes, visions, fragments, and reflections.

The theme of prediction was rarely referenced by technologists in the subsequent years. After all, that aspect of the story—as opposed to the gadgets, gizmos, and interfaces—was one rooted in a fantastical conceit; the idea of people with precognitive abilities.

But if you replace that human element with machines, the central conceit starts to look all too plausible. It’s suggested right there in the film:

It helps not to think of them as human.

To which the response is:

No, they’re so much more than that.

Suppose that Agatha, Arthur, and Dashiell weren’t people in a flotation tank, but banks of servers packed with neural nets: the kinds of machines that are already making predictions on trading stocks and shares, traffic flows, mortgage applications …and, yes, crime.

Precogs are pattern recognition filters, that’s all.

Rewatching Minority Report now, it holds up very well indeed. Apart from the misstep of the final ten minutes, it’s a fast-paced twisty noir thriller. For all the attention to detail in its world-building and technology, the idea that may yet prove to be most prescient is the concept of Precrime, introduced in the original Philip K. Dick short story, The Minority Report.

Minority Report works today as a commentary on Artificial Intelligence …which is ironic, given that Spielberg directed a film one year earlier ostensibly about A.I. In truth, that film has little to say about technology …but much to say about humanity.

Like Minority Report, A.I. was very loosely based on an existing short story: Super-Toys Last All Summer Long by Brian Aldiss. It’s a perfectly-crafted short story that is deeply, almost unbearably, sad.

When I had the great privilege of interviewing Brian Aldiss, I tried to convey how much the story affected me.

Jeremy: …the short story is so sad, there’s such an incredible sadness to it that…

Brian: Well it’s psychological, that’s why. But I didn’t think it works as a movie; sadly, I have to say.

At the time of its release, the general consensus was that A.I. was a mess. It’s true. The film is a mess, but I think that, like Minority Report, it’s worth revisiting.

Watching now, A.I. feels like a horror film to me. The horror comes not—as we first suspect—from the artificial intelligence. The horror comes from the humans. I don’t mean the cruelty of the flesh fairs. I’m talking about the cruelty of Monica, who activates David’s unconditional love only to reject it (watching now, both scenes—the activation and the rejection—are equally horrific). Then there’s the cruelty of the people who created an artificial person capable of deep, never-ending love, without considering the implications.

There is no robot uprising in the film. The machines want only to fulfil their purpose. But by the end of the film, the human race is gone and the descendants of the machines remain. Based on the conduct of humanity that we’re shown, it’s hard to mourn our species’ extinction. For a film that was panned for being overly sentimental, it is a thoroughly bleak assessment of what makes us human.

The question of what makes us human underpins A.I., Minority Report, and the short stories that spawned them. With distance, it gets easier to brush aside the technological trappings and see the bigger questions beneath. As Al Robertson writes, it’s about leaving the future behind:

SF’s most enduring works don’t live on because they accurately predict tomorrow. In fact, technologically speaking they’re very often wrong about it. They stay readable because they think about what change does to people and how we cope with it.

Getting griddy with it

I had the great pleasure of attending An Event Apart Seattle last week. It was, as always, excellent.

It’s always interesting to see themes emerge during an event, especially when those thematic overlaps haven’t been planned in advance. Jen noticed this one:

I remember that being a theme at An Event Apart San Francisco too, when it seemed like every speaker had words to say about ill-judged use of Bootstrap. That theme was certainly in my presentation when I talked about “the fallacy of assumed competency”:

  1. large company X uses technology Y,
  2. company X must know what they are doing because they are large,
  3. therefore technology Y must be good.

Perhaps “the fallacy of assumed suitability” would be a better term. Heydon calls it “the ‘made at Facebook’ fallacy.” But I also made sure to contrast it with the opposite extreme: “Not Invented Here syndrome”.

As well as over-arching themes, it was also interesting to see which technologies were hot topics at An Event Apart. There was one clear winner here—CSS Grid Layout.

Microsoft—a sponsor of the event—used An Event Apart as the place to announce that Grid is officially moving into development for Edge. Jen talked about Grid (of course). Rachel talked about Grid (of course). And while Eric and Una didn’t talk about it on stage, they’ve both been writing about the fun they’ve been having with Grid. Una wrote about 3 CSS Grid Features That Make My Heart Flutter. Eric is documenting the overhaul of his site with Grid. So when we were all gathered together, that’s what we were nerding out about.

The CSS Squad.

There are some great resources out there for levelling up in Grid-fu:

With Jen’s help, I’ve been playing with CSS Grid on a little site that I’m planning to launch tomorrow (he said, foreshadowingly). It took me a while to get my head around it, but once it clicked I started to have a lot of fun. “Fun” seems to be the overall feeling around this technology. There’s something infectious about the excitement and enthusiasm that’s returning to the world of layout on the web. And now that the browser support is great pretty much across the board, we can start putting that fun into production.
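
If you haven’t tried Grid yet, the basics really are this small. A minimal example, nothing to do with the site in question:

.layout {
  display: grid;
  /* Three equal, flexible columns with a consistent gutter. */
  grid-template-columns: repeat(3, 1fr);
  grid-gap: 1em;
}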

Looking beyond launch

It’s all go, go, go at Clearleft while we’re working on a new version of our website …accompanied by a brand new identity. It’s an exciting time in the studio, tinged with the slight stress that comes with any kind of unveiling like this.

I think it’s good to remember that this is the web. I keep telling myself that we’re not unveiling something carved in stone. Even after the launch we can keep making the site better. In fact, if we wait until everything is perfect before we launch, we’ll probably never launch at all.

On the other hand, you only get one chance to make a first impression, right? So it’s got to be good …but it doesn’t have to be done. A website is never done.

I’ve got to get comfortable with that. There are lots of things that I’d like to be done in time for launch, but realistically it’s fine if those things are completed in the subsequent days or weeks.

Adding a service worker and making a nice offline experience? I really want to do that …but it can wait.

What about other performance tweaks? Yes, we’ll try to have every asset—images, fonts—optimised …but maybe not from day one.

Making sure that each page has good metadata—Open Graph? Twitter Cards? Microformats? Maybe even AMP? Sure …but not just yet.

Having gorgeous animations? Again, I really want to have them but as Val rightly points out, animations are an enhancement—a really, really great enhancement.

If anything, putting the site live before doing all these things acts as an incentive to make sure they get done.

So when you see the new site, if you view source or run it through Web Page Test and spot areas for improvement, rest assured we’re on it.

The many formats of Resilient Web Design

If you don’t like reading in a web browser, you might like to know that Resilient Web Design is now available in more formats.

Jiminy Panoz created a lovely EPUB version. I tried it out in Apple’s iBooks app and it looks great. I tried to submit it to the iBooks store too, but that process threw up a few too many roadblocks.

Oliver Williams has created a MOBI version. That means you can read it on a Kindle. I plugged my old Kindle into my computer, dragged that file onto its disc image, and it worked a treat.

And there are always the PDF versions; one in portrait and another in landscape format. Those were generated straight from the print styles.

Oh, and there’s the podcast. I’ve only released two chapters so far. The Christmas break and an untimely cold have slowed down the release schedule a little bit.

I’d love to make a physical, print-on-demand version of Resilient Web Design available—maybe through Lulu—but my InDesign skills are non-existent.

If you think the book should be available in any other formats, and you fancy having a crack at it, please feel free to use the source files.

Deep linking with fragmentions

When I was marking up Resilient Web Design I wanted to make sure that people could link to individual sections within a chapter. So I added IDs to all the headings. There’s no UI to expose that though—like the hover pattern that some sites use to show that something is linkable—so unless you know the IDs are there, there’s no way of getting at them other than “view source.”

But if you’re reading a passage in Resilient Web Design and you highlight some text, you’ll notice that the URL updates to include that text after a hash symbol. If that updated URL gets shared, then anyone following it should be sent straight to that string of text within the page. That’s fragmentions in action:

Fragmentions find the first matching word or phrase in a document and focus its closest surrounding element. The match is determined by the case-sensitive string following the # (or ## double-hash)

It’s a similar idea to Eric and Simon’s proposal to use CSS selectors as fragment identifiers, but using plain text instead. You can find out more about the genesis of fragmentions from Kevin. I’m using Jonathon Neal’s script with some handy updates from Matthew.

I’m using the fragmention support to power the index of the book. It relies on JavaScript to work though, so Matthew has come to the rescue again and created a version of the site with IDs for each item linked from the index (I must get around to merging that).

The fragmention functionality is ticking along nicely with one problem…

I’ve tweaked the typography of Resilient Web Design to within an inch of its life, including a crude but effective technique to avoid widowed words at the end of a paragraph. The last two words of every paragraph are separated by a UTF-8 no-break space character instead of a regular space.
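
In case you want to do something similar, the technique boils down to swapping out the last space in each paragraph. This is a paraphrase of the idea, not the actual code from the book:

// Replace the final space in each paragraph with a no-break
// space (U+00A0) so the last two words stay together.
document.querySelectorAll('p').forEach(paragraph => {
  paragraph.innerHTML = paragraph.innerHTML.replace(/ (?=\S+$)/, '\u00A0');
});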

That solves the widowed words problem, but it confuses the fragmention script. Any selected text that includes the last two words of a paragraph fails to match. I’ve tried tweaking the script, but I’m stumped. If you fancy having a go, please have at it.

Update: And fixed! Thanks to Lee.

Starting out

I had a really enjoyable time at Codebar Brighton last week, not least because Morty came along.

I particularly enjoy teaching people who have zero previous experience of making a web page. There’s something about explaining HTML and CSS from first principles that appeals to me. I especially love it when people ask lots of questions. “What does this element do?”, “Why do some elements have closing tags and others don’t?”, “Why is it textarea and not input type="textarea"?” The answer usually involves me going down a rabbit-hole of web archaeology, so I’m in my happy place.

But there’s only so much time at Codebar each week, so it’s nice to be able to point people to other resources that they can peruse at their leisure. It turns out that it’s actually kind of tricky to find resources at that level. There are lots of great articles and tutorials out there for professional web developers—Smashing Magazine, A List Apart, CSS Tricks, etc.—but not so much for complete beginners.

Here are some of the resources I’ve found:

  • MarkSheet by Jeremy Thomas is a free HTML and CSS tutorial. It starts with an explanation of the internet, then the World Wide Web, and then web browsers, before diving into HTML syntax. Jeremy is the same guy who recently made CSS Reference.
  • Learn to Code HTML & CSS by Shay Howe is another free online book. You can buy a paper copy too. It’s filled with good, clear explanations.
  • Zero to Hero Coding by Vera Deák is an ongoing series. She’s starting out on her career as a front-end developer, so her perspective is particularly valuable.

If I find any more handy resources, I’ll link to them and tag them with “learning”.

A decade on Twitter

I wrote my first tweet ten years ago.

That’s the tweetiest of tweets, isn’t it? (and just look at the status ID—only five digits!)

Of course, back then we didn’t call them tweets. We didn’t know what to call them. We didn’t know what to make of this thing at all.

I say “we”, but when I signed up, there weren’t that many people on Twitter that I knew. Because of that, I didn’t treat it as a chat or communication tool. It was more like speaking into the void, like blogging is now. The word “microblogging” was one of the terms floating around, grasped by those of us trying to get to grips with what this odd little service was all about.

Twenty days after I started posting to Twitter, I wrote about how more and more people that I knew were joining:

The usage of Twitter is, um, let’s call it… emergent. Whenever I tell anyone about it, their first question is “what’s it for?”

Fair question. But there isn’t really an answer. You send messages either from the website, your mobile phone, or chat. What you post and why you’d want to do it is entirely up to you.

I was quite the cheerleader for Twitter:

Overall, Twitter is full of trivial little messages that sometimes merge into a coherent conversation before disintegrating again. I like it. Instant messaging is too intrusive. Email takes too much effort. Twittering feels just right for the little things: where I am, what I’m doing, what I’m thinking.

“Twittering.” Don’t laugh. “Tweeting” sounded really silly at first too.

Now at this point, I could start reminiscing about how much better things were back then. I won’t, but it’s interesting to note just how different it was.

  • The user base was small enough that there was a public timeline of all activity.
  • The characters in your username counted towards your 140 characters. That’s why Tantek changed his handle to be simply “t”. I tried it for a day. I think I changed my handle to “jk”. But it was too confusing so I changed it back.
  • We weren’t always sure how to write our updates either—your username would appear at the start of the message, so lots of us wrote our updates in the third person present (Brian still does). I’m partial to using the present continuous. That was how I wrote my reaction to Chris’s weird idea for tagging updates.

I think about that whenever I see a hashtag on a billboard or a poster or a TV screen …which is pretty much every day.

At some point, Twitter updated their onboarding process to include suggestions of people to follow, subdivided into different categories. I ended up in the list of designers to follow. Anil Dash wrote about the results of being listed and it reflects my experience too. I got a lot of followers—it’s up to around 160,000 now—but I’m pretty sure most of them are bots.

There have been a lot of changes to Twitter over the years. In the early days, those changes were driven by how people used the service. That’s where the @-reply convention (and hashtags) came from.

Then something changed. The most obvious sign of change was the way that Twitter started treating third-party developers. Where they previously used to encourage and even promote third-party apps, the company began to crack down on anything that didn’t originate from Twitter itself. That change reflected the results of an internal struggle between the people at Twitter who wanted it to become an open protocol (like email), and those who wanted it to become a media company (like Yahoo). The media camp won.

Of course Twitter couldn’t possibly stay the same given its incredible growth (and I really mean incredible—when it started to appear in the mainstream, in films and on TV, it felt so weird: this funny little service that nerds were using was getting popular with everyone). Change isn’t necessarily bad, it’s just different. Your favourite band changed when they got bigger. South by Southwest changed when it got bigger—it’s not worse now, it’s just very different.

Frank described the changing nature of Twitter perfectly in his post From the Porch to the Street:

Christopher Alexander made a great diagram, a spectrum of privacy: street to sidewalk to porch to living room to bedroom. I think for many of us Twitter started as the porch—our space, our friends, with the occasional neighborhood passer-by. As the service grew and we gained followers, we slid across the spectrum of privacy into the street.

I stopped posting directly to Twitter in May, 2014. Instead I now write posts on my site and then send a copy to Twitter. And thanks to the brilliant Brid.gy, I get replies, favourites and retweets sent back to my own site—all thanks to Webmention, which just became a W3C proposed recommendation.

It’s hard to put into words how good this feels. There’s a psychological comfort blanket that comes with owning your own data. I see my friends getting frustrated and angry as they put up with an increasingly alienating experience on Twitter, and I wish I could explain how much better it feels to treat Twitter as nothing more than a syndication service.

When Twitter rolls out changes these days, they certainly don’t feel like they’re driven by user behaviour. Quite the opposite. I’m currently in the bucket of users being treated to new @-reply behaviour. Tressie McMillan Cottom has written about just how terrible the new changes are. You don’t get to see any usernames when you’re writing a reply, so you don’t know exactly how many people are going to be included. And if you mention a URL, the username associated with that website may get added to the tweet. The end result is that you write something, you publish it, and then you think “that’s not what I wrote.” It feels wrong. It robs you of agency. Twitter have made lots of changes over the years, but this feels like the first time that they’re going to actively edit what you write, without your permission.

Maybe this is the final straw. Maybe this is the change that will result in long-time Twitter users abandoning the service. Maybe.

Me? Well, Twitter could disappear tomorrow and I wouldn’t mind that much. I’d miss seeing updates from friends who don’t have their own websites, but I’d carry on posting my short notes here on adactio.com. When I started posting to Twitter ten years ago, I was speaking (or microblogging) into the void. I’m still doing that ten years on, but under my terms. It feels good.

I’m not sure if my Twitter account will still exist ten years from now. But I’m pretty certain that my website will still be around.

And now, if you don’t mind…

I’m off to grab some lunch.

Marking up help text in forms

Zoe asked a question on Twitter recently about the best way to mark up help text in forms.

‘Sfunny—I had been pondering this exact question. In fact, I threw a CodePen together a couple of weeks ago.

Visually, both examples look the same; there’s a label, then a form field, then some extra text (in this case, a validation message).

The first example puts the validation message in an em element inside the label text itself, so I know it won’t be missed by a screen reader—I think I first learned this technique from Derek many years ago.

<div class="first error example">
 <label for="firstemail">Email
<em class="message">must include the @ symbol</em>
 </label>
 <input type="email" id="firstemail" placeholder="e.g. you@example.com">
</div>

The second example puts the validation message after the form field, but uses aria-describedby to explicitly associate that message with the form field—this means the message should be read after the form field.

<div class="second error example">
 <label for="secondemail">Email</label>
 <input type="email" id="secondemail" placeholder="e.g. you@example.com" aria-describedby="seconderror">
 <em class="message" id="seconderror">must include the @ symbol</em>
</div>

In both cases, the validation message won’t be missed by screen readers, although there’s a slight difference in the order in which things get read out. In the first example we get:

  1. Label text,
  2. Validation message,
  3. Form field.

And in the second example we get:

  1. Label text,
  2. Form field,
  3. Validation message.

In this particular case, the ordering in the second example more closely matches the visual presentation, although I’m not sure how much of a factor that should be in choosing between the options.

Anyway, I was wondering whether one of these two options is “better” or “worse” than the other. I suspect that there isn’t a hard and fast answer.
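
Whichever pattern you choose, if the validation message is generated dynamically then you’d need a little JavaScript to wire up the association. Here’s a minimal sketch of the second pattern—my own illustration rather than production code, and it assumes the IDs from the example above:

// A minimal sketch: toggle a validation message and associate it
// with the field via aria-describedby (IDs match the example above).
var field = document.getElementById('secondemail');
var message = document.getElementById('seconderror');

field.addEventListener('blur', function () {
  if (field.validity.valid) {
    // Clear the message and drop the association.
    message.textContent = '';
    field.removeAttribute('aria-describedby');
  } else {
    message.textContent = 'must include the @ symbol';
    field.setAttribute('aria-describedby', 'seconderror');
  }
});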

Unlabelled search fields

Adam Silver is writing a book on forms—you may be familiar with his previous book on maintainable CSS. In a recent article (that for some reason isn’t on his blog), he looks at markup patterns for search forms and advocates that we should always use a label. I agree. But for some reason, we keep getting handed designs that show unlabelled search forms. And no, a placeholder is not a label.

I had a discussion with Mark about this the other day. The form he was marking up didn’t have a label, but it did have a button with some text that would work as a label:

<input type="search" placeholder="…">
<button type="submit">
Search
</button>

He was wondering if there was a way of using the button’s text as the label. I think there is. Using aria-labelledby like this, the button’s text should be read out before the input field:

<input aria-labelledby="searchtext" type="search" placeholder="…">
<button type="submit" id="searchtext">
Search
</button>

Notice that I say “think” and “should.” It’s one thing to figure out a theoretical solution, but only testing will show whether it actually works.

The W3C’s WAI tutorial on labelling content gives an example that uses aria-label instead:

<input type="text" name="search" aria-label="Search">
<button type="submit">Search</button>

It seems a bit of a shame to me that the label text is duplicated in the button and in the aria-label attribute (and being squirrelled away in an attribute, it runs the risk of metacrap rot). But they know what they’re talking about so there may well be very good reasons to prefer duplicating the value with aria-label rather than pointing to the value with aria-labelledby.

I thought it would be interesting to see how other sites are approaching this pattern—unlabelled search forms are all too common. All the markup examples here have been simplified a bit, removing class attributes and the like…

The BBC’s search form does actually have a label:

<label for="orb-search-q">
Search the BBC
</label>
<input id="orb-search-q" placeholder="Search" type="text">
<button>Search the BBC</button>

But that label is then hidden using CSS:

position: absolute;
height: 1px;
width: 1px;
overflow: hidden;
clip: rect(1px, 1px, 1px, 1px);

That CSS—as pioneered by Snook—ensures that the label is visually hidden but remains accessible to assistive technology. Using something like display: none would hide the label for everyone.

Medium wraps the input (and icon) in a label and then gives the label a title attribute. Like aria-label, a title attribute should be read out by screen readers, but it has the added advantage of also being visible as a tooltip on hover:

<label title="Search Medium">
  <span class="svgIcon"><svg></svg></span>
  <input type="search">
</label>

This is also what Google does on what must be the most visited search form on the web. But the W3C’s WAI tutorial warns against using the title attribute like this:

This approach is generally less reliable and not recommended because some screen readers and assistive technologies do not interpret the title attribute as a replacement for the label element, possibly because the title attribute is often used to provide non-essential information.

Twitter follows the BBC’s pattern of having a label but visually hiding it. They also have some descriptive text for the icon, and that text gets visually hidden too:

<label class="visuallyhidden" for="search-query">Search query</label>
<input id="search-query" placeholder="Search Twitter" type="text">
<span class="search-icon>
  <button type="submit" class="Icon" tabindex="-1">
    <span class="visuallyhidden">Search Twitter</span>
  </button>
</span>

Here’s their CSS for hiding those bits of text—it’s very similar to the BBC’s:

.visuallyhidden {
  border: 0;
  clip: rect(0 0 0 0);
  height: 1px;
  margin: -1px;
  overflow: hidden;
  padding: 0;
  position: absolute;
  width: 1px;
}

That’s exactly the CSS recommended in the W3C’s WAI tutorial.

Flickr have gone with the aria-label pattern as recommended in that W3C WAI tutorial:

<input placeholder="Photos, people, or groups" aria-label="Search" type="text">
<input type="submit" value="Search">

Interestingly, neither Twitter nor Flickr is using type="search" on the input elements. I’m guessing that’s because of the frustration of trying to undo the default styles that some browsers apply to input type="search" fields. Seems a shame though.

Instagram also doesn’t use type="search" and makes no attempt to expose any kind of accessible label:

<input type="text" placeholder="Search">
<span class="coreSpriteSearchIcon"></span>

Same with Tumblr:

<input tabindex="1" type="text" name="q" id="search_query" placeholder="Search Tumblr" autocomplete="off" required="required">

…although the search form itself does have role="search" applied to it. Perhaps that helps to mitigate the lack of a clear label?

After that whistle-stop tour of a few of the web’s unlabelled search forms, it looks like the options are:

  • a visually-hidden label element,
  • an aria-label attribute,
  • a title attribute, or
  • associate some text using aria-labelledby.

But that last one needs some testing.

Update: Emil did some testing. Looks like all screen-reader/browser combinations will read the associated text.

Accessible progressive disclosure revisited

I wrote a little while back about making an accessible progressive disclosure pattern. It’s very basic—just a few ARIA properties and a bit of JavaScript sprinkled onto some basic HTML. The HTML contains a button element that toggles the aria-hidden property on a chunk of markup.

Earlier this week I had a chance to hang out with accessibility experts Derek Featherstone and Devon Persing so I took the opportunity to pepper them with questions about this pattern. My main question was “Should I automatically focus the toggled content?”

Derek’s response was very perceptive. He wanted to know why I was using a button. Good question. When you think about it, what I’m doing is pointing from one element to another. On the web, we point with links.

There are no hard’n’fast rules about this kind of thing, but as Derek put it, it helps to think about whether the action involves controlling something (use a button) or taking the user somewhere (use a link). At first glance, the progressive disclosure pattern seems to be about controlling something—toggling the appearance of another element. But if I’m questioning whether to automatically focus that element, then really I’m asking whether I want to take the user to that place in the document—in other words, linking to it.

I decided to update the markup. Here’s what I had before:

<button aria-controls="content">Reveal</button>
<div id="content"></div>

Here’s what I have now:

<a href="#content" aria-controls="content">Reveal</a>
<div id="content"></div>

The logic in the JavaScript remains exactly the same:

  1. Find any elements that have an aria-controls attribute (these were buttons, now they’re links).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated link (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those links.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.
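
For illustration, here’s a minimal sketch of that logic—my actual implementation is in the CodePen below, and this simplified version assumes the visual hiding itself is handled in CSS via the aria-hidden attribute:

// A simplified sketch of the logic above; assumes CSS like
// [aria-hidden="true"] { display: none; } does the visual hiding.
var toggles = document.querySelectorAll('[aria-controls]');

Array.prototype.forEach.call(toggles, function (toggle) {
  var content = document.getElementById(toggle.getAttribute('aria-controls'));
  // Hide the content to begin with, and make it focusable.
  content.setAttribute('aria-hidden', 'true');
  content.setAttribute('tabindex', '-1');
  toggle.setAttribute('aria-expanded', 'false');
  toggle.addEventListener('click', function (event) {
    event.preventDefault();
    var expand = content.getAttribute('aria-hidden') === 'true';
    content.setAttribute('aria-hidden', expand ? 'false' : 'true');
    toggle.setAttribute('aria-expanded', expand ? 'true' : 'false');
    if (expand) {
      // The content has just been revealed, so move focus to it.
      content.focus();
    }
  });
});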

You can see it in action on CodePen.

See the Pen Accessible toggle (link) by Jeremy Keith (@adactio) on CodePen.

Accessible progressive disclosure

Over on The Session I have a few instances of a progressive disclosure pattern. It’s just your basic show/hide toggle: click on a button; see some more content. For example, there’s a “download” button for every tune that displays options to download the tune in different formats (ABC and midi).

To begin with, I was using the :checked pseudo-class pattern that Charlotte has documented so well. I really like that pattern. It feels nice and straightforward. But then I got some feedback from someone using the site:

the link for midi files is no longer coming up on the tune pages. I am blind so I rely on the midi’s when finding tunes for my students.

I wrote back saying the link to download midi files was revealed by the “download” option. The response:

Excellent. I have it now, I was just looking for the midi button which wasn’t there. the actual download button doesn’t read as a button under each version of the tune but now I know it’s there I know what I am doing. I am using the JAWS screen reader.

This was just one person …one person who took the time to write to me. What about other screen reader users?

I dabbled around with adding role="button" to the checkbox or the label, but that felt really icky (contradicting the inherent role of those elements) and it didn’t seem to make much difference anyway.

I decided to refactor the progressive disclosure to use JavaScript instead of just CSS. I wanted to make sure that accessibility was built into the functionality, rather than just bolted on. That’s why the code I’ve written doesn’t rely on the buttons having a particular class value; instead the buttons must have an aria-controls attribute that associates the button with the element it toggles (in much the same way that a for attribute associates a label with a form field).

Here’s the logic:

  1. Find any elements that have an aria-controls attribute (these should be buttons).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated button (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those buttons.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.

You can see it in action on CodePen.

I’m still playing around with this. I think the :focus styles are probably far too subtle right now—see this excellent presentation from Laura Palmaro for more on that. I’m also not sure if the revealed content should automatically take focus. I’ll see if I can get some feedback from people on The Session using screen readers—there’s quite a few of them.

Feel free to use my code but you might want to check out Jason’s code to do the same thing—his is bound to be nicer to work with.

Update: In response to this discussion, I’ve decided not to automatically focus the expanded content.

Enhance! Conf!

Two weeks from now there will be an event in London. You should go to it. It’s called EnhanceConf:

EnhanceConf is a one day, single track conference covering the state of the art in progressive enhancement. We will look at the tools and techniques that allow you to extend the reach of your website/application without incurring additional costs.

As you can probably guess, this is right up my alley. Wild horses wouldn’t keep me away from it. I’ve been asked to be Master of Ceremonies for the day, which is a great honour. Luckily I have some experience in that department from three years of hosting Responsive Day Out. In fact, EnhanceConf is going to run very much in the mould of Responsive Day Out, as organiser Simon explained in an interview with Aaron.

But the reason to attend is of course the content. Check out that line-up! Now that is going to be a knowledge-packed day: design, development, accessibility, performance …these are a few of my favourite things. Nat Buckley, Jen Simmons, Phil Hawksworth, Anna Debenham, Aaron Gustafson …these are a few of my favourite people.

Tickets are still available. Use the discount code JEREMYK to get a whopping 15% off the ticket price.

There’s also a scholarship:

The scholarships are available to anyone not normally able to attend a conference.

I’m really looking forward to EnhanceConf. See you at RSA House on March 4th!