Tuesday, March 11th, 2014

Return of the Responsive Day Out

When we decided to put on last year’s Responsive Day Out, it was a fairly haphazard, spur-of-the-moment affair. Well, when I say “spur of the moment”, I mean there were just three short months between announcing the event and actually doing it. In event-organising terms, that’s flying by the seat of your pants.

The Responsive Day Out was a huge success—just ask anyone who was there. Despite the lack of any of the usual conference comforts (we didn’t even have badges), everyone really enjoyed the whizz-bang, lickety-split format: four blocks of three back-to-back quickfire 20-minute talks, with each block wrapped up with a short discussion. And the talks were superb …really superb.

It was always intended as a one-off event. But I was surprised by how often people asked when the next one would be, either because they were there and loved it, or because they missed out on getting a ticket but heard how great it was. For a while, I was waving off those questions, saying that we had no plans for another Responsive Day Out. I figured that we had covered quite a lot in that one day, and now we should just be getting on with building the responsive web, right?

But then I started to notice how many companies were only beginning to make the switch to working responsively within the past year. It’s like the floodgates have opened. I’ve been going into companies and doing workshops where I’ve found myself thinking time and time again that these people could really benefit from an event like the Responsive Day Out.

Slowly but surely, the thought of having another Responsive Day Out grew and grew in my mind.

So let’s do this.

On Friday, June 27th, come on down to Brighton for Responsive Day Out 2: Elastic Bugaloo. It will be bloody brilliant.

The format will be mostly the same as last year, with one big change: one of the day’s slots won’t feature three quick back-to-back talks. Instead it will be a keynote presentation by none other than the Responitor himself, Duke Ethan of Marcotte.

There are some other differences from last year. Whereas last year’s speakers all came from within the borders of the UK, this year I’ve invited some supremely talented people from other parts of Europe. You can expect mind-expanding knowledge bombs on workflow, process, front-end technologies, and some case studies.

If you check out the website, you’ll see just some of the speakers I’ve got lined up for you: Stephanie, Rachel, Stephen …but that’s not the full line-up. I’m still gathering together the last few pieces of the day’s puzzle. But I’ve got to say, I’m already ridiculously excited to hear what everyone has to say.

The expanded scope of the line-up means that the ticket price is a bit more this year—last year’s event was laughably cheap—but it’s still a ridiculously low price: just £80 plus VAT, bringing it to a grand total of just £96 all in. That’s unheard of for a line-up of this calibre.

I’m planning to put tickets on sale two weeks from today, on March 25th. Last year’s Responsive Day Out was insanely popular and sold out almost immediately. Make sure you grab your ticket straight away.

To get in the mood, you might want to listen to the podcast or watch the videos from last year.

See you in Brighton on June 27th. This is going to be fun!

(By the way, if your company fancies dropping a few grand to sponsor an after-party for Responsive Day Out 2: The Revenge, let me know. The low-cost, no-frills approach means that right now, there’s no after-party planned, but if your company threw one, they would earn the undying gratitude of hundreds of geeks.)

Friday, March 7th, 2014

Making progress

When I was talking about Async, Ajax, and animation, I mentioned the little trick I’ve used of generating a progress element to indicate to the user that an Ajax request is underway.

I sometimes use the same technique even if Ajax isn’t involved. When a form is being submitted, I find it’s often good to provide explicit, immediate feedback that the submission is underway. Sure, the browser will do its own thing, but a browser doesn’t differentiate between showing that a regular link has been clicked and showing that all those important details you just entered into a form are on their way.

Here’s the JavaScript I use. It’s fairly simplistic, and I’m limiting it to POST requests only. At the moment that a form begins to submit, a progress element is inserted at the end of the form …which is usually right by the submit button that the user will have just pressed.

While I’m at it, I also set a variable to indicate that a POST submission is underway. So even if the user clicks on that submit button multiple times, only one request is sent.

You’ll notice that I’m attaching an event to each form element, rather than using event delegation to listen for a click event on the parent document and then figuring out whether that click event was triggered by a submit button. Usually I’m a big fan of event delegation but in this case, it’s important that the event I’m listening to is the submit event. A form won’t fire that event unless the data is truly winging its way to the server. That means you can do all the client-side validation you want—making good use of the required attribute where appropriate—safe in the knowledge that the progress element won’t be generated until the form has passed its validation checks.
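If you just want the gist of it without digging through the actual script, something along these lines captures the idea (a stripped-down sketch, not the code I use in production):

(function (win, doc) {
  // cut the mustard: older browsers just get the regular form submission
  if (!doc.querySelectorAll || !win.addEventListener) {
    return;
  }
  var forms = doc.querySelectorAll('form');
  for (var i = 0; i < forms.length; i++) {
    // listen for submit (not click) so client-side validation runs first
    forms[i].addEventListener('submit', function (event) {
      var form = event.target;
      if (form.method !== 'post') {
        return;
      }
      if (form.submitting) {
        // a submission is already on its way: ignore the extra presses
        event.preventDefault();
        return;
      }
      form.submitting = true;
      // an indeterminate progress bar appears right by the submit button
      form.appendChild(doc.createElement('progress'));
    }, false);
  }
}(window, document));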

If you like this particular pattern, feel free to use the code. Better yet, improve upon it.

Monday, March 3rd, 2014

Brighton workshops

There are some excellent workshops coming up in Brighton soon.

Seb will be teaching a two-day course on CreativeJS Fun and Games on the 13th and 14th of March. I went on one of Seb’s workshops a while back, and it was really, really good. He really does make it fun.

Then on April 2nd, Remy will be teaching a one-day workshop called Debug, and know thy tools. It sounds like it will be an “I know kung-fu” moment for front-end dev tools. Alas, I will be out of the country at An Event Apart Seattle at that time (I say “alas”, but of course I’m really looking forward to getting back to Seattle for An Event Apart).

Anyway, if you’re in Brighton—or looking for a good reason to make a trip to Brighton—I can highly recommend these workshops.

Sunday, March 2nd, 2014

Async, Ajax, and animation

I hadn’t been to one of Brighton’s Async JavaScript meetups for quite a while, but I made it along last week. Now that it’s taking place at 68 Middle Street, it’s a lot easier to get to …seeing as the Clearleft office is right upstairs.

James Da Costa gave a terrific presentation on something called Pjax. In related news, it turns out that the way I’ve been doing Ajax all along is apparently called Pjax.

Back when I wrote Bulletproof Ajax, I talked about using Hijax. The basic idea is:

  1. First, build an old-fashioned website that uses hyperlinks and forms to pass information to the server. The server returns whole new pages with each request.
  2. Now, use JavaScript to intercept those links and form submissions and pass the information via XMLHttpRequest instead. You can then select which parts of the page need to be updated instead of updating the whole page.

So basically your JavaScript is acting like a dumb waiter shuttling requests for page fragments back and forth between the browser and the server. But all the clever stuff is happening on the server, not the browser. To the end user, there’s no difference between that and a site that’s putting all the complexity in the browser.

In fact, the only time you’d really notice a difference is when something goes wrong: in the Hijax model, everything just falls back to full-page requests but keeps on working. That’s the big difference between this approach and the current vogue for “single page apps” that do everything in the browser—when something goes wrong there, the user gets bupkis.

Pjax introduces an extra piece of the puzzle—which didn’t exist when I wrote Bulletproof Ajax—and that’s pushState, part of HTML5’s History API, to keep the browser’s URL updated. Hence, pushState + Ajax = Pjax.
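In code, the pattern looks something like this rough sketch. It’s purely illustrative: the #main container and the X-Fragment header are made up for the example, and a real version would walk up from event.target to find the enclosing link, and listen for popstate so the back button triggers the same kind of partial update.

document.addEventListener('click', function (event) {
  var link = event.target;
  // only hijack plain links that point at our own site
  if (link.tagName !== 'A' || link.host !== window.location.host) {
    return;
  }
  event.preventDefault();
  var container = document.querySelector('#main');
  var xhr = new XMLHttpRequest();
  xhr.open('GET', link.href);
  // illustrative convention: ask the server for just the page fragment
  xhr.setRequestHeader('X-Fragment', 'true');
  xhr.onload = function () {
    // swap in the fragment and keep the URL (and the back button) honest
    container.innerHTML = xhr.responseText;
    window.history.pushState(null, '', link.href);
  };
  xhr.onerror = function () {
    // if anything goes wrong, fall back to a good old-fashioned page load
    window.location = link.href;
  };
  xhr.send();
}, false);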

As you can imagine, I was nodding in vigorous agreement with everything James was demoing. It was refreshing to find that not everyone is going down the Ember/Angular route of relying entirely on JavaScript for core functionality. I was beginning to think that nobody cared about progressive enhancement any more, or that maybe I was missing something fundamental, but it turns out I’m not crazy after all: James’s demo showed how to write front-end code responsibly.

What was fascinating though, was hearing why people were choosing to develop using Pjax. It isn’t necessarily that they care about progressive enhancement, robustness, and universal access. Rather, it’s often driven by the desire to stay within the server-side development environment that they’re comfortable with. See, for example, DHH’s explanation of why 37 Signals is using this approach:

So you get all the advantages of speed and snappiness without the degraded development experience of doing everything on the client.

It sounds like they’re doing the right thing for the wrong reasons (a wrong reason being “JavaScript is icky!”).

A lot of James’s talk was focused on the user experience of the interfaces built with Hijax/Pjax/whatever. He had some terrific examples of how animation can make an enormous difference. That inspired me to do a little bit of tweaking to the Ajaxified/Hijaxified/Pjaxified portions of The Session.

Whenever you use Hijax to intercept a link, it’s now up to you to provide some sort of immediate feedback to the user that something is happening—normally the browser would take care of this (remember Netscape’s spinning lighthouse?)—but when you hijack that click, you’re basically saying “I’ll take care of this.” So you could, for example, display a spinning icon.

One little trick I’ve used is to insert an empty progress element.

Normally the progress element takes max and value attributes to show how far along something has progressed:

<progress max="100" value="75">75%</progress>


But if you leave those out, then it’s an indeterminate progress bar:

<progress>loading...</progress>


The rendering of the progress bar will vary from browser to browser, and that’s just fine. Older browsers that don’t understand the progress element will display whatever’s between the opening and closing tags.
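Going back to the hijacked links: wiring in the trick just means creating the element before firing off the Ajax request, a couple of extra lines on the earlier sketch.

// just before sending the request, show an indeterminate progress bar
var progress = document.createElement('progress');
progress.textContent = 'loading...'; // fallback text for older browsers
container.appendChild(progress);

xhr.onload = function () {
  // replacing the container's contents also removes the progress bar
  container.innerHTML = xhr.responseText;
  window.history.pushState(null, '', link.href);
};
xhr.send();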

Voila! You’ve got a nice lightweight animation to show that an Ajax request is underway.

Wednesday, February 26th, 2014

Continuum

Stop me if you’ve heard this one before. You’re reading an article on Smashing Magazine or A List Apart or some other publication. The article is about a specific feature of CSS, or maybe JavaScript, or perhaps it’s exploring some of the newer additions to HTML. The article is good. It explains how to use this particular feature in your work. Then you read the comments. The first comment is inevitably from someone bemoaning the fact that they can’t use this feature because it isn’t supported in every browser. Specifically, it isn’t supported in some older version of Internet Explorer that they have to support. Therefore the entire article is rendered null and void.

That attitude infuriates and depresses me. It seems to me that it demonstrates a fundamental mismatch between how that person views the job of web development and the way the web actually works.

It is entirely possible—nay, desirable—to use features long before they are supported in every browser. That’s how we move the web forward. If we waited until there was universal support for a feature before we used it, we’d still be using CSS 1.0 and HTML 2.0.

If you use a CSS feature that isn’t supported in a particular browser—like, say, an older version of Internet Explorer—that browser will simply ignore that CSS rule. So you don’t get that rounded corner, or text shadow, or whatever it was. Browsers have the same error-handling mechanism for HTML: if they see something they don’t understand, they just ignore it. The browser will not throw an error. The browser will not stop rendering the page. Browsers are very liberal in what they accept.

It’s a bit trickier with JavaScript: browsers will throw an error; browsers will stop processing the script. That’s why it’s important to use feature detection. That’s also why you definitely don’t want to rely on JavaScript for rendering your content—it’s the most fragile layer of the front-end stack. Note, I’m not saying don’t use JavaScript; I’m saying don’t rely on JavaScript. Otherwise you’ve got yourself a SPOF.
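Feature detection can be as simple as checking that the objects and methods you need actually exist before you use them. A minimal sketch (enhance() is a stand-in for whatever your script actually does):

// only run the script if the browser understands what it relies on
if (document.querySelector && window.addEventListener) {
  enhance(); // hypothetical function containing the actual enhancement
} else {
  // older browsers simply get the unenhanced, but still usable, page
}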

Anyway, my point is—and I can’t believe I still have to repeat this after all these years—websites do not need to look exactly the same in every browser.

“But my client!”, cries the Smashing Magazine commenter, “But my boss!”

If your client or boss expects that a website will look and behave the same in every browser on every device, then where did they get those expectations from? And rather than spending your time trying to meet those impossible expectations, I think your time would be better spent explaining why those expectations don’t match the reality of the web.

It’s like Mike Monteiro says about clients: if they just don’t get something about your design, that’s not their fault; it’s yours. Explaining your design work is part of your design work. It’s the same with web development. It’s our job to explain how the web works …and how the unevenly-distributed nature of browser capabilities is not a bug, it’s a feature.

That was true fourteen years ago when John Allsopp wrote A Dao Of Web Design, and it’s still true today. Back then, designers and developers were comparing the web to print and finding it wanting. These days, designers and developers are comparing the web to native and finding it wanting. In both cases, I feel like they’re missing the fundamental point of the web: you can provide universal access to content and tasks without providing exactly the same experience for every single browser or device. That’s not a failing of the web—that’s its killer app.

Paul Kinlan published a post called This Is the Web Platform where he tabulates the current state of browser support for various features. “Pretty damning” he says:

the feature support that is ubiquitous across the web is actually pretty small especially if you are supporting IE8.

That’s true …from a certain point of view. But it depends on your definition of “support”. If your definition of “support” is “must look and work identically to the latest version of Chrome”, then yes, you’re going to have a smaller set of features you can use (you’re also going to live a miserable existence). But if your definition of “support” is “must be able to access the content and accomplish the task”, then as long as you’re using progressive enhancement, you can use all the features you want and support Internet Explorer 8, 7, 6, 5 …you can support every browser capable of connecting to the internet.

Like Brad said:

There is a difference between support and optimization.

I think part of the problem may be with the language we use. We talk about “the browser” when we should be talking about the browsers. I’m guilty of this. I’ll use phrases like “designing in the browser” or talk about “what we can do in the browser”, when really I should be talking about designing in the browsers and what we can do in the browsers.

It’s a subtle Lakoffian thing, but when we talk about “the browser” as if it were a single entity we might be unconsciously reinforcing the expectation that there is one Platonic ideal of browser rendering and that’s what we’re designing for.

There’s another phrase that bothers me, and it’s the phrase that Paul used in the title of his article: “the web platform”. This is something I talked about back in November in my presentation The Power of Simplicity:

But this idea of the web as a platform, I get why from a marketing perspective, we’d want to use that phrase, because it puts the web on equal footing with genuine platforms.

I would say Flash is a platform, and native: iOS and Android and these things. They are platforms, in that it’s all one bundle. And the web isn’t like that.

What I mean is, if you use the Flash platform, then anyone with the Flash plug-in can get your content. It’s on or off. It’s one or zero; it’s binary. Either they have the platform or they don’t. Either they get all your content, or none of your content.

And it’s similar with native apps. If you’ve got the right phone, you can get my app. All of my app. You don’t get bits of my app, you get all my app. Or you get none of it because you don’t have that particular phone that I’m supporting.

And the web is not like that. The web is not binary, one or zero, on or off. It’s not a platform where you get one hundred per cent or zero per cent. It’s this continuum.

The web is not a platform. It’s a continuum.

Friday, February 21st, 2014

Cake for America

The Code for America project at Clearleft was fun for everyone involved. And because the code for the site is open to anyone to contribute to, the fun didn’t have to stop when the project officially wrapped up.

Because of her experience with front-end styleguides, we had hired Anna to work as a front-end developer on the Code for America project. But even when she was officially discharged of her duties, she decided to keep contributing to the public codebase.

For this service above and beyond the call of duty, Mike rewarded her in the time-honoured way: cake! In this case, the cake was inspired by the unofficial project theme song we blasted out every morning in the Clearleft HQ.

Anna's cake

Monday, February 17th, 2014

Launching for America

I’ve already written a bit about the process of working with Code for America, which has been an absolute pleasure. Just today, Jon described it as “the closest thing to a dream project.”

I concur. Not only did the client communication work out really well, but their willingness to share the pattern library we put together warms the cockles of my heart.

When Clearleft’s part in the project officially wrapped up, I wrote:

It’ll be a while yet before the new site rolls out.

That was exactly one month ago.

The new Code for America website went live last Friday.

I’m impressed! That’s a pretty short timescale to rebuild a fairly large website, not only changing the front-end codebase, but also switching out the back-end stack as well. They must’ve been working flat out.

I’ve worked on projects in the past where my initial excitement at the project’s wrap diminished as the site launch date slipped further and further over the horizon of the future. It isn’t unusual to have a gap of many months between the end of Clearleft’s time on a project and seeing the site go live. I’m really happy that the Code for America project bucks that trend.

Sunday, February 16th, 2014

Climbing Mount Responsive

I’m back from Munich, where I spent three solid days workshopping with AutoScout24. I’m happy to report that it went really, really well. It’s restored my confidence after the negative feedback I got in Tel Aviv.

Three days is quite a long time to spend workshopping, so I was mostly winging it. But that extended period also allowed us to dive deep into specific issues and questions (all the usual suspects: how to handle navigation, images, complex interactions, etc.).

The real issues, however, were much more “big picture”—how to handle the transition to responsive for a big desktop-centric site that’s been growing for over a decade. By the end of the three days, we had divided the options into three groups:

  1. Start making any new pages and sections of the site responsive. After a while of doing that, the team would develop a pretty good feeling for what it would take to then go back and retrofit what’s already online. The downside of this approach is that it would make for an inconsistent user experience: users would be moving from responsive to non-responsive parts of the site, which could be confusing.
  2. Leave the current fixed-width grid as it is, but focus on making all the components of the page flexible. Once all the components are fluid, then it should be a matter of switching over to a fluid grid in one fell swoop. On the plus side, this means that the whole site would then be responsive. On the negative side, until all the components have been made flexible—which could take some time—the site remains rigidly fixed-width and desktop-centric.
  3. Rebuild the mobile site, using it as a seed from which to grow a new responsive site. On the face of it, having a separate mobile subdomain might seem like a millstone around your neck if you’re trying to push for a responsive design. In practice though, it can be enormously useful. Mostly it’s a political issue: whereas ripping out the desktop site and starting from scratch is a huge task that would require everyone’s buy-in, nobody gives a shit about the mobile subdomain. Both the BBC news team and The Guardian are having great success with this approach, building mobile-first responsive sites bit-by-bit on the m. subdomain, with the plan to one day flip the switch and make the subdomain the main site. The downside is that until the switch is flipped, you’ve still got to deal with redirecting mobile traffic—probably using some nasty user-agent sniffing—and all the issues that come with having your content appearing at more than one URL.

There’s no doubt about it: trying to apply responsive design to large-scale existing desktop-centric sites is really, really hard. The message I keep repeating in my workshops is that you can’t expect to just sprinkle on some magic media-query fairydust—it just doesn’t work that way. Instead, you’ve got to figure out a way to reframe all your challenges into a mobile-first way of thinking.

Instead of asking “How can I make these patterns (mega-menus, lightboxes, complex data tables) work when the screen size shrinks?”, you need to ask “What’s the problem they’re supposed to be solving, and how would I design a solution for the small screen to start with?” Once you’ve done that, then it becomes a matter of scaling up to the large screen …which is actually a much simpler problem space.

As is so often the case with web design, it requires the application of progressive enhancement. In the case of responsive design, that means starting with small-screen styles, small-screen images, and small-screen content priority. Then you can progressively enhance with layout styles, larger images, and conditional loading of nice-to-have extra content. Oh, and you absolutely have to accept and embrace the fact that websites do not need to look the same in every browser.
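That conditional loading can be as simple as checking a media query in JavaScript before fetching the extras. Here’s a rough sketch (the URL and the .related element are made up for illustration):

// only fetch the nice-to-have extra content on wider screens
if (window.matchMedia && window.matchMedia('(min-width: 40em)').matches) {
  var related = document.querySelector('.related');
  if (related) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/related-content');
    xhr.onload = function () {
      related.innerHTML = xhr.responseText;
    };
    xhr.send();
  }
}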

Making that change in thinking can be hugely challenging.

Remember when we were all making websites with tables for layout? Then the web standards movement came along, pushing for the separation of structure and presentation, urging us to use CSS for layout. It took the brain-rewiring power of the CSS Zen Garden to really give people that “A-ha!” moment.

Mobile-first responsive design requires a similar rewiring of the brain. And if you’re used to doing things a certain way, then it’s natural to resist such drastic change—although as Elliot pointed out at the Responsive Day Out, when you first make the switch it might be very tricky, but it gets easier and easier with each project.

Still, it can be a difficult message to hear. I suspect that’s why my workshop in Tel Aviv wasn’t so warmly received—I didn’t provide any easy answers.

The designers and developers at AutoScout24 also didn’t find it easy to accept how much they’d have to rethink their approach, but by the end of the three days they had a much clearer idea of how they could go about making that change. I’m really curious to see where they’ll go from here. Personally, I’m very optimistic about their prospects for successfully pulling off a large-scale responsive relaunch.

There are two main reasons for my optimism:

  1. They’ve already put together a front-end styleguide; a UI library of components. The fact that they’re already thinking about breaking things down into their component parts is a terrific approach (and they also said they’re planning to make their UI library public, which makes me very happy indeed).
  2. Developers, designers, and information architects work side by side. The web department works in teams, but those teams aren’t organised by job role. Instead each small team of 4 or 5 people has a product manager, a UX designer, visual designer, and a developer or two.

I can’t emphasise enough how important that kind of collaborative environment is.

I’ve said it before, and I’ll say it again; the biggest challenges of responsive design are not technology problems:

No, the biggest challenges, in my experience, are to do with people. Specifically, the way that people work together.

I’ve spoken to some companies who were eager to make the switch to responsive design, but who have designers and developers sitting in different rooms, or on different floors, or buildings, or even countries. That’s when my heart sinks. Trying to work in the iterative way that a good responsive project demands is going to be massively difficult—if not downright impossible—in that environment.

So I’m pretty confident that if the designers and developers at AutoScout24 put their minds to it, they can rise to the enormous challenge that lies ahead of them. They’ve got the right working environment, they’ve got a UI library, and they’ve got the option of using their existing mobile subdomain. Most of all, they’ve demonstrated a willingness to accept all the challenges that come with changing from a desktop-centric to a content-first mindset.

All in all, it was a very productive three days in Munich. It was hard work, but then again, I had the option of rewarding myself with some excellent Bavarian food and beer each evening.

Abendessen

Monday, February 10th, 2014

Workshoppers of the world

Three weeks ago, myself and Jessica went to Israel. It was a wonderful trip, filled with wonderful food. I took lots of pictures if you don’t believe me.

But it wasn’t a holiday. Before I could go off exploring Tel Aviv and Jerusalem, I had work to do. I was one of the speakers at the UXI Studio event.

We arrived on Saturday, and I was giving a talk on Sunday evening. At first I thought that was a strange time for a series of talks, till I realised that of course Sunday is just a regular work day like any other.

It was a lot of fun. I was the last of four speakers—all of whom were great—and the audience of about 200 people were really receptive and encouraging. I had fun. They had fun. Everything was just dandy.

Two days later, I gave a full-day workshop on responsive design. That was less than dandy.

I’ve run workshops like this quite a few times, and it has always gone really well, with lots of great discussions and reactions from attendees. The workshop is normally run with anywhere between ten and thirty people. This time, though, there were about 100 people.

Now, I knew in advance that there would be this many people, so I knew I wouldn’t be able to get as hands-on as I’d normally do; going from group to group, chatting and offering advice—it would simply take too long to do that. So I still ran the exercises I’d normally do, but there was a lot more of me talking and answering questions.

I thought that was working out quite well—there were plenty of questions, and I was more than happy to answer as many as I could. In retrospect though, it may have been the same few people asking multiple questions. That might not have been the best experience for the people staying quiet.

Sure enough, when the feedback surveys came back, there were some scathing remarks. Now, to be fair, only 31 of the 100 attendees filled out the feedback form at all, and of those, 15 left specific remarks, some of which were quite positive. So I could theoretically reassure myself that only 10% of the attendees had a bad time, but I’m going to assume it was a fairly representative sample.

I could try to blame the failure of my workshop on the sheer size of it, but I did a variation of the same workshop for about the same number of attendees at UX Week last year, and that went pretty great. So I’m not sure exactly what went wrong this time. Maybe I wasn’t communicating as clearly as I hoped, or maybe the attendees had very different expectations about what the workshop would be about. Or maybe it just works better as a half-day workshop (like at UX Week and UX London) than a full day.

Anyway, I’m going to take it as a learning experience. I think from now on, I’m going to keep workshop numbers to a manageable level: around thirty attendees seems like a reasonable limit.

I’m about to head over to Munich for three solid days of workshopping and front-end consulting at a fairly large company. Initially there was talk of having about 100 people at the workshop, but given my recent experience in Tel Aviv, I baulked at that. So instead, the compromise we reached was that I’d give a talk to 100 people tomorrow morning, but that the afternoon’s workshop would be limited to about 30 people. Then there’ll be two days of hacking with an even smaller number.

This won’t be the first time I’ve done three solid days of intensive workshopping, but last time I was doing it together with Aaron:

I’m in Madison, Wisconsin where myself and Aaron are wrapping up three days of workshopping with Shopbop. It’s all going swimmingly.

This last of the three days is being spent sketching, planning and hacking some stuff together based on all the things that Aaron and I have been talking about for the first two days: progressive enhancement, responsive design, HTML5, JavaScript, ARIA …all the good stuff that Aaron packed into Adaptive Web Design.

This time I’m on my own. Wish me luck!

Sunday, February 9th, 2014

ABC

When I finally unveiled the redesigned and overhauled version of The Session at the end of 2012, it was the culmination of a lot of late nights and weekends. It was also a really great learning experience, one that I subsequently drew on to inform my An Event Apart presentation, The Long Web.

As part of that presentation, I give a little backstory on the ABC format. It’s a way of notating music using nothing more than ASCII text. It begins with a few header lines of metadata about the tune—its title, time signature, and key—followed by the notes of the tune itself—uppercase and lowercase letters denote different octaves, and numbers can be used to denote length:

X: 1
T: Cooley's
R: reel
M: 4/4
L: 1/8
K: Edor
|:D2|EBBA B2 EB|B2 AB dBAG|FDAD BDAD|FDAD dAFD|
EBBA B2 EB|B2 AB defg|afec dBAF|DEFD E2:|
|:gf|eB B2 efge|eB B2 gedB|A2 FA DAFA|A2 FA defg|
eB B2 eBgB|eB B2 defg|afec dBAF|DEFD E2:|

On The Session, a little bit of progressive enhancement produces a nice crisp SVG version of the sheet music at the user’s request (the non-JavaScript fallback is a server-rendered bitmap of the sheet music).
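The enhancement itself is straightforward in outline. This is only a sketch of the idea, not the actual code on The Session: renderAbcToSvg() stands in for whatever ABC-rendering library you happen to use, and the class names are made up.

var notation = document.querySelector('.abc-notation');
var sheet = document.querySelector('.sheetmusic'); // holds the fallback bitmap
var button = document.querySelector('.render-button');

if (notation && sheet && button) {
  button.addEventListener('click', function () {
    var svg = renderAbcToSvg(notation.textContent); // hypothetical renderer
    sheet.innerHTML = '';   // out with the server-rendered bitmap...
    sheet.appendChild(svg); // ...in with the crisp SVG
  }, false);
}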

ABC notation dates back to the early nineties, a time of very limited bandwidth. Exchanging audio files or even images would have been prohibitively expensive. Having software installed on your machine that could convert ABC into sheet music or audio meant that people could share and exchange tunes through email, BBS, or even the then-fledgling World Wide Web.

In today’s world of relatively fast connections, ABC’s usefulness might seem lessened. But in fact, it’s just as popular as it ever was. People have become used to writing (and even sight-reading) the format, and it has all the resilience that comes with being a text format: easily editable and human-readable. It’s still the format that people use to submit new tune settings to The Session.

A little while back, I came upon another advantage of the ABC format, one that I had never previously thought of…

The Session has a wide range of users, of all ages, from all over the world, from all walks of life, using all sorts of browsers. I do my best to make sure that the site works for just about any kind of user-agent (while still providing plenty of enhancements for the most modern browsers). That includes screen readers. Some active members of The Session happen to be blind.

One of those screen-reader users got in touch with me shortly after joining to ask me to explain what ABC was all about. I pointed them at some explanatory links. Once the format “clicked” with them, they got quite enthused. They pointed out that if the sheet music were only available as an image, it would mean very little to them. But by providing the ABC notation alongside the sheet music, they could read the music note-for-note.

That’s when it struck me that ABC notation is effectively alt text for sheet music!

There’s one little thing that slightly irks me though. The ABC notation should be read out one letter at a time. But screen readers use a kind of fuzzy logic to figure out whether a set of characters should be spoken as a word:

Screen readers try to pronounce acronyms and nonsensical words if they have sufficient vowels/consonants to be pronounceable; otherwise, they spell out the letters. For example, NASA is pronounced as a word, whereas NSF is pronounced as “N. S. F.” The acronym URL is pronounced “earl,” even though most humans say “U. R. L.” The acronym SQL is not pronounced “sequel” by screen readers even though some humans pronounce it that way; screen readers say “S. Q. L.”

It’s not a big deal, and the screen reader user can explicitly request that a word be spoken letter by letter:

Screen reader users can pause if they didn’t understand a word, and go back to listen to it; they can even have the screen reader read words letter by letter. When reading words letter by letter, JAWS distinguishes between upper case and lower case letters by shouting/emphasizing the upper case letters.

But still …I wish there were some way that I could mark up the ABC notation so that a screen reader would know that it should be read letter by letter. I’ve looked into using abbr, but that offers no guarantees: if the string looks like a word, it will still be spoken as a word. It doesn’t look like there are any ARIA settings for this use-case either.

So if any accessibility experts out there know of something I’m missing, please let me know.

Update: I’ve added an aural CSS declaration of speak: spell-out (thanks to Martijn van der Ven for the tip), although I think the browser support is still pretty non-existent. Any other ideas?