Archive: April 25th, 2008

Accessibility 2.0

Julie Howell will be moderating this panel discussion. Before that she has a few words to say. She sees Web 2.0 as an opportunity. Everyone is saying that social network sites are going to get more “vertical” and be based around niche interests — well, the accessibility community has been dealing with niche interest groups for years.

Here’s a big news announcement: PAS 78 is being turned into a British Standard. Consultation on the draft will take place around September.

Julie once again reiterates that Web content guidelines are becoming less important and authoring tool guidelines are more relevant.

Most of all, accessibility is not an old-school attitude. We should be providing rich user experiences to everyone.

Now to introduce the panelists: Mike Davies, Kath Moonan, Bim Egan, Jonathan Hassell, Antonia Hyde and Panayiotis Zaphiris.

Julie starts by asking Bim if Web accessibility is getting better or worse. Better! Bim is quite adamant. If you take a step backward, you’ll see that we’ve come on in leaps and bounds. In fact, sometimes it goes too far: square wheel building. That’s when developers get over-zealous about accessibility and add “features” that do more harm than good.

Julie asks Mike why someone from Yahoo is at this conference. Mike says it’s because they listen to their users, although the reason he’s here is probably his work at Legal and General, where he demonstrated the business benefits of accessibility. Yahoo put together a kick-ass team, thanks to Murray. That’s why Mike is at Yahoo. Mike says the accessibility success stories are down to one person with power taking a stand rather than evangelists in the trenches.

Julie asks Antonia how we get people to think more about learning disabilities. Antonia says it’s about getting everyone together to collaborate and learn.

Staying on the subject of learning disabilities, Julie asks Jonathan about the reluctance of people to participate for fear of being seen as different. That can be a real problem online with its anonymous culture of flaming. The viewpoint of disabled people needs to be represented more.

Julie talks about Second Life. On the internet, we can theoretically transcend disabilities from meatspace. Jonathan says that escapism has its place but he wouldn’t want anybody to lose their appreciation of their identity. In a room of deaf people, not being able to communicate in BSL is a disability.

Julie wants to know what Kath thinks of PAS 78. Before getting on to that though, Kath would like to slam Yahoo for having inaccessible gateways into various services. Sorry Mike, sorry Christian, but Kath is handing you a plate with your ass on it. Kath then goes into a long rant and ends by pointing out that when she is testing accessibility she finds that what she’s really doing is testing usability. So to answer the question, it’s a lovely document. Seriously though, she says we need to be more agile with accessibility, and PAS 78 looks like it will be a standard before WCAG 2.0 is out. That’s a good pace.

Panayiotis is asked about the future and how the elderly population will cross over with disabled users. Before answering that question, he’d like to take up Kath’s point about being more agile: it’s important that we keep up. Back to the question. What design issues crop up for elderly users? Eye-tracking shows a lot of crossover with how dyslexic users scan pages. A more important question is how we get older users to engage with new technologies like social networking sites.

Julie points out that there is a wide range of cognitive disabilities: tiredness, short-term memory problems. Panayiotis says that navigation is a key problem. Presenting a user with a list of choices can be disorienting. We need to understand how cognition works to make navigation usable.

Julie quotes Joe: We live in a post-guideline era. We’ve heard that message again and again today.

Time for questions. First question: JAWS is ruddy expensive; are there any alternatives or ways of getting around the cost issue?

Kath responds that running a website through a screen reader like any other browser is not really testing it. You don’t get the user experience that a real user brings.

The guy who asked the question isn’t really getting that point. He wants JAWS to be free “like any other browser.”

Jonathan gives Thunder, the new free screen reader, a plug but warns that it’s not great with JavaScript. As for a free version of JAWS from Freedom Scientific, don’t hold your breath.

Bim agrees that JAWS is expensive but points out that you can get a free version that will work in perpetuity for 40 minutes at a time.

Mike jumps in to emphasize the point: web developers should not use screen readers. You will just get distracted. Instead, stop and think about how people use the Web. Once you understand the barriers they might hit, you can start coming up with solutions. Nothing will help you test your website like having real users test it.

Kath tells a story from Legal and General where sIFR was used and tested with a screen reader by a developer. It worked well enough for the developer but actual screen reader users were being put off by finding these Flash movies in the page.

Another member of the audience echoes the advice: don’t test with a screen reader; get a screen reader user to test. He goes on to say that accessibility is a quality control issue. He reiterates the point that was made many times today: accessibility is a user experience issue and user testing is accessibility testing.

Next question. On the subject of square wheels, there are techniques and tips that are supposed to be helpful but actually are harmful. He mentions the abbr pattern and talks about what he “learned” from Steve Faulkner today… in other words, he’s swallowing the FUD. So where do we go to know what is and isn’t best practice?

Mike gives his site a plug. Mostly though, he says that it’s important to read stuff and think about it instead of just taking solutions at face value. Mike claims that the microformats community did things without thinking them through. What bollocks! He’s claiming that people should test things but he’s just repeating the FUD about the abbr pattern which isn’t based on real-world testing…

Ian jumps in to defend microformats because of the data portability they allow. He mentions hCard. Mike rebuffs him.

Right, that’s it… I can’t take any more. Give me that mic please, Julie. I repeat my points about:

  1. Be specific! You have an issue with the abbr pattern not with microformats in general, and certainly not with hCard!
  2. Do some real-world testing with real screen reader users. Practice what you preach. The BBC did this by getting Robin in (and he found no accessibility problems with hCalendar).
  3. Who are you to decide what is and isn’t human-readable? I find 2001-02-03 to be more understandable and internationalised than 01/02/03 or 02/01/03 or 01/03/02 and I’m a human, not a machine.

Phew! I manage to get the last word in before the man from Mozilla gives a little spiel about Firefox.

Now everyone is being thanked for participating. I’m given a bottle of something lovely and bubbly as a token of appreciation. Awww!

Thanks to Robin, Kath, Gwen and everyone who put the day together.

Update: Mike has responded on his blog about the abbr pattern discussion. There are a few errors in there:

  1. The title of Mike’s blog post refers to a non-existent datetime microformat. Fixed. The hCalendar microformat uses a combination of two design patterns: the abbr pattern and the datetime pattern.
  2. Mike says that what I fail to realise is that Steve Faulkner has a long track-record of basing his findings on thorough screen reader testing. This is not true. I am well aware of the sterling work done by Steve Faulkner and Gez Lemon. I would describe Steve as rel="muse".
  3. Mike incorrectly states that the microformats community accepted the justification ‘Most screen reader users do not change their default settings for abbreviations and by default, abbreviations are not expanded’. The microformats community has done no such thing. The community is working towards alternatives for the abbr design pattern, which has acknowledged problems… but those problems are founded in the semantics of the pattern, not the accessibility of it.
  4. Mike claims that I am unaware of the screen reader testing he has done on the abbr pattern. That isn’t true. I am not only aware of the testing, I am very grateful for it. My point was that throughout the day, we heard again and again that nothing beats testing real sites with real people. As much as I appreciate the great work of people like Steve, Gez and Mike, there needs to be some acknowledgement that there’s a difference between testing a test case and testing a real document.
  5. For the record, when I pointed out that most screen reader users don’t change their default settings, I wasn’t suggesting that therefore the accessibility concerns are unfounded… but I do think that they are often exaggerated.

Just in case this hasn’t been made clear enough, let me reiterate:

  • The microformats community is very grateful for the continued collaboration of accessibility experts like Mike and Steve. What got me upset — and this is the kind of Reductio ad Absurdum that I was railing against in my keynote — was hearing how concerns about one part of one microformat so quickly got turned into “microformats are inaccessible.” We’ve spent years fighting blanket statements like “JavaScript is inaccessible” or “Flash is inaccessible.”
  • The microformats community is actively working towards alternatives to the abbr design pattern for including datetime information. I know that it might not seem that way from the outside—we need to do better at communicating activity and progress.

Tools and Technologies to Watch and Avoid

Ian is here to praise and to shame Web technologies. He begins with Ashley Highfield: definitely to be shamed, not praised. He just didn’t get the participation culture.

We need to stop talking about just providing content. Accessibility is not a one-way street. Content will get re-used and remixed. Those possibilities should be there for everyone.

Question for the audience: who considers themselves to be not disabled? Hands go up. Ian asks one of those people to read what’s written on a bottle on the stage. The person can’t read it. We all have differing levels of ability.

We can’t design for everyone—just live with it. The unpredictability of the Web is there by design.

Another show of hands: who knows Quechup? I put my hand up. They are infamous for pulling a Plaxo: spamming your address book.

Ooh, I think Ian is leading up to the password anti-pattern. He’s talking about phishing. We can judge the trustworthiness of a site partly on how dodgy or professional it looks. How does a screen reader user judge whether or not they might be being phished? OpenID is great but the redirection is a problem for assistive technology and for most phones.

Back to the friends list importing issues. Ian is showing the Dopplr example. Moving on from social network portability, what about data portability? Some photo sites make it hard to get your data out, some make it very easy.

Licenses are a stumbling block. They are rarely written in plain English. How many people know that Facebook owns everything you post there? In contrast, Creative Commons provide short, long and machine-readable versions of their licenses. Also, the iconography of the symbols helps (semiotics again).

Flash video is problematic for editing. Some sites allow you to caption Flash video but the captions only exist within the Flash silo (sounds like a specific implementation rather than a fundamental problem with the technology to me).

Now Ian is showing Natalie’s geek venues site to demonstrate how Google Maps allows you to export location data.

Adobe AIR. It’s 1997 all over again. Right now there isn’t much accessibility in there. Yes, it’s in beta but accessibility should not be an afterthought.

Joost, by contrast, uses SVG, JavaScript and XUL to make something that most people assume is Flash at first glance. It works like Flash but the data is more accessible and exportable.

Ian has a lot more to show us (he has 80 slides in total!) but his time is up so he’s being kicked off stage to make way for the closing panel. But there’s time for one quick question. Christian asks what would be the open-source equivalent of AIR? XUL says Ian. Christian says that AIR is built on HTML, CSS and JavaScript so once the player gets keyboard access it will be quite accessible. Ian responds that he looked on the Adobe site for accessibility info on AIR and the fact that he found nothing scared him. Niqui says that Silverlight — Microsoft’s non-competitor to Flash that looks a lot like it’s competing with Flash — is the same: it’s at version 1.0 and accessibility is still not on the table.

A case study: Building a social network for disabled users

Stephen Eisden is going to give us a case study by showing us what went into building a social network aimed specifically at disabled users, the Disability Information Portal. The idea was to create a one-stop-shop for anyone with an interest in accessibility. The aim was to combine accessible design with Web 2.0 functionality.

They didn’t want to get locked into one vendor like Microsoft, but they didn’t want to get locked into one API provider either. They settled on WordPress for the underlying technology.

The accessibility of the site must extend beyond simply visual impairment. It had to work for people with learning disabilities too. They also needed to balance creativity with control to create a site design that was flexible and customisable. Simple consistent iconography was also important. (This is something that Antonia mentioned as well. Semiotics is clearly an important topic.)

The site uses tag clouds. It was a challenge to make them accessible. They included the usage number with the tag. To avoid jargon, it isn’t called a tag cloud. The site uses ratings too: a combination of stars used as labels for radio buttons.
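As a rough sketch of how a rating like that can stay accessible (my own illustration, not the site’s actual markup), the stars are just labels on ordinary radio buttons:

    <fieldset>
      <legend>Rate this venue</legend>
      <!-- each star is a real radio button, so the rating works with a keyboard and a screen reader -->
      <label><input type="radio" name="rating" value="1"> 1 star</label>
      <label><input type="radio" name="rating" value="2"> 2 stars</label>
      <label><input type="radio" name="rating" value="3"> 3 stars</label>
      <label><input type="radio" name="rating" value="4"> 4 stars</label>
      <label><input type="radio" name="rating" value="5"> 5 stars</label>
    </fieldset>

CSS can then swap the label text for star graphics for sighted users while the underlying form controls stay intact.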

Before starting, they looked at what was already out there and identified a gap in the market. They used focus groups. People wanted access to information about accessibility facilities, particularly at a local level. And where facilities didn’t exist, people wanted to know what they could do about it.

They had an accessibility audit and they also did user testing. Right now the project is in a pilot stage (a nicer way of saying beta) and they’re doing more accessibility testing. They’ve learned that testing needs to be an ongoing process. Also, testing isn’t something to be afraid of: it often highlights opportunities for improvement.

They also learned that you need to have a flexible approach to design rather than a rigid, fixed attitude. Also, simplicity is key. Focus is important; do less but do it well rather than trying to do everything. Finally, they learned that accessibility makes the site more usable and removes barriers for everybody.

The next step will be the public launch of the site. www.dip-online.org

User-generated Content

Jonathan Hassell is going to talk about user-generated content. He’s from the BBC. Aren’t we all, dear, aren’t we all? Specifically he’s with the User Experience and Design department.

Here’s a stroll down memory lane with a brief history of accessibility at the Beeb. They spent a lot of time making television and radio accessible as well as creating content for specific audiences. Online, they had Betsy. It’s almost ten years old now. In 2002 they really started getting into web accessibility. My Web, My Way is their effort to improve the Web, not just their own site. Now they’ve just had the homepage relaunch which uses JavaScript best practices.

Now onto user-generated content. He reiterates what I was saying about the importance of open content. Content must be accessible even before you put an interface on it. All the interface layers need to work together: web page, web browser and operating system. But we’re here to talk about content, not interface.

Blogs, Bebo and YouTube contain user-generated content but even basic accessibility hooks like alt text are missing. Whose job is it? There are two things: the tools and the site. Again, he reiterates a point from my keynote: ATAG is more relevant than WCAG. Yes, it is up to the site owners to provide the ability to make accessible content. But is it their responsibility to actually add the accessibility hooks to the user’s content? The DDA is very unclear on this point. It’s like the argument about whether ISPs are responsible for customers accessing illegal content.

Jonathan is posing a lot of questions here today. He wants to know if disabled users will be left behind by Web 2.0. Talking about people with literacy difficulties, he points to the lack of spell-checking in textareas on social networking sites like Facebook (hmmm… I think this is an OS issue myself).

Here’s an interesting twist: BSL users are putting videos on YouTube. Who provides the text transcription or voiceover?

Jonathan thinks that the Assistive Technology chain has broken down. Modern AT can’t handle non-text content like video and games well.

But it isn’t all bad news. Remember, the opportunities offered by rich media like video are a boon to people with learning disabilities. And video offers BSL users the opportunity to get their language out there on the Web for the first time.

Let’s ditch the phrase “it isn’t accessible.” Nothing is accessible to everyone. Instead, let’s say “it isn’t usable by someone with this particular disability.”

It’s hard enough for organisations to provide transcripts and captioning; what about when it’s user-generated content? You can engage the community but even then, it will always be behind the original rich media.

Now Jonathan steps beyond inclusion and looks to the future. He shows some games that have been created for deaf children. He demos a game that is accessible to children learning BSL. You construct sentences with nouns, verbs and tenses; then click a button to see that sentence signed by a cartoon character. (This is pretty cool. Frankly, I could imagine using this myself just for the fun of it.)

One last demo. It’s a science game for blind children who have never used a computer. The game must explain the grammar of 30 years of computer games while explaining scientific concepts like force and inertia. The visual elements exist purely for anyone accompanying the blind user. This is a fully-fledged game with mechanics, physics and feedback… all using stereo sound. Tones, words and direction are used to create an interactive environment. Done well, sound can be layered to provide a lot of information. Just imagine how this could be applied to virtual worlds like Second Life.

That was an inspiring way to end!

Rich Media and Web applications for people with learning disabilities

Antonia Hyde is going to offer a high-risk presentation—it will contain a lot of rich media.

Here’s a sound clip. “There’s a lot of information there… that’s a lot of information.” David, 26, Man United supporter.

Much of this presentation will be obvious: that’s the point.

Stats: 1.5 million people in the UK with learning disabilities. 1 in 3 say they have no contact with friends. They are ghettoised. United Response support 1500 people, people who don’t necessarily communicate verbally. Antonia works on the Web side of things.

Here’s a video of Michael using a Dynavox, a speech synthesizer, to communicate. Rich media like video can be a great help in “getting” something. Here’s a video of Mandy who loves using the internet. She watches videos on YouTube to learn about off-roading with Land Rovers; she loves Land Rovers. The videos help her calm down. When asked if she’d like to meet other people who like using Land Rovers on the internet, she says she very much would.

People with learning disabilities are contributing content online but it tends to be on niche websites rather than in the mainstream. The videos being shown today are of people with mild learning disabilities.

Here’s a video of David using a site (we don’t see which site). A video starting unexpectedly gives him an unpleasant surprise. He would rather decide when a video starts and stops. He’d also like better information management; less cramming. He would like options to change how the page looks. Only with guidance does he find a colour-changing widget. Unexpected pop-up (or faux pop-up) boxes confuse David.

Antonia is having some technical issues getting back from the video to her slides. Yes, it is a Windows machine actually.

Back to the lessons we’ve learned from watching David. Rich media sites could really help people with learning disabilities. They should be able to use social networking sites and contribute content instead of just receiving it.

Interesting factoid: Comic Sans is well regarded by people with learning disabilities.

Embedded rich media players in web pages aren’t standardised enough yet. Controls need to be in a logical order. Buttons need to be big enough.

Ordering information well around the embedded media is important. Nice big graphics also help. Use them as part of a visual vocabulary. Audio is currently the poor relative of video.

Use terminology that explains the functionality rather than the technology.

Here’s David using Last.fm. He searches for his favourite rock group, 30 Seconds to Mars. He clicks on the music. He thinks that the user avatars on the right-hand side are advertising or cartoons. When prompted, he clicks on an avatar and is taken to that user’s page. Now he starts to recognize that the avatars are users of the site but he doesn’t realize that the label “friends” means “friends of the user whose page you are viewing.” He wants to know who these people are: he doesn’t realise that the usernames are people’s names. But he likes Last.fm: it looks like a great way to make friends with people who like the same music.

That was quite a vivid example of the connective power of the Web.

Fencing in the Habitat

Christian is up now with a talk called Fencing in the Habitat — How to do the right thing and get it wrong. He reckons that some of the things he has to say will annoy some people here. He thinks that we are selling accessibility the wrong way. Christian is assuming that most people here want to make accessible products so he doesn’t have to convince anyone here of the benefits.

Genuinely usable and accessible sites and products are very rare, says Christian. This is a problem: why should people care about making things accessible when most others (including the big guns) don’t bother? People only grudgingly embrace the need for accessibility. For designers, this taps into a subconscious fear of losing the ability to see.

Then there’s the numbers game: when you are asked to supply numbers about how many disabled users are using your product. It misses the point; these are human beings. Anyway, statistics lie.

Here’s a recurring problem. You have an old, broken, unloved product. Some third-party expert comes in and scatters magic accessibility pixie dust and everything is hunky-dory. It just doesn’t work that way. You might have to tell them what they don’t want to hear. You might have to tell them that starting from scratch is the best option. Also, there’s no point telling the people with the money about the technical things like links and forms.

Now Christian dives into a tangent about how people read on the Web. He’s over-generalising by saying that people don’t pay attention to tone and nuance on the Web.

Anyway, we’re the do-gooders, the hippies, the tree-huggers and we’ve got to sell accessibility to the suits. A lot of the time we wrongly characterize accessibility as making a habitat for disabled users. Instead of ghettoising disabled users, we should take them along with us. Accessibility is really little more than a good tough usability test.

Now there’s the issue of universality—providing access regardless of technical environment. That doesn’t mean every browser gets CSS and JavaScript. Graded browser support is the way to go.

Here’s a harsh truth: we will not be able to cater to everybody. Different disabilities have different needs and sometimes they clash.

A lot of the time we do little things supposedly for the sake of accessibility but they’re really there to salve our conscience. Font-resizing widgets (especially ones that have tiny low-contrast buttons) are a good case in point. Either use a readable font size from the start or explain to people how to resize text in their browser. Skip links are another example. These are genuinely useful and not just for screen readers; they’re handy for mobile too. But people go too far and put tons of skip links all over the place.
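For reference, a basic skip link (a generic sketch, not anyone’s production markup) is just an ordinary in-page link placed early in the source order:

    <a href="#content">Skip to main content</a>
    <!-- navigation and other preamble here -->
    <div id="content">
      <!-- main content starts here -->
    </div>

Used sparingly like that, it’s helpful; scattering a dozen of them through a page is the over-zealousness Christian is describing.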

It’s not about gadgets. These things are mostly about making us feel good and they cost time that could be better spent. They are quick fixes that stagnate over time.

Here’s an example of a bad interface from the real world. Braille buttons in an elevator, but the braille buttons don’t do anything; they are next to the real buttons. But if you think about it, the reasoning is that they don’t want people to accidentally press the button when they’re just trying to read it.

Another great example: a wheelchair-accessible toilet …where the toilet roll is five feet away.

A speaker, like the ones with your hi-fi, is a piece of assistive technology. It was created to help someone who was hard of hearing. Now it makes people hard of hearing.

The main problem with accessibility is that people don’t see the need. The driving force for things like semantic markup and progressive enhancement is (drumroll please) geeks that care. Hug your developers. They are the people who do the extra work that makes the Web better for all.

The best way to sell accessibility is to use the old search engine optimisation trick. Page titles, for example, are really important for both accessibility and search engine optimisation and yet people get them wrong so often. Titles show up everywhere: in the browser bar, in bookmarks, in search results.

Stop saying “alt tag”. Yeah! It’s an attribute. More importantly, it’s alternative text.

The bogus accessibility software sellers are harmful. Bobby is dead, hurrah!

Sell accessibility by using technology hypes: mobile devices are a great convincer.

Simplifying the interface for users spells success. Just look at the games world. Microsoft and Sony battle it out with features, then the Wii comes along and blows them out of the water. That’s because it’s easy to use. This is how we should approach accessibility.

Is Web 2.0 bad for accessibility? Well, define Web 2.0. For Christian it’s a methodology, a read/write mindset. That’s a good thing. Web 2.0 can be great for accessibility because users can take up the challenge of annotating and transcribing content. Slideshare, for example, will take your slides and convert them to Flash but it will also convert them to HTML. They will provide an API so that you can create accessible versions of your content.

What about video? Viddler allows users to tag the actual videos, not just comment on them. The quality of the discourse is far better than a site like YouTube.

JavaScript and Flash used in the right manner can actually increase accessibility. He plugs . Yay!

Christian demos his YouTube captioner, a nice piece of work.

Here’s Twitter again. Christian has used Google’s translation API to take Twitter’s RSS and detect the language of the messages. Then he inserts the appropriate lang value. icanhaz.com/twitterwithlang (This is great: it ties in with what I was saying in my keynote about looking beyond the website and treating RSS and APIs as accessibility features. It also contrasts nicely with Steve’s talk. Instead of just pointing out the problems with the Twitter site, Christian built a working solution that lives at a different URL.)
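The end result (my illustration of the idea, not Christian’s actual output) is simply that each message carries a lang attribute matching the detected language:

    <ul>
      <li lang="en">Off to the Accessibility 2.0 conference.</li>
      <li lang="de">Guten Morgen, Welt!</li>
    </ul>

That one attribute is enough for screen readers that support language switching to change pronunciation accordingly.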

Flickr gets a hard time for its inaccessible interactions. But the edit-in-place functionality allows far more people to edit titles and descriptions. So that’s an accessibility feature really.

To summarise: don’t fence in disabled users in a habitat. Instead let everybody benefit from a more usable product and a better experience.

Now it’s question time and someone is taking issue with Christian’s claim that text-resizing widgets are not a good solution. Christian responds: if you build a widget for every possible disability, you still don’t please everyone. Also, the widgets are a sign of a deeper problem. A lot of the time the deeper problem lies with the site but it can also be a problem with the browser or the operating system. Why should web developers be responsible for patching those flaws?

A comment from Kath: we shouldn’t be arguing about who’s responsible. It’s like when someone farts and everyone looks at the dog or at the other people in the room, wondering who is to blame when, really, we should be complaining about the smell.

Making Twitter Tweet

I’m at a cosy little conference in London with the grandiose title of Accessibility 2.0: A Million Flowers Bloom. I’ve just finished delivering the opening keynote which I tried to make as pretentious as the conference title.

Now the mighty Steve Faulkner is up to deliver a talk called Making Twitter Tweet. He’s being introduced by Robin Christopherson and is getting a round of applause for being the guy who created the Web Accessibility Toolbar. Steve begins by clarifying that the Web Accessibility Toolbar was very much a team effort.

Steve will be talking about Ajax, using Twitter as a case study. But this is not an exercise in Twitter bashing. Also, remember that accessibility is not just about blind people even though that’s what Steve is focusing on today. By the way, he’s stevefaulkner on Twitter.

There are some issues with bad or missing alt attributes but that’s not what we’ll be looking at today. There’s already plenty of information about that kind of stuff out there. The more pressing issue is whether sites like Twitter, which use JavaScript and Ajax, can be used by people with disabilities.

Ooh, he’s going to talk about the use of the abbr element in microformats. He’s talking FUD about human vs. machine readable data and I’m rolling my eyes. He’s showing suggested alternatives (which have equal misuse of the title attribute). Now he’s saying that the microformats community don’t seem to want to take them on board. He’s talking complete bollocks in other words. Firstly, he’s damning microformats when in fact he has an issue with one part of one microformat (the abbr pattern in hCalendar). Secondly, there is a heck of a lot of discussion and testing work going on in the mailing lists and on the wiki with WaSP collaboration. By the way, Robin — who is sitting two seats away from me — was recently brought in to test BBC listings which had been marked up with hCalendar. He described the feared accessibility problems as unfounded. Most screen reader users do not change their default settings for abbreviations and by default, abbreviations are not expanded. Besides, an internationalised way of writing a date is not just machine-readable data (I’m a human and I can read 2008-04-25 just fine). I’m not saying that the abbr pattern doesn’t have problems (it does but they are semantic in nature) but Steve is mischaracterising the current situation.
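For anyone who hasn’t seen it, the pattern under discussion looks roughly like this (a simplified hCalendar event of my own, not Steve’s example):

    <p class="vevent">
      <abbr class="dtstart" title="2008-04-25">Friday, April 25th</abbr>:
      <span class="summary">Accessibility 2.0</span> in
      <span class="location">London</span>
    </p>

The complaint is that some screen readers, depending on the user’s verbosity settings, may read out the machine-readable title (“2008-04-25”) instead of the human-readable text.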

Anyway, back to Twitter. There are links (like the favouriting star) that should really be buttons. This is something I touched on briefly in my keynote: know when to use a link and when to use a form element. Steve suggests an input type="image". Really, it’s a button with two states but HTML doesn’t really provide an element for that says Steve.
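In other words (my paraphrase of the suggestion, not Twitter’s actual markup), the favouriting control would move from a link to a form:

    <!-- a link implies navigation: -->
    <a href="/favourites/create/12345">Add to favourites</a>

    <!-- a form submission better matches an action that changes state: -->
    <form method="post" action="/favourites/create">
      <input type="hidden" name="status" value="12345">
      <input type="image" src="star.png" alt="Add to favourites">
    </form>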

On to Ajax. There are two issues:

  1. Users having access to changed content.
  2. Users knowing that content has changed.

First there’s an explanation of how the virtual buffer in JAWS works. Updates to the buffer occur approximately 600 milliseconds after a control is pressed. So screen readers don’t react to changes in the content, they react to user interaction. With the Twitter favouriting functionality, the time between pushing the button and the content being changed is the problem. Fire up Firebug to see how long the Ajax request takes. It’s about a second. That’s more than 600 milliseconds. We have a problem.

So, to start with, the browser view and the screen reader view are synchronised; the pseudo-button is off. The user clicks. The browser view is updated to show the selected state. The screen reader view still shows the old state. This isn’t just an Ajax issue but Ajax magnifies the problem because of the round-trip to the server.

Good news! JAWS 7.1 has effectively solved this issue. It doesn’t listen for user actions, it responds to changes in the DOM. But Window Eyes still has this problem.

A solution is to inform the user that content has changed. ARIA live regions will handle this but we don’t have them yet. In the meantime you can try some other tricks. You could provide text hidden off screen that tells the user that “content updates occur frequently—if things aren’t working as you would expect, try refreshing the page” (triggering a re-read).
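A common way of doing that (a generic sketch, not Steve’s exact code) is to position the hint off screen so it’s available to screen readers without being visible:

    <!-- usually done with a class in the stylesheet; inline here to keep the sketch short -->
    <p style="position: absolute; left: -9999px;">
      Content on this page updates frequently. If things aren't working as
      you would expect, try refreshing the page.
    </p>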

The second issue is letting users know that some content on a page has changed. Twitter provides the character countdown but users have to exit out of forms mode to get it. There are three possible solutions:

  1. Use alert boxes.
  2. Use audio cues. At set intervals of the character limit (50, 30, 10, etc.), announce the number. This takes a lot of work. It uses a mixture of JavaScript and Flash.
  3. Use WAI-ARIA live regions but that’s not yet supported.

Let’s move beyond Twitter and look at WAI-ARIA: Web Accessibility Initiative — Accessible Rich Internet Applications. Basically it allows you to add information to elements to describe their roles and their states. Here’s the main point: It’s easy! You just add a few attributes to your existing markup. Steve gives us some example markup. It looks pretty straightforward.
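I didn’t copy down Steve’s slides, but the gist is something like this: add role and state attributes to the elements you already have, and mark the changing region as live:

    <!-- a link acting as a toggle gets a role and a state: -->
    <a href="#" role="button" aria-pressed="false">Favourite this</a>

    <!-- a live region announces its changes to assistive technology: -->
    <p id="charcount" aria-live="polite">140 characters remaining</p>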

Now it’s question time. I think I should resist the urge to call him on his dissing of the microformats community and let other people ask questions about genuine accessibility problems instead.

Open Data

This is the keynote presentation I gave at the Accessibility 2.0 conference held in London in April 2008.

We have come here to listen to a veritable pantheon of Web accessibility experts give us advice that is practical and relevant to working on the Web today. I’d like to offer something of an alternative. By that I don’t mean that I’m going to give you advice that is irrelevant and impractical. I mean that I’d like to take a step backwards and try to look at the bigger picture. There won’t be any code. There won’t be any hints and tips that will help you in working on the Web today. Let’s leave today behind and delve back into the past.

Let’s start with the Norman conquest of England in 1066. This was the watershed moment that split history into the Dark Ages of everything pre-Hastings and the Middle Ages of everything afterwards. Twenty years after the invasion, William the Conqueror commissioned the Domesday Book, a remarkably thorough snapshot of life in 11th Century England. This document still exists today. It rests in a glass case in the National Archives in this very city.

In the run-up to the 900th anniversary of the Domesday Book’s completion, the BBC began the Domesday Project, an ambitious attempt to create a multimedia version of the document. The medium they chose? Laserdiscs. The medium was out of date before the project was even finished. Vellum, while not the most popular storage medium these days, has proven to be far more durable.

The debacle of the Domesday Project is often cited in hand-wringing discussions around the problems of digital preservation. The laserdisc format was not durable. I don’t mean that the physical medium stopped storing its ones and zeros. I mean those ones and zeros don’t make any sense to a modern computer. To put it another way, laserdiscs are inaccessible.

Let’s go back even further to examine a medium that is even more durable than vellum. Egyptian hieroglyphic writing was often carved into stone. Symbols dating back as far as 3200 BC have survived to this day. But for most of the past two millennia, this writing was completely inaccessible. The ones and zeros had been preserved but the key to interpreting them had not. It was only thanks to the Rosetta Stone (also on display in this very city) and the valiant efforts of Champollion that we can read and understand hieroglyphics today.

By the way — and this is a complete tangent — do you know what the great-grandson of Champollion does for a living? I only know this because my wife is a translator: he writes software for translators. Well, I say software …he’s actually created a plugin for Word. So his legacy might not be quite as enduring as his ancestor’s.

Word suffers from the same problem as laserdiscs. It is not a good format for digital preservation. Given time, it will become an inaccessible format.

It is my contention that what is good for digital preservation is good for accessibility.

Here’s a tired old cliché: let’s compare digital documents to buildings. This conceptual metaphor is as old as the Web itself. We talk about web “sites”: an accurate description of places that so often feel as if they are under construction.

I’d like to compare the digital and the concrete in a slightly different way. In his book How Buildings Learn, Stewart Brand explains the concept of shearing layers, a term first coined by the architect Frank Duffy and explained thusly:

“Our basic argument is that there isn’t any such thing as a building. A building properly conceived is several layers of longevity of built components.”

Those layers are:

  • the site,
  • the structure,
  • the skin (which is the exterior surface),
  • the services (like wiring and pipes),
  • the space plan, and
  • the stuff (like chairs, tables, carpets and pictures).

Each one of these shearing layers is dependent on the layer before. The stuff depends on the space plan, the skin depends on the structure, the structure depends on the site, and so on.

Already you might be seeing parallels with Web development, especially Web development carried out according to the principle of progressive enhancement. But let’s not get ahead of ourselves here. What I’d like to point out is the different pace at which each one of these shearing layers changes.

It’s easy to rearrange furniture. It’s more troublesome to change the wiring or pipes. Making changes to the fundamental structure of a building is a real pain in the ass. The site of a building is unlikely to change at all, discounting any unforeseen tectonic activity.

If we want to preserve information, we should aim to bury it in the deepest shearing layer available. Vellum and stone have worked out well because they are the informational equivalent of a reasonably deep shearing layer. But they don’t scale very well, they aren’t easily searchable and it’s extremely time-consuming to make non-destructive copies. That’s where digital storage excels.

So how can we ensure that we choose the right formats in which to store our information? How can we tell whether a storage medium is a deep shearing layer? How can we avoid reinventing the laserdisc?

We have a few rules of thumb to help us answer those questions.

Open formats are better than closed formats. I don’t mean they are necessarily qualitatively better but from the viewpoint of digital preservation (and therefore, accessibility), over a long enough timescale they are always better.

The terms “open” and “closed” are fairly nebulous. Rather than define them too rigidly, I’d like to point to the qualities that can be described as either open or closed. The truth is that most formats contain a mixture of open and closed qualities.

First of all, there’s the development process of creating a format in the first place. On the face of it, a closed process might seem preferable. It allows greater control of how a format develops. But it turns out that this isn’t always desirable. The open-source model of development, for all its chaotic flaws, has one huge advantage: evolution. Time and time again, the open-source community has produced efficient, well-honed gems instead of the Towers of Babel that would be logically expected. That’s because Darwinian selection, be it natural or otherwise, will always produce the best adaptations for any environment. It doesn’t matter if we’re talking about ones and zeros instead of strands of DNA; the Theory of Evolution is borne out in either case. Microsoft aren’t getting their ass kicked by the Linux penguin or the burning fox of fire; Microsoft are getting their ass kicked by Charles Darwin.

Open-source development is the most obvious open quality that a format can have. Another open quality is standardization. Again, at first glance, this might seem counter-intuitive. After all, the standardization process is all about defining boundaries and setting limits as to what is and isn’t permitted. But this deliberate reining in of the possibility space is what gives a format longevity. This will come as no surprise to the designers amongst you who are well aware that constraints breed creativity. As Douglas Adams said, we demand rigidly-defined areas of doubt and uncertainty.

As a card-carrying member of The Web Standards Project, it will probably come as no surprise that I’m rather fond of standards. But my fondness for standards extends beyond the Web. When visiting Paris with my good friend and fellow geek Brian Suda, we tried calling up the International Bureau of Weights and Measures which has its headquarters there. We wanted to see the meter. But we were rebuffed in brusque French fashion. “Zis is not a museum!”

Harrumph! Who needs the French anyway? The true father of standards is a British man, a member of The Royal Society which was based, yes, right here in this city. His name was Joseph Whitworth and he was an engineer. A developer in other words. He standardized screw threads. Before Whitworth, screws were made on a case-by-case basis, each one different from the next. That didn’t scale well for the ambitious project that Whitworth was working on. He was the chief engineer on Charles Babbage’s difference engine which, although it can’t boast a direct lineage to this computer, bears an uncanny resemblance in its internal design. I love the idea that there’s a connection between the screws that were created for the difference engine and the standards that we use to build the Web.

Standardization doesn’t necessarily lead to qualitatively better formats. Quite the opposite in fact. The standardization process, by its very nature, involves compromise. But I would rather use a compromised standardized format than a perfect proprietary one.

The Flash format, for example, while it has some open qualities remains mostly closed as long as the Flash player remains under lock and key. I’ve discussed this with my fellow Brightonian Aral Balkan who knows a thing or two about Flash. He sympathises with Adobe’s position, claiming that if anybody were able to build a Flash player, then developers would have to support buggy players. Aral recently made a foray into building a site using CSS for layout. Now that he’s experienced the pain of cross-browser, cross-platform development, the last thing he wants is to port that pain over to the Flash environment. I see his point but personally I’m willing to pay the price for working with standardized formats… even if I sometimes do find myself tearing my hair out over some browser’s inconsistent rendering of some CSS rule.

The standardization of HTML, CSS and ECMAScript means that, in theory, anyone can make a Web browser. While I hope that remains just a theory (I don’t want any more browsers, thank you very much) that bodes very well for the longevity of data written in those formats.

Of that trio of formats, the one that’s most directly relevant to information storage and accessibility is HTML. It’s also a vital component in another trio of technologies: HTTP, URLs and HTML. If I had any slides, I’d probably be showing you a Venn diagram right now with HTML as the common component bridging the infrastructure and the content of the World Wide Web.

I’ve had the great pleasure of meeting some of the people who worked with Tim Berners-Lee at CERN ‘round about the time that he created the Hypertext Markup Language, the World Wide Web and the first Web browser. One of those people, the lovely Håkon Wium Lie was so enamoured with HTML he placed a bet that the language would be around for at least 50 years. That’s a good start. That’s in a different shearing layer to most of the file formats that our computers read today.

The Web was not the first distributed network of documents. Tim Berners-Lee stood on the shoulders of giants like Vannevar Bush and Doug Engelbart. HTML is far from the best possible hypertext system. Other systems envisioned two-way linkage and Ted Nelson’s idea of transclusion would be a welcome addition to the World Wide Web.

The strength of HTML is its simplicity. Simplicity beats complexity for many of the same reasons that open beats closed. Simple formats are more likely to have a longer lifespan. Yes, it can sometimes feel limiting to work with a relatively small number of HTML elements but on balance, I don’t mind paying that price. Remember what I said about constraints breeding creativity? Just look at the amazing multi-faceted Web that we’ve managed to construct with this simple technology.

The same simplicity that informs HTML extends right down the stack into the infrastructure of the Hypertext Transfer Protocol. It’s not just simple, it’s downright dumb. By design. In retrospect, given the simple, open nature of HTTP plus URLs plus HTML, the rise of the stupid network looks inevitable.

As you can probably tell, I’m a big fan of HTML. Not only do I believe it to be a relatively durable format, I believe its simplicity lends itself well to accessibility. The most obvious example of this is the way that HTML can be interpreted by a screen reader. But that’s just one example of information stored in markup being transformed into another format (in this case, speech). Another example would be transforming information from markup onto a piece of paper by printing out a web page.

I’ve come to realize that there are fundamentally two kinds of web designer. On one side, you’ve got the people who, perhaps with a background in print, think that when an HTML document is rendered on a screen in a browser, that’s the end of the line. For them, markup, CSS and JavaScript are the means to that end. Then there’s the other kind of web designer. Let’s call them the professionals. These are the people who realise that the very strength of the Web is the fact that you don’t know how someone is going to consume your information. They might have it printed out, they might have it read out or they might view it on a screen but even then, who knows what size that screen will be or what kind of device the screen is attached to? It might be a computer, it might be a mobile phone, it might be a fridge. How do you design for that?

The glib answer is to surrender control and embrace flexibility. Instead of battling against the anarchic nature of the Web, go with it.

I’m sure that piece of advice is old news to you but I think you can take it further. Embrace flexibility in your attitude towards accessibility.

We nerds tend to be a logical bunch. We like looking at the world as a binary system where there’s a right way and a wrong way to do something. But accessibility isn’t that simple. It’s not black and white — it’s a big messy shade of grey. Reducing accessibility down to a Boolean value is harmful.

Who was at @media last year? Remember when Joe Clark gave a nuanced and well-argued presentation entitled When Web Accessibility Is Not Your Problem? Before he had even left the stage, people were already claiming that Joe Clark was saying “Web accessibility is not your problem.”

This Reductio ad Absurdum has got to stop. It even creeps into our thinking about users. We start thinking about disability as a permanent state, either one or zero. It isn’t that simple. If I’m suffering from a dearth of sleep or a surfeit of alcohol, I am cognitively disabled. If I’m trying to use the trackpad on my laptop while I’m squashed into a seat on the train from Brighton to London, I am motor-impaired.

Also, let’s stop talking about making websites accessible. Instead, let’s talk about keeping websites accessible. I’m not saying that HTML is a magic bullet but as long as you are using the most semantically appropriate elements to mark up your content, you are creating something that is, by default, accessible. It’s only afterwards, when we start adding the bells and whistles, that the problems can begin.

Don’t get me wrong: I’m not saying that we should censor ourselves and stifle our innovative ideas. I’m just talking about having a good baseline of solid structure. For a start, don’t fuck with links and forms.

If you ask me what technology I think every web designer should know, I’m not going to answer with CSS or Ajax or any programming language. No, I think that every web designer should know the difference between GET and POST. Know when to use a link and when to use a form. This is basic stuff that was built into the infrastructure of the Web from day one.
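To put it in markup terms (a crude illustration): if the request just reads information, it’s a link; if it changes something on the server, it’s a form.

    <!-- safe, repeatable, bookmarkable: a link (GET) -->
    <a href="/search?q=accessibility">Search for accessibility</a>

    <!-- changes state on the server: a form (POST) -->
    <form method="post" action="/comments">
      <textarea name="comment"></textarea>
      <button type="submit">Post comment</button>
    </form>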

GET and POST aren’t the only methods that were created at the birth of the Web. Tim Berners-Lee also gave us the lesser-known PUT and DELETE. From the start, the World Wide Web was conceived as a read/write environment. It just didn’t turn out that way …until now.

Speaking for myself, I’ve found that I’m increasingly using the Web to publish information as well as consume it. I’ve got a bookmarks folder called “my other head” which contains links to the services I use daily: Flickr, Twitter, Pownce, Magnolia. They aren’t just websites, they are publishing tools. On today’s Web, I read and write in equal measure.

Accessibility guidelines that deal with Web content just don’t cut it any more. Guidelines intended for authoring tools are more applicable (if I had my way, the number one guideline would be “don’t fuck with links and forms”).

Accessibility doesn’t just mean that everyone should be able to consume what’s on the Web, it also means that everyone should be able to publish on the Web.

On the face of it, the current situation does not look good. Most social media sites have dreadful markup, obtrusive JavaScript and inflexible designs. But at the same time, they have a pervasive sense of openness that I find very encouraging indeed. The shared ethos is that this is your data so you should have access to it.

These services provide the ability to read and write information not just through an HTML page rendered in a browser. They offer the same access in a multiplicity of ways from the simplicity of microformats through to RSS and right up to fully-fledged APIs. The most successful social media websites are the ones where you don’t have to visit the site at all.

Time for another tired old cliché: information wants to be free. As trite as this sounds, I think that on the Web it’s fundamentally true. Lack of access to data is damage. People will find a way to route around it.

Matthew Somerville excels at routing around the damage of inaccessibility. He’s the guy who built the accessible version of the Odeon cinema listings. He also built traintimes.org.uk, a more accessible way of getting train timetable information. He had to scrape the original websites to build these. That’s hard work. APIs provide an easier way for us to create alternate, accessible versions of valuable services.

If APIs are an accessibility feature, then we need to change how we judge websites accordingly. Suppose we’re looking at a web page with a list of stories. If the document doesn’t make good use of headers — h1, h2, etc. — then that’s a minus point. But if there’s a link to an RSS equivalent, then that’s a plus point.
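That RSS “plus point” is often nothing more than a link element in the head of the document (a generic example):

    <link rel="alternate" type="application/rss+xml"
          title="Latest stories (RSS)" href="/stories.rss">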

The more numerous and varied the formats in which you can access data, the more accessible that data is. I realise that this flies in the face of the programming principle of DRY: Don’t Repeat Yourself. But really, you can never have too much data.

I’m not suggesting that any inaccessible website that provides an API automatically receives a “get out of a jail free” card. But I do think that the API offers more potential solutions to fixing the accessibility issues. Instead of bitching and moaning about bad markup and crappy Ajax, we could more constructively use our time hacking on the API to provide a more accessible alternative.

The idea that information must reside on one specific website is dying. I hope that outdated marketing terms like “eyeballs” and “stickiness” die along with it.

As with any great change, there’s plenty of fear. If you have a business model that is based on the premise that some data is centralised, scarce and closed, you are backing a losing horse. The inaccessibility of that model dooms it.

There is a spirit of openness and collaboration that has spread inexorably through the Web since its creation. That spirit extends beyond data formats and technology. Our concepts of ownership and property are also changing. Try to ignore any whiff of socialism you might detect — this process is much more natural and inevitable.

So we come to the most important and the most contentious quality of openness: the right to information.

In this country, we suffer many affronts to our right to information. We have to pay to access Ordnance Survey data that was gathered using our tax money. Open Street Map and Free The Postcode are the natural responses to these most egregious of insults. People are beginning to ask for other data too. The Guardian is spearheading a campaign called Free Our Data to do exactly what it says on the tin.

Data that comes laden with restrictive licensing is crippled. When those restrictions are encoded into the format itself, the data is doomed. I’m talking about what is so euphemistically referred to as Digital Rights Management.

Here’s one last tired old cliché, this one from the sphere of anthropology. If a visitor from another planet came to Earth, what would they make of our society?

This is the very situation that Iain M. Banks describes in his short story State Of The Art. A visitor from a post-singularity culture, called simply The Culture, looks down from her ship above Earth and reflects on her recent sojourn there:

I stroked one of Tagm’s hands, gazed again at the slowly revolving planet, my gaze flicking in one glance from pole to equator. ‘You know, when I was in Paris, seeing Linter for the first time, I was standing at the top of some steps in the courtyard where Linter’s place was, and I looked across it and there was a little notice on the wall saying it was forbidden to take photographs of the courtyard without the man’s permission.’ I turned to Tagm. ‘They want to own the light!’

They want to own the light. They really do. They call it plugging the analogue hole. Even DRMd images and video must eventually be converted into photons. Even DRMd audio must eventually be converted into vibrations in the air. That’s the analogue hole. They don’t just want to own the light, they want to own our very culture.

Every day we write words, we record videos, we take photographs. We also read, we watch movies, we listen to music, we look at works of art. We are contributing to a digital record that is an order of magnitude greater than the Domesday Book. This is more than just data. This is who we are. It must be preserved. It must be accessible.

It’s time to take sides. It would be hyperbole to describe it as a battle between good and evil but it’s no exaggeration to say it’s a battle between good and bad.

We can either spend our time and effort locking data up into closed formats with restrictive licensing. Or we can make a concerted effort to act in the spirit of the Web: standards, simplicity, sharing… these are the qualities of openness that will help us preserve our culture. If we want to be remembered for a culture of accessibility, we must make a commitment to open data.

Licence

This presentation is licenced under a Creative Commons attribution licence. You are free to:

Share
Copy, distribute and transmit this presentation.
Remix
Adapt the presentation.

Under the following conditions:

Attribution
You must attribute the presentation to Jeremy Keith.

Further Reading

  • The Code Book
  • How Buildings Learn
  • The Cogwheel Brain
  • Weaving The Web
  • Glut: Mastering Information Through The Ages
  • The State Of The Art
  • Free Culture: The Nature and Future of Creativity

OAuth support for Google Accounts and Contacts API - OAuth | Google Groups

As promised by Kevin Marks in the Q&A after my panel at South by Southwest, the Google Contacts API now supports OAuth. w00t!