Starting out

I had a really enjoyable time at Codebar Brighton last week, not least because Morty came along.

I particularly enjoy teaching people who have zero previous experience of making a web page. There’s something about explaining HTML and CSS from first principles that appeals to me. I especially love it when people ask lots of questions. “What does this element do?”, “Why do some elements have closing tags and others don’t?”, “Why is it textarea and not input type="textarea"?” The answer usually involves me going down a rabbit-hole of web archeology, so I’m in my happy place.

But there’s only so much time at Codebar each week, so it’s nice to be able to point people to other resources that they can peruse at their leisure. It turns out that it’s actually kind of tricky to find resources at that level. There are lots of great articles and tutorials out there for professional web developers—Smashing Magazine, A List Apart, CSS Tricks, etc.—but not so much for complete beginners.

Here are some of the resources I’ve found:

  • MarkSheet by Jeremy Thomas is a free HTML and CSS tutorial. It starts with an explanation of the internet, then the World Wide Web, and then web browsers, before diving into HTML syntax. Jeremy is the same guy who recently made CSS Reference.
  • Learn to Code HTML & CSS by Shay Howe is another free online book. You can buy a paper copy too. It’s filled with good, clear explanations.
  • Zero to Hero Coding by Vera Deák is an ongoing series. She’s starting out on her career as a front-end developer, so her perspective is particularly valuable.

If I find any more handy resources, I’ll link to them and tag them with “learning”.

A decade on Twitter

I wrote my first tweet ten years ago.

That’s the tweetiest of tweets, isn’t it? (and just look at the status ID—only five digits!)

Of course, back then we didn’t call them tweets. We didn’t know what to call them. We didn’t know what to make of this thing at all.

I say “we”, but when I signed up, there weren’t that many people on Twitter that I knew. Because of that, I didn’t treat it as a chat or communication tool. It was more like speaking into the void, like blogging is now. The word “microblogging” was one of the terms floating around, grasped at by those of us trying to get to grips with what this odd little service was all about.

Twenty days after I started posting to Twitter, I wrote about how more and more people that I knew were joining:

The usage of Twitter is, um, let’s call it… emergent. Whenever I tell anyone about it, their first question is “what’s it for?”

Fair question. But there isn’t really an answer. You send messages either from the website, your mobile phone, or chat. What you post and why you’d want to do it is entirely up to you.

I was quite the cheerleader for Twitter:

Overall, Twitter is full of trivial little messages that sometimes merge into a coherent conversation before disintegrating again. I like it. Instant messaging is too intrusive. Email takes too much effort. Twittering feels just right for the little things: where I am, what I’m doing, what I’m thinking.

“Twittering.” Don’t laugh. “Tweeting” sounded really silly at first too.

Now at this point, I could start reminiscing about how much better things were back then. I won’t, but it’s interesting to note just how different it was.

  • The user base was small enough that there was a public timeline of all activity.
  • The characters in your username counted towards your 140 characters. That’s why Tantek changed his handle to be simply “t”. I tried it for a day. I think I changed my handle to “jk”. But it was too confusing so I changed it back.
  • We weren’t always sure how to write our updates either—your username would appear at the start of the message, so lots of us wrote our updates in the third person present (Brian still does). I’m partial to using the present continuous. That was how I wrote my reaction to Chris’s weird idea for tagging updates.

I think about that whenever I see a hashtag on a billboard or a poster or a TV screen …which is pretty much every day.

At some point, Twitter updated their onboarding process to include suggestions of people to follow, subdivided into different categories. I ended up in the list of designers to follow. Anil Dash wrote about the results of being listed and it reflects my experience too. I got a lot of followers—it’s up to around 160,000 now—but I’m pretty sure most of them are bots.

There have been a lot of changes to Twitter over the years. In the early days, those changes were driven by how people used the service. That’s where the @-reply convention (and hashtags) came from.

Then something changed. The most obvious sign of change was the way that Twitter started treating third-party developers. Where they previously used to encourage and even promote third-party apps, the company began to crack down on anything that didn’t originate from Twitter itself. That change reflected the results of an internal struggle between the people at Twitter who wanted it to become an open protocol (like email), and those who wanted it to become a media company (like Yahoo). The media camp won.

Of course Twitter couldn’t possibly stay the same given its incredible growth (and I really mean incredible—when it started to appear in the mainstream, in films and on TV, it felt so weird: this funny little service that nerds were using was getting popular with everyone). Change isn’t necessarily bad, it’s just different. Your favourite band changed when they got bigger. South by Southwest changed when it got bigger—it’s not worse now, it’s just very different.

Frank described the changing nature of Twitter perfectly in his post From the Porch to the Street:

Christopher Alexander made a great diagram, a spectrum of privacy: street to sidewalk to porch to living room to bedroom. I think for many of us Twitter started as the porch—our space, our friends, with the occasional neighborhood passer-by. As the service grew and we gained followers, we slid across the spectrum of privacy into the street.

I stopped posting directly to Twitter in May 2014. Instead I now write posts on my site and then send a copy to Twitter. And thanks to the brilliant Brid.gy, I get replies, favourites and retweets sent back to my own site—all thanks to Webmention, which just became a W3C proposed recommendation.

It’s hard to put into words how good this feels. There’s a psychological comfort blanket that comes with owning your own data. I see my friends getting frustrated and angry as they put up with an increasingly alienating experience on Twitter, and I wish I could explain how much better it feels to treat Twitter as nothing more than a syndication service.

When Twitter rolls out changes these days, they certainly don’t feel like they’re driven by user behaviour. Quite the opposite. I’m currently in the bucket of users being treated to new @-reply behaviour. Tressie McMillan Cottom has written about just how terrible the new changes are. You don’t get to see any usernames when you’re writing a reply, so you don’t know exactly how many people are going to be included. And if you mention a URL, the username associated with that website may get added to the tweet. The end result is that you write something, you publish it, and then you think “that’s not what I wrote.” It feels wrong. It robs you of agency. Twitter have made lots of changes over the years, but this feels like the first time that they’re going to actively edit what you write, without your permission.

Maybe this is the final straw. Maybe this is the change that will result in long-time Twitter users abandoning the service. Maybe.

Me? Well, Twitter could disappear tomorrow and I wouldn’t mind that much. I’d miss seeing updates from friends who don’t have their own websites, but I’d carry on posting my short notes here on adactio.com. When I started posting to Twitter ten years ago, I was speaking (or microblogging) into the void. I’m still doing that ten years on, but under my terms. It feels good.

I’m not sure if my Twitter account will still exist ten years from now. But I’m pretty certain that my website will still be around.

And now, if you don’t mind…

I’m off to grab some lunch.

Marking up help text in forms

Zoe asked a question on Twitter recently:

‘Sfunny—I had been pondering this exact question. In fact, I threw a CodePen together a couple of weeks ago.

Visually, both examples look the same; there’s a label, then a form field, then some extra text (in this case, a validation message).

The first example puts the validation message in an em element inside the label text itself, so I know it won’t be missed by a screen reader—I think I first learned this technique from Derek many years ago.

<div class="first error example">
 <label for="firstemail">Email
  <em class="message">must include the @ symbol</em>
 </label>
 <input type="email" id="firstemail" placeholder="e.g. you@example.com">
</div>

The second example puts the validation message after the form field, but uses aria-describedby to explicitly associate that message with the form field—this means the message should be read after the form field.

<div class="second error example">
 <label for="secondemail">Email</label>
 <input type="email" id="secondemail" placeholder="e.g. you@example.com" aria-describedby="seconderror">
 <em class="message" id="seconderror">must include the @ symbol</em>
</div>

In both cases, the validation message won’t be missed by screen readers, although there’s a slight difference in the order in which things get read out. In the first example we get:

  1. Label text,
  2. Validation message,
  3. Form field.

And in the second example we get:

  1. Label text,
  2. Form field,
  3. Validation message.

In this particular example, the ordering in the second example more closely matches the visual representation, although I’m not sure how much of a factor that should be in choosing between the options.

Anyway, I was wondering whether one of these two options is “better” or “worse” than the other. I suspect that there isn’t a hard and fast answer.

Unlabelled search fields

Adam Silver is writing a book on forms—you may be familiar with his previous book on maintainable CSS. In a recent article (that for some reason isn’t on his blog), he looks at markup patterns for search forms and advocates that we should always use a label. I agree. But for some reason, we keep getting handed designs that show unlabelled search forms. And no, a placeholder is not a label.

I had a discussion with Mark about this the other day. The form he was marking up didn’t have a label, but it did have a button with some text that would work as a label:

<input type="search" placeholder="…">
<button type="submit">
Search
</button>

He was wondering if there was a way of using the button’s text as the label. I think there is. Using aria-labelledby like this, the button’s text should be read out before the input field:

<input aria-labelledby="searchtext" type="search" placeholder="…">
<button type="submit" id="searchtext">
Search
</button>

Notice that I say “think” and “should.” It’s one thing to figure out a theoretical solution, but only testing will show whether it actually works.

The W3C’s WAI tutorial on labelling content gives an example that uses aria-label instead:

<input type="text" name="search" aria-label="Search">
<button type="submit">Search</button>

It seems a bit of a shame to me that the label text is duplicated in the button and in the aria-label attribute (and being squirrelled away in an attribute, it runs the risk of metacrap rot). But they know what they’re talking about so there may well be very good reasons to prefer duplicating the value with aria-label rather than pointing to the value with aria-labelledby.

I thought it would be interesting to see how other sites are approaching this pattern—unlabelled search forms are all too common. All the markup examples here have been simplified a bit, removing class attributes and the like…

The BBC’s search form does actually have a label:

<label for="orb-search-q">
Search the BBC
</label>
<input id="orb-search-q" placeholder="Search" type="text">
<button>Search the BBC</button>

But that label is then hidden using CSS:

position: absolute;
height: 1px;
width: 1px;
overflow: hidden;
clip: rect(1px, 1px, 1px, 1px);

That CSS—as pioneered by Snook—ensures that the label is visually hidden but remains accessible to assistive technology. Using something like display: none would hide the label for everyone.

Medium wraps the input (and icon) in a label and then gives the label a title attribute. Like aria-label, a title attribute should be read out by screen readers, but it has the added advantage of also being visible as a tooltip on hover:

<label title="Search Medium">
  <span class="svgIcon"><svg></svg></span>
  <input type="search">
</label>

This is also what Google does on what must be the most visited search form on the web. But the W3C’s WAI tutorial warns against using the title attribute like this:

This approach is generally less reliable and not recommended because some screen readers and assistive technologies do not interpret the title attribute as a replacement for the label element, possibly because the title attribute is often used to provide non-essential information.

Twitter follows the BBC’s pattern of having a label but visually hiding it. They also have some descriptive text for the icon, and that text gets visually hidden too:

<label class="visuallyhidden" for="search-query">Search query</label>
<input id="search-query" placeholder="Search Twitter" type="text">
<span class="search-icon>
  <button type="submit" class="Icon" tabindex="-1">
    <span class="visuallyhidden">Search Twitter</span>
  </button>
</span>

Here’s their CSS for hiding those bits of text—it’s very similar to the BBC’s:

.visuallyhidden {
  border: 0;
  clip: rect(0 0 0 0);
  height: 1px;
  margin: -1px;
  overflow: hidden;
  padding: 0;
  position: absolute;
  width: 1px;
}

That’s exactly the CSS recommended in the W3C’s WAI tutorial.

Flickr have gone with the aria-label pattern as recommended in that W3C WAI tutorial:

<input placeholder="Photos, people, or groups" aria-label="Search" type="text">
<input type="submit" value="Search">

Interestingly, neither Twitter nor Flickr is using type="search" on the input elements. I’m guessing this is probably because of frustrations with trying to undo the default styles that some browsers apply to input type="search" fields. Seems a shame though.

Instagram also doesn’t use type="search" and makes no attempt to expose any kind of accessible label:

<input type="text" placeholder="Search">
<span class="coreSpriteSearchIcon"></span>

Same with Tumblr:

<input tabindex="1" type="text" name="q" id="search_query" placeholder="Search Tumblr" autocomplete="off" required="required">

…although the search form itself does have role="search" applied to it. Perhaps that helps to mitigate the lack of a clear label?
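
For illustration, that kind of search landmark looks something like this (a reconstruction for the sake of example, not Tumblr’s exact markup):

<form role="search">
 <input tabindex="1" type="text" name="q" id="search_query" placeholder="Search Tumblr" autocomplete="off" required="required">
</form>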

After that whistle-stop tour of a few of the web’s unlabelled search forms, it looks like the options are:

  • a visually-hidden label element,
  • an aria-label attribute,
  • a title attribute, or
  • some text associated with the field using aria-labelledby.

But that last one needs some testing.

Update: Emil did some testing. Looks like all screen-reader/browser combinations will read the associated text.

Accessible progressive disclosure revisited

I wrote a little while back about making an accessible progressive disclosure pattern. It’s very basic—just a few ARIA properties and a bit of JavaScript sprinkled onto some basic HTML. The HTML contains a button element that toggles the aria-hidden property on a chunk of markup.

Earlier this week I had a chance to hang out with accessibility experts Derek Featherstone and Devon Persing so I took the opportunity to pepper them with questions about this pattern. My main question was “Should I automatically focus the toggled content?”

Derek’s response was very perceptive. He wanted to know why I was using a button. Good question. When you think about it, what I’m doing is pointing from one element to another. On the web, we point with links.

There are no hard’n’fast rules about this kind of thing, but as Derek put it, it helps to think about whether the action involves controlling something (use a button) or taking the user somewhere (use a link). At first glance, the progressive disclosure pattern seems to be about controlling something—toggling the appearance of another element. But if I’m questioning whether to automatically focus that element, then really I’m asking whether I want to take the user to that place in the document—in other words, linking to it.

I decided to update the markup. Here’s what I had before:

<button aria-controls="content">Reveal</button>
<div id="content"></div>

Here’s what I have now:

<a href="#content" aria-controls="content">Reveal</a>
<div id="content"></div>

The logic in the JavaScript remains exactly the same:

  1. Find any elements that have an aria-controls attribute (these were buttons, now they’re links).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated link (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those links.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.
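
Translated into code, that logic might look something like the following—a minimal sketch, not the exact code from the CodePen:

document.querySelectorAll('a[aria-controls]').forEach(function (trigger) {
  var content = document.getElementById(trigger.getAttribute('aria-controls'));
  content.setAttribute('aria-hidden', 'true'); // hide the content…
  content.setAttribute('tabindex', '-1'); // …but keep it focusable
  trigger.setAttribute('aria-expanded', 'false');
  trigger.addEventListener('click', function (event) {
    event.preventDefault();
    var hidden = content.getAttribute('aria-hidden') === 'true';
    content.setAttribute('aria-hidden', String(!hidden));
    trigger.setAttribute('aria-expanded', String(hidden));
    if (hidden) {
      // the content has just been revealed, so focus it
      content.focus();
    }
  });
});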

You can see it in action on CodePen.


Accessible progressive disclosure

Over on The Session I have a few instances of a progressive disclosure pattern. It’s just your basic show/hide toggle: click on a button; see some more content. For example, there’s a “download” button for every tune that displays options to download the tune in different formats (ABC and midi).

To begin with, I was using the :checked pseudo-class pattern that Charlotte has documented so well. I really like that pattern. It feels nice and straightforward. But then I got some feedback from someone using the site:

the link for midi files is no longer coming up on the tune pages. I am blind so I rely on the midi’s when finding tunes for my students.

I wrote back saying the link to download midi files was revealed by the “download” option. The response:

Excellent. I have it now, I was just looking for the midi button which wasn’t there. the actual download button doesn’t read as a button under each version of the tune but now I know it’s there I know what I am doing. I am using the JAWS screen reader.

This was just one person …one person who took the time to write to me. What about other screen reader users?

I dabbled around with adding role="button" to the checkbox or the label, but that felt really icky (contradicting the inherent role of those elements) and it didn’t seem to make much difference anyway.

I decided to refactor the progressive disclosure to use JavaScript instead of just CSS. I wanted to make sure that accessibility was built into the functionality, rather than just bolted on. That’s why the code I’ve written doesn’t rely on the buttons having a particular class value; instead the buttons must have an aria-controls attribute that associates the button with the element it toggles (in much the same way that a for attribute associates a label with a form field).

Here’s the logic:

  1. Find any elements that have an aria-controls attribute (these should be buttons).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated button (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those buttons.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.

You can see it in action on CodePen.

I’m still playing around with this. I think the :focus styles are probably far too subtle right now—see this excellent presentation from Laura Palmaro for more on that. I’m also not sure if the revealed content should automatically take focus. I’ll see if I can get some feedback from people on The Session using screen readers—there’s quite a few of them.
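
As for those focus styles, something more obvious might be as simple as a thick, high-contrast outline—a hypothetical example, not the CSS I’m actually using:

button[aria-controls]:focus {
  outline: 3px solid #4d90fe;
}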

Feel free to use my code but you might want to check out Jason’s code to do the same thing—his is bound to be nicer to work with.

Update: In response to this discussion, I’ve decided not to automatically focus the expanded content.

Enhance! Conf!

Two weeks from now there will be an event in London. You should go to it. It’s called EnhanceConf:

EnhanceConf is a one day, single track conference covering the state of the art in progressive enhancement. We will look at the tools and techniques that allow you to extend the reach of your website/application without incurring additional costs.

As you can probably guess, this is right up my alley. Wild horses wouldn’t keep me away from it. I’ve been asked to be Master of Ceremonies for the day, which is a great honour. Luckily I have some experience in that department from three years of hosting Responsive Day Out. In fact, EnhanceConf is going to run very much in the mould of Responsive Day Out, as organiser Simon explained in an interview with Aaron.

But the reason to attend is of course the content. Check out that line-up! Now that is going to be a knowledge-packed day: design, development, accessibility, performance …these are a few of my favourite things. Nat Buckley, Jen Simmons, Phil Hawksworth, Anna Debenham, Aaron Gustafson …these are a few of my favourite people.

Tickets are still available. Use the discount code JEREMYK to get a whopping 15% off the ticket price.

There’s also a scholarship:

The scholarships are available to anyone not normally able to attend a conference.

I’m really looking forward to EnhanceConf. See you at RSA House on March 4th!

Testing

It’s tempting to think of testing with screen-readers as being like testing with browsers. With browser testing, you’re checking to see how a particular piece of software deals with the code you’re throwing at it. A screen reader is a piece of software too, so it makes sense to approach it the same way, right?

I don’t think so. I think it’s really important that if someone is going to test your site with a screen reader, it should be someone who uses a screen reader every day.

Think of it this way: you wouldn’t want a designer or developer to do usability testing by testing the design or code on themselves. That wouldn’t give you any useful data. They’re already familiar with what problems the design is supposed to be solving, and how the interface works. That’s why you need to do usability testing with someone from outside, someone who wasn’t involved in the design or development process.

It’s no different when it comes to users of assistive technology. You’re not trying to test their technology; you’re trying to test how well the thing you’re building works for the person using the technology.

In short:

Don’t think of screen-reader testing as a form of browser testing; think of it as a form of usability testing.

Year’s end

I usually write a little post at the end of each year, looking back at the previous 365 days. But this time I feel like I’ve already done that. I’ve already written about my year of learning with Charlotte, which describes what I’ve been doing at work for the last twelve months.

Apart from that, there isn’t much more to say about 2015. And that’s a good thing. 2014 was a year tainted by death so I’m not complaining about having just had a year with no momentous events.

The worst that could be said of my 2015 is that Clearleft went through a tough final quarter of the year (some big projects ended up finishing around the same time, which left us floundering for new business—something we should have foreseen), but in the grand scheme of things, that’s not exactly a tragedy.

Mostly when I think back on the year, my memories are pleasant ones. I travelled to interesting places, ate some great food, and spent time with lovely people—Jenn and Sutter’s wedding being a beautiful blend of all three.

So instead of doing a year-end roundup of my own, I’ll just point to some others: James, Alice, Laura, Eliot, Remy, and Brad.

I look forward to reading more posts on their websites in 2016. I hope that you’ll be publishing on your website this year too.

Happy new year!

(Photos from 2015: Japan; Hackfarm; Anna and Cennydd; Rachel at Jon and Hannah’s wedding; Bletchley Park; Beerleft; Brad visits Brighton; visiting my mother in Ireland; Jessica and Emil; Homebrew Website Club; Jessica and Dan; Jenn and Sutter; me and Brian; Clearleft turns 10; Jessica in Austin; Clearleft interns; Christmas jumpers; Christmas in Seattle.)

Mind set

Whenever I have a difference of opinion with someone, I try to see things from their perspective. But sometimes I’m not very good at it. I need to get better.

Here’s an example: I think that users of small-screen touch-enabled devices should be able to pinch-to-zoom content on the web. That idea was challenged twice in recent times:

  1. The initial meta viewport element in AMP HTML demanded that pinch-to-zoom be disabled (it has since been relaxed).
  2. WebKit is removing the 350ms delay on tap …but only if the page disables pinch-to-zoom (a bug has been filed).
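
For context, disabling pinch-to-zoom is done with the viewport meta element, along these lines (illustrative values—the exact requirements have varied over time):

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">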

In both cases, I strongly disagreed with the decision to disable what I believe is a vital accessibility feature. But the strength of my conviction is irrelevant. If anything, it is harmful. The case for maintaining accessibility was so obvious to me, I acted as though it were self-evident to everyone. But other people have different priorities, and that’s okay.

I should have stopped and tried to see things from the perspective of the people implementing these changes. Nobody would deliberately choose to remove an important accessibility feature without good reason, so what would those reasons be? Does removing pinch-to-zoom enhance performance? If so, that’s an understandable reason to mandate the strict meta viewport element. I still disagree with the decision, but now when I argue against it, I can approach it from that angle. Instead of dramatically blustering about how awful it is to remove pinch-to-zoom, my time would have been better spent calmly saying “I understand why this decision has been made, but here’s why I think the accessibility implications are too severe…”

It’s all too easy—especially online—to polarise just about any topic into a binary black and white issue. But of course the more polarised differences of opinion become, the less chance there is of changing those opinions.

If I really want to change someone’s mind, then I need to make the effort to first understand their mind. That’s going to be far more productive than declaring that my own mind is made up. After all, if I show no willingness to consider alternative viewpoints, why should they?

There’s an old saying that before criticising someone, you should walk a mile in their shoes. I’m going to try to put that into practice, and not for the two obvious reasons:

  1. If we still disagree, now we’re a mile away from each other, and
  2. I’ve got their shoes.

Edge words

I really enjoyed last year’s Edge conference so I made sure not to miss this year’s event, which took place last weekend.

The format was a little different this time ‘round. Last year the whole day was taken up with panels. Now, panels are often rambling, cringeworthy affairs, but Edge Conf is one of the few events that does panels well: they’re run on a tight schedule and put together with lots of work in advance. At this year’s Edge, the morning was taken up with these tightly-run panels as usual, but the afternoon consisted of more Barcamp-like breakout sessions.

I’ve got to be honest: I don’t think the new format worked that well. The breakout sessions didn’t have the true flexibility that you get with an unconference schedule, so there was no opportunity to merge similarly-themed sessions. There was, for example, a session on components at the same time as a session on accessibility in web components.

That highlights the other issue: FOMO. I’m really not a fan of multi-track events; there were so many sessions that sounded really interesting, but I couldn’t clone myself and go to all of them at once.

But, like I said, the first half of the day was taken up with four sequential (rather than parallel) panels and they were all excellent. All of the moderators did a fantastic job, and I was fortunate enough to sit in on the progressive enhancement panel expertly moderated by Lyza.

The event is called Edge for a reason. There is a rarefied atmosphere—and not just because of the broken-down air conditioning. This is a room full of developers on the cutting edge of web development technologies. Being at Edge Conf means being in a bubble. And being in a bubble is absolutely fine as long as you’re aware you’re in a bubble. It would be problematic if anyone were to mistake the audience and the discussions at Edge as being in any way representative of typical working web devs.

One of the most insightful comments of the day came from Christian who said, “Yes, but this is Edge Conf.” You’re going to need some context for that quote, so here it is…

On the web components panel that Christian was moderating, Alex was making a point about the ubiquity of tools—“Tooling will save you”, he said—and he asked for a show of hands from the audience on who was not using some particular tooling technology; transpilers, package managers, build tools, I can’t remember the specific question. Nobody put their hand up. “See?” asked Alex. “Yes”, said Christian, “but this is Edge Conf.”

Now, while I wasn’t keen on the format of the afternoon with its multiple simultaneous breakout sessions, that doesn’t mean I didn’t enjoy the ones I plumped for. Quite the opposite. The last breakout session of the day, again expertly moderated by Lyza, was particularly great.

The discussion was all about progressive enhancement. There seemed to be a general consensus that we’re all 100% committed to the results of progressive enhancement—greater availability, wider reach, and better performance—but that the term itself is widely misunderstood as “making all of your functionality work even with JavaScript switched off”. This misunderstanding couldn’t be further from the truth:

  1. It’s not about making all of your functionality available; it’s making your core functionality available: everything else can be considered an enhancement and it’s perfectly fine if not everyone gets that enhancement.
  2. This isn’t about switching JavaScript off; it’s about any particular technology not being available for reasons we can’t foresee (network issues, browser issues, whatever it may be).

And yet the misunderstanding persists. For that reason, most of the people in the discussion at Edge Conf were in favour of simply dropping the term progressive enhancement and instead focusing on terms like availability and access. Tim writes:

I’m not sure what we call it now. Maybe we do need another term to get people to move away from the “progressive enhancement = working without JS” baggage that distracts from the real goal.

And Stuart writes:

So I’m not going to be talking about progressive enhancement any more. I’m going to be talking about availability. About reach. About my web apps being for everyone even when the universe tries to get in the way.

But Jason writes:

I completely disagree that we should change nomenclature because there exists some small segment of Web designers unwilling to expand their development toolbox. I think progressive enhancement—the term—remains useful, descriptive, and appropriate.

I’m torn. On the one hand, I agree with Jason. The term “progressive enhancement” is a great descriptor. But on the other hand, I don’t want to end up like that guy who’s made it his life’s work to change every instance of the phrase “comprises of” to “comprises” (or “consists of”) on Wikipedia. Technically, he’s correct. But it doesn’t sound like a fun way to spend your days.

I guess my worry is, if I write an article or give a presentation, and I title it something to do with progressive enhancement, am I going to alienate and put off the very audience I’m trying to reach? But if I title it something else, am I tricking people?

Words are hard.

Relinkification

On Jessica’s recommendation, I read a piece on the Guardian website called The eeriness of the English countryside:

Writers and artists have long been fascinated by the idea of an English eerie – ‘the skull beneath the skin of the countryside’. But for a new generation this has nothing to do with hokey supernaturalism – it’s a cultural and political response to contemporary crises and fears

I liked it a lot. One of the reasons I liked it was not just for the text of the writing, but the hypertext of the writing. Throughout the piece there are links off to other articles, books, and blogs. For me, this enriches the piece and it set me off down some rabbit holes of hyperlinks with fascinating follow-ups waiting at the other end.

Back in 2010, Scott Rosenberg wrote a series of three articles over the course of two months called In Defense of Hyperlinks:

  1. Nick Carr, hypertext and delinkification,
  2. Money changes everything, and
  3. In links we trust.

They’re all well worth reading. The whole thing was kicked off with a well-rounded debunking of Nicholas Carr’s claim that hyperlinks harm text. Instead, Rosenberg finds that hyperlinks within a text embiggen the writing …providing they’re done well:

I see links as primarily additive and creative. Even if it took me a little longer to read the text-with-links, even if I had to work a bit harder to get through it, I’d come out the other side with more meat and more juice.

Links, you see, do so much more than just whisk us from one Web page to another. They are not just textual tunnel-hops or narrative chutes-and-ladders. Links, properly used, don’t just pile one “And now this!” upon another. They tell us, “This relates to this, which relates to that.”

The difference between a piece of writing being part of the web and a piece of writing being merely on the web is something I talked about a few years back in a presentation called Paranormal Interactivity at ‘round about the 15 minute mark:

Imagine if you were to take away all the regular text and only left the hyperlinks on Wikipedia, you could still get the gist, right? Every single link there is like a wormhole to another part of this “choose your own adventure” game that we’re playing every day on the web. I love that. I love the way that Wikipedia uses links.

That ability of the humble hyperlink to join concepts together lies at the heart of Tim Berners-Lee’s World Wide Web …and Ted Nelson’s Project Xanadu, and Douglas Engelbart’s Dynamic Knowledge Environments, and Vannevar Bush’s idea of the Memex. All of those previous visions of a hyperlinked world were—in many ways—superior to the web. But the web shipped. It shipped with brittle, one-way linking, but it shipped. And now today anyone can create a connection between two ideas by linking to resources that represent those ideas. All you need is an HTML document that contains some A elements with href attributes, and a URL to act as that document’s address.

Like the one you’re accessing now.
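
By way of illustration, here’s just about the smallest document that can join the web—the URL in the href is a placeholder:

<!DOCTYPE html>
<html>
<head>
<title>Two ideas, connected</title>
</head>
<body>
<p>This idea reminds me of <a href="https://example.com/another-idea">another idea</a>.</p>
</body>
</html>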

Not only can I link to that article on the Guardian’s website, I can also pair it up with other related links, like Warren Ellis’s talk from dConstruct 2014:

Inventing the next twenty years, strategic foresight, fictional futurism and English rural magic: Warren Ellis attempts to convince you that they are all pretty much the same thing, and why it was very important that some people used to stalk around village hedgerows at night wearing iron goggles.

There is definitely the same feeling of “the eeriness of the English countryside” in Warren’s talk. If you haven’t listened to it yet, set aside some time. It is enticing and disquieting in equal measure …like many of the works linked to from the piece on the Guardian.

There’s another link I’d like to make, and it happens to be to another dConstruct speaker.

From that Guardian piece:

Yet state surveillance is no longer testified to in the landscape by giant edifices. Instead it is mostly carried out by software programs running on computers housed in ordinary-looking government buildings, its sources and effects – like all eerie phenomena – glimpsed but never confronted.

James Bridle has been confronting just that. His recent series The Nor took him on a tour of a parallel, obfuscated English countryside. He returned with three pieces of hypertext:

  1. All Cameras Are Police Cameras,
  2. Living in the Electromagnetic Spectrum, and
  3. Low Latency.

I love being able to do this. I love being able to add strands to this world-wide web of ours. Not only can I say “this idea reminds me of another idea”, but I can point to both ideas. It’s up to you whether you follow those links.

Commons People

Creative Commons licences have a variety of attributes that can be combined:

  • No-derivatives: the work can be reused, but not altered.
  • Attribution: the work must be credited.
  • Share-alike: any derivatives must share the same licence.
  • Non-commercial: the work can be used, but not for commercial purposes.

That last one is important. If you don’t attach a non-commercial licence to your work, then your work can be resold for profit (it might be remixed first, or it might have to include your name—that all depends on what other attributes you’ve included in the licence).

If you’re not comfortable with anyone reselling your work, you should definitely choose a non-commercial licence.

Flickr is planning to sell canvas prints of photos that have been licensed under Creative Commons licences that don’t include the non-commercial clause. They are perfectly within their rights to do this—this is exactly what the licence allows—but some people are very upset about it.

Jeffrey says it’s short-sighted and sucky because it violates the spirit in which the photos were originally licensed. I understand that feeling, but that’s simply not the way that the licences work. If you want to be able to say “It’s okay for some people to use my work for profit, but it’s not okay for others”, then you need to apply a more restrictive licence (like copyright, or Creative Commons Non-commercial) and then negotiate on a case-by-case basis for each usage.

But if you apply a licence that allows commercial usage, you must accept that there will be commercial usages that you aren’t comfortable with. Frankly, Flickr selling canvas prints of your photos is far from a worst-case scenario.

I license my photos under a Creative Commons Attribution licence. That means they can be used anywhere—including being resold for profit—as long as I’m credited as the photographer. Because of that, my photos have shown up in all sorts of great places: food blogs, Wikipedia, travel guides, newspapers. But they’ve also shown up in some awful places, like Techcrunch. I might not like that, but it’s no good me complaining that an organisation (even one whose values I disagree with) is using my work exactly as the licence permits.

Before allowing commercial use of your creative works, you should ask “What’s the worst that could happen?” The worst that could happen includes scenarios like white supremacists, misogynists, or whacko conspiracy theorists using your work on their websites, newsletters, and billboards (with your name included if you’ve used an attribution licence). If you aren’t willing to live with that, do not allow commercial use of your work.

When I chose to apply a Creative Commons Attribution licence to my photographs, it was because I decided I could live with those worst-case scenarios. I decided that the potential positives outweighed the potential negatives. I stand by that decision. My photos might appear on a mudsucking site like Techcrunch, or get sold as canvas prints to make money for Flickr, but I’m willing to accept those usages in order to allow others to freely use my photos.

Some people have remarked that this move by Flickr to sell photos for profit will make people think twice about allowing commercial use of their work. To that I say …good! It has become clear that some people haven’t put enough thought into their licensing choices—they never asked “What’s the worst that could happen?”

And let’s be clear here: this isn’t some kind of bait’n’switch by Flickr. It’s not like liberal Creative Commons licensing is the default setting for photos hosted on that site. The default setting is copyright, all rights reserved. You have to actively choose a more liberal licence.

So I’m trying to figure out how it ended up that people chose the wrong licence for their photos. Because I want this to be perfectly clear: if you chose a licence that allows for commercial usage of your photos, but you’re now upset that a company is making commercial usage of your photos, you chose the wrong licence.

Perhaps the licence-choosing interface could have been clearer. Instead of simply saying “here’s what attribution means” or “here’s what non-commercial means”, perhaps it should also include lists of pros and cons: “here’s some of the uses you’ll be enabling”, but also “here’s the worst that could happen.”

Jen suggests a new Creative Commons licence that essentially inverts the current no-derivatives licence; this would be a “derivative works only” licence. But unfortunately it sounds a bit too much like a read-my-mind licence:

What if I want to allow someone to use a photo in a conference slide deck, even if they are paid to present, but I don’t want to allow a company that sells stock photos to snatch up my photo and resell it?

Jen’s post is entitled I Don’t Want “Creative Commons By” To Mean You Can Rip Me Off …but that’s exactly what a Creative Commons licence without a non-commercial clause can mean. Of course, it’s not the only usage that such a licence allows (it allows many, many positive scenarios), but it’s no good pretending it were otherwise. If you’re not comfortable with that use-case, don’t enable it. Personally, I’m okay with that use-case because I believe it is offset by the more positive usages.

And that’s an important point: this is a personal decision, and not one to be taken lightly. Personally, I’m not a professional or even amateur photographer, so commercial uses of my photos are fine with me. Most professional photographers wouldn’t dream of allowing commercial use of their photos without payment, and rightly so. But even for non-professionals like myself, there are implications to allowing commercial use (one of those implications being that there will be usages you won’t necessarily be happy about).

So, going back to my earlier question, does the licence-choosing interface on Flickr make the implications of your choice clear?

Here’s the page for applying licences. You get to it by going to “Settings”, then “Privacy and Permissions,” then under “Defaults for new uploads,” the setting “What license will your content have.”

On that page, there’s a heading “Which license is right for you?” That has three hyperlinks:

  1. A page on Creative Commons about the licences,
  2. Frequently Asked Questions,
  3. A page of issues specifically related to images.

In that list of Frequently Asked Questions, there’s What things should I think about before I apply a Creative Commons license? and How should I decide which license to choose? There’s some good advice in there (like when in doubt, talk to a lawyer), but at no point does it suggest that you should ask yourself “What’s the worst that could happen?”

So it certainly seems that Flickr could be doing a better job of making the consequences of your licensing choice clearer. That might have the effect of making it a scarier choice, and it might put some people off using Creative Commons licences. But I don’t think that’s a bad thing. I would much rather that people made an informed decision.

When I chose to apply a Creative Commons Attribution licence to my photos, I did not make the decision lightly. I assumed that others who made the same choice also understood the consequences of that decision. Now I’m not so sure. Now I think that some people made uninformed licensing decisions in the past, which explains why they’re upset now (and I’m not blaming them for making the wrong decision—Flickr, and even Creative Commons, could have done a better job of providing relevant, easily understandable information).

But this is one Internet Outrage train that I won’t be climbing aboard. Alas, that means I must now be considered a corporate shill who’s sold out to The Man.

Pointing out that a particular Creative Commons licence allows the Ku Klux Klan to use your work isn’t the same as defending the Ku Klux Klan.

Pointing out that a particular Creative Commons licence allows a hardcore porn film to use your music isn’t the same as defending hardcore porn.

Pointing out that a particular Creative Commons licence allows Yahoo to flog canvas prints of your photos isn’t the same as defending Yahoo.

Security for all

Throughout the Brighton Digital Festival, Lighthouse Arts will be exhibiting a project from Julian Oliver and Danja Vasiliev called Newstweek. If you’re in town for dConstruct—and you should be—you ought to stop by and check it out.

It’s a mischievous little hardware hack intended for use in places with public WiFi. If you’ve got a Newstweek device, you can alter the content of web pages like, say, BBC News. Cheeky!

There’s one catch though. Newstweek works on http:// domains, not https://. This is exactly the scenario that Jake has been talking about:

SSL is also useful to ensure the data you’re receiving hasn’t been tampered with. It’s not just for user->server stuff

eg, when you visit http://www.theguardian.com/uk , you don’t really know it hasn’t been modified to tell a different story

There’s another good reason for switching to TLS. It would make life harder for GCHQ and the NSA—not impossible, but harder. It’s not a panacea, but it would help make our collectively-held network more secure, as per RFC 7258 from the Internet Engineering Task Force:

Pervasive monitoring is a technical attack that should be mitigated in the design of IETF protocols, where possible.

I’m all for using https:// instead of http:// but there’s a problem. It’s bloody difficult!

If you’re a sysadmin type that lives in the command line, then it’s probably not difficult at all. But for the rest of us mere mortals who just want to publish something on the web, it’s intimidatingly daunting.

Tim Bray says:

It’ll cost you <$100/yr plus a half-hour of server reconfiguration. I don’t see any excuse not to.

…but then, he also thought that anyone who can’t make a syndication feed that’s well-formed XML is an incompetent fool (whereas I ended up creating an entire service to save people from having to make RSS feeds by hand).

Google are now making SSL a ranking factor in their search results, which is their prerogative. If it results in worse search results, other search engines are available. But I don’t think it will have significant impact. Jake again:

if two pages have equal ranking except one is served securely, which do you think should appear first in results?

Ashe Dryden disagrees:

Google will be promoting SSL sites above those without, effectively doing the exact same thing we’re upset about the lack of net neutrality.

I don’t think that’s quite fair: if Google were an ISP slowing down http:// requests, that would be extremely worrying, but tweaking its already-opaque search algorithm isn’t quite the same.

Mind you, I do like this suggestion:

I think if Google is going to penalize you for not having SSL they should become a CA and issue free certs.

I’m more concerned by the discussions at Chrome and Mozilla about flagging up http:// connections as unsafe. While the approach is technically correct, I fear it could have the opposite of its intended effect. With so many sites still served over http://, users would be bombarded with constant messages of unsafe connections. Before long they would develop security blindness in much the same way that we’ve all developed banner-ad blindness.

My main issue—apart from the fact that I personally don’t have the necessary smarts to enable TLS—is related to what Ashe is concerned about:

Businesses and individuals who both know about and can afford to have SSL in place will be ranked above those who don’t/can’t.

I strongly believe that anyone should be able to publish on the web. That’s one of the reasons why I don’t share my fellow developers’ zeal for moving everything to JavaScript; I want anybody—not just programmers—to be able to share what they know. Hence my preference for simpler declarative languages like HTML and CSS (and my belief that they should remain simple and learnable).

It’s already too damn complex to register a domain and host a website. Adding one more roadblock isn’t going to help that situation. Just ask Drew and Rachel what it’s like trying to just make sure that their customers have a version of PHP from this decade.

I want a secure web. I’d really like the web to be https:// only. But until we get there, I really don’t like the thought of the web being divided into the haves and have-nots.

Still…

There is an enormous opportunity here, as John pointed out on a recent episode of The Web Ahead. Getting TLS set up is a pain point for a lot of people, not just me. Where there’s pain, there’s an opportunity to provide a service that removes the pain. Services like Squarespace are already taking the pain out of setting up a website. I’d like to see somebody provide a TLS valet service.

(And before you rush to tell me about the super-easy SSL-setup tutorial you know about, please stop and think about whether it’s actually more like this.)

I’m looking forward to switching my website over to https:// but I’m not going to do it until the potential pain level drops.

For all of you budding entrepreneurs looking for the next big thing to “disrupt”, please consider making your money not from the gold rush itself, but from providing the shovels.

Browsiness

Cennydd wrote a really good post recently called Why don’t designers take Android seriously?

I completely agree with his assessment that far too many developers are ignoring or dismissing Android for two distasteful reasons:

  1. Android is difficult
  2. User behaviours are different:

Put uncharitably, the root issue is “Android users are poor”.

But before that, Cennydd compares the future trajectories of other platforms and finds them wanting in comparison to Android: Windows, iOS, …the web.

On that last comparison, I (unsurprisingly) disagree. But it’s not because I think the web is a superior platform; it’s because I don’t think the web is a platform at all.

I wrote about this last month:

The web is not a platform. It’s a continuum.

I think it’s a category error to compare the web to Android or Windows or iOS. It’s like comparing Coca-Cola, Pepsi, and liquid. The web is something that permeates the platforms. From one point of view, this appears to make the web less than the operating system that someone happens to be using to access it. But in the same way that a chicken is an egg’s way of reproducing and a scientist is the universe’s way of observing itself, an operating system is the web’s way of providing access to itself.

Wait a minute, though …Cennydd didn’t actually compare Android to the web. He compared Android to the web browser. Like I’ve said before:

We talk about “the browser” when we should be talking about the browsers. I’m guilty of this. I’ll use phrases like “designing in the browser” or talk about “what we can do in the browser”, when really I should be talking about designing in the browsers and what we can do in the browsers.

But Cennydd’s comparison does raise an interesting question: what is a web browser exactly? Answering that question probably requires an answer to the question: what is the web?

(At this point you might be thinking, “Ah, this is just semantics!” and you’d be right. Abandon ship here if you feel that way. But to describe something as “just semantics” is like pointing at all the written works in every library and saying “but they’re just words”, or taking in the entire trajectory of human civilisation and saying “but those are just ideas”. So yeah, this is “just” semantics.)

So what is the web? Well the unsexy definition I’ve used in the past is that the web consists of files (e.g. HTML, CSS, JavaScript), accessible at URLs, delivered over HTTP. So FTP is not the web. Email is not the web. Gopher is not the web.

But to be honest, I don’t think that the Hypertext Transfer Protocol is the important part of the web; it’s the URLs that really matter. It’s the addressability of the files that’s the killer app of the web in my opinion.

I also don’t think that it’s the file formats themselves that define the web. Don’t get me wrong: I love HTML …and I have nothing against CSS or JavaScript. But if HTML were to disappear, the tears I would weep would not be so much for the format itself, but for the two decades of culture that have been stored with it.

I was re-reading Weaving The Web and in that book, Tim Berners-Lee describes his surprise when people started using HTML to mark up their content. He expected HTML to be used for indices that would point to the URLs of the actual content, which could be in any file format (PDF, word processing documents, or whatever). It turned out that HTML had just enough expressiveness and grokability to be used instead of those other formats.

So I certainly don’t consider anything that happens to be written using HTML, CSS, and JavaScript to automatically be a part of the web. I can open up a text editor and make an HTML document but as long as it sits on my computer instead of being addressable by a URL, it’s not part of the web. Likewise, a native app might be powered by CSS and JavaScript under the hood, but without a URL, it’s not part of the web.

Perhaps then, a web browser is something that can access URLs. Certainly in pretty much every example of a web browser throughout the web’s history, the URL has been front and centre: if the web were a platform, the URL bar would be its command line.

But, like the rise of HTML, the visibility of the URL in a web browser is an accident of history. It was added almost as an afterthought as a power-user feature: why would most people care what the URL of the content happens to be? It’s the content itself that matters, and you’d get to that content not by typing URLs, but by following hyperlinks.

There’s an argument to be made that, with the rise of search engines, the visibility of URLs has become less important. See, for example, the way that every advertisement for a website on the Tokyo subway doesn’t show a URL; it shows what to type into a search engine instead (and I’ve started seeing this in some TV adverts here in the UK too).

So a web browser that doesn’t expose the URLs of what it’s rendering is still a web browser.

Now imagine a browser that you install on your device that doesn’t expose URLs, but under the hood it is navigating between URLs using HTTP, and rendering the content (images, JavaScript, CSS, HTML, JSON, whatever). That’s a pretty good description of many native apps. There’s a whole category of native apps that could just as easily be described as “artisanal web browsers” (and if someone wants to write a browser extension that replaces every mention of “native app” with “artisanal web browser” that would be just peachy).

Instagram’s native app is a web browser.

Facebook’s native app is a web browser.

Twitter’s native app is a web browser.

Like Paul said:

Monolithic browsers are not the only User Agent.

I was initially confused when Anna tweeted:

Reading the responses to @Cennydd’s tweet about designers needing to pay attention to Android. The web is fragmented. That’s our job.

I understood Cennydd’s point to be about native apps, not the web. But if, as I’ve just said, many native apps are in fact web browsers, does that mean that making native apps is a form of web development?

I don’t think so. I think making a native app has much more in common with making a web browser than it does with making a web site/app/thang. Certainly the work that Clearleft has done in this area felt that way: the Channel 4 News app is a browser for Channel 4 News; the Evo iPad app is a browser for Evo.

So if your job involves making browsers like those, then yes, you absolutely should be paying more attention to Android, for all the reasons that Cennydd suggests.

But if, like me, you have zero interest in making browsers—whether it’s a browser for Android, iOS, OS X, Windows, Blackberry, Linux, or NeXT—you should still be paying attention to Android because it’s just one of the many ways that people will be accessing the web.

It’s all too easy for us to fall into the trap of thinking that people will only be using traditional monolithic web browsers to access what we build. The truth is that our work will be accessed on the desktop, on mobile, and on tablets, but also on watches, on televisions, and sure, even fridges …and on platforms that may not even have screens.

It’s certainly worth remembering that what you make will be viewed in the context of an artisanal browser. Like Jen says:

The “native apps are better” argument ignores the fact one of the most popular things to do in apps is read the web.

But just because we know that our work will be accessed on a whole range of devices and platforms doesn’t mean that we should optimise for those specific devices and platforms. That just won’t scale. The only sane future-friendly approach is to take a device-agnostic, platform-agnostic approach and deliver something that’s robust enough to work in this stunningly wide range of browsers and user-agents (hint: progressive enhancement is your friend).

I completely agree with Cennydd: I think that ignoring Android is narrow-minded, blinkered and foolish …but I feel the same way about ignoring Windows, Blackberry, Nokia, or the Playstation. I also think it would be foolish to focus on any one of those platforms at the expense of others.

I love the fact that the web can be accessed on so many platforms and devices by so many different kinds of browsers. I only wish there were more: more operating systems, more kinds of devices, more browsers. Any platform that allows more people to access the web is good with me. That’s why I, like Cennydd, welcome the rise of Android.

Stop seeing fragmentation. Start seeing diversity.

ABC

When I finally unveiled the redesigned and overhauled version of The Session at the end of 2012, it was the culmination of a lot of late nights and weekends. It was also a really great learning experience, one that I subsequently drew on to inform my An Event Apart presentation, The Long Web.

As part of that presentation, I give a little backstory on the ABC format. It’s a way of notating music using nothing more than ASCII text. It begins with some metadata about the tune—its title, time signature, and key—followed by the notes of the tune itself—uppercase and lowercase letters denote different octaves, and numbers can be used to denote length:

X: 1
T: Cooley's
R: reel
M: 4/4
L: 1/8
K: Edor
|:D2|EBBA B2 EB|B2 AB dBAG|FDAD BDAD|FDAD dAFD|
EBBA B2 EB|B2 AB defg|afec dBAF|DEFD E2:|
|:gf|eB B2 efge|eB B2 gedB|A2 FA DAFA|A2 FA defg|
eB B2 eBgB|eB B2 defg|afec dBAF|DEFD E2:|

On The Session, a little bit of progressive enhancement produces a nice crisp SVG version of the sheet music at the user’s request (the non-JavaScript fallback is a server-rendered bitmap of the sheet music).
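
By way of illustration, here’s a minimal sketch of that kind of enhancement using the abcjs library. To be clear: this isn’t the actual code from The Session (where the swap happens at the user’s request rather than on page load), and the file paths and IDs here are made up.

<div id="notation">
 <!-- fallback: a server-rendered bitmap of the sheet music -->
 <img src="/cooleys.png" alt="Sheet music for Cooley's">
</div>
<pre id="abc">X: 1
T: Cooley's
M: 4/4
L: 1/8
K: Edor
|:D2|EBBA B2 EB|B2 AB dBAG|FDAD BDAD|FDAD dAFD|</pre>
<script src="/js/abcjs-basic-min.js"></script>
<script>
// if JavaScript runs, render crisp SVG from the ABC notation
// already in the page, replacing the bitmap fallback
var abc = document.getElementById('abc').textContent;
ABCJS.renderAbc('notation', abc);
</script>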

ABC notation dates back to the early nineties, a time of very limited bandwidth. Exchanging audio files or even images would have been prohibitively expensive. Having software installed on your machine that could convert ABC into sheet music or audio meant that people could share and exchange tunes through email, BBS, or even the then-fledgling World Wide Web.

In today’s world of relatively fast connections, ABC’s usefulness might seem lessened. But in fact, it’s just as popular as it ever was. People have become used to writing (and even sight-reading) the format, and it has all the resilience that comes with being a text format: easily editable and human-readable. It’s still the format that people use to submit new tune settings to The Session.

A little while back, I came upon another advantage of the ABC format, one that I had never previously thought of…

The Session has a wide range of users, of all ages, from all over the world, from all walks of life, using all sorts of browsers. I do my best to make sure that the site works for just about any kind of user-agent (while still providing plenty of enhancements for the most modern browsers). That includes screen readers. Some active members of The Session happen to be blind.

One of those screen-reader users got in touch with me shortly after joining to ask me to explain what ABC was all about. I pointed them at some explanatory links. Once the format “clicked” with them, they got quite enthused. They pointed out that if the sheet music were only available as an image, it would mean very little to them. But by providing the ABC notation alongside the sheet music, they could read the music note-for-note.

That’s when it struck me that ABC notation is effectively alt text for sheet music!

There’s one little thing that slightly irks me though. The ABC notation should be read out one letter at a time. But screen readers use a kind of fuzzy logic to figure out whether a set of characters should be spoken as a word:

Screen readers try to pronounce acronyms and nonsensical words if they have sufficient vowels/consonants to be pronounceable; otherwise, they spell out the letters. For example, NASA is pronounced as a word, whereas NSF is pronounced as “N. S. F.” The acronym URL is pronounced “earl,” even though most humans say “U. R. L.” The acronym SQL is not pronounced “sequel” by screen readers even though some humans pronounce it that way; screen readers say “S. Q. L.”

It’s not a big deal, and the screen reader user can explicitly request that a word be spoken letter by letter:

Screen reader users can pause if they didn’t understand a word, and go back to listen to it; they can even have the screen reader read words letter by letter. When reading words letter by letter, JAWS distinguishes between upper case and lower case letters by shouting/emphasizing the upper case letters.

But still …I wish there were some way that I could mark up the ABC notation so that a screen reader would know that it should be read letter by letter. I’ve looked into using abbr, but that offers no guarantees: if the string looks like a word, it will still be spoken as a word. It doesn’t look like there are any ARIA settings for this use-case either.

So if any accessibility experts out there know of something I’m missing, please let me know.

Update: I’ve added an aural CSS declaration of speak: spell-out (thanks to Martijn van der Ven for the tip), although I think the browser support is still pretty non-existent. Any other ideas?
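
For what it’s worth, the declaration itself is short and sweet. The class name here is purely illustrative:

<style>
/* ask the speech synthesiser to spell the notation out
   letter by letter (support is still pretty much non-existent) */
.notation {
 speak: spell-out;
}
</style>
<pre class="notation">K: Edor
|:D2|EBBA B2 EB|B2 AB dBAG|</pre>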

Cheap’n’cheerful

I occasionally get sent some devices for the Clearleft device lab (which reminds me: thank you to whoever at Blackberry sent over the “Dev Alpha B” Blackberry 10).

Last week, an interesting little device showed up.

Cheap Android device

I had no idea who sent it. Was it a gaming device ordered by Anna?

The packaging was all in Chinese. Perhaps some foreign hackers were attempting to infiltrate our network through some clever social engineering.

It turns out that Rich had ordered it, having heard about it from Chris Heathcote who mentioned the device during his UX London talk.

It’s an S18 Mini Pad. You can pick one up for about £30. For that price, as Chris pointed out, you could just use it as an alarm clock (and it does indeed have an alarm clock app). But it’s also a touchscreen device with WiFi and a web browser …a really good web browser: it comes with Chrome. It’s an Android 4 device.

It has all sorts of issues. The touchscreen is pretty crap, for example. But considering the price, it’s really quite remarkable.

We’ve got to the point where all the individual pieces—WiFi, touchscreen, web browser, operating system—can be thrown together into one device that can be sold for around the thirty quid mark. And this is without any phone company subsidies.

Crap as it is, this device really excites me. A cheap mobile web-enabled device …I find that so much more thrilling than any Apple keynote.

The main issue

The inclusion of a main element in HTML has been the subject of debate for quite a while now. After Steve outlined the use cases, the element has now landed in the draft of the W3C spec. It isn’t in the WHATWG spec yet, but seeing as it is being shipped in browsers, it’s only a matter of time.

But I have some qualms about the implementation details, so I fired off an email to the HTML working group with my concerns about the main element as it is currently specced.

I’m curious as to why its use is specifically scoped to the body element rather than any kind of sectioning content:

Authors must not include more than one main element in a document.

I get the rationale behind the main element: it plugs a gap in the overlap between native semantics and ARIA landmark roles (namely role="main"). But if you look at the other elements that map to ARIA roles (header, footer, nav), those elements can be used multiple times in a single document by being scoped to their sectioning content ancestor.

Let’s say, for example, that I have a document like this, containing two header elements:

<body>
 <header>Page header</header>
 Page main content starts here
 <section>
  <header>Section header</header>
  Section main content
 </section>
 Page main content finishes here
</body>

…only the first header element (the one that’s scoped to the body element) will correspond to role="banner", right?

Similarly, in this example, only the final footer element will correspond to role="contentinfo":

<body>
 <header>Page header</header>
 Page main content starts here
 <section>
  <header>Section header</header>
  Section main content
  <footer>Section footer</footer>
 </section>
 Page main content finishes here
 <footer>Page footer</footer>
</body>

So what I don’t understand is why we can’t have the main element work the same way, i.e. scope it to its sectioning content ancestor so that only the main element that is scoped to the body element will correspond to role="main":

<body>
 <header>Page header</header>
 <main>
  Page main content starts here
  <section>
   <header>Section header</header>
   <main>Section main content</main>
   <footer>Section footer</footer>
  </section>
  Page main content finishes here
 </main>
 <footer>Page footer</footer>
</body>

Here are the corresponding ARIA roles:

<body>
 <header>Page header</header> <!-- role="banner" -->
 <main>Page main content</main> <!-- role="main" -->
 <footer>Page footer</footer> <!-- role="contentinfo" -->
</body>

Why not allow the main element to exist within sectioning content (section, article, nav, aside) the same as header and footer?

<section>
 <header>Section header</header> <!-- no corresponding role -->
 <main>Section main content</main> <!-- no corresponding role -->
 <footer>Section footer</footer> <!-- no corresponding role -->
</section>

Deciding how to treat the main element would then be the same as header and footer. Here’s what the spec says about the scope of footers:

When the nearest ancestor sectioning content or sectioning root element is the body element, then it applies to the whole page.

I could easily imagine the same principle being applied to the main element. From an implementation viewpoint, this is already what browsers have to do with header and footer. Wouldn’t it be just as simple to apply the same parsing to the main element?

It seems a shame to introduce a new element that can only be used zero or one time in a document …I think the head and body elements are the only other elements with this restriction (and I believe the title element is the only element that is used precisely once per document).

It would be handy to be able to explicitly mark up the main content of an article or an aside—for exactly the same reasons that it would be useful to mark up the main content of a document.

So why not?

What technology wants

Technology enabled Sarah Churman to hear for the first time.

Technology enabled her to capture that moment.

Networked technology enabled her to share that moment with the world.

Networked technology enabled me to share it with you.

Hashcloud

Hashbangs. Yes, again. This is important, dammit!

When the topic first surfaced, prompted by Mike’s post on the subject, there was a lot of discussion. For a great impartial round-up, I highly recommend two posts by James Aylett.

There seems to be a general consensus that hashbang URLs are bad. Even those defending the practice portray them as a necessary evil. That is, once a better solution is viable—like the HTML5 History API—then there will no longer be any need for #! in URLs. I’m certain that it’s a matter of when, not if, Twitter switches over.

But even then, that won’t be the end of the story.

Dan Webb has written a superb long-zoom view on the danger that hashbangs pose to the web:

There’s no such thing as a temporary fix when it comes to URLs. If you introduce a change to your URL scheme you are stuck with it for the foreseeable future. You may internally change your links to fit your new URL scheme but you have no control over the rest of the web that links to your content.

Therein lies the rub. Even if—nay, when—Twitter switches over to proper URLs, there will still be many, many blog posts and other documents linking to individual tweets …and each of those links will contain #!. That means that Twitter must make sure that their home page maintains a client-side routing mechanism for inbound hashbang links (remember, the server sees nothing after the # character—the only way to maintain these redirects is with JavaScript).
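
To picture what that means in practice, imagine something like this hypothetical snippet shipping on the home page for the rest of time (the URL pattern is illustrative; this is certainly not Twitter’s actual code):

<script>
// the fragment never reaches the server, so honouring old
// hashbang links has to happen here, on the client
if (window.location.hash.indexOf('#!') === 0) {
  // e.g. /#!/jack/status/20 becomes /jack/status/20
  window.location.replace(window.location.hash.substring(2));
}
</script>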

As Paul put it in such a wonderfully pictorial way, the web is agreement. Hacks like hashbang URLs—and URL shorteners—weaken that agreement.