Tags: access

Marking up help text in forms

Zoe asked a question on Twitter recently:

‘Sfunny—I had been pondering this exact question. In fact, I threw a CodePen together a couple of weeks ago.

Visually, both examples look the same; there’s a label, then a form field, then some extra text (in this case, a validation message).

The first example puts the validation message in an em element inside the label text itself, so I know it won’t be missed by a screen reader—I think I first learned this technique from Derek many years ago.

<div class="first error example">
 <label for="firstemail">Email
  <em class="message">must include the @ symbol</em>
 </label>
 <input type="email" id="firstemail" placeholder="e.g. you@example.com">
</div>

The second example puts the validation message after the form field, but uses aria-describedby to explicitly associate that message with the form field—this means the message should be read after the form field.

<div class="second error example">
 <label for="secondemail">Email</label>
 <input type="email" id="secondemail" placeholder="e.g. you@example.com" aria-describedby="seconderror">
 <em class="message" id="seconderror">must include the @ symbol</em>
</div>

In both cases, the validation message won’t be missed by screen readers, although there’s a slight difference in the order in which things get read out. In the first example we get:

  1. Label text,
  2. Validation message,
  3. Form field.

And in the second example we get:

  1. Label text,
  2. Form field,
  3. Validation message.

In this particular example, the ordering in the second example more closely matches the visual representation, although I’m not sure how much of a factor that should be in choosing between the options.

Anyway, I was wondering whether one of these two options is “better” or “worse” than the other. I suspect that there isn’t a hard and fast answer.

Unlabelled search fields

Adam Silver is writing a book on forms—you may be familiar with his previous book on maintainable CSS. In a recent article (that for some reason isn’t on his blog), he looks at markup patterns for search forms and advocates that we should always use a label. I agree. But for some reason, we keep getting handed designs that show unlabelled search forms. And no, a placeholder is not a label.

I had a discussion with Mark about this the other day. The form he was marking up didn’t have a label, but it did have a button with some text that would work as a label:

<input type="search" placeholder="…">
<button type="submit">
Search
</button>

He was wondering if there was a way of using the button’s text as the label. I think there is. Using aria-labelledby like this, the button’s text should be read out before the input field:

<input aria-labelledby="searchtext" type="search" placeholder="…">
<button type="submit" id="searchtext">
Search
</button>

Notice that I say “think” and “should.” It’s one thing to figure out a theoretical solution, but only testing will show whether it actually works.

The W3C’s WAI tutorial on labelling content gives an example that uses aria-label instead:

<input type="text" name="search" aria-label="Search">
<button type="submit">Search</button>

It seems a bit of a shame to me that the label text is duplicated in the button and in the aria-label attribute (and being squirrelled away in an attribute, it runs the risk of metacrap rot). But they know what they’re talking about so there may well be very good reasons to prefer duplicating the value with aria-label rather than pointing to the value with aria-labelledby.

I thought it would be interesting to see how other sites are approaching this pattern—unlabelled search forms are all too common. All the markup examples here have been simplified a bit, removing class attributes and the like…

The BBC’s search form does actually have a label:

<label for="orb-search-q">
Search the BBC
</label>
<input id="orb-search-q" placeholder="Search" type="text">
<button>Search the BBC</button>

But that label is then hidden using CSS:

position: absolute;
height: 1px;
width: 1px;
overflow: hidden;
clip: rect(1px, 1px, 1px, 1px);

That CSS—as pioneered by Snook—ensures that the label is visually hidden but remains accessible to assistive technology. Using something like display: none would hide the label for everyone.

Medium wraps the input (and icon) in a label and then gives the label a title attribute. Like aria-label, a title attribute should be read out by screen readers, but it has the added advantage of also being visible as a tooltip on hover:

<label title="Search Medium">
  <span class="svgIcon"><svg></svg></span>
  <input type="search">
</label>

This is also what Google does on what must be the most visited search form on the web. But the W3C’s WAI tutorial warns against using the title attribute like this:

This approach is generally less reliable and not recommended because some screen readers and assistive technologies do not interpret the title attribute as a replacement for the label element, possibly because the title attribute is often used to provide non-essential information.

Twitter follows the BBC’s pattern of having a label but visually hiding it. They also have some descriptive text for the icon, and that text gets visually hidden too:

<label class="visuallyhidden" for="search-query">Search query</label>
<input id="search-query" placeholder="Search Twitter" type="text">
<span class="search-icon">
  <button type="submit" class="Icon" tabindex="-1">
    <span class="visuallyhidden">Search Twitter</span>
  </button>
</span>

Here’s their CSS for hiding those bits of text—it’s very similar to the BBC’s:

.visuallyhidden {
  border: 0;
  clip: rect(0 0 0 0);
  height: 1px;
  margin: -1px;
  overflow: hidden;
  padding: 0;
  position: absolute;
  width: 1px;
}

That’s exactly the CSS recommended in the W3C’s WAI tutorial.

Flickr have gone with the aria-label pattern as recommended in that W3C WAI tutorial:

<input placeholder="Photos, people, or groups" aria-label="Search" type="text">
<input type="submit" value="Search">

Interestingly, neither Twitter nor Flickr is using type="search" on the input elements. I’m guessing this is probably because of frustrations with trying to undo the default styles that some browsers apply to input type="search" fields. Seems a shame though.

Instagram also doesn’t use type="search" and makes no attempt to expose any kind of accessible label:

<input type="text" placeholder="Search">
<span class="coreSpriteSearchIcon"></span>

Same with Tumblr:

<input tabindex="1" type="text" name="q" id="search_query" placeholder="Search Tumblr" autocomplete="off" required="required">

…although the search form itself does have role="search" applied to it. Perhaps that helps to mitigate the lack of a clear label?

After that whistle-stop tour of a few of the web’s unlabelled search forms, it looks like the options are:

  • a visually-hidden label element,
  • an aria-label attribute,
  • a title attribute, or
  • associate some text using aria-labelledby.

But that last one needs some testing.

Update: Emil did some testing. Looks like all screen-reader/browser combinations will read the associated text.

Accessible progressive disclosure revisited

I wrote a little while back about making an accessible progressive disclosure pattern. It’s very basic—just a few ARIA properties and a bit of JavaScript sprinkled onto some basic HTML. The HTML contains a button element that toggles the aria-hidden property on a chunk of markup.

Earlier this week I had a chance to hang out with accessibility experts Derek Featherstone and Devon Persing so I took the opportunity to pepper them with questions about this pattern. My main question was “Should I automatically focus the toggled content?”

Derek’s response was very perceptive. He wanted to know why I was using a button. Good question. When you think about it, what I’m doing is pointing from one element to another. On the web, we point with links.

There are no hard’n’fast rules about this kind of thing, but as Derek put it, it helps to think about whether the action involves controlling something (use a button) or taking the user somewhere (use a link). At first glance, the progressive disclosure pattern seems to be about controlling something—toggling the appearance of another element. But if I’m questioning whether to automatically focus that element, then really I’m asking whether I want to take the user to that place in the document—in other words, linking to it.

I decided to update the markup. Here’s what I had before:

<button aria-controls="content">Reveal</button>
<div id="content"></div>

Here’s what I have now:

<a href="#content" aria-controls="content">Reveal</a>
<div id="content"></div>

The logic in the JavaScript remains exactly the same:

  1. Find any elements that have an aria-controls attribute (these were buttons, now they’re links).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated link (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those links.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.
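
Translated into JavaScript, that logic might look something like this. It’s a rough sketch rather than the exact code in the CodePen:

var togglers = document.querySelectorAll('a[aria-controls]');
Array.prototype.forEach.call(togglers, function (toggler) {
  var content = document.getElementById(toggler.getAttribute('aria-controls'));
  // hide the controlled element to begin with, but make it focusable
  content.setAttribute('aria-hidden', 'true');
  content.setAttribute('tabindex', '-1');
  toggler.setAttribute('aria-expanded', 'false');
  toggler.addEventListener('click', function (event) {
    event.preventDefault();
    var revealing = content.getAttribute('aria-hidden') === 'true';
    content.setAttribute('aria-hidden', revealing ? 'false' : 'true');
    toggler.setAttribute('aria-expanded', revealing ? 'true' : 'false');
    if (revealing) {
      // the content has just been revealed, so move focus to it
      content.focus();
    }
  }, false);
});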

You can see it in action on CodePen.

See the Pen Accessible toggle (link) by Jeremy Keith (@adactio) on CodePen.

Accessible progressive disclosure

Over on The Session I have a few instances of a progressive disclosure pattern. It’s just your basic show/hide toggle: click on a button; see some more content. For example, there’s a “download” button for every tune that displays options to download the tune in different formats (ABC and midi).

To begin with, I was using the :checked pseudo-class pattern that Charlotte has documented so well. I really like that pattern. It feels nice and straightforward. But then I got some feedback from someone using the site:

the link for midi files is no longer coming up on the tune pages. I am blind so I rely on the midi’s when finding tunes for my students.

I wrote back saying the link to download midi files was revealed by the “download” option. The response:

Excellent. I have it now, I was just looking for the midi button which wasn’t there. the actual download button doesn’t read as a button under each version of the tune but now I know it’s there I know what I am doing. I am using the JAWS screen reader.

This was just one person …one person who took the time to write to me. What about other screen reader users?

I dabbled around with adding role="button" to the checkbox or the label, but that felt really icky (contradicting the inherent role of those elements) and it didn’t seem to make much difference anyway.

I decided to refactor the progressive disclosure to use JavaScript instead of just CSS. I wanted to make sure that accessibility was built into the functionality, rather than just bolted on. That’s why the code I’ve written doesn’t rely on the buttons having a particular class value; instead the buttons must have an aria-controls attribute that associates the button with the element it toggles (in much the same way that a for attribute associates a label with a form field).
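
So the markup for one of those “download” buttons could look something along these lines (the ID and the contents of the div here are made up, purely for illustration):

<button aria-controls="download-options">Download</button>
<div id="download-options">
 … ABC and midi download links …
</div>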

Here’s the logic:

  1. Find any elements that have an aria-controls attribute (these should be buttons).
  2. Grab the value of that aria-controls attribute (an ID).
  3. Hide the element with that ID by applying aria-hidden="true" and make that element focusable by adding tabindex="-1".
  4. Set aria-expanded="false" on the associated button (this attribute can be a bit confusing—it doesn’t mean that this element is not expanded; it means the element it controls is not expanded).
  5. Listen for click events on those buttons.
  6. Toggle the aria-hidden and aria-expanded when there’s a click event.
  7. When aria-hidden is set to false on an element (thereby revealing it), focus that element.

You can see it in action on CodePen.

I’m still playing around with this. I think the :focus styles are probably far too subtle right now—see this excellent presentation from Laura Palmaro for more on that. I’m also not sure if the revealed content should automatically take focus. I’ll see if I can get some feedback from people on The Session using screen readers—there’s quite a few of them.

Feel free to use my code but you might want to check out Jason’s code to do the same thing—his is bound to be nicer to work with.

Update: In response to this discussion, I’ve decided not to automatically focus the expanded content.

Enhance! Conf!

Two weeks from now there will be an event in London. You should go to it. It’s called EnhanceConf:

EnhanceConf is a one day, single track conference covering the state of the art in progressive enhancement. We will look at the tools and techniques that allow you to extend the reach of your website/application without incurring additional costs.

As you can probably guess, this is right up my alley. Wild horses wouldn’t keep me away from it. I’ve been asked to be Master of Ceremonies for the day, which is a great honour. Luckily I have some experience in that department from three years of hosting Responsive Day Out. In fact, EnhanceConf is going to run very much in the mold of Responsive Day Out, as organiser Simon explained in an interview with Aaron.

But the reason to attend is of course the content. Check out that line-up! Now that is going to be a knowledge-packed day: design, development, accessibility, performance …these are a few of my favourite things. Nat Buckley, Jen Simmons, Phil Hawksworth, Anna Debenham, Aaron Gustafson …these are a few of my favourite people.

Tickets are still available. Use the discount code JEREMYK to get a whopping 15% off the ticket price.

There’s also a scholarship:

The scholarships are available to anyone not normally able to attend a conference.

I’m really looking forward to EnhanceConf. See you at RSA House on March 4th!

Testing

It’s tempting to think of testing with screen-readers as being like testing with browsers. With browser testing, you’re checking to see how a particular piece of software deals with the code you’re throwing at it. A screen reader is a piece of software too, so it makes sense to approach it the same way, right?

I don’t think so. I think it’s really important that if someone is going to test your site with a screen reader, it should be someone who uses a screen reader every day.

Think of it this way: you wouldn’t want a designer or developer to do usability testing by testing the design or code on themselves. That wouldn’t give you any useful data. They’re already familiar with what problems the design is supposed to be solving, and how the interface works. That’s why you need to do usability testing with someone from outside, someone who wasn’t involved in the design or development process.

It’s no different when it comes to users of assistive technology. You’re not trying to test their technology; you’re trying to test how well the thing you’re building works for the person using the technology.

In short:

Don’t think of screen-reader testing as a form of browser testing; think of it as a form of usability testing.

Mind set

Whenever I have a difference of opinion with someone, I try to see things from their perspective. But sometimes I’m not very good at it. I need to get better.

Here’s an example: I think that users of small-screen touch-enabled devices should be able to pinch-to-zoom content on the web. That idea was challenged twice in recent times:

  1. The initial meta viewport element in AMP HTML demanded that pinch-to-zoom be disabled (it has since been relaxed).
  2. WebKit is removing the 350ms delay on tap …but only if the page disables pinch-to-zoom (a bug has been filed).

In both cases, I strongly disagreed with the decision to disable what I believe is a vital accessibility feature. But the strength of my conviction is irrelevant. If anything, it is harmful. The case for maintaining accessibility was so obvious to me, I acted as though it were self-evident to everyone. But other people have different priorities, and that’s okay.

I should have stopped and tried to see things from the perspective of the people implementing these changes. Nobody would deliberately choose to remove an important accessibility feature without good reason, so what would those reasons be? Does removing pinch-to-zoom enhance performance? If so, that’s an understandable reason to mandate the strict meta viewport element. I still disagree with the decision, but now when I argue against it, I can approach it from that angle. Instead of dramatically blustering about how awful it is to remove pinch-to-zoom, my time would have been better spent calmly saying “I understand why this decision has been made, but here’s why I think the accessibility implications are too severe…”

It’s all too easy—especially online—to polarise just about any topic into a binary black and white issue. But of course the more polarised differences of opinion become, the less chance there is of changing those opinions.

If I really want to change someone’s mind, then I need to make the effort to first understand their mind. That’s going to be far more productive than declaring that my own mind is made up. After all, if I show no willingness to consider alternative viewpoints, why should they?

There’s an old saying that before criticising someone, you should walk a mile in their shoes. I’m going to try to put that into practice, and not for the two obvious reasons:

  1. If we still disagree, now we’re a mile away from each other, and
  2. I’ve got their shoes.

Edge words

I really enjoyed last year’s Edge conference so I made sure not to miss this year’s event, which took place last weekend.

The format was a little different this time ‘round. Last year the whole day was taken up with panels. Now, panels are often rambling, cringeworthy affairs, but Edge Conf is one of the few events that does panels well: they’re run on a tight schedule and put together with lots of work in advance. At this year’s Edge, the morning was taken up with these tightly-run panels as usual, but the afternoon consisted of more Barcamp-like breakout sessions.

I’ve got to be honest: I don’t think the new format worked that well. The breakout sessions didn’t have the true flexibility that you get with an unconference schedule, so there was no opportunity to merge similarly-themed sessions. There was, for example, a session on components at the same time as a session on accessibility in web components.

That highlights the other issue: FOMO. I’m really not a fan of multi-track events; there were so many sessions that sounded really interesting, but I couldn’t clone myself and go to all of them at once.

But, like I said, the first half of the day was taken up with four sequential (rather than parallel) panels and they were all excellent. All of the moderators did a fantastic job, and I was fortunate enough to sit in on the progressive enhancement panel expertly moderated by Lyza.

The event is called Edge for a reason. There is a rarefied atmosphere—and not just because of the broken-down air conditioning. This is a room full of developers on the cutting edge of web development technologies. Being at Edge Conf means being in a bubble. And being in a bubble is absolutely fine as long as you’re aware you’re in a bubble. It would be problematic if anyone were to mistake the audience and the discussions at Edge as being in any way representative of typical working web devs.

One of the most insightful comments of the day came from Christian who said, “Yes, but this is Edge Conf.” You’re going to need some context for that quote, so here it is…

On the web components panel that Christian was moderating, Alex was making a point about the ubiquity of tools—“Tooling will save you”, he said—and he asked for a show of hands from the audience on who was not using some particular tooling technology; transpilers, package managers, build tools, I can’t remember the specific question. Nobody put their hand up. “See?” asked Alex. “Yes”, said Christian, “but this is Edge Conf.”

Now, while I wasn’t keen on the format of the afternoon with its multiple simultaneous breakout sessions, that doesn’t mean I didn’t enjoy the ones I plumped for. Quite the opposite. The last breakout session of the day, again expertly moderated by Lyza, was particularly great.

The discussion was all about progressive enhancement. There seemed to be a general consensus that we’re all 100% committed to the results of progressive enhancement—greater availability, wider reach, and better performance—but that the term itself is widely misunderstood as “making all of your functionality work even with JavaScript switched off”. This misunderstanding couldn’t be further from the truth:

  1. It’s not about making all of your functionality available; it’s about making your core functionality available: everything else can be considered an enhancement and it’s perfectly fine if not everyone gets that enhancement.
  2. This isn’t about switching JavaScript off; it’s about any particular technology not being available for reasons we can’t foresee (network issues, browser issues, whatever it may be).

And yet the misunderstanding persists. For that reason, most of the people in the discussion at Edge Conf were in favour of simply dropping the term progressive enhancement and instead focusing on terms like availability and access. Tim writes:

I’m not sure what we call it now. Maybe we do need another term to get people to move away from the “progressive enhancement = working without JS” baggage that distracts from the real goal.

And Stuart writes:

So I’m not going to be talking about progressive enhancement any more. I’m going to be talking about availability. About reach. About my web apps being for everyone even when the universe tries to get in the way.

But Jason writes:

I completely disagree that we should change nomenclature because there exists some small segment of Web designers unwilling to expand their development toolbox. I think progressive enhancement—the term—remains useful, descriptive, and appropriate.

I’m torn. On the one hand, I agree with Jason. The term “progressive enhancement” is a great descriptor. But on the other hand, I don’t want to end up like that guy who’s made it his life’s work to change every instance of the phrase “comprises of” to “comprises” (or “consists of”) on Wikipedia. Technically, he’s correct. But it doesn’t sound like a fun way to spend your days.

I guess my worry is, if I write an article or give a presentation, and I title it something to do with progressive enhancement, am I going to alienate and put off the very audience I’m trying to reach? But if I title it something else, am I tricking people?

Words are hard.

Security for all

Throughout the Brighton Digital Festival, Lighthouse Arts will be exhibiting a project from Julian Oliver and Danja Vasiliev called Newstweek. If you’re in town for dConstruct—and you should be—you ought to stop by and check it out.

It’s a mischievous little hardware hack intended for use in places with public WiFi. If you’ve got a Newstweek device, you can alter the content of web pages like, say, BBC News. Cheeky!

There’s one catch though. Newstweek works on http:// domains, not https://. This is exactly the scenario that Jake has been talking about:

SSL is also useful to ensure the data you’re receiving hasn’t been tampered with. It’s not just for user->server stuff

eg, when you visit http://www.theguardian.com/uk , you don’t really know it hasn’t been modified to tell a different story

There’s another good reason for switching to TLS. It would make life harder for GCHQ and the NSA—not impossible, but harder. It’s not a panacea, but it would help make our collectively-held network more secure, as per RFC 7258 from the Internet Engineering Task Force:

Pervasive monitoring is a technical attack that should be mitigated in the design of IETF protocols, where possible.

I’m all for using https:// instead of http:// but there’s a problem. It’s bloody difficult!

If you’re a sysadmin type that lives in the command line, then it’s probably not difficult at all. But for the rest of us mere mortals who just want to publish something on the web, it’s intimidatingly daunting.

Tim Bray says:

It’ll cost you <$100/yr plus a half-hour of server reconfiguration. I don’t see any excuse not to.

…but then, he also thought that anyone who can’t make a syndication feed that’s well-formed XML is an incompetent fool (whereas I ended up creating an entire service to save people from having to make RSS feeds by hand).

Google are now making SSL a ranking factor in their search results, which is their prerogative. If it results in worse search results, other search engines are available. But I don’t think it will have significant impact. Jake again:

if two pages have equal ranking except one is served securely, which do you think should appear first in results?

Ashe Dryden disagrees:

Google will be promoting SSL sites above those without, effectively doing the exact same thing we’re upset about the lack of net neutrality.

I don’t think that’s quite fair: if Google were an ISP slowing down http:// requests, that would be extremely worrying, but tweaking its already-opaque search algorithm isn’t quite the same.

Mind you, I do like this suggestion:

I think if Google is going to penalize you for not having SSL they should become a CA and issue free certs.

I’m more concerned by the discussions at Chrome and Mozilla about flagging up http:// connections as unsafe. While the approach is technically correct, I fear it could have the opposite of its intended effect. With so many sites still served over http://, users would be bombarded with constant messages of unsafe connections. Before long they would develop security blindness in much the same way that we’ve all developed banner-ad blindness.

My main issue—apart from the fact that I personally don’t have the necessary smarts to enable TLS—is related to what Ashe is concerned about:

Businesses and individuals who both know about and can afford to have SSL in place will be ranked above those who don’t/can’t.

I strongly believe that anyone should be able to publish on the web. That’s one of the reasons why I don’t share my fellow developers’ zeal for moving everything to JavaScript; I want anybody—not just programmers—to be able to share what they know. Hence my preference for simpler declarative languages like HTML and CSS (and my belief that they should remain simple and learnable).

It’s already too damn complex to register a domain and host a website. Adding one more roadblock isn’t going to help that situation. Just ask Drew and Rachel what it’s like trying to just make sure that their customers have a version of PHP from this decade.

I want a secure web. I’d really like the web to be https:// only. But until we get there, I really don’t like the thought of the web being divided into the haves and have-nots.

Still…

There is an enormous opportunity here, as John pointed out on a recent episode of The Web Ahead. Getting TLS set up is a pain point for a lot of people, not just me. Where there’s pain, there’s an opportunity to provide a service that removes the pain. Services like Squarespace are already taking the pain out of setting up a website. I’d like to see somebody provide a TLS valet service.

(And before you rush to tell me about the super-easy SSL-setup tutorial you know about, please stop and think about whether it’s actually more like this.)

I’m looking forward to switching my website over to https:// but I’m not going to do it until the potential pain level drops.

For all of you budding entrepreneurs looking for the next big thing to “disrupt”, please consider making your money not from the gold rush itself, but from providing the shovels.

Browsiness

Cennydd wrote a really good post recently called Why don’t designers take Android seriously?

I completely agree with his assessment that far too many developers are ignoring or dismissing Android for two distasteful reasons:

  1. Android is difficult
  2. User behaviours are different:

Put uncharitably, the root issue is “Android users are poor”.

But before that, Cennydd compares the future trajectories of other platforms and finds them wanting in comparison to Android: Windows, iOS, …the web.

On that last comparison, I (unsurprisingly) disagree. But it’s not because I think the web is a superior platform; it’s because I don’t think the web is a platform at all.

I wrote about this last month:

The web is not a platform. It’s a continuum.

I think it’s a category error to compare the web to Android or Windows or iOS. It’s like comparing Coca-Cola, Pepsi, and liquid. The web is something that permeates the platforms. From one point of view, this appears to make the web less than the operating system that someone happens to be using to access it. But in the same way that a chicken is an egg’s way of reproducing and a scientist is the universe’s way of observing itself, an operating system is the web’s way of providing access to itself.

Wait a minute, though …Cennydd didn’t actually compare Android to the web. He compared Android to the web browser. Like I’ve said before:

We talk about “the browser” when we should be talking about the browsers. I’m guilty of this. I’ll use phrases like “designing in the browser” or talk about “what we can do in the browser”, when really I should be talking about designing in the browsers and what we can do in the browsers.

But Cennydd’s comparison does raise an interesting question: what is a web browser exactly? Answering that question probably requires an answer to the question: what is the web?

(At this point you might be thinking, “Ah, this is just semantics!” and you’d be right. Abandon ship here if you feel that way. But to describe something as “just semantics” is like pointing at all the written works in every library and saying “but they’re just words”, or taking in the entire trajectory of human civilisation and saying “but those are just ideas”. So yeah, this is “just” semantics.)

So what is the web? Well the unsexy definition I’ve used in the past is that the web consists of files (e.g. HTML, CSS, JavaScript), accessible at URLs, delivered over HTTP. So FTP is not the web. Email is not the web. Gopher is not the web.

But to be honest, I don’t think that the Hypertext Transfer Protocol is the important part of the web; it’s the URLs that really matter. It’s the addressability of the files that’s the killer app of the web in my opinion.

I also don’t think that it’s the file formats themselves that define the web. Don’t get me wrong: I love HTML …and I have nothing against CSS or JavaScript. But if HTML were to disappear, the tears I would weep would not be so much for the format itself, but for the two decades of culture that have been stored with it.

I was re-reading Weaving The Web and in that book, Tim Berners-Lee describes his surprise when people started using HTML to mark up their content. He expected HTML to be used for indices that would point to the URLs of the actual content, which could be in any file format (PDF, word-processing documents, or whatever). It turned out that HTML had just enough expressiveness and grokability to be used instead of those other formats.

So I certainly don’t consider anything that happens to be written using HTML, CSS, and JavaScript to automatically be a part of the web. I can open up a text editor and make an HTML document but as long as it sits on my computer instead of being addressable by a URL, it’s not part of the web. Likewise, a native app might be powered by CSS and JavaScript under the hood, but without a URL, it’s not part of the web.

Perhaps then, a web browser is something that can access URLs. Certainly in pretty much every example of a web browser throughout the web’s history, the URL has been front and centre: if the web were a platform, the URL bar would be its command line.

But, like the rise of HTML, the visibility of the URL in a web browser is an accident of history. It was added almost as an afterthought as a power-user feature: why would most people care what the URL of the content happens to be? It’s the content itself that matters, and you’d get to that content not by typing URLs, but by following hyperlinks.

There’s an argument to be made that, with the rise of search engines, the visibility of URLs has become less important. See, for example, the way that every advertisement for a website on the Tokyo subway doesn’t show a URL; it shows what to type into a search engine instead (and I’ve started seeing this in some TV adverts here in the UK too).

So a web browser that doesn’t expose the URLs of what it’s rendering is still a web browser.

Now imagine a browser that you install on your device that doesn’t expose URLs, but under the hood it is navigating between URLs using HTTP, and rendering the content (images, JavaScript, CSS, HTML, JSON, whatever). That’s a pretty good description of many native apps. There’s a whole category of native apps that could just as easily be described as “artisanal web browsers” (and if someone wants to write a browser extension that replaces every mention of “native app” with “artisanal web browser” that would be just peachy).

Instagram’s native app is a web browser.

Facebook’s native app is a web browser.

Twitter’s native app is a web browser.

Like Paul said:

Monolithic browsers are not the only User Agent.

I was initially confused when Anna tweeted:

Reading the responses to @Cennydd’s tweet about designers needing to pay attention to Android. The web is fragmented. That’s our job.

I understood Cennydd’s point to be about native apps, not the web. But if, as I’ve just said, many native apps are in fact web browsers, does that mean that making native apps is a form of web development?

I don’t think so. I think making a native app has much more in common with making a web browser than it does with making a web site/app/thang. Certainly the work that Clearleft has done in this area felt that way: the Channel 4 News app is a browser for Channel 4 News; the Evo iPad app is a browser for Evo.

So if your job involves making browsers like those, then yes, you absolutely should be paying more attention to Android, for all the reasons that Cennydd suggests.

But if, like me, you have zero interest in making browsers—whether it’s a browser for Android, iOS, OS X, Windows, Blackberry, Linux, or NeXT—you should still be paying attention to Android because it’s just one of the many ways that people will be accessing the web.

It’s all too easy for us to fall into the trap of thinking that people will only be using traditional monolithic web browsers to access what we build. The truth is that our work will be accessed on the desktop, on mobile, and on tablets, but also on watches, on televisions, and sure, even fridges, but also on platforms that may not even have screens.

It’s certainly worth remembering that what you make will be viewed in the context of an artisanal browser. Like Jen says:

The “native apps are better” argument ignores the fact one of the most popular things to do in apps is read the web.

But just because we know that our work will be accessed on a whole range of devices and platforms doesn’t mean that we should optimise for those specific devices and platforms. That just won’t scale. The only sane future-friendly approach is to take a device-agnostic, platform-agnostic approach and deliver something that’s robust enough to work in this stunningly-wide range of browsers and user-agents (hint: progressive enhancement is your friend).

I completely agree with Cennydd: I think that ignoring Android is narrow-minded, blinkered and foolish …but I feel the same way about ignoring Windows, Blackberry, Nokia, or the Playstation. I also think it would be foolish to focus on any one of those platforms at the expense of others.

I love the fact that the web can be accessed on so many platforms and devices by so many different kinds of browsers. I only wish there were more: more operating systems, more kinds of devices, more browsers. Any platform that allows more people to access the web is good with me. That’s why I, like Cennydd, welcome the rise of Android.

Stop seeing fragmentation. Start seeing diversity.

ABC

When I finally unveiled the redesigned and overhauled version of The Session at the end of 2012, it was the culmination of a lot of late nights and weekends. It was also a really great learning experience, one that I subsequently drew on to inform my An Event Apart presentation, The Long Web.

As part of that presentation, I give a little backstory on the ABC format. It’s a way of notating music using nothing more than ASCII text. It begins with some key/value metadata about the tune—its title, time signature, and key—followed by the notes of the tune itself—uppercase and lowercase letters denote different octaves, and numbers can be used to denote length:

X: 1
T: Cooley's
R: reel
M: 4/4
L: 1/8
K: Edor
|:D2|EBBA B2 EB|B2 AB dBAG|FDAD BDAD|FDAD dAFD|
EBBA B2 EB|B2 AB defg|afec dBAF|DEFD E2:|
|:gf|eB B2 efge|eB B2 gedB|A2 FA DAFA|A2 FA defg|
eB B2 eBgB|eB B2 defg|afec dBAF|DEFD E2:|

On The Session, a little bit of progressive enhancement produces a nice crisp SVG version of the sheet music at the user’s request (the non-JavaScript fallback is a server-rendered bitmap of the sheet music).

ABC notation dates back to the early nineties, a time of very limited bandwidth. Exchanging audio files or even images would have been prohibitively expensive. Having software installed on your machine that could convert ABC into sheet music or audio meant that people could share and exchange tunes through email, BBS, or even the then-fledgling World Wide Web.

In today’s world of relatively fast connections, ABC’s usefulness might seem lessened. But in fact, it’s just as popular as it ever was. People have become used to writing (and even sight-reading) the format, and it has all the resilience that comes with being a text format: easily editable and human-readable. It’s still the format that people use to submit new tune settings to The Session.

A little while back, I came upon another advantage of the ABC format, one that I had never previously thought of…

The Session has a wide range of users, of all ages, from all over the world, from all walks of life, using all sorts of browsers. I do my best to make sure that the site works for just about any kind of user-agent (while still providing plenty of enhancements for the most modern browsers). That includes screen readers. Some active members of The Session happen to be blind.

One of those screen-reader users got in touch with me shortly after joining to ask me to explain what ABC was all about. I pointed them at some explanatory links. Once the format “clicked” with them, they got quite enthused. They pointed out that if the sheet music were only available as an image, it would mean very little to them. But by providing the ABC notation alongside the sheet music, they could read the music note-for-note.

That’s when it struck me that ABC notation is effectively alt text for sheet music!

There’s one little thing that slightly irks me though. The ABC notation should be read out one letter at a time. But screen readers use a kind of fuzzy logic to figure out whether a set of characters should be spoken as a word:

Screen readers try to pronounce acronyms and nonsensical words if they have sufficient vowels/consonants to be pronounceable; otherwise, they spell out the letters. For example, NASA is pronounced as a word, whereas NSF is pronounced as “N. S. F.” The acronym URL is pronounced “earl,” even though most humans say “U. R. L.” The acronym SQL is not pronounced “sequel” by screen readers even though some humans pronounce it that way; screen readers say “S. Q. L.”

It’s not a big deal, and the screen reader user can explicitly request that a word be spoken letter by letter:

Screen reader users can pause if they didn’t understand a word, and go back to listen to it; they can even have the screen reader read words letter by letter. When reading words letter by letter, JAWS distinguishes between upper case and lower case letters by shouting/emphasizing the upper case letters.

But still …I wish there were some way that I could mark up the ABC notation so that a screen reader would know that it should be read letter by letter. I’ve looked into using abbr, but that offers no guarantees: if the string looks like a word, it will still be spoken as a word. It doesn’t look like there are any ARIA settings for this use case either.

So if any accessibility experts out there know of something I’m missing, please let me know.

Update: I’ve added an aural CSS declaration of speak: spell-out (thanks to Martijn van der Ven for the tip), although I think the browser support is still pretty non-existent. Any other ideas?
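
For what it’s worth, the declaration itself is tiny. Something like this, assuming the ABC notation is wrapped in an element with a class of abc (a made-up class name, purely for illustration):

.abc {
  speak: spell-out;
}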

Cheap’n’cheerful

I occasionally get sent some devices for the Clearleft device lab (which reminds me: thank you to whoever at Blackberry sent over the “Dev Alpha B” Blackberry 10).

Last week, an interesting little device showed up.

Cheap Android device

I had no idea who sent it. Was it a gaming device ordered by Anna?

The packaging was all in Chinese. Perhaps some foreign hackers were attempting to infiltrate our network through some clever social engineering.

It turns out that Rich had ordered it, having heard about it from Chris Heathcote who mentioned the device during his UX London talk.

It’s an S18 Mini Pad. You can pick one up for about £30. For that price, as Chris pointed out, you could just use it as an alarm clock (and it does indeed have an alarm clock app). But it’s also a touchscreen device with WiFi and a web browser …a really good web browser: it comes with Chrome. It’s an Android 4 device.

It has all sorts of issues. The touchscreen is pretty crap, for example. But considering the price, it’s really quite remarkable.

We’ve got to the point where all the individual pieces—WiFi, touchscreen, web browser, operating system—can be thrown together into one device that can be sold for around the thirty quid mark. And this is without any phone company subsidies.

Crap as it is, this device really excites me. A cheap mobile web-enabled device …I find that so much more thrilling than any Apple keynote.

The main issue

The inclusion of a main element in HTML has been the subject of debate for quite a while now. After Steve outlined the use cases, the element has now landed in the draft of the W3C spec. It isn’t in the WHATWG spec yet, but seeing as it is being shipped in browsers, it’s only a matter of time.

But I have some qualms about the implementation details, so I fired off an email to the HTML working group with my concerns about the main element as it is currently specced.

I’m curious as to why its use is specifically scoped to the body element rather than any kind of sectioning content:

Authors must not include more than one main element in a document.

I get the rationale behind the main element: it plugs a gap in the overlap between native semantics and ARIA landmark roles (namely role="main"). But if you look at the other elements that map to ARIA roles (header, footer, nav), those elements can be used multiple times in a single document by being scoped to their sectioning content ancestor.

Let’s say, for example, that I have a document like this, containing two header elements:

<body>
 <header>Page header</header>
 Page main content starts here
 <section>
  <header>Section header</header>
  Section main content
 </section>
 Page main content finishes here
</body>

…only the first header element (the one that’s scoped to the body element) will correspond to role="banner", right?

Similarly, in this example, only the final footer element will correspond to role="contentinfo":

<body>
 <header>Page header</header>
 Page main content starts here
 <section>
  <header>Section header</header>
  Section main content
  <footer>Section footer</footer>
 </section>
 Page main content finishes here
 <footer>Page footer</footer>
</body>

So what I don’t understand is why we can’t have the main element work the same way i.e. scope it to its sectioning content ancestor so that only the main element that is scoped to the body element will correspond to role="main":

<body>
 <header>Page header</header>
 <main>
  Page main content starts here
  <section>
   <header>Section header</header>
   <main>Section main content</main>
   <footer>Section footer</footer>
  </section>
  Page main content finishes here
 </main>
 <footer>Page footer</footer>
</body>

Here are the corresponding ARIA roles:

<body>
 <header>Page header</header> <!-- role="banner" -->
 <main>Page main content</main> <!-- role="main" -->
 <footer>Page footer</footer> <!-- role="contentinfo" -->
</body>

Why not allow the main element to exist within sectioning content (section, article, nav, aside) the same as header and footer?

<section>
 <header>Section header</header> <!-- no corresponding role -->
 <main>Section main content</main> <!-- no corresponding role -->
 <footer>Section footer</footer> <!-- no corresponding role -->
</section>

Deciding how to treat the main element would then be the same as header and footer. Here’s what the spec says about the scope of footers:

When the nearest ancestor sectioning content or sectioning root element is the body element, then it applies to the whole page.

I could easily imagine the same principle being applied to the main element. From an implementation viewpoint, this is already what browsers have to do with header and footer. Wouldn’t it be just as simple to apply the same parsing to the main element?

It seems a shame to introduce a new element that can only be used zero or one time in a document …I think the head and body elements are the only other elements with this restriction (and I believe the title element is the only element that is used precisely once per document).

It would be handy to be able to explicitly mark up the main content of an article or an aside—for exactly the same reasons that it would be useful to mark up the main content of a document.

So why not?

What technology wants

Technology enabled Sarah Churman to hear for the first time.

Technology enabled her to capture that moment.

Networked technology enabled her to share that moment with the world.

Networked technology enabled me to share it with you.

Hashcloud

Hashbangs. Yes, again. This is important, dammit!

When the topic first surfaced, prompted by Mike’s post on the subject, there was a lot of discussion. For a great impartial round-up, I highly recommend two posts by James Aylett:

There seems to be a general consensus that hashbang URLs are bad. Even those defending the practice portray them as a necessary evil. That is, once a better solution is viable—like the HTML5 History API—then there will no longer be any need for #! in URLs. I’m certain that it’s a matter of when, not if, Twitter switches over.

But even then, that won’t be the end of the story.

Dan Webb has written a superb long-zoom view on the danger that hashbangs pose to the web:

There’s no such thing as a temporary fix when it comes to URLs. If you introduce a change to your URL scheme you are stuck with it for the foreseeable future. You may internally change your links to fit your new URL scheme but you have no control over the rest of the web that links to your content.

Therein lies the rub. Even if—nay when—Twitter switch over to proper URLs, there will still be many, many blog posts and other documents linking to individual tweets …and each of those links will contain #!. That means that Twitter must make sure that their home page maintains a client-side routing mechanism for inbound hashbang links (remember, the server sees nothing after the # character—the only way to maintain these redirects is with JavaScript).

As Paul put it in such a wonderfully pictorial way, the web is agreement. Hacks like hashbang URLs—and URL shorteners—weaken that agreement.

Orientation and scale

Paul Irish, Divya Manian and Shi Chuan launched Mobile Boilerplate recently—a mobile companion site to HTML5 Boilerplate.

There’s some good stuff in there but I was a little surprised to see that the meta viewport element included values for minimum-scale=1.0, maximum-scale=1.0, user-scalable=no. Something along these lines:
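
<meta name="viewport" content="width=device-width, minimum-scale=1.0, maximum-scale=1.0, user-scalable=no">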

Setting user-scalable=no is pretty much the same as setting minimum-scale=1.0, maximum-scale=1.0. In any case, I’m not keen on it. Like Roger, I don’t think we should take away the user’s right to pinch and zoom to make content larger. That’s why my usual viewport declaration is:
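
<meta name="viewport" content="width=device-width, initial-scale=1.0">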

Yes, I know that most native apps don’t allow you to zoom but I see no reason to replicate that failing on the web.

But there’s a problem. Allowing users to scale content for comfort would be fine if it weren’t for a bug in Mobile Safari:

When the meta viewport tag is set to content="width=device-width,initial-scale=1", or any value that allows user-scaling, changing the device to landscape orientation causes the page to scale larger than 1.0. As a result, a portion of the page is cropped off the right, and the user must double-tap (sometimes more than once) to get the page to zoom properly into view.

This is really annoying so Shi Chuan set about fixing the problem.

His initial solution was to keep minimum-scale=1.0, maximum-scale=1.0 in the meta viewport element but then to change it using JavaScript once the user initiates a gesture (the gesturestart event is triggered as soon as two fingers are on the screen). At that point, the content attribute of the meta viewport element gets updated to read minimum-scale=0.25, maximum-scale=1.6, the default values:

var metas = document.getElementsByTagName('meta');
var i;
if (navigator.userAgent.match(/iPhone/i)) {
  document.addEventListener("gesturestart", gestureStart, false);
  function gestureStart() {
    for (i=0; i<metas.length; i++) {
      if (metas[i].name == "viewport") {
        metas[i].content = "width=device-width, minimum-scale=0.25, maximum-scale=1.6";
      }
    }
  }
}

That works nicely but I wasn’t too keen on the dependency between the markup and the script. If, for whatever reason, the script doesn’t get executed, users are stuck with an unzoomable page.

I suggested that the script should also set the initial value to minimum-scale=1.0, maximum-scale=1.0:

var metas = document.getElementsByTagName('meta');
var i;
if (navigator.userAgent.match(/iPhone/i)) {
  for (i=0; i<metas.length; i++) {
    if (metas[i].name == "viewport") {
      metas[i].content = "width=device-width, minimum-scale=1.0, maximum-scale=1.0";
    }
  }
  document.addEventListener("gesturestart", gestureStart, false);
}
function gestureStart() {
  for (i=0; i<metas.length; i++) {
    if (metas[i].name == "viewport") {
      metas[i].content = "width=device-width, minimum-scale=0.25, maximum-scale=1.6";
    }
  }
}

Now the markup still contains the robust accessible default:
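
<meta name="viewport" content="width=device-width, initial-scale=1.0">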

…while the script takes care of initially setting the scale values and also updating them when a gesture is detected. Here’s what’s happening:

  1. By default, the page is scaleable because the initial meta viewport declaration doesn’t set a minimum-scale or maximum-scale.
  2. Once the script loads, the page is no longer scalable because both minimum-scale and maximum-scale have been set to 1.0. If the device is switched from portrait to landscape, the resizing bug won’t be triggered because scaling is disabled.
  3. When the gesturestart event is detected—indicating that the user might be trying to scale the page—the minimum-scale and maximum-scale values are updated to allow scaling. At this point, if the device is switched from portrait to landscape, the resizing bug will occur because the page is now scaleable.

Jason Weaver points out that you should probably detect for iPad too. That’s a pretty straightforward update:

if (navigator.userAgent.match(/iPhone/i) || navigator.userAgent.match(/iPad/i))

Mathias Bynens updated the code to use querySelectorAll which is supported in Mobile Safari. Here’s the code I’m currently using:

if (navigator.userAgent.match(/iPhone/i) || navigator.userAgent.match(/iPad/i)) {
  var viewportmeta = document.querySelector('meta[name="viewport"]');
  if (viewportmeta) {
    // disable scaling to begin with, so the orientation-change bug can't be triggered
    viewportmeta.content = 'width=device-width, minimum-scale=1.0, maximum-scale=1.0';
    document.body.addEventListener('gesturestart', function() {
      // the user is pinching: restore the default scale range so they can zoom
      viewportmeta.content = 'width=device-width, minimum-scale=0.25, maximum-scale=1.6';
    }, false);
  }
}

You can try it out on Huffduffer, Salter Cane, Principia Gastronomica and right here on Adactio.

Right now there’s still a little sluggishness between the initial pinch-zoom gesture and the scaling; the scale values (0.25 - 1.6) don’t seem to take effect immediately. A second pinch-zoom gesture is often required. If you have any ideas for improving the event capturing and propagation, dive in there.

Update: the bug has been fixed in iOS 6.

Going Postel

I wrote a little while back about my feelings on hash-bang URLs:

I feel so disappointed and sad when I see previously-robust URLs swapped out for the fragile #! fragment identifiers. I find it hard to articulate my sadness…

Fortunately, Mike Davies is more articulate than I. He’s written a detailed account of breaking the web with hash-bangs.

It would appear that hash-bang usage is on the rise, despite the fact that it was never intended as a long-term solution. Instead, the pattern (or anti-pattern) was intended as a last resort for crawling Ajax-obfuscated content:

So the #! URL syntax was especially geared for sites that got the fundamental web development best practices horribly wrong, and gave them a lifeline to getting their content seen by Googlebot.

Mike goes into detail on the Gawker outage that was a direct result of its “sites” being little more than single pages that require JavaScript to access anything.

I’m always surprised when I come across a site that deliberately chooses to make its content harder to access.

Though it may not seem like it at times, we’re actually in a pretty great position when it comes to front-end development on the web. As long as we use progressive enhancement, the front-end stack of HTML, CSS, and JavaScript is remarkably resilient. Remove JavaScript and some behavioural enhancements will no longer function, but everything will still be addressable and accessible. Remove CSS and your lovely visual design will evaporate, but your content will still be addressable and accessible. There aren’t many other platforms that can offer such a robust level of fault tolerance.

This is no accident. The web stack is rooted in Postel’s Law: be liberal in what you accept. If you serve an HTML document to a browser, and that document contains some tags or attributes that the browser doesn’t understand, the browser will simply ignore them and render the document as best it can. If you supply a style sheet that contains a selector or rule that the browser doesn’t recognise, it will simply pass it over and continue rendering.

In fact, the most brittle part of the stack is JavaScript. While it’s far looser and more forgiving than many other programming languages, it’s still a programming language and that means that a simple typo could potentially cause an entire script to fail in a browser.

That’s why I’m so surprised that any front-end engineer would knowingly choose to swap out a solid declarative foundation like HTML for a more brittle scripting language. Or, as Simon put it:

Gizmodo launches redesign, is no longer a website (try visiting with JS disabled): http://gizmodo.com/

Read Mike’s article, re-read this article on URL design and listen to what John Resig has to say in this interview.

Landmark roles

David made a comment on Twitter about some markup he was working on:

Feels dirty setting id’s on main HTML5 page header and footer, but overriding inheritance they cause seems needlessly laborious.

I know the feeling. I don’t like using IDs at all, unless I want part of a document to be addressable through the fragment identifier portion of the URL. While I think it’s desirable to use the id attribute to create in-document permalinks, I don’t think it’s desirable to use the id attribute just as a styling hook. Its high specificity may seem a blessing but, in my experience, it quickly leads to duplicated CSS. IDs are often used as a substitute for understanding the cascade.

Nicole feels the same way about ID selectors, and she knows a thing or two about writing efficient CSS.

Back to David’s dilemma. Let’s say you’re targeting header and footer elements used throughout your document in sections, articles, etc. All you need to use is an element selector:

header {
 /* regular header styles */
}

But what about the document-level header and footer? They’re probably styled very differently from the headers of sections and articles.

You could try using a child selector:

body > header

But there’s no guarantee that the document header will be a direct child of the body element. Hence the use of the id attribute—<header id="ID">:

header#ID {
 /* page header styles */
}

There is another way. In HTML5 you can, for the first time, use ARIA roles and still have a valid document. ARIA landmark roles add an extra layer of semantics to elements in your document, marking them out as navigational landmarks for assistive technology.

Some of these landmark roles—like IDs—are to be used just once per document:

Within any document or application, the author SHOULD mark no more than one element with the main role.

That’s useful for two reasons. One, the existence of role="main" means it is not necessary for HTML5 to provide an element whose only semantic is to say “this is the main content of the document.”

A lot of people have asked for such an element in HTML5. “We’ve got header, footer and aside,” they say. “Why not maincontent too?”

But whereas header, footer and aside can and should be used multiple times per document, a maincontent element would, by necessity, only be used once per document. There are very few elements in HTML that are like that: the html element itself, the title element, head and body (contrary to popular opinion—likely spread by SEO witch-doctors—the h1 element can be used more than once per document).

Now the desired functionality of semantically expressing “this is the main content of the document” can be achieved, not with an element, but with an attribute value: role="main".

The rough skeleton of your document might look like this:

<header role="banner"></header>
<div role="main"></div>
<footer role="contentinfo"></footer>

Now you can see the second advantage of these one-off role values. You can use an attribute selector in your CSS to target those specific elements:

header[role="banner"] {
 /* page header styles */
}

Voilà! You can override the default styling of headers and footers without resorting to the heavy-handed specificity of the ID selector.
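The same pattern works for the document footer; a minimal sketch:

footer[role="contentinfo"] {
 /* page footer styles */
}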

And, of course, you get the accessibility benefits too. ARIA roles are supported in JAWS, NVDA and VoiceOver.

One web

I was in Dublin recently to give a little talk at the 24 Hour Universal Design Challenge 2010. It was an interesting opportunity to present my own perspective on web design to an audience that consisted not just of web designers, but designers from many different fields.

I gave an overview of the past, present and future of web design as seen from where I’m standing. You can take a look at the slides but my usual caveat applies: they won’t make much sense out of context. There is, however, a transcript of the talk on the way (the whole thing was being captioned live on the day).

Towards the end of my spiel, I pointed to Tim Berners-Lee’s recent excellent article in Scientific American, Long Live the Web: A Call for Continued Open Standards and Neutrality:

The primary design principle underlying the Web’s usefulness and growth is universality. When you make a link, you can link to anything. That means people must be able to put anything on the Web, no matter what computer they have, software they use or human language they speak and regardless of whether they have a wired or wireless Internet connection. The Web should be usable by people with disabilities. It must work with any form of information, be it a document or a point of data, and information of any quality—from a silly tweet to a scholarly paper. And it should be accessible from any kind of hardware that can connect to the Internet: stationary or mobile, small screen or large.

We’re at an interesting crossroads right now. Recent developments in areas like performance and responsive design mean that we can realistically pursue that vision of serving up content at a URL to everyone to the best ability of their device. At the same time, the opposite approach—creating multiple, tailored URLs—is currently a popular technique.

At the most egregious and nefarious end of the spectrum, there’s Google’s disgusting backtracking on net neutrality which hinges on a central conceit that spits in the face of universality:

…we both recognize that wireless broadband is different from the traditional wireline world, in part because the mobile marketplace is more competitive and changing rapidly. In recognition of the still-nascent nature of the wireless broadband marketplace, under this proposal we would not now apply most of the wireline principles to wireless…

That’s the fat end of the wedge: literally having a different set of rules for one group of users based on something as arbitrary as how they are connecting to the network.

Meanwhile, over on the thin end of the wedge, there’s the fashion for serving up the same content at different URLs to different devices (often segregated within a subdomain like m. or mobile.—still better than the crack-smoking-inspired .mobi idea).

It’s not a bad technique at all, and it has served the web reasonably well as we collectively try to get our heads around the expanded browser and device landscape of recent years …although some of us cringe at the inherent reliance on browser-sniffing. At least the best practice in this area is to always offer a link back to the “regular” site.

Still, although the practice of splintering up the same content to separate URLs and devices has been a useful interim step, it just doesn’t scale. It’s also unnecessary.

Most of the time, creating a separate mobile website is simply a cop-out.

Hear me out.

First of all, I said “most of the time.” Maybe Garrett is onto something when he says:

It seems responsive pages are best for content while dedicated mobile pages are best for interactive applications. Discuss.

Although, as I pointed out in my brief list of false dichotomies, there’s no clear delineation between documents and applications (just as there’s no longer any clear delineation between desktop and mobile).

Still, let’s assume we’re talking about content-based sites. Segregating the same content into different URLs seems like a lot of work (quite apart from violating the principle of universality) if all you’re going to do is remove some crud that isn’t necessary in the first place.

As an example, here’s an article from The Guardian’s mobile site and here’s the same article as served up on the www. subdomain.

Leaving aside the way that the width is inexplicably set to a fixed number of pixels, it’s a really well-executed mobile site. The core content is presented very nicely. The cruft is gone.

But then, if that cruft is unnecessary, why serve it up on the “desktop” version? I can see how it might seem like a waste not to use extra screen space and bandwidth if it’s available, but I’d love to see an approach that’s truly based on progressive enhancement. Begin with the basic content; structure it to best fit the screen using media queries or some other form of feature detection (not browser detection); pull in extra content for large-screen user-agents, perhaps using some form of lazy loading. To put it in semantic terms, if the content could be marked up as an aside, it may be a prime candidate for lazy loading based on device capability:

The aside element represents a section of a page that consists of content that is tangentially related to the content around the aside element, and which could be considered separate from that content.
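Here’s a minimal sketch of that kind of conditional loading, using a media query test in JavaScript (the /related URL is invented, and I’m reusing the role="main" hook from earlier):

// Only fetch the tangential content if the viewport is wide enough to warrant it.
if (window.matchMedia && window.matchMedia('(min-width: 60em)').matches) {
 var xhr = new XMLHttpRequest();
 xhr.open('GET', '/related'); // hypothetical URL returning an HTML fragment
 xhr.onload = function () {
  var aside = document.createElement('aside');
  aside.innerHTML = xhr.responseText;
  document.querySelector('[role="main"]').appendChild(aside);
 };
 xhr.send();
}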

I’m being unfair on The Guardian site …and most content-based sites with a similar strategy. Almost every site that has an accompanying mobile version—Twitter, Flickr, Wikipedia, BBC—began life when the desktop was very much in the ascendancy. If those sites were being built today, they might well choose a more responsive, scalable solution.

It’s very, very hard to change an entire existing site. There’s a lot of inertia to battle against. Also, let’s face it: most design processes are not suited to solving problems in a device-independent, content-out way. It’s going to be challenging for large organisations to adjust to this way of thinking. It’s going to be equally challenging for agencies to sell this approach to clients—although I feel Clearleft may have a bit of an advantage in having designers like Paul who really get it. I think a lot of the innovation in this area will come from more nimble quarters: personal projects and small startups, most likely.

37 Signals recently documented some of their experiments with responsive design. As it turned out, they had a relatively easy time of it because they were already using a flexible approach to layout:

The key to making it easy was that the layout was already liquid, so optimizing it for small screens meant collapsing a few margins to maximize space and tweaking the sidebar layout in the cases where the screen is too narrow to show two columns.
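A sketch of the kind of adjustment they’re describing (the class name is invented): a liquid sidebar that collapses below the main content on narrow screens:

.sidebar {
 float: right;
 width: 30%;
}
@media screen and (max-width: 40em) {
 .sidebar {
  float: none;
  width: auto;
 }
}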

In the comments, James Pearce, who is not a fan of responsiveness, was quick to cry foul:

I think you should stress that building a good mobile site or app probably takes more effort than flowing a desktop page onto a narrower piece of glass. The mobile user on the other side will quite possibly want to do different things to their desktop brethren, and deserves more than some pixel shuffling.

But the very next comment gets to the heart of why this well-intentioned approach can be so destructive:

A lot of mobile sites I’ve seen are dumbed down versions of the full thing, which is really annoying when you find that the feature you want isn’t there. The design here is the same site adapted to different screens, meaning the end product doesn’t lose any functionality. I think this is much better than making decisions for your users as to what they will and won’t want to see on their mobile phone.

I concur. I think there’s a real danger in attempting to do the right thing by denying users access to content and functionality “for their own good.” It’s patronising and condescending to assume you know the wants and needs of a visitor to your site based purely on their device.

The most commonly-heard criticism of serving up the same website to everyone is that the existing pages are too large, either in size or content. I agree. Far too many URLs resolve to bloated pages locked to a single-width layout. But the solution is not to make leaner, faster pages just for mobile users; the answer is to make leaner, faster pages for everybody.

Even the brilliant Bryan Rieger, who is doing some of the best responsive web design work on the planet with the Yiibu site, still talks about optimising only for certain users in his otherwise-excellent presentation, The End of Unlimited Bandwidth.

When I was reading the W3C’s Mobile Web Best Practices, I was struck by how much of it is sensible advice for web development in general, rather than web development specifically for mobile.

This is why I’m saying that most of the time, creating a separate mobile website is simply a cop-out. It’s a tacit acknowledgement that the regular “desktop” site is beyond help. The cop-out is creating an optimised experience for one subset of users while abandoning others to their bloated fate.

A few years back, there was a trend to provide separate text-only “accessible” websites, effectively ghettoising some users. Nowadays, it’s clear that universal design is a more inclusive, more maintainable approach. I hope that the current ghettoisation of mobile users will also end.

I’m with Team Timbo. One web.

A brief list of false dichotomies

In the world of web development, there are many choices that are commonly presented as true or false, black and white, Boolean, binary values, when in fact they exist in a grey goo of quantum uncertainty. Here are just three of them…

Accessible and inaccessible

Oh, would that it were so simple! If accessibility were a “feature” that could be controlled with the flick of a switch, or the completion of a checklist, front-end web development would be a whole lot easier. Instead it’s a tricky area with no off-the-shelf solutions and where the answer to almost every question is “it depends.” Solving an accessibility issue for one set of users may well create problems for another. Like I said: tricky.

Documents and applications

Remember when we were all publishing documents on the web, but then there was that all-changing event and then we all started making web apps instead? No? Me neither. In fact, I have yet to hear a definition of what exactly constitutes a web app. If it means any URL that allows the user to interact with the contents, then any document with the slightest smattering of JavaScript must be considered an application. Yet even on the desktop, applications like email clients, graphics programs and word processors are based around the idea of creating, editing and deleting documents.

Perhaps a web app, like obscenity, cannot be defined but can only be recognised. I can point to Gmail and say “that’s a web app.” I can point to a blog post and say “that’s a document.” But what about a Wikipedia article? It’s a document, but one that I or anyone else can edit. What about Twitter? Is it a collection of documents of fewer than 140 characters, or is it a publishing tool?

The truth is that these sites occupy a sliding scale from document to application. Rather than pin them to either end of that scale, I’m just going to carry on calling them websites.

Desktop and mobile

The term “mobile phone” works better than “cell phone” because it defines a phone by its usage rather than a particular technology. But it turns out that the “phone” part is just as technologically rigid. Hence the rise of the term “mobile device” to cover all sorts of compact Turing machines, some of which just happen to be phones.

In web development, we speak now of “designing for mobile,” but what does that mean? Is it literally the context of not being stationary that makes the difference? If so, I should be served up a different experience when I use my portable device in the street to when I use the same device sitting down in the comfort of my home or office.

Is it the bandwidth that makes the difference? That would imply that non-mobile devices don’t suffer from network scarcity. Nothing could be further from the truth. Performance is equally important on the desktop. It may even be more important. While a user may expect a certain amount of latency when they’re out and about, they are going to have little patience for any site that responds in a less than snappy manner when they’re at home connected with a fat pipe.

Is it screen size that matters? That would make my iPod Touch a mobile device, even though whenever I’m surfing the web on it, I am doing so over WiFi rather than Edge, 3G, or any other narrow network. What about an iPad? Or a netbook? When I was first making websites, the most common monitor resolution was 640 by 480 pixels. Would those monitors today be treated as mobile devices simply because of the dimensions of the screen?

I’ll leave the last word on this to Joaquin Lippincott who wrote this in a follow-up comment to his excellent article, Stop! You are doing mobile wrong!:

Devices really should be treated as a spectrum (based on capabilities) rather than put into a mobile vs. desktop bin.