Taking an online book offline

Application Cache is—as Jake so infamously described—not a good API. It was specced and shipped before developers had a chance to figure out what they really needed, and so AppCache turned out to be frustrating at best and downright dangerous in some situations. Its over-zealous caching combined with its byzantine cache invalidation ensured it was never going to become a mainstream technology.

There are very few use-cases for AppCache, but I think I hit upon one of them. Six years ago, A Book Apart published HTML5 For Web Designers. A year and a half later, I put the book online. The contents are never going to change. There’s a second edition of the book out now, but if you want to read all the extra bits that Rachel added, you’re going to have to buy the book. The website for the original book is static and unchanging. That’s what made it such a good candidate for using AppCache. I could just set it and forget it.

Except that’s no longer true. AppCache is being deprecated and browsers are starting to withdraw support. Chrome is already making sure that AppCache—like geolocation—no longer works on sites that aren’t served over HTTPS. That’s for the best. In retrospect, those APIs should never have been allowed over unsecured HTTP.

I mentioned that I spent the weekend switching all my book websites over to HTTPS, so AppCache should continue to work …for now. It’s only a matter of time before AppCache is removed completely from many of the browsers that currently support it.

Seeing as I’ve got the HTML5 For Web Designers site running on HTTPS now, I might as well go all out and make it a progressive web app. By far the biggest barrier to making a progressive web app is that first step of setting up HTTPS. It’s gotten cheaper—thanks to Let’s Encrypt’s Certbot—but it still involves mucking around in the command line with root access; I never wanted to become a sysadmin. But once that’s finally all set up, the other technological building blocks—a Service Worker and a manifest file—are relatively easy.

In this case, the Service Worker is using a straightforward bit of logic (sketched in code after this list):

  • On installation, cache absolutely everything: HTML, CSS, images.
  • When anything is requested, grab it from the cache.
  • If it isn’t in the cache, try the network.
  • If the network doesn’t work, show an offline page (or image).
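
In code, that logic comes out something like this (a minimal sketch: the cache name and file paths are illustrative, not the actual script):

self.addEventListener('install', function (event) {
    event.waitUntil(
        caches.open('html5fwd-v1').then(function (cache) {
            // cache absolutely everything: HTML, CSS, images
            return cache.addAll([
                '/',
                '/css/global.css',
                '/images/cover.jpg',
                '/offline.html'
            ]);
        })
    );
});

self.addEventListener('fetch', function (event) {
    event.respondWith(
        // grab it from the cache; if it isn't there, try the network
        caches.match(event.request).then(function (response) {
            return response || fetch(event.request);
        }).catch(function () {
            // if the network doesn't work, show the offline page
            return caches.match('/offline.html');
        })
    );
});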

Basically I’m reproducing AppCache’s overzealous approach. It works for this site because the content is never going to change. I hope that this time, I really can just set it and forget it. I want the site to be an historical artefact, available at the same URL for at least my lifetime. I don’t want to have to maintain it or revisit it every few years to swap out one API for another.

Which brings me back to the way AppCache is being deprecated…

The Firefox team are very eager to ditch AppCache as soon as possible. On the one hand, that’s commendable. They’re rightly proud of shipping Service Workers and they want to encourage people to use the better technology instead. But it sure stings for the suckers (like me) who actually went and built stuff using AppCache.

In a weird way, I think this rush to deprecate AppCache might actually hurt the adoption of Service Workers. Let me explain…

At last year’s Edge Conference, Nolan Lawson gave a great presentation on storing data in the browser. He enumerated the many ways—past and present—that we could store data locally: WebSQL, Local Storage, IndexedDB …the list goes on. He also posed the question: why aren’t more people using insert-name-of-latest-API-here? To me it seemed obvious why more people weren’t diving into using the latest and greatest option for local data storage. It was because they had been burned before. The developers who rushed into trying previous solutions ended up being mocked for their choice. “Still using that ol’ thing? Pffftt!”

You can see that same attitude on display from Mozilla as they push towards removing AppCache. Like in a comment that refers to developers using AppCache in production as “the angry hordes”. Reminds me of something Tom said:

In that same Mozilla thread, Soledad echoes Tom’s point:

As a member of the devrel team: I think that this should be better addressed in a blog post that someone from the team responsible for switching AppCache off should write, so everyone can understand the reasons and ask questions to those people.

I’d rather warn people beforehand, pointing them to that post and help them with migration paths than apply emergency mitigation strategies when a lot of people find their stuff stopped working in the newer Firefox…

Bravo! That same approach should also have been taken by the Chrome team when it came to their thread about punishing display: browser in manifest files. There was absolutely no communication with developers about this major decision. I only found out about it because Paul happened to mention it to me.

I was genuinely shocked by this:

Withholding the “add to home screen” prompt like that has a whiff of blackmail about it.

I can confirm that smell. When I was making the manifest file for HTML5 For Web Designers, I really wanted to put display: browser because I want people to be able to copy and paste URLs (for the book, for individual chapters, and for sections within chapters). But knowing that if I did that, Android users would never see the “add to home screen” prompt made me question that decision. I felt strong-armed into declaring display: standalone. And no, I’m not mollified by hand-waving reassurances that the Chrome team will figure out some solution for this. Figure out the solution first, then punish the saps like me who want to use display: browser to allow people to share URLs.
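
For context, the display member lives in the site’s web app manifest alongside a few other bits of metadata. A minimal manifest might look something like this (the member names come from the manifest spec; the values are illustrative):

{
    "name": "HTML5 For Web Designers",
    "start_url": "/",
    "display": "standalone"
}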

Anyway, the website for HTML5 For Web Designers is now using AppCache and Service Workers. The AppCache part will probably be needed for quite a while yet to provide offline support on iOS. Apple are really dragging their heels on Service Worker support, with at least one WebKit engineer actively looking for reasons not to implement it.

There’s a lot of talk about making apps work offline, but I think it’s just as important that we consider making information work offline. Books are a great example of this. To use the tired transport tropes, the website for a book is something you might genuinely want to access when you’re on a plane, or in the underground, or out at sea.

I really, really like progressive web apps. But I also think it’s important that we don’t fall into the trap of just trying to imitate native apps on the web. I love the idea of taking the best of the web—like information being permanently available at a URL—and marrying that up with the best of native—like offline access. I also like the idea of taking the best of books—a tome of thought—and marrying it up with the best of the web—hypertext.

I’d love to see more experimentation around online/offline hypertext/books. For now, you can visit HTML5 For Web Designers, add it to your home screen, and revisit it whenever and wherever you like.

A brief history of the World Wide Web by web developers

The web will be so much better when we have images.

The web will be so much better when we can use more than 216 colours.

The web will be so much better when we have Cascading Style Sheets.

The web will be so much better when we have Cascading Style Sheets that work the same way in different browsers.

The web will be so much better when we have JavaScript.

The web will be so much better when we have JavaScript that works the same way in different browsers.

The web will be so much better when people stop using Netscape Navigator 4.

The web will be so much better when people stop using Internet Explorer 6.

The web will be so much better when we can access it on our mobile phones.

The web will be so much better when we have native video support.

The web will be so much better when we have native video support that works the same way in different browsers.

The web will be so much better when Flash dies.

The web will be so much better when we have more than a handful of fonts.

The web will be so much better when nobody is running Windows XP anymore.

The web will be so much better when nobody is running Android 2 anymore.

The web will be so much better when we have smooth animations.

The web will be so much better when websites can still work offline.

The web will be so much better when we get push notifications.

The web will be so much better when…

Everything is amazing right now and nobody’s happy.

The web on my phone

It’s funny how times have changed. Remember back in the 90s when Microsoft—quite rightly—lost an anti-trust case? They were accused of abusing their monopolistic position in the OS world to get an unfair advantage in the browser world. By bundling a copy of Internet Explorer with every copy of Windows, they were able to crush the competition from Netscape.

Mind you, it was still possible to install a Netscape browser on a Windows machine. Could you imagine if Microsoft had tried to make that impossible? There would’ve been hell to pay! They wouldn’t have had a legal leg to stand on.

Yet here we are two decades later and that’s exactly what an Operating System vendor is doing. The Operating System is iOS. It’s impossible to install a non-Apple browser onto an Apple mobile computer. For some reason, the fact that it’s a mobile device (iPhone, iPad) makes it different from a desktop-bound device running OS X. Very odd considering they’re all computers.

“But”, I hear you say, “What about Chrome for iOS? Firefox for iOS? Opera for iOS?”

Chrome for iOS is not Chrome. Firefox for iOS is not Firefox. Opera for iOS is not Opera. They are all using WebKit. They’re effectively the same as Mobile Safari, just with different skins.

But there won’t be any anti-trust case here.

I think it’s a real shame. Partly, I think it’s a shame because as a developer, I see an Operating System being let down by its browser. But mostly, I think it’s a shame because I use an iPhone and I’m being let down by its browser.

It’s kind of ironic, because when the iPhone first launched, it was all about the web apps. Remember, there was no App Store for the first year of the iPhone’s life. If you wanted to build an app, you had to use web technologies. Apple were ahead of their time. Alas, the web technologies weren’t quite up to the task back in 2007. These days, though, there are web technologies landing in browsers that are truly game-changing.

In case you hadn’t noticed, I’m very excited about Service Workers. It’s doubly exciting to see the efforts the Chrome on Android team are making to make the web a first-class citizen. As Remy put it:

If I add this app to my home screen, it will work when I open it.

I’d like to be able to use Chrome, Firefox, or Opera on my iPhone—real Chrome, real Firefox, or real Opera; not a skinned version of Safari. Right now the only way for me to switch browsers is to switch phones. Switching phones is a pain in the ass, but I’m genuinely considering it.

Whereas I’m all talk, Henrik has taken action. Like me, he doesn’t actually care about the Operating System. He cares about the browser:

Android itself bores me, honestly. There’s nothing all that terribly new or exciting here.

save one very important detail…


That’s true for now. The pole position for which browser is “best” is bound to change over time. The point is that locking me into one particular browser on my phone doesn’t sit right with me. It’s not very …webby.

I’m sure that Apple are not quaking in their boots at the thought of myself or Henrik switching phones. We are minuscule canaries in a very niche mine.

But what should give Apple pause for thought is the user experience they can offer for using the web. If they gain a reputation for providing a sub-par web experience compared to the competition, then maybe they’ll have to make the web a first-class citizen.

If I want to work towards that, switching phones probably won’t help. But what might help is following Alex’s advice in his answer to the question “What do we do about Safari?”:

What we do about Safari is we make websites amazing …and then they can’t not implement.

I’ll be doing that here on adactio, over on The Session (and Huffduffer when I get around to overhauling it), making progressively enhanced, accessible, offline-first, performant websites.

I’ll also be doing it at Clearleft. If you work at an organisation that wants a progressively-enhanced, accessible, offline-first, performant website, we should talk.

AMPed up

Apple has Apple News. Facebook has Instant Articles. Now Google has AMP: Accelerated Mobile Pages.

The big players sure are going to a lot of effort to reinvent RSS.

That may sound like a flippant remark, but it’s not too far from the truth. In the case of Apple News, its current incarnation appears to be quite literally an RSS reader, at least until the unveiling of the forthcoming Apple News Format.

Google’s AMP project looks a little bit different to the offerings from Facebook and Apple. Rather than creating a proprietary format from scratch, it mandates a subset of HTML …with some proprietary elements thrown in (or, to use the more diplomatic parlance of the extensible web, custom elements).

The idea is that alongside the regular HTML version of your document, you provide a corresponding AMP HTML version. Because the AMP HTML version will be leaner and meaner, user agents can then grab the AMP HTML version and present that to the end user for a faster browsing experience.

So if an RSS feed is an alternate representation of a homepage or a listing of articles, then an AMP document is an alternate representation of a single article.

Now, my own personal take on providing alternate representations of documents is “Sure. Why not?” Here on adactio.com I provide RSS feeds. On The Session I provide RSS, JSON, and XML. And on Huffduffer I provide RSS, Atom, JSON, and XSPF, adding:

If you would like to see another format supported, share your idea.

Also, each individual item on Huffduffer has a corresponding oEmbed version (and, in theory, an RDF version)—an alternate representation of that item …in principle, not that different from AMP. The big difference with AMP is that it’s using HTML (of sorts) for its format.

All of this sounds pretty reasonable: provide an alternate representation of your canonical HTML pages so that user-agents (Twitter, Google, browsers) can render a faster-loading version …much like an RSS reader.

So should you start providing AMP versions of your pages? My initial reaction is “Sure. Why not?”

The AMP Project website comes with a list of frequently asked questions which, of course, nobody has asked. My own list of invented frequently asked questions might look a little different.

Will this kill advertising?

We live in hope.

Alas, AMP pages will still be able to carry advertising, but in a restricted form. No more scripts that track your movement across the web …unless the script is from an authorised provider, like say, Google.

But it looks like the worst performance offenders won’t be able to get their grubby little scripts into AMP pages. This is a good thing.

Won’t this kill journalism?

Of all the horrid myths currently in circulation, the two that piss me off the most are:

  1. Journalism requires advertising to survive.
  2. Advertising requires invasive JavaScript.

Put the two together and you get the gist of most of the chicken-littling articles currently in circulation: “Journalism requires invasive JavaScript to survive.”

I could argue against the first claim, but let’s leave that for another day. Let’s suppose for now that, sure, journalism requires advertising to survive. Fine.

It’s that second point that is fundamentally wrong. The idea that the current state of advertising is the only way of advertising is incredibly short-sighted and misguided. Invasive JavaScript is not a requirement for showing me an ad. Setting a cookie is not a requirement for showing me an ad. Knowing where I live, who my friends are, what my income level is, and where I’ve been on the web …none of these are requirements for showing me an ad.

It is entirely possible to advertise to me and treat me with respect at the same time. The Deck already does this.

And you know what? Ad networks had their chance. They had their chance to treat us with respect with the Do Not Track initiative. We asked them to respect our wishes. They told us to get screwed.

Now those same ad providers are crying because we’re installing ad blockers. They can get screwed.


It is entirely possible to advertise within AMP pages …just not using blocking JavaScript.

For a nicely nuanced take on what AMP could mean for journalism, see Joshua Benton’s article on Nieman Lab—Get AMP’d: Here’s what publishers need to know about Google’s new plan to speed up your website.

Why not just make faster web pages?

Excellent question!

For a site like adactio.com, the difference between the regular HTML version of an article and the corresponding AMP version of the same article is pretty small. It’s a shame that I can’t just say “Hey, the current version of the article is the AMP version”, but that would require that I only use a subset of HTML and that I add some required guff to my page (including an unnecessary JavaScript file).

But for most of the news sites out there, the difference between their regular HTML pages and the corresponding AMP versions will be pretty significant. That’s because the regular HTML versions are bloated with third-party scripts, oversized assets, and cruft around the actual content.

Now it is in theory possible for these news sites to get rid of all those things, and I sincerely hope that they will. But that’s a big political struggle. I am rooting for developers—like the good folks at VOX—who have to battle against bosses who honestly think that journalism requires invasive JavaScript. Best of luck.

Along comes Google saying “If you want to play in our sandbox, you’re going to have to abide by our rules.” Those rules include performance best practices (for the most part—I take issue with some of the requirements, and I’ll go into that in more detail in a moment).

Now when the boss says “Slap a three megabyte JavaScript library on it so we can show a carousel”, the developers can only respond with “Google says No.”

When the boss says “Slap a ton of third-party trackers on it so we can monetise those eyeballs”, the developers can only respond with “Google says No.”

Google have used their influence like this before and it has brought them accusations of monopolistic abuse. Some people got very upset when they began labelling (and later ranking) mobile-friendly pages. Personally, I’ve got no issue with that.

In this particular case, Google aren’t mandating what you can and can’t do on your regular HTML pages; only what you can and can’t do on the corresponding AMP page.

Which brings up another question…

Will the AMP web kill the open web?

If we all start creating AMP versions of our pages, and those pages are faster than our regular HTML versions, won’t everyone just see the AMP versions without ever seeing the “full” versions?

Tim articulates a legitimate concern:

This promise of improved distribution for pages using AMP HTML shifts the incentive. AMP isn’t encouraging better performance on the web; AMP is encouraging the use of their specific tool to build a version of a web page. It doesn’t feel like something helping the open web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the web.

That troubles me. Using a very specific tool to build a tailored version of my page in order to “reach everyone” doesn’t fit any definition of the “open web” that I’ve ever heard.

Fair point. But I also remember that a lot of people were upset by RSS. They didn’t like that users could go for months at a time without visiting the actual website, and yet they were reading every article. They were reading every article in non-browser user agents in a format that wasn’t HTML. On paper that sounds like the antithesis of the open web, but in practice there was always something very webby about RSS, and RSS feed readers—it put the power back in the hands of the end users.

Some people chose not to play ball. They only put snippets in their RSS feeds, not the full articles. Maybe some publishers will do the same with the AMP versions of their articles: “To read more, click here…”

But I remember what generally tended to happen to the publishers who refused to put the full content in their RSS feeds. We unsubscribed.

Still, I share the concern that any one company—whether it’s Facebook, Apple, or Google—should wield so much power over how we publish on the web. I don’t think you have to be a conspiracy theorist to view the AMP project as an attempt to replace the existing web with an alternate web, more tightly controlled by Google (albeit a faster, more performant, tightly-controlled web).

My hope is that the current will flow in both directions. As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.

I’ve been playing around with the AMP HTML spec. It has some issues. The good news is that it’s open source and the project owners seem receptive to feedback.

JavaScript

No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts. This is A-okay with me.

The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web. I’m happy to leave them at the door (although weirdly, web fonts—another big performance killer—are allowed in).

But then there’s a bit of an about-face. In order to have a valid AMP HTML page, you must include a piece of third-party JavaScript. In this case, the third party is Google and the JavaScript file is what handles the loading of assets.

This seems a bit strange to me; on the one hand claiming that third-party JavaScript is bad for performance and on the other, requiring some third-party JavaScript. As Justin says:

For me this is loading one thing too many… the AMP JS library. Surely the document itself is going to be faster than loading a library to try and make it load faster.

On the plus side, this third-party JavaScript is loaded asynchronously. It seems to mostly be there to handle the rendering of embedded content: images, videos, audio, etc.

Embedded content

If you want audio, video, or images on your page, you must use propriet… custom elements like amp-audio, amp-video, and amp-img. In the case of images, I can see how this is a way of getting around the browser’s lookahead pre-parser (although responsive images also solve this problem). In the case of audio and video, the standard audio and video elements already come with a way of specifying preloading behaviour using the preload attribute. Very odd.

Justin again:

I’m not sure if this is solving anything at the moment that we’re not already fixing with something like responsive images.

To use amp-img for images within the flow of a document, you’ll need to specify the dimensions of the image. This makes sense from a rendering point of view—knowing the width and height ahead of time avoids repaints and reflows. Alas, in many of the cases here on adactio.com, I don’t know the dimensions of the images I’m including. So any of my AMP HTML pages that include images will be invalid.
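
To illustrate (the file name and numbers here are hypothetical), every image has to be declared with explicit dimensions, along these lines:

<amp-img src="photo.jpg" width="600" height="400"></amp-img>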

Overall, the way that AMP HTML handles embedded content looks like a whole lot of wheel reinvention. I like the idea of providing custom elements as an option for authors. I hate the idea of making them a requirement.

Metadata

If you want to provide metadata about your document, AMP HTML currently requires the use of Google’s Schema.org vocabulary. This has a big whiff of vendor lock-in to it. I’ve flagged this up as an issue and Aaron is pushing a change so hopefully this will be resolved soon.

Accessibility

In its initial release, the AMP HTML spec came with some nasty surprises for accessibility. The biggest is probably the requirement to include this in your viewport meta element:

maximum-scale=1,user-scalable=no

Yowzers! That’s some slap in the face to decent web developers everywhere. Fortunately this has been flagged up and I’m hoping it will be fixed soon.

If it doesn’t get fixed, it’s quite a non-starter. It beggars belief that Google would mandate to authors that they must make their pages inaccessible to pinch/zoom. I would hope that many developers would rebel against such a draconian injunction. If that happens, it’ll be interesting to see what becomes of those theoretically badly-formed AMP HTML documents. Technically, they will fail validation, but for very good reason. Will those accessible documents be rejected?

Please get involved on this issue if this is important to you (hint: this should be important to you).

There are a few smaller issues. Initially the :focus pseudo-class was disallowed in author CSS, but that’s being fixed.

Currently AMP HTML documents must have this line:

<style>body {opacity: 0}</style><noscript><style>body {opacity: 1}</style></noscript>


That’s a horrible conflation of JavaScript availability and CSS. It’s being fixed though, and soon all the opacity jiggery-pokery will only happen via JavaScript, which will be a big improvement: it should either all happen in CSS or all happen in JavaScript, but not the current mixture of the two.

Canonical

The AMP HTML version of your page is not the canonical version. You can specify where the real HTML version of your document is by using rel="canonical". Great!

But how do you link from your canonical page out to the AMP HTML version? Currently you’re supposed to use rel="amphtml". No, they haven’t checked the registry. Again. I’ll go in and add it.

In the meantime, I’m also requesting that the amphtml value can be combined with the alternate value, seeing as rel values can be space separated:

rel="alternate amphtml" type="text/html"

See? Not that different to RSS:

rel="alternate" type="application/rss+xml"
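
Spelled out as link elements (the URLs here are hypothetical, and the combined rel value is my proposed version rather than the current spec), the AMP page would point to the regular page like this:

<link rel="canonical" href="https://example.com/article/">

…and the regular page would point to the AMP page like this:

<link rel="alternate amphtml" type="text/html" href="https://example.com/article/amp/">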


When I publish something on adactio.com in HTML, it already gets syndicated to different places. This is the Indie Web idea of POSSE: Publish (on your) Own Site, Syndicate Elsewhere. As well as providing RSS feeds, I’ve also got Twitter bots that syndicate to Twitter. An If This, Then That script pushes posts to Facebook. And if I publish a photo, it goes to Flickr. Now that Medium is finally providing a publishing API, I’ll probably start syndicating articles there as well. The more, the merrier.

From that perspective, providing AMP HTML pages feels like just one more syndication option. If it were the only option, and I felt compelled to provide AMP versions of my content, I’d be very concerned. But for now, I’ll give it a whirl and see how it goes.

Here’s a bit of PHP I’m using to convert a regular piece of HTML into AMP HTML—it’s horrible code; it uses regular expressions on HTML which, as we all know, will summon the Elder Gods.
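
To give a flavour of the kind of transformation involved (this is a hypothetical one-liner, not the actual script), swapping img elements for amp-img elements can be as crude as this:

$amp = preg_replace('/<img(.*?)\/?>/', '<amp-img$1></amp-img>', $html);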


Contrary to popular belief, web standards aren’t created by a shadowy cabal and then handed down to browser makers to implement. Quite the opposite. Browser makers come together in standards bodies and try to come to an agreement about how to collectively create and implement standards. That keeps them very busy. They don’t tend to get out very often, but when they do, the browser/standards makers have one message for developers: “We want to make your life better, so tell us what you want and that’s what we’ll work on!”

In practice, this turns out not to be the case.

Case in point: responsive images. For years, this was the number one feature that developers were crying out for. And yet, the standards bodies—and, therefore, browser makers—dragged their heels. First they denied that it was even a problem worth solving. Then they said it was simply too hard. Eventually, thanks to the herculean efforts of the Responsive Images Community Group, the browser makers finally began to work on what developers had been begging for.

Now that same community group is representing the majority of developers once again. Element queries—or container queries—have been top of the wish list of working devs for quite a while now. The response from browser makers is the same as it was for responsive images. They say it’s simply too hard.

Here’s a third example: web components. There are many moving parts to web components, but one of the most exciting to developers who care about accessibility and backwards-compatibility is the idea of extending existing elements:

It’s my opinion that, for as long as there is a dependence on JS for custom elements, we should extend existing elements when writing custom elements. It makes sense for developers, because new elements have access to properties and methods that have been defined and tested for many years; and it makes sense for users, as they have fallback in case of JS failure, and baked-in accessibility fundamentals.

So instead of having to create a whole new element from scratch like this:

<taco-button>Click me!</taco-button>

…you could piggy-back on an existing element like this:

<button is="taco-button">Click me!</button>

That way, you get the best of both worlds: the native semantics of button topped with all the enhancements you want to add with your taco-button custom element. Brilliant! Github is using this to extend the time element, for example.
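
For what it’s worth, registering such an element with the custom element API as it stood at the time (document.registerElement) would look roughly like this:

var TacoButton = document.registerElement('taco-button', {
    // inherit the behaviour and semantics of button
    prototype: Object.create(HTMLButtonElement.prototype),
    extends: 'button'
});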

I’m not wedded to the is syntax, but I do think it’s vital that there is some declarative mechanism to extend existing elements instead of creating every custom element from scratch each time.

Now it looks like that’s the bit of web components that isn’t going to make the cut. Why? Because browser makers say it’s simply too hard.

As Bruce points out, this is in direct conflict with the design principles that are supposed to be driving the creation and implementation of web standards.

It probably wouldn’t bother me so much except that browser makers still trot out the party line, “We want to hear what developers want!” Their actions demonstrate that this claim is somewhat hollow.

I don’t hold out much hope that we’ll get the ability to extend existing elements for web components. I think we can still find ways to piggy-back on existing semantics, but it’s going to take more work:

<taco-button><button>Click me!</button></taco-button>

That isn’t very elegant and I can foresee a lot of trickiness trying to sift the fallback content (the button tags) from the actual content (the “Click me!” text).

But I guess that’s what we’ll be stuck with. The alternative is simply too hard.

Cerf rocks

After I wrote about digital preservation and the need to save everything, not just the so-called “important” stuff, Jason wrote a lovely piece with his own thoughts on the matter:

In order to write a history, you need evidence of what happened. When we talk about preserving the stuff we make on the web, it isn’t because we think a Facebook status update, or those GeoCities sites have such significance now. It’s because we can’t know.

In a timely coincidence, Vint Cerf also spoke about the importance of digital preservation:

When you think about the quantity of documentation from our daily lives that is captured in digital form, like our interactions by email, people’s tweets, and all of the world wide web, it’s clear that we stand to lose an awful lot of our history.

He warns of the dangers of rapidly-obsoleting file formats:

We are nonchalantly throwing all of our data into what could become an information black hole without realising it. We digitise things because we think we will preserve them, but what we don’t understand is that unless we take other steps, those digital versions may not be any better, and may even be worse, than the artefacts that we digitised.

It was a little weird that the Guardian headline refers to Vint Cerf as “Google boss”. On the BBC he’s labelled as “Google’s Vint Cerf”. Considering he’s one of the creators of the internet itself, it’s a bit like referring to Neil Armstrong as a NASA employee.

I have to say, I just love listening to him talk. He’s so smooth. I’m sure that the character of The Architect from The Matrix Reloaded is modelled on him.

Vint Cerf knows a thing or two about long-term thinking when it comes to data formats. He has written many RFCs for the IETF (my favourite being RFC 2468). Back in 1969, he wrote RFC 20, proposing the ASCII format for network interchange. If you’ve ever used the keypress event in JavaScript and wondered why, for example, the number 13 corresponds to a carriage return, this is where all those numbers come from.
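
That lineage is still visible in a couple of lines of JavaScript:

document.addEventListener('keypress', function (event) {
    // 13 is the ASCII code for carriage return, straight out of RFC 20
    if (event.keyCode === 13) {
        console.log('carriage return');
    }
});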

Last month, over 45 years after the RFC’s original publication, it became an official standard.

So when Vint Cerf warns about the dangers of digitising into file formats that could become unreadable, I think we should pay attention to him.


I’ve said it before, but I’m going to reiterate my conflicted feelings about Web Components:

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous.

There are broadly two ways that they could potentially be used:

  1. Web Components are used by developers to incrementally add more powerful elements to their websites. This evolutionary approach feels very much in line with the thinking behind the extensible web manifesto. Or:
  2. Web Components are used by developers as a monolithic platform, much like Angular or Ember is used today. The end user either gets everything or they get nothing.

The second scenario is a much more revolutionary approach—sweep aside the web that has come before, and usher in a new golden age of Web Components. Personally, I’m not comfortable with that kind of year-zero thinking. I prefer evolution over revolution:

Revolutions sometimes change the world to the better. Most often, however, it is better to evolve an existing design rather than throwing it away. This way, authors don’t have to learn new models and content will live longer. Specifically, this means that one should prefer to design features so that old content can take advantage of new features without having to make unrelated changes. And implementations should be able to add new features to existing code, rather than having to develop whole separate modes.

The evolutionary model is exemplified by the design of HTML 5.

The revolutionary model is exemplified by the design of XHTML 2.

I really hope that the Web Components model goes down the first route.

Up until recently, my inner Web Components pendulum was swinging towards the hopeful end of my spectrum of anticipation. That was mainly driven by the ability of custom elements to extend existing HTML elements.

So, for example, instead of creating a new element like this:

<taco-button>...</taco-button>

…you can piggyback off the existing semantics of the button element like this:

<button is="taco-button">...</button>

For a real-world example, see Github’s use of <time is="time-ago">.

I wrote about creating responsible Web Components:

That means we can use web components as a form of progressive enhancement, turbo-charging pre-existing elements instead of creating brand new elements from scratch. That way, we can easily provide fallback content for non-supporting browsers.

I’d like to propose that a fundamental principle of good web component design should be: “Where possible, extend an existing HTML element instead of creating a new element from scratch.”

Peter Gasston also has a great post on best practice for creating custom elements:

It’s my opinion that, for as long as there is a dependence on JS for custom elements, we should extend existing elements when writing custom elements. It makes sense for developers, because new elements have access to properties and methods that have been defined and tested for many years; and it makes sense for users, as they have fallback in case of JS failure, and baked-in accessibility fundamentals.

But now it looks like this superpower of custom elements is being nipped in the bud:

It also does not address subclassing normal elements. Again, while that seems desirable the current ideas are not attractive long term solutions. Punting on it in order to ship a v1 available everywhere seems preferable.

Now, I’m not particularly wedded to the syntax of using the is="" attribute to extend existing elements …but I do think that the ability to extend existing elements declaratively is vital. I’m not alone, although I may very well be in the minority.

Bruce has outlined some use cases and Steve Faulkner has enumerated the benefits of declarative extensibility:

I think being able to extend existing elements has potential value to developers far beyond accessibility (it just so happens that accessibility is helped a lot by re-use of existing HTML features.)

Bruce concurs:

Like Steve, I’ve no particular affection (or enmity) towards the <input type="radio" is="luscious-radio"> syntax. But I’d like to know, if it’s dropped, how progressive enhancement can be achieved so we don’t lock out users of browsers that don’t have web components capabilities, JavaScript disabled or proxy browsers. If there is a concrete plan, please point me to it. If there isn’t, it’s irresponsible to drop a method that we can see working in the example above with nothing else to replace it.

He adds:

I also have a niggling worry that this may affect the uptake of web components.

I think he’s absolutely right. I think there are many developers out there in a similar position to me, uncertain exactly what to make of this new technology. I was looking forward to getting really stuck into Web Components and figuring out ways of creating powerful little extensions that I could start using now. But if Web Components turn out to be an all-or-nothing technology—a “platform”, if you will—then I will not only not be using them, I’ll be actively arguing against their use.

I really hope that doesn’t happen, but I must admit I’m not hopeful—my inner pendulum has swung firmly back towards the nervous end of my anticipation spectrum. That’s because I’m getting the distinct impression that the priorities being considered for Web Components are those of JavaScript framework creators, rather than web developers looking to add incremental improvements while maintaining backward compatibility.

If that’s the case, then Web Components will be made in the image of existing monolithic MVC frameworks that require JavaScript to do anything, even rendering content. To me, that’s a dystopian vision, one I can’t get behind.

Responsible Web Components

Bruce has written a great article called On the accessibility of web components. Again. In it, he takes issue with the tone of a recent presentation on web components, wherein Dimitri Glazkov declares:

Custom elements is really neat. It basically says, “HTML it’s been a pleasure”.

Bruce paraphrases this as:

Bye-bye HTML; you weren’t useful enough. Hello, brave new world of custom elements.

Like Bruce, I’m worried about this year-zero thinking. First of all, I think it’s self-defeating. In my experience, the web technologies that succeed are the ones that build upon what already exists, rather than sweeping everything aside. Evolution, not revolution.

Secondly, web components—or more specifically, custom elements—already allow us to extend existing HTML elements. That means we can use web components as a form of progressive enhancement, turbo-charging pre-existing elements instead of creating brand new elements from scratch. That way, we can easily provide fallback content for non-supporting browsers.

But, as Bruce asks:

Snarking aside, why do so few people talk about extending existing HTML elements with web components? Why’s all the talk about brand new custom elements? I don’t know.

Patrick leaves a comment with his answer:

The issue of not extending existing HTML elements is exactly the same that we’ve seen all this time, even before web components: developers who are tip-top JavaScripters, who already plan on doing all the visual feedback/interactions (for mouse users like themselves) in script anyway themselves, so they just opt for the most neutral starting point…a div or a span. Web components now simply gives the option of then sweeping all that non-semantic junk under a nice, self-contained rug.

That’s a depressing thought. But it might very well be true.

Stuart also comments:

Why aren’t web components required to be created with is="some-component" on an existing HTML element? This seems like an obvious approach; sure, someone who wants to make something meaningless will just do <div is=my-thing> or (worse) <body is=my-thing> but it would provide a pretty heavy hint that you’re supposed to be doing things The Right Way, and you’d get basic accessibility stuff thrown in for free.

That’s a good question. After all, writing <new-shiny></new-shiny> is basically the same as <span is="new-shiny"></span>. It might not make much of a difference in the case of a span or div, but it could make an enormous difference in the case of, say, form elements.

Take a look at IBM’s library of web components. They’re well-written and they look good, but time and time again, they create new custom elements instead of extending existing HTML.

Although, as Bruce points out:

Of course, not every new element you’ll want to make can extend an existing HTML element.

But I still think that the majority of web components could, and should, extend existing elements. Addy Osmani has put together some design principles for web components and Steve Faulkner has created a handy punch-list for web components, but I’d like to propose that a fundamental principle of good web component design should be: “Where possible, extend an existing HTML element instead of creating a new element from scratch.”

Rather than just complain about this kind of thing, I figured I’d try my hand at putting it into practice…

Dave recently made a really nice web component for playing back podcast audio files. I could imagine using something like this on Huffduffer. It’s called podcast-player and you use it thusly:

<podcast-player src="my.mp3"></podcast-player>

One option for providing fallback content would be to include it within the custom element tags:

<podcast-player src="my.mp3">
    <a href="my.mp3">Listen</a>
</podcast-player>

That would require minimum change to Dave’s code. I’d just need to make sure that the fallback content within podcast-player elements is removed in supporting browsers.

I forked Dave’s code to try out another idea. I figured that if the starting point was a regular link to the audio file, that would also be a way of providing fallback for browsers that don’t cut the web component mustard:

<a href="my.mp3" is="podcast-player">Listen</a>

It required surprisingly few changes to the code. I needed to remove the fallback content (that “Listen” text), and I needed to prevent the default behaviour (following the href), but it was fairly straightforward.
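
Here’s a simplified sketch of the idea (not the actual code of the fork), using the registerElement API of the time:

var proto = Object.create(HTMLAnchorElement.prototype);

proto.createdCallback = function () {
    var link = this;
    link.addEventListener('click', function (event) {
        // prevent the default behaviour (following the href)...
        event.preventDefault();
        // ...and play the audio file from the href instead
        new Audio(link.href).play();
    });
};

document.registerElement('podcast-player', {
    prototype: proto,
    extends: 'a'
});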

However, I’m sure it could be improved in one of two ways:

  1. I should probably supply an ARIA role to the extended link. I’m not sure what would be the right one, though …menu or menubar perhaps?
  2. Perhaps a link isn’t the right element to extend. Really I should be extending an audio element (which itself allows for fallback content). When I tried that, I found it too hard to overcome the default browser rules for hiding anything between the opening and closing tags. But you’re smarter than me, so I bet you could create <audio is="podcast-player">.

Fork the code and have at it.

Celebrating CSS

Cascading Style Sheets turned 20 years old this week. Happy birthtime, CeeSusS!

Bruce interviewed Håkon about the creation of CSS, and it makes for fascinating reading. If you want to dig even deeper, here’s Håkon’s 1994 thesis comparing competing approaches to style sheets.

CSS gets a tough rap. I remember talking to Douglas Crockford about CSS. I’ll paraphrase his stance as “Kill it with fire!” To be fair, he was mostly talking about the lack of a decent layout system in CSS—something that’s only really getting remedied now.

Most of the flak directed at CSS comes from smart programmers, decrying its lack of power. As a declarative language, it lacks even the most basic features of the simplest procedural language. How are serious programmers supposed to write their serious programmes with such a primitive feature set?

But I think this mindset misses out a crucial facet of understanding CSS: it’s not about us. By us, I mean professional web developers. And when I say it’s not about us, I mean it’s not only about us.

The web is for everyone. That doesn’t just mean that it’s for everyone to use—the web is for everyone to create. That means that the core building blocks of the web need to be learnable by everyone, not just programmers.

I get nervous when I see web browsers gaining powerful features that can only be accessed via a JavaScript API. Geolocation is one example: it doesn’t have any declarative equivalent to its JavaScript implementation. Counter-examples would be video and audio: you can use the JavaScript API to get exactly the behaviour you want, if you’ve got that level of knowledge …or you can use the video and audio elements if you’re okay with letting web browsers handle the complexity of display and playback.
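
To make the geolocation example concrete: there’s no element for it; the only way to ask for a position is through script.

navigator.geolocation.getCurrentPosition(function (position) {
    // no declarative equivalent exists for any of this
    console.log(position.coords.latitude, position.coords.longitude);
});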

I think that CSS hits a nice sweet spot, balancing learnability and power. I love the fact that every bit of CSS ever written comes down to the same basic pattern:

selector {
    property: value;
}
That’s it!

How amazing is it that one simple pattern can scale to encompass a whole wide world of visual design variety?

Think about the revolution that CSS has gone through in recent years: OOCSS, SMACSS, BEM …these are fundamentally new ways of approaching front-end development, and yet none of these approaches required any changes to be made to the CSS specification. The power and flexibility was already available within its simple selector-property-value pattern.
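
BEM, for instance, is little more than a naming convention layered on top of that pattern (the class names here are illustrative):

.menu { }
.menu__item { }
.menu__item--current { }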

Mind you, that modularity was compromised when we got things like named animations, a pattern that breaks out of the encapsulation model of CSS. Variables in CSS also break out of the modularity pattern.

Personally, I don’t think there’s any reason to have variables in the CSS language; it’s enough to have them in pre-processing tools. Variables add enormous value for developers, and no value at all for end users. As long as developers can use variables—and they can, with Sass and LESS—I don’t think we need to further complicate CSS.
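
That’s because preprocessor variables compile away before any end user ever sees them. A hypothetical Sass snippet like this:

$brand-colour: #b33;

a {
    color: $brand-colour;
}

…compiles down to plain old CSS:

a {
    color: #b33;
}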

Bert Bos wrote an exhaustive list of design principles for web standards. There’s some crossover with Tim Berners-Lee’s principles of design, with ideas such as modularity and robustness. Personally, I think that Bert and Håkon did a pretty damn good job of balancing principles like learnability, extensibility, longevity, interoperability and a host of other factors while still producing something powerful enough to scale for the whole web.

There’s one important phrase I want to highlight in the abstract of the 20 year old CSS proposal:

The proposed scheme provides a simple mapping between HTML elements and presentation hints.


Every line of CSS you write is a suggestion. You are not dictating how the HTML should be rendered; you are suggesting how the HTML should be rendered. I find that to be a very liberating and empowering idea.

My only regret is that—twenty years on from the birth of CSS—web browsers are killing the very idea of user stylesheets. Along with “view source”, this feature really drove home the idea that professional web developers are not the only ones who have a say in what gets rendered in web browsers …and that the web truly is for everyone.

Web Components

The Extensible Web Summit is taking place in Berlin today because Web Components are that important. I wish I could be there, but I’ll make do with the live notes, the IRC channel, and the octothorpe tag.

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous. That’s probably a good sign.

Here’s what I wrote after the last TAG meetup in London:

This really is a radically new and different way of adding features to browsers. In theory, it shifts the balance of power much more to developers (who currently have to hack together everything using JavaScript). If it works, it will be A Good Thing and result in expanding HTML’s vocabulary with genuinely useful features. I fear there may be a rocky transition to this new way of thinking, and I worry about backwards compatibility, but I can’t help but admire the audacity of the plan.

And here’s what I wrote after the Edge conference:

If Web Components work out, and we get a kind of emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once.

To explain…

The exciting thing about Web Components is that they give developers as much power as browser makers.

The frightening thing about Web Components is that they give developers as much power as browser makers.

When browser makers—and other contributors to web standards—team up to hammer out new features in HTML, they have design principles to guide them …at least in theory. First and foremost—because this is the web, not some fly-by-night “platform”—is the issue of compatibility:

Support existing content

Degrade gracefully

You can see those principles at work with newly-minted elements like canvas, audio, and video, where fallback content can be placed between the opening and closing tags so that older user agents aren’t left high and dry (which, in turn, encourages developers to start using these features long before they’re universally supported).
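
The pattern is simple (the file name here is hypothetical): a browser that doesn’t understand the video element ignores the tags it doesn’t recognise and renders the link inside instead.

<video src="interview.mp4" controls>
    <a href="interview.mp4">Download the video</a>
</video>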

You can see those principles at work in the design of datalist.

You can see those principles at work in the design of new form features which make use of the fact that browsers treat unknown input types as type="text" (again, encouraging developers to start using the new input long before they’re supported in every browser).
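
So markup like this (the attribute values are illustrative) gets you a slider in supporting browsers, and a perfectly usable text field everywhere else:

<input type="range" min="0" max="10" value="5">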

When developers are creating new Web Components, they could apply that same amount of thought and care; Chris Scott has demonstrated just such a pattern. Switching to Web Components does not mean abandoning progressive enhancement. If anything they provide the opportunity to create whole new levels of experience.

Web developers could ensure that their Web Components degrade gracefully in older browsers that don’t support Web Components (and no, “just polyfill it” is not a sustainable solution) or, for that matter, situations where JavaScript—for whatever reason—is not available.

Web developers could ensure that their Web Components are accessible, using appropriate ARIA properties.

But I fear that Sturgeon’s Law is going to dominate Web Components. The comparison that’s often cited for Web Components is the creation of jQuery plug-ins. And let’s face it, 90% of jQuery plug-ins are crap.

This wouldn’t matter so much if developers were only shooting themselves in the foot, but because of the wonderful spirit of sharing on the web, we might well end up shooting others in the foot too:

  1. I make something (to solve a problem).
  2. I’m excited about it.
  3. I share it.
  4. Others copy and paste what I’ve made.

Most of the time, that’s absolutely fantastic. But if the copying and pasting happens without critical appraisal, a lot of questionable decisions can get propagated very quickly.

To give you an example…

When Apple introduced the iPhone, it provided a mechanism to specify that a web page shouldn’t be displayed in a zoomed-out view. That mechanism, which Apple pulled out of their ass without going through any kind of standardisation process, was to use the meta element with a name of “viewport”:

<meta name="viewport" content="...">

The content attribute of a meta element takes a comma-separated list of values (think of name="keywords": you provide a comma-separated list of keywords). But in an early tutorial about the viewport value, code was provided which showed values separated with semicolons (like CSS declarations). People copied and pasted that code (which actually did work in Mobile Safari) and so every browser must support that usage:

Many other mobile browsers now support this tag, although it is not part of any web standard. Apple’s documentation does a good job explaining how web developers can use this tag, but we had to do some detective work to figure out exactly how to implement it in Fennec. For example, Safari’s documentation says the content is a “comma-delimited list,” but existing browsers and web pages use any mix of commas, semicolons, and spaces as separators.

Anyway, that’s just one illustration of how code gets shared, copied and pasted. It’s especially crucial during the introduction of a new technology to try to make sure that the code getting passed around is of a high quality.

I feel kind of bad saying this because the introductory phase of any new technology should be a time to say “Hey, go crazy! Try stuff out! See what works and what doesn’t!” but because Web Components are so powerful I think that mindset could end up doing a lot of damage.

Web developers have been given powerful features in the past. Vendor prefixes in CSS were a powerful feature that allowed browsers to push the boundaries of CSS without creating a Tower of Babel of proprietary properties. But because developers just copied and pasted code, browser makers are now having to support prefixes that were originally scoped to different rendering engines. That’s not the fault of the browser makers. That’s the fault of web developers.

With Web Components, we are being given a lot of rope. We can either hang ourselves with it, or we can make awesome …rope …structures …out of rope this analogy really isn’t working.

I’m not suggesting we have some kind of central authority that gets to sit in judgement on which Web Components pass muster (although Addy’s FIRST principles are a great starting point). Instead I think a web of trust will emerge.

If I see a Web Component published by somebody at Paciello Group, I can be pretty sure that it will be accessible. Likewise, if Christian publishes a Web Component, it’s a good bet that it will use progressive enhancement. And if any of the superhumans at Filament Group share a Web Component, it’s bound to be accessible, performant, and well thought-out.

Because—as is so often the case on the web—it’s not really about technologies at all. It’s about people.

And it’s precisely because it’s about people that I’m so excited about Web Components …and simultaneously so nervous about Web Components.

Higher standards

Many people are—quite rightly, in my opinion—upset about the prospect of DRM landing in the W3C HTML specification at the behest of media companies like Netflix and the MPAA.

This would mean that a web browser would have to include support for the plugin-like architecture of Encrypted Media Extensions if it wants to claim standards compliance.

A common rebuttal to any concerns about this is that any such concerns are hypocritical. After all, we’re quite happy to use other technologies—Apple TV, Silverlight, etc.—that have DRM baked in.

I think that this rebuttal is a crock of shit.

It is precisely because other technologies are locked down that it’s important to keep the web open.

I own an Apple TV. I use it to watch Netflix. So I’m using DRM-encumbered technologies all the time. But I will fight tooth and nail to keep DRM out of web browsers. That’s not hypocrisy. That’s a quarantine measure.

Stuart summarises the current situation nicely:

From what I’ve seen, this is a discussion of pragmatism: given that DRM exists and movies use it and people want movies, is it a good idea to integrate DRM movie playback more tightly with the web?

His conclusion perfectly encapsulates why I watch Netflix on my Apple TV and I don’t want DRM on the web:

The argument has been made that if the web doesn’t embrace this stuff, people won’t stop watching videos: they’ll just go somewhere other than the web to get them, and that is a correct argument. But what is the point in bringing people to the web to watch their videos, if in order to do so the web becomes platform-specific and unopen and balkanised?

As an addendum, I heard a similar “you’re being a hypocrite” argument when I raised security concerns about EME at the last TAG meetup in London:

I tried to steer things away from the ethical questions and back to the technical side of things by voicing my concerns with the security model of EME. Reading the excellent description by Henri, sentences like this should give you the heebie-jeebies:

Neither the browser nor the JavaScript program understand the bytes.

Alex told me that my phone already runs code that I cannot inspect and does things that I have no control over. So hey, what does it matter if my web browser does the same thing, right?

I’m reminded of something that Anne wrote four years ago when a vulnerability was discovered that affected Flash, Java, and web browsers:

We have higher standards for browsers.

Notes from the edge

I went up to London for the Edge Conference on Friday. It’s not your typical conference. Instead of talks, there are panels, but not the crap kind, where nobody says anything of interest: these panels are ruthlessly curated and prepared. There’s lots of audience interaction too, but again, not the crap kind, where one or two people dominate the discussion with their own pet topics: questions are submitted ahead of time, and then you are called upon to ask your question at the right moment. It’s like Question Time for the web.


The first panel was on that hottest of topics: Web Components. Peter Gasston kicked it off with a superb introduction to the subject. Have a read of his equally-excellent article in Smashing Magazine to get the gist.

Needless to say, this panel covered similar ground to the TAG meetup I attended a little while back, and left me with similar feelings: I’m equal parts excited and nervous; optimistic and worried. If Web Components work out, and we get a kind of emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once. And I think that’ll be (mostly) okay.

I butted into the discussion when the topic of accessibility came up. I was a little worried about what I was hearing, which was mainly, “Oh, ARIA takes care of the accessibility.” I felt like Web Components were passing the buck to ARIA, which would be fine if it weren’t for the fact that ARIA can’t cover all the possible use-cases of Web Components.

I chatted about this with Derek and Nicole during the break, but I’m not sure if I was articulating my thoughts very well, so I’ll have another stab at it here:

Let me set the scene for Web Components…

Historically, HTML has had a limited vocabulary for expressing interface widgets—mostly a bunch of specialised form fields like, say, the select element. The plus side is that there’s a consensus of understanding among the browsers, so you don’t have to explain what a select element does; the browsers already know. The downside is that whenever we want to add a new interface element like input type="range", it takes time to get into browsers and through the standards process. Web Components allow you to conjure up interface elements, and you don’t have to lobby browser makers or standards groups in order to make browsers understand your newly-minted element: you provide all the behavioural and styling instructions in one bundle.
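
To make that concrete, here’s a minimal sketch of such a bundle. The element name is invented, and the syntax is the current Custom Elements API (which has shifted a bit since the early spec drafts):

class FancyToggle extends HTMLElement {
  connectedCallback() {
    // markup and scoped styles live in the element's shadow root
    var root = this.attachShadow({mode: 'open'});
    root.innerHTML =
      '<style>button { font: inherit; }</style>' +
      '<button type="button">Toggle</button>';
    // the behaviour is bundled too: no separate script to wire up
    root.querySelector('button').addEventListener('click', function () {
      this.classList.toggle('is-pressed');
    });
  }
}
customElements.define('fancy-toggle', FancyToggle);

Drop a fancy-toggle element into a page and the browser treats it much like a built-in element.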

So Web Components make use of HTML, JavaScript, and (scoped) CSS. The possibility space for the HTML is infinite: if you need an element that doesn’t exist, you just invent it. The possibility space for the JavaScript is pretty close to infinite: it’s a Turing-complete language that can be wrangled to do just about anything. The possibility space for CSS isn’t infinite, but it’s pretty darn big: there’s not much you can’t do with it at this point.

What’s missing from that bundle of HTML, JavaScript, and CSS are hooks for assistive technology. Up until now, this is something we’ve mostly left to the browser. We don’t have to include any hooks for assistive technology when we use a select element because the browser knows what it is and can expose that knowledge to the assistive technology. If we’re going to start making up our own interface elements, we now have to take on the responsibility of providing that information to assistive technology.

How do we do that? Well, right now, our only option is to use ARIA …but the possibility space defined by ARIA is much, much smaller than HTML, JavaScript, or CSS.
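
For example, a made-up tab widget might lean on ARIA like this. The element names here are invented; the role and state attributes come from ARIA’s fixed vocabulary:

<tab-strip role="tablist">
  <tab-item role="tab" aria-selected="true" tabindex="0">First</tab-item>
  <tab-item role="tab" aria-selected="false" tabindex="-1">Second</tab-item>
</tab-strip>

That only works because tablist and tab happen to be roles that ARIA already defines. Invent a widget with no corresponding role and you’re out of luck.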

That’s not a criticism of ARIA: that’s the way it was designed. It’s a reactionary technology, designed to plug the gaps where the native semantics of HTML just don’t cut it. The vocabulary of ARIA was created by looking at the kinds of interface elements people are making—tabs, sliders, and so on. That’s fine, but it can’t scale to keep pace with Web Components.

The problem that Web Components solve—the fact that it currently takes too long to get a new interface element into browsers—doesn’t have a corresponding solution when it comes to accessibility hooks. Just adding more and more predefined ARIA roles won’t cut it—we need some kind of extensible accessibility that matches the expressive power of Web Components. We don’t need a bigger vocabulary in ARIA, we need a way to define our own vocabulary—an extensible ARIA, if you will.

Hmmm… I’m still not sure I’m explaining myself very well.

Anyway, I just want to make sure that accessibility doesn’t get left behind (again!) in our rush to create a new solution to our current problems. With Web Components still in their infancy, this feels like the right time to raise these concerns.

That highlights another issue, one that Nicole picked up on. It’s really important that the extensible web community and the accessibility community talk to each other.

Frankly, the accessibility community can be its own worst enemy sometimes. So don’t get me wrong: I’m not bringing up my concerns about the accessibility of Web Components in order to cry “fail!”—I just want to make sure that it’s on the table (and I’m glad that Alex is one of the people driving Web Components—his history with Dojo reassures me that we can push the boundaries of interface widgets on the web without leaving accessibility behind).

Anyway …that’s enough about that. I haven’t mentioned all the other great discussions that took place at Edge Conference.

Developer Tooling

The Web Components panel was followed by a panel on developer tools. This was dominated by representatives from different browsers, each touting their own set of in-browser tools. But the person who I really wanted to rally behind was Kenneth Auchenberg. He quite rightly asks why our developer tools and our text editors are two different apps. And rather than try to put text editors into developer tools, what we really want is to pull developer tools into our text editors …all the developer tools from all the browsers, not just one set of developer tools from one specific browser.

If you haven’t seen Kenneth’s presentation from Full Frontal, I urge you to watch it or listen to it.

I had my hand up to jump into the discussion towards the end, but time ran out so I didn’t get a chance. Paul came over afterwards and asked what I was going to say. Here’s what I told him…

I’m fascinated by the social dynamics around how browsers get made. This is an area where different companies are simultaneously collaborating and competing.

Broadly speaking, the feature set of a web browser can be divided into two buckets:

In one bucket, you’ve got the support for standards like HTML, CSS, JavaScript. Now, individual browsers might compete on how quickly or how thoroughly they get those standards implemented, but at this point, there’s no disagreement about the fact that proprietary crap is bad, standards are good, and that no matter how painful the process can be, browser makers all need to get together and work on standards together. Heck, even Apple can’t avoid collaborating on this stuff.

In the other bucket, you’ve got all the stuff that browsers compete against each other with: speed, security, the user interface, etc. A lot of this takes place behind closed doors, and that’s fine. There’s no real need for browser makers to collaborate on this stuff, and it could even hurt their competitive advantage if they did collaborate.

But here’s the problem: developer tools seem to be coming out of that second bucket instead of the first. There doesn’t seem to be much communication between the browser makers on developer tools. That’s fine if you see developer tools as an opportunity for competition, but it’s lousy if you see developer tools as an opportunity for interoperability.

This is why Kenneth’s work is so important. He’s crying out for more interoperability between browsers when it comes to developer tools. Why can’t they all use the same low-level APIs under the hood? Then they can still compete on how pretty their dev tools look, without making life miserable for developers who want to move quickly between browsers.

As painful as it might be, I think that browser makers should get together in some semi-formalised way to standardise this stuff. I don’t think that the W3C or the WHATWG are necessarily the right places for this kind of standardisation, but any kind of official cooperation would be good.

Build Process

The panel on build processes for front-end development kicked off with Gareth saying a few words. Some of those words included the sentence:

Make is probably older than you.

Cue glares from me and Scott.

Gareth also said that making websites means making software. We’re all making software—live with it.

This made me nervous. I’ve always felt that one of the great strengths of the web has been its low barrier to entry. The idea of a web that can only be made by qualified software developers doesn’t sound like a good thing to me.

Fortunately, things got cleared up later on. Somebody else asked a question about whether the barrier to entry was being raised by the complexity of tools like preprocessors, compilers, and transpilers. The consensus of the panel was that these are power tools for power users. So if someone were learning to make a website from scratch, you wouldn’t start them off with, say, Sass before they had learned CSS.

It was a fun panel, made particularly enjoyable by the presence of Kyle Simpson. I like the cut of his jib. Alas, I didn’t get the chance to tell him that in person. I had to duck out of the afternoon’s panels to get back to Brighton due to unforeseen family circumstances. But I did manage to catch some of the later panels on the live stream.

Closing thoughts

A common thread I noticed amongst many of the panels was a strong bias for decentralisation, rather than collaboration. That was most evident with Web Components—the whole point is that you can make up your own particular solution rather than waiting for a standards body. But it was also evident in the Developer Tools line-up, where each browser maker is reinventing the same wheels. And when it came to Build Process, it struck me that everyone is scratching their own itch instead of getting together to work on a shared solution.

There’s nothing wrong with that kind of Darwinian approach to solving our problems, but it does seem a bit wasteful. Mairead Buchan was at Edge Conference too and she noticed the same trend. Sounds like she’s going to do something about it too.


Stop me if you’ve heard this one before. You’re reading an article on Smashing Magazine or A List Apart or some other publication. The article is about a specific feature of CSS, or maybe JavaScript, or perhaps it’s exploring some of the newer additions to HTML. The article is good. It explains how to use this particular feature in your work. Then you read the comments. The first comment is inevitably from someone bemoaning the fact that they can’t use this feature because it isn’t supported in every browser. Specifically, it isn’t supported in some older version of Internet Explorer that they have to support. Therefore the entire article is rendered null and void.

That attitude infuriates and depresses me. It seems to me that it demonstrates a fundamental mismatch between how that person views the job of web development and the way the web actually works.

It is entirely possible—nay, desirable—to use features long before they are supported in every browser. That’s how we move the web forward. If we waited until there was universal support for a feature before we used it, we’d still be using CSS 1.0 and HTML 2.0.

If you use a CSS feature that isn’t supported in a particular browser—like, say, an older version of Internet Explorer—that browser will simply ignore that CSS rule. So you don’t get that rounded corner, or text shadow, or whatever it was. Browsers have the same error-handling mechanism for HTML: if they see something they don’t understand, they just ignore it. The browser will not throw an error. The browser will not stop rendering the page. Browsers are very liberal in what they accept.
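
For instance (the selector here is invented), a browser that doesn’t understand a declaration just skips it and applies the rest:

.promo {
  padding: 1em;
  border-radius: 0.5em; /* ignored by browsers that don't support it */
  text-shadow: 0 1px 0 rgba(255,255,255,0.5); /* likewise */
}

Older browsers still apply the padding; they simply render square corners and unshadowed text.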

It’s a bit trickier with JavaScript: browsers will throw an error; browsers will stop processing the script. That’s why it’s important to use feature detection. That’s also why you definitely don’t want to rely on JavaScript for rendering your content—it’s the most fragile layer of the front-end stack. Note, I’m not saying don’t use JavaScript; I’m saying don’t rely on JavaScript. Otherwise you’ve got yourself a SPOF: a single point of failure.
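
A minimal sketch of feature detection, using geolocation as an arbitrary example:

if ('geolocation' in navigator) {
  // this block only runs in browsers that support the API
  navigator.geolocation.getCurrentPosition(function (position) {
    console.log(position.coords.latitude, position.coords.longitude);
  });
}
// browsers without the feature skip the block: no error, no SPOF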

Anyway, my point is—and I can’t believe I still have to repeat this after all these years—websites do not need to look exactly the same in every browser.

“But my client!”, cries the Smashing Magazine commenter, “But my boss!”

If your client or boss expects that a website will look and behave the same in every browser on every device, then where did they get those expectations from? And rather than spending your time trying to meet those impossible expectations, I think your time would be better spent explaining why those expectations don’t match the reality of the web.

It’s like Mike Monteiro says about clients: if they just don’t get something about your design, that’s not their fault; it’s yours. Explaining your design work is part of your design work. It’s the same with web development. It’s our job to explain how the web works …and how the unevenly-distributed nature of browser capabilities is not a bug, it’s a feature.

That was true fourteen years ago when John Allsopp wrote A Dao Of Web Design, and it’s still true today. Back then, designers and developers were comparing the web to print and finding it wanting. These days, designers and developers are comparing the web to native and finding it wanting. In both cases, I feel like they’re missing the fundamental point of the web: you can provide universal access to content and tasks without providing exactly the same experience for every single browser or device. That’s not a failing of the web—that’s its killer app.

Paul Kinlan published a post called This Is the Web Platform where he tabulates the current state of browser support for various features. “Pretty damning,” he says:

the feature support that is ubiquitous across the web is actually pretty small especially if you are supporting IE8.

That’s true …from a certain point of view. But it depends on your definition of “support”. If your definition of “support” is “must look and work identically to the latest version of Chrome”, then yes, you’re going to have a smaller set of features you can use (you’re also going to live a miserable existence). But if your definition of “support” is “must be able to access the content and accomplish the task”, then as long as you’re using progressive enhancement, you can use all the features you want and support Internet Explorer 8, 7, 6, 5 …you can support every browser capable of connecting to the internet.

Like Brad said:

There is a difference between support and optimization.

I think part of the problem may be with the language we use. We talk about “the browser” when we should be talking about the browsers. I’m guilty of this. I’ll use phrases like “designing in the browser” or talk about “what we can do in the browser”, when really I should be talking about designing in the browsers and what we can do in the browsers.

It’s a subtle Lakoffian thing, but when we talk about “the browser” as if it were a single entity we might be unconsciously reinforcing the expectation that there is one Platonic ideal of browser rendering and that’s what we’re designing for.

There’s another phrase that bothers me, and it’s the phrase that Paul used in the title of his article: “the web platform”. This is something I talked about back in November in my presentation The Power of Simplicity:

But this idea of the web as a platform, I get why from a marketing perspective, we’d want to use that phrase, because it puts the web on equal footing with genuine platforms.

I would say Flash is a platform, and native: iOS and Android and these things. They are platforms, in that it’s all one bundle. And the web isn’t like that.

What I mean is, if you use the Flash platform, then anyone with the Flash plug-in can get your content. It’s on or off. It’s one or zero; it’s binary. Either they have the platform or they don’t. Either they get all your content, or none of your content.

And it’s similar with native apps. If you’ve got the right phone, you can get my app. All of my app. You don’t get bits of my app, you get all my app. Or you get none of it because you don’t have that particular phone that I’m supporting.

And the web is not like that. The web is not binary, one or zero, on or off. It’s not a platform where you get one hundred per cent or zero per cent. It’s this continuum.

The web is not a platform. It’s a continuum.

The complexity of HTML

Baldur Bjarnason has written down his thoughts on why he thinks HTML is too complex.

Now we’re back to seeing almost the same level of complexity and messiness in most web pages as we saw in the worst days of table-hacking. The semantic elements from HTML5 are largely unused.

The reason for this, as outlined in an old email by Matthew Thomas, is down to the lack of any visible benefit from browsers:

unless there is an immediate visual or behavioural benefit to using an element, most people will ignore it.

That’s a fair point, but I think it works in the opposite direction too. I’ve seen plenty of designers and developers who are reluctant to use HTML elements because of the default browser styling. The legend element is one example. A more recent one is input type="date"—until there’s a way for authors to have more say about the generated interface, there’s going to be resistance to its usage.

Anyway, Baldur’s complaint is that HTML is increasing in complexity by adding new elements—section, article, etc.—that provide theoretical semantic value, without providing immediate visible benefits. It’s much the same argument that informed Cory Doctorow’s Metacrap essay over a decade ago. It’s a strong argument.

There’s always a bit of a chicken-and-egg situation with any kind of language extension designed to provide data structure: why should authors use the new extensions if there’s no benefit? …and why should parsers (like search engines or browsers) bother doing anything with the new extensions if nobody’s using them?

Whether it’s microformats or new HTML structural elements, the vicious circle remains the same. The solution is to try to make the barrier to entry as low as possible for authors—the parsers/spiders/browsers will follow.

I have to say, I share some of Baldur’s concern. I’ve talked before about the confusion between section and article. Providing two new elements might seem better than providing just one, but in fact it just muddies the waters and confuses authors (in my experience).

But I realise that in the grand scheme of things, I’m nitpicking. I think HTML is in pretty good shape. Baldur said “simply put, HTML is a mess,” and he’s not wrong …but HTML has always been a mess. It’s the worst mess except for all the others that have been tried. When it comes to markup, I think that “perfect” is very much the enemy of “good” (just look at XHTML2).

When I was in San Francisco back in August, I had a good ol’ chat with Tantek about markup complexity. It started when he asked me what I thought of hgroup.

I actually found quite a few use cases for hgroup in my own work …but I could certainly see why it was a dodgy solution. The way that a wrapping hgroup could change the semantic value of an h2, or h3, or whatever …that’s pretty weird, right?

And then Tantek pointed out that there are a number of HTML elements that depend on their wrapper for meaning …and that’s a level of complexity that doesn’t sit well with HTML.

For example, a p is always a paragraph, and em is always emphasis …but li is only a list item if it’s wrapped in ul or ol, and tr is only a table row if it’s wrapped in a table.

(Interestingly, these are the very same elements that browsers will automatically adjust the DOM for—generating ul and table if the author doesn’t include it. It’s like the complexity is damage that the browsers have to route around.)

I had never thought of that before, but the idea has really stuck with me. Now it smells bad to me that it isn’t valid to have a figcaption unless it’s within a figure.

These context-dependent elements increase the learning curve of HTML, and that, in my own opinion, is not a good thing. I like to think that HTML should be easy to learn—and that the web would not have been a success if its lingua franca hadn’t been grokkable by mere mortals.

Tantek half-joked about writing HTML: The Good Parts. The more I think about it, the more I think it’s a pretty good idea. If nothing else, it could make us more sensitive to any proposed extensions to HTML that would increase its complexity without a corresponding increase in value.

Playing TAG

I was up in London yesterday to spend the day with the web developers of a Clearleft client, talking front-end architecture and strategies for implementing responsive design. ‘Twas a good day, although London always tires me out quite a bit.

On this occasion, I didn’t head straight back to Brighton. Instead I braved the subterranean challenges of the Tube to make my way across London to Google Campus, where a panel discussion was taking place. This was Meet The TAG.

TAG is the Technical Architecture Group at the W3C. It doesn’t work on any one particular spec. Instead, it’s a sort of meta-group to steer how standards get specified.

Gathered onstage yesterday evening were TAG members Anne van Kesteren, Tim Berners-Lee, Alex Russell, Yehuda Katz, and Daniel Appelquist (Henry Thompson and Sergey Konstantinov were also there, in the audience). Once we had all grabbed a (free!) beer and settled into our seats, Bruce kicked things off with an excellent question: in the intros, multiple TAG members mentioned their work as guiding emerging standards to make sure they matched the principles of the TAG …but what are those principles?

It seemed like a fairly straightforward question, but it prompted the first rabbit hole of the evening as Alex and Yehuda focussed in on the principle of “layering”—stacking technologies in a sensible way that provides the most power to web developers. It’s an important principle for sure, but it didn’t really answer Bruce’s question. I was tempted to raise my hand and reformulate Bruce’s question into three parts:

  1. Does the Technical Architecture Group have design principles?
  2. If so, what are they?
  3. And are they written down somewhere?

There’s a charter and that contains a mission statement, but that’s not the same as documenting design principles. There is an extensible web manifesto—that does document design principles—which contains the signatures of many (but not all) TAG members …so does that represent the views of the TAG? I’d like to get some clarification on that.

The extensible web manifesto does a good job of explaining the thinking behind projects like web components. It’s all about approaching the design of new browser APIs in a sensible (and extensible) way.

I mentioned that the TAG were a kind of meta-standards body, and in a way, what the extensible web manifesto—and examples like web components—are proposing is a meta-approach to how browsers implement new features. Instead of browser makers (in collaboration with standards bodies) creating new elements, UI widgets and APIs, developers will create new elements and UI widgets.

When Yehuda was describing this process, he compared it with the current situation. Currently, developers have to petition standards bodies begging them to implement some new kind of widget and eventually, if you’re lucky, browsers might implement it. At this point I interrupted to ask—somewhat tongue-in-cheek—”So if we get web components, what do we need standards bodies for?” Alex had an immediate response for that: standards bodies can look at what developers are creating, find the most common patterns, and implement them as new elements and widgets.

“I see,” I said. “So browsers and standards bodies will have a kind of ‘rough consensus’ based on …running code?”

“Yes!”, said Alex, laughing. “Jeremy Keith, ladies and gentlemen!”

So the idea with web components (and more broadly, the extensible web) is that developers will be able to create new elements with associated JavaScript functionality. Currently developers are creating new widgets using nothing but JavaScript. Ideally, web components will result in more declarative solutions and reduce our current reliance on JavaScript to do everything. I’m all for that.

But one thing slightly puzzled me. The idea of everyone creating whatever new elements they want isn’t a new one. That’s the whole idea behind XML (and by extension, XHTML) and yet the very same people who hated the idea of that kind of extensibility are the ones who are most eager about web components.

Playing devil’s advocate, I asked “How come the same people who hated RDF love web components?” (although what I really meant was RDFa—a means of extending HTML).

I got two answers. The first one was from Alex. Crucially, he said, a web component comes bundled with instructions on how it works. So it’s useful. That’s a big, big difference to the Tower of Babel scenario where everyone could just make up their own names for elements, but browsers have no idea what those names mean so effectively they’re meaningless.

That was the serious answer. The other answer I got was from Tim Berners-Lee. With a twinkle in his eye and an elbow in Alex’s ribs he said, “Well, these youngsters who weren’t around when we were doing things with XML all want to do things with JSON now, which is a much cooler format because you can store number types in it. So that’s why they want to do everything in JavaScript.” Cheeky trickster!

Anyway, there was plenty of food for thought in the discussion of web components. This really is a radically new and different way of adding features to browsers. In theory, it shifts the balance of power much more to developers (who currently have to hack together everything using JavaScript). If it works, it will be A Good Thing and result in expanding HTML’s vocabulary with genuinely useful features. I fear there may be a rocky transition to this new way of thinking, and I worry about backwards compatibility, but I can’t help but admire the audacity of the plan.

The evening inevitably included a digression into the black hole of DRM. As always, the discussion got quite heated and I don’t think anybody was going to change their minds. I tried to steer things away from the ethical questions and back to the technical side of things by voicing my concerns with the security model of EME. Reading the excellent description by Henri, sentences like this should give you the heebie-jeebies:

Neither the browser nor the JavaScript program understand the bytes.

But the whole DRM discussion was, fortunately, curtailed by Anne, who was ostensibly moderating the panel. Before it was, though, Sir Tim made one final point. Because of the heat of the discussion, people were calling for us to separate the societal questions (around intellectual property and payment) from the technical ones (around encryption). But, Sir Tim pointed out, that separation isn’t really possible. Even something as simple as the hyperlink has political assumptions built in about the kind of society that would value being able to link resources together and share them around.

That’s an important point, well worth remembering: all software is political. That’s one of the reasons why I’d really appreciate an explicit documentation of design principles from the Technical Architecture Group.

Still, it was a very valuable event. Bruce has also written down his description of the evening. Many thanks to Dan and the rest of the TAG team for putting it together. I’m very glad I went along. As well as the panel discussion, it was really nice to chat to Paul and have the chance to congratulate Jeni in person on her OBE.

Alas, I couldn’t stick around too long—I had to start making the long journey back to Brighton—so I said my goodbyes and exited. I didn’t have the opportunity to speak to Tim Berners-Lee directly, which is probably just as well: I’m sure I would’ve embarrassed myself by being a complete fanboy.

Icon fonts, unicode ranges, and IE8’s compatibility mode

While doing some browser testing this week, Mark came across a particularly wicked front-end problem. Something was triggering compatibility mode in Internet Explorer 8 and he couldn’t figure out what it was.

Compatibility mode was something introduced in IE8 to try not to “break the web”, as Microsoft kept putting it. Effectively it makes IE8 behave like IE7. Why would you ever want to do that? Well, if you make websites exactly the wrong way and code for a specific browser (like, say, IE7), then better, improved browsers are something to be feared and battled against. For the rest of us, better, improved browsers are something to be welcomed.

Shockingly, Microsoft originally planned to have compatibility mode enabled by default in Internet Explorer 8. It was bad enough that they were going to ship a browser with a built-in thermal exhaust port, they also contemplated bundling a proton torpedo with it too. Needless to say, right-minded people were upset at that possibility. I wrote about my concerns back in 2008.

Microsoft changed their mind about the default behaviour, but they still shipped IE8 with the compatibility mode “feature”, which Mark was very much experiencing as a bug. Something in the CSS was triggering compatibility mode, but frustratingly, there was no easy way of figuring out what was doing it. So he began removing chunks of CSS, reducing until he could focus in on the exact piece of CSS that was triggering IE8’s errant behaviour.

Finally, he found it. He was using an icon font. Now, that in itself isn’t enough to give IE8 its conniptions—an icon font is just a web font like any other. The only difference is that this font was using the private use area of Unicode. That’s the default setting if you’re creating an icon font using the excellent icomoon service. There’s a good reason for that:

Using Latin letters is not recommended for icon fonts. Using the Private Use Area of Unicode is the best option for icon fonts. By using PUA characters, your icon font will be compatible with screen readers. But if you use Latin characters, the screen reader might read single, meaningless letters, which would be confusing.

Well, it turns out that assigning glyphs to this private use area was causing IE8 to flip into compatibility mode. Once Mark assigned the glyphs to different characters, IE8 started behaving itself.
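
For illustration, this is the sort of rule an icomoon-style icon font relies on; the font name, class name, and code point here are all hypothetical:

@font-face {
  font-family: 'MyIcons';
  src: url('fonts/myicons.woff') format('woff');
}
.icon-star:before {
  font-family: 'MyIcons';
  content: '\e600'; /* U+E600: a code point in the private use area */
}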

Now, we haven’t tested to see if this is triggered by all of the 6400 available slots in the Unicode private use area. If someone wants to run that test (presumably using some kind of automation), ’twould be much appreciated.

Meantime, just be careful if you’re using the private use area for your icon fonts—you may just inadvertently wake the slumbering beast of compatibility mode.

Classy values

Two articles were published today, one by Thierry and one by Ben, that take diametrically opposed viewpoints on how developers should be using CSS.

I don’t want to attempt to summarise either article as they both give fairly lengthy in-depth explanations of their positions, although I find that they’re both a bit extreme. They’re both ostensibly about CSS, though in reality they’re more about the class values we add to our HTML (and remember, as Ben points out, class is an HTML attribute; there’s no such thing as CSS classes).

Thierry advocates entirely presentational values, like this:

    <div class="Bfc M-10">

Meanwhile Ben argues that this makes the markup less readable and maintainable, and that we should strive to have human-readable markup, more like the example that Thierry dismisses as inefficient:

    <div class="media">

Here’s my take on it: this isn’t an either/or situation. I think that both extremes are problematic. If you make your class values entirely presentational in order to make your CSS more modular, you’re offloading a maintainability expense onto your markup. But if you stick to strictly semantically-meaningful class values, your CSS is probably going to be a lot harder to write in a modular, maintainable way.

Fortunately, the class attribute takes a space-separated list of values. That means you can have your OOCSS/SMACSS/BEM cake and semantically eat it too:

<div class="media Bfc M-10">

The “media” value describes the content, which is traditionally what a semantically-meaningful class name is supposed to do. Meanwhile, the “Bfc” and “M-10” values say nothing about the nature of the content, but everything about how it should be displayed.
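
For what it’s worth, I’d guess those presentational names map to declarations along these lines (my guesses, not Thierry’s actual CSS):

.Bfc { overflow: hidden; } /* presumably establishes a block formatting context */
.M-10 { margin: 10px; }    /* presumably a ten-pixel margin */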

Is it wasteful to use a class value that is never used by the CSS? Possibly. But I think it serves a useful purpose for any humans (developer or end user) reading the markup, or potentially machines parsing/scraping the markup.

I’ve used class values that never get touched by CSS. Here on adactio.com, if I want to mark up something as being a scare quote, I’ll write:

<q class="scare">scare quote</q>

But nowhere in my CSS do I use a selector like this:

q.scare { }

Speaking of scare quotes….

Both of the aforementioned articles begin by establishing that their approach is the minority viewpoint and that web developers everywhere are being encouraged to adopt the other way of working.

Ben writes:

Most disconcertingly, these methodologies have seen widespread adoption thanks to prominent bloggers evangelising their usage as ‘best practice’.

Meanwhile Thierry writes:

But when it comes to the presentational layer, “best practice” goes way beyond the separation of resources.

Perhaps both authors have more in common than they realise: they certainly seem to agree that any methodology you don’t agree with should be labelled as a best practice and wrapped in scare quotes.

Frankly, I think this attitude does more harm than good. A robust argument should stand on its own strengths—it shouldn’t rely on knocking down straw men that supposedly represent the opposing viewpoint.

Some of the trigger words that grind my gears are: dogma, zealot, purist, sacred, and pedant. I’ve mentioned this before:

Whenever someone labels those they disagree with as “dogmatic” or “purist”, it’s a lazy meaningless barb (like calling someone a hipster). “I’m passionate; you’re dogmatic. I sweat the details; you’re a purist.” Even when I agree completely with the argument being made—as was the case with Andy’s superb talk at South by Southwest this year—I cringe to hear the “dogma” attack employed: especially when the argument is strong enough to stand up on its own without resorting to Croftian epithets.

That’s what I was getting at when I tried to crack this joke on Twitter:

…but all I ended up doing was making a cheap shot about designers (and developers for that matter), which wasn’t my intention. The point I was intending to make was that we all throw a lot of stones from our glass houses.

So if you’re going to write an article about the right way to do something, don’t start by labelling dissenting schools of thought as dogmatic or purist.

Physician, heal thyself.

The ghost of browsers past

Even before a line of code was written for the line-mode browser simulator when we gathered together at CERN, there was a gleeful period of digital spelunking.

[Images: Brian goes browsing; demonstration data sources]

We poked at the markup of the first ever website:

  • What’s that NEXTID element? Turns out it’s something specific to the NeXT operating system.
  • Why does the first iteration of HTML already contain H1 through to H6? It’s because they were lifted wholesale from a flavour of SGML—Standard Generalized Markup Language—that was already in use at CERN.

Oh, and Brian asked Robert Cailliau why they went with the term World Wide Web. “Well,” he said, “we had to call it something. And we thought we could always change it later.”

Then there was the story of the line-mode browser. It was created by Nicola Pellow, who was a student at CERN in 1990. She later worked on the Mac browser but her involvement with kickstarting the world wide web ended around 1993. She never showed up to any of the reunions.

We poked around in the (surprisingly short) source code of the line-mode browser. We found the lines that described how elements should be styled—the term “style sheet” appeared in a comment!

[Images: proto-stylesheet; parsing the parser]

If you’ve fired up the line-mode browser simulator and run some websites through it, you’ll probably see occasions where a whole bunch of JavaScript—nestled between script tags in the head of the document—gets rendered to the screen.


We could’ve hidden that JavaScript, but we made a deliberate decision to display it. That’s what the line-mode browser would have done. The script element didn’t exist back then. Heck, JavaScript didn’t exist back then. So browsers would have handled the unknown element in the standard HTML way: ignore the opening and closing tags and just render what’s in-between them. That’s still the error-handling model for unrecognised elements in HTML.

This is why we used to write our JavaScript like this:

<script language="JavaScript" type="text/javascript">
<!--
(JavaScript goes here)
//-->
</script>

The HTML comments stopped the JavaScript from being rendered to the screen in older browsers (like the line-mode browser). Using the opening HTML comment <!-- is functionally equivalent to // single-line comments in JavaScript …although you still need to prefix the closing --> comment with a //.

I remember doing this when I first started making websites in the 90s. You can see it if you view source on the first version of this website.

Later on, we all switched to XHTML so we updated the syntax to make it valid XML.

<script type="text/javascript">
//<![CDATA[
(JavaScript goes here)
//]]>
</script>

The <![CDATA[ part stops an XML parser from trying to parse the JavaScript. But HTML parsers would choke on that because it starts with an angle bracket. Hence the JavaScript-style // comment.

Anyway, we don’t bother with HTML or XHTML comments at the start of our script blocks anymore. And that’s why the line-mode browser simulator renders the JavaScript to the screen.

Note that the JavaScript isn’t executed. That’s thanks to a clever little hack by Remy: the line-mode browser simulator changes the type attribute of every script element to text/plain, effectively defusing them. Smart!
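
Here’s a minimal sketch of that idea (not Remy’s actual code), assuming the simulator holds the fetched page’s source in a string before rendering it:

// fetchedHTML is assumed to contain the fetched page's markup
var doc = new DOMParser().parseFromString(fetchedHTML, 'text/html');
// defuse every script: browsers won't execute unrecognised script types
var scripts = doc.querySelectorAll('script');
for (var i = 0; i < scripts.length; i++) {
  scripts[i].setAttribute('type', 'text/plain');
}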

Double tap delay

Even though my encounter with Ted yesterday was brief, we still managed to turn the conversation to browsers, standards, and all things web.

Specifically, we talked about this proposal in Blink related to the 300 millisecond delay that mobile browsers introduce after a tap event.

Why do browsers have this 300 millisecond delay? Well, you know when you’re looking at a fixed-width desktop-based website on a mobile phone, and everything is zoomed out, and one of the ways that you can zoom in to a specific portion of the page is to double tap on that content? A double tap is defined as two taps less than 300 milliseconds apart. So whenever you tap on something in a touch-based browser, it needs to wait for that length of time to see if you’re going to turn that single tap into a double tap.

The overall effect is that tap actions feel a little bit laggy on the web compared to native apps. You can fix this by using the fastclick code from FT Labs, but I always feel weird solving a problem on mobile by throwing more front-end code at it.
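
If you do reach for FastClick, though, its documented usage is only a couple of lines:

document.addEventListener('DOMContentLoaded', function () {
  // FastClick listens for touch events and fires click events immediately
  FastClick.attach(document.body);
}, false);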

Hence the Blink proposal: if the author has used a meta viewport declaration to set width=device-width (effectively saying “hey, I know what I’m doing: this content doesn’t need to be zoomed”), then the 300 millisecond delay could be removed from tap events. Note: this only affects double taps—pinch zoom is unaffected.
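
For reference, that’s this familiar declaration:

<meta name="viewport" content="width=device-width">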

This sounds like a sensible idea to me, but Ted says that he sometimes still likes to double tap to zoom even in responsive designs. He’d prefer a per-element solution rather than a per-document meta element. An attribute? Or maybe a CSS declaration similar to pointer events?

I thought for a minute, and then I spitballed this idea: what if the 300 millisecond delay only applied to non-focusable elements?

After all, the tap delay is only noticeable when you’re trying to tap on a focusable element: links, buttons, form fields. Double tapping tends to happen on text content: divs, paragraphs, sections. That’s assuming you are actually using buttons and links for buttons and links—not spans or divs à la Google.

And if the author decides they want to remove the tap delay on a non-focusable element, they can always make it focusable by adding tabindex=-1 (if that still works …does that still work? I don’t even know any more).

Anyway, that was my not-very-considered idea, but on first pass, it doesn’t strike me as being obviously stupid or crazy.

So, how about it, browser makers? Does removing the 300 millisecond delay on focusable elements—possibly in combination with the meta viewport declaration—make sense?

Placehold on tight

I’m a big fan of the placeholder attribute introduced in HTML5. In my book, I described the cowpath it was paving:

  1. When a form field has no value, insert some placeholder text into it.
  2. When the user focuses on that field, remove the placeholder text.
  3. If the user leaves the field and the field still has no value, reinstate the placeholder text.

That’s the behaviour that browsers mimicked when they began implementing the native placeholder functionality. I think Opera was first. Now all the major browsers support it.

But in some browsers, the details of that behaviour have changed slightly. In Chrome and Safari, when the user focuses on the field, the placeholder text remains. It’s not until the user actually begins to type that the placeholder text is removed.

Now, personally speaking, I’m not keen on this variation. It seems that I’m not alone. In an email to the WHATWG, Markus Ernst describes the problems that he’s noticed in user-testing where users are trying (and, of course, failing) to select the placeholder text in order to delete it before they begin typing.

It seems that a relevant number of users do not even try to start typing as long as the placeholder text remains visible.

But this isn’t so clear-cut. A quick straw poll at Clearleft showed that opinions were divided on this. Some people prefer the newer behaviour …however it quickly became apparent that the situations they were thinking of were examples of where placeholder has been abused, i.e. made to act as a label for the form field. In that situation, I agree, it would definitely be more useful for the labelling text to remain visible for as long as possible. But that’s not what placeholder is for. The placeholder attribute is intended to show a short hint (such as an example value)—it should be used in addition to a label; not instead of a label. I tend to use example content in my placeholder value and I nearly always begin with “e.g.”:

<label for="fn">Your Name</label>
<input id="fn" name="fn" type="text" placeholder="e.g. Joe Bloggs">

(Don’t forget: generating placeholders from datalists can be a handy little pattern.)
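
A rough sketch of that pattern; it assumes a hypothetical input with a list attribute pointing at a datalist:

// find a field that has an associated datalist
var input = document.querySelector('input[list]');
if (input) {
  var datalist = document.getElementById(input.getAttribute('list'));
  var option = datalist && datalist.querySelector('option');
  if (option && !input.placeholder) {
    // recycle the first suggestion as example text
    input.placeholder = 'e.g. ' + option.value;
  }
}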

So if you’re using placeholder incorrectly as a label, then the WebKit behaviour is probably what you want. But if you’re using placeholder as intended, then the behaviour in the other browsers is probably more desirable. If you want to get Safari and Chrome to mimic the behaviour of the other browsers, here’s a handy bit of CSS (from that same thread on the WHATWG mailing list):

[placeholder]:focus::-webkit-input-placeholder {
  color: transparent;
}

You can see that in action on search forms at The Session for recordings, events, discussions, etc.

Now, if you do want your label—or input mask—to appear within your form field and remain even when the user focuses on the field, go ahead and do that. Use a label element with some CSS and JavaScript trickery to get the effect you want. But don’t use the placeholder attribute.