Tags: google
The imitation game

Jason shared some thoughts on designing progressive web apps. One of the things he’s pondering is how much you should try to make your web-based offering look and feel like a native app.

This was prompted by an article by Owen Campbell-Moore over on Ev’s blog called Designing Great UIs for Progressive Web Apps. He begins with this advice:

Start by forgetting everything you know about conventional web design, and instead imagine you’re actually designing a native app.

This makes me squirm. I mean, I’m all for borrowing good ideas from other media—native apps, TV, print—but I don’t think that inspiration should mean imitation. For me, that always results in an interface that sits in a kind of uncanny valley of being almost—but not quite—like the thing it’s imitating.

With that out of the way, most of the recommendations in Owen’s article are sensible ideas about animation, input, and feedback. But then there’s recommendation number eight: Provide an easy way to share content:

PWAs are often shown in a context where the current URL isn’t easily accessible, so it is important to ensure the user can easily share what they’re currently looking at. Implement a share button that allows users to copy the URL to the clipboard, or share it with popular social networks.

See, when a developer has to implement a feature that the browser should be providing, that seems like a bad code smell to me. This is a problem that Opera is solving (and Google says it is solving, while at the same time penalising developers who expose the URL to end users).

Anyway, I think my squeamishness about all the advice to imitate native apps is because it feels like a cargo cult. There seems to be an inherent assumption that native is intrinsically “better” than the web, and that the only way that the web can “win” is to match native apps note for note. But that misses out on all the things that only the web can do—instant distribution, low-friction sharing, and the ability to link to any other resource on the web (and be linked to in turn). Turning our beautifully-networked nodes into standalone silos just because that’s the way that native apps have to work feels like the cure that kills the patient.

If anything, my advice for building a progressive web app would be the exact opposite of Owen’s: don’t forget everything you’ve learned about web design. In my opinion, the term “progressive web app” can be read in order of priority:

  1. Progressive—build in a layered way so that anyone can access your content, regardless of what device or browser they’re using, rewarding the more capable browsers with more features.
  2. Web—you’re building for the web. Don’t lose sight of that. URLs matter. Accessibility matters. Performance matters.
  3. App—sure, borrow what works from native apps if it makes sense for your situation.

Jason asks questions about how your progressive web app will behave when it’s added to the home screen. How much do you match the platform? How do you manage going chromeless? And the big one: what do users expect?

Will people expect an experience that maps to native conventions? Or will they be more accepting of deviation because they came to the app via the web and have already seen it before installing it?

These are good questions and I share Jason’s hunch:

My gut says that we can build great experiences without having to make it feel exactly like an iOS or Android app because people will have already experienced the Progressive Web App multiple times in the browser before they are asked to install it.

In all the messaging from Google about progressive web apps, there’s a real feeling that the ability to install to—and launch from—the home screen is a game-changer. I’m not so sure that we should be betting the farm on that feature (the offline possibilities opened up by service workers feel like more of a game-changer to me).

People have been gleefully passing around the statistic that the average number of native apps installed per month is zero. So how exactly will we measure the success of progressive web apps against native apps …when the average number of progressive web apps installed per month is zero?

I like Android’s add-to-home-screen algorithm (although it needs tweaking). It’s a really nice carrot to reward the best websites with. But let’s not get carried away. I think that most people are not going to click that “add to home screen” prompt. Let’s face it, we’ve trained people to ignore prompts like that. When someone is trying to find some information or complete a task, a prompt that pops up saying “sign up to our newsletter” or “download our native app” or “add to home screen” is a distraction to be dismissed. The fact that only the third example is initiated by the operating system, rather than the website, is irrelevant to the person using the website.

Getting the “add to home screen” prompt for https://huffduffer.com/ on Android Chrome.

My hunch is that the majority of people will still interact with your progressive web app via a regular web browser view. If, then, only a minority of people are going to experience your site launched from the home screen in a native-like way, I don’t think it makes sense to prioritise that use case.

The great thing about progressive web apps is that they are first and foremost websites. Literally everyone who interacts with your progressive web app is first going to do so the old-fashioned way, by following a link or typing in a URL. They may later add it to their home screen, but that’s just a bonus. I think it’s important to build progressive web apps accordingly—don’t pretend that it’s just like building a native app just because some people will be visiting via the home screen.

I’m worried that developers are going to think that progressive web apps are something that need to be built from scratch; that you have to start with a blank slate and build something new in a completely new way. Now, there are some good examples of this kind of one-off progressive web app—The Guardian’s RioRun is nicely done. But I don’t think that the majority of progressive web apps should fall into that category. There’s nothing to stop you taking an existing website and transforming it step-by-step into a progressive web app:

  1. Switch over to HTTPS if you aren’t already.
  2. Use a service worker, even if it’s just to provide a custom offline page and cache some static assets.
  3. Make a manifest file to point to an icon and specify some colours.

See? Not exactly a paradigm shift in how you approach building for the web …but those deceptively straightforward steps will really turbo-boost your site.
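
To make step 2 concrete, here’s a sketch of about the simplest useful Service Worker script—it caches a handful of static assets at install time and falls back to a custom offline page when the network fails. The file names here are illustrative, not prescriptive:

// serviceworker.js (the name is up to you)
const cacheName = 'static-v1';
const staticAssets = [
  '/offline.html',
  '/css/styles.css',
  '/js/scripts.js'
];

// Cache the static assets when the Service Worker is installed.
self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(cacheName)
      .then(cache => cache.addAll(staticAssets))
  );
});

// Serve from the cache where possible, fall back to the network,
// and show the custom offline page if both of those fail.
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request)
      .then(cached => cached || fetch(event.request))
      .catch(() => caches.match('/offline.html'))
  );
});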

I’m really excited about progressive web apps …but mostly for the “progressive” and “web” parts. Maybe I’ll start calling them progressive web sites. Or progressive web thangs.

Backdoor Service Workers

When I was moderating that panel at the Progressive Web App Dev Summit, I brought up this point about twenty minutes in:

Alex, in your talk yesterday you were showing the AMP demo there with the Washington Post. You click through and there’s the Washington Post AMP thing, and it was able to install the Service Worker with that custom element. But I was looking at the URL bar …and that wasn’t the Washington Post. It was on the CDN from AMP. So I talked to Paul Bakaus from the AMP team, and he explained that it’s an iframe, and using an iframe you can install a Service Worker from somewhere else.

Alex and Emily explained that, duh, that’s the way iframes work. It makes sense when you think about it—an iframe is pretty much the same as any other browser window. Still, it feels like it might violate the principle of least surprise.

Let’s say you followed my tongue-in-cheek advice to build a progressive web app store. Your homepage might have the latest 10 or 20 progressive web apps. You could also include 10 or 20 iframes so that those sites are “pre-installed” for the person viewing your page.

Enough theory. Here’s a practical example…

Suppose you’ve never visited the website for my book, html5forwebdesigners.com (if you have visited it, and you want to play along with this experiment, go to your browser settings and delete anything stored by that domain).

You happen to visit my website adactio.com. There’s a little blurb buried down on the home page that says “Read my book” with a link through to html5forwebdesigners.com. I’ve added this markup after the link:

<iframe src="https://html5forwebdesigners.com/iframe.html" style="width: 0; height: 0; border: 0"></iframe>

That hidden iframe pulls in an empty page with a script element:

<!DOCTYPE html>
<html lang="en">
<meta charset="utf-8">
<title>HTML5 For Web Designers</title>
<script>
// Register the site’s Service Worker (the script name is illustrative)
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}
</script>

That registers the Service Worker on my book’s site which then proceeds to install all the assets it needs to render the entire site offline.

There you have it. Without ever visiting the domain html5forwebdesigners.com, the site has been pre-loaded onto your device because you visited the domain adactio.com.

A few caveats:

  1. I had to relax the Content Security Policy for html5forwebdesigners.com to allow the iframe to be embedded on adactio.com:

    Header always set Access-Control-Allow-Origin "https://adactio.com"
  2. If your browser has “Block third-party cookies and site data” selected in its settings, the iframe-invoked Service Worker won’t install:

    Uncaught (in promise) DOMException: Failed to register a ServiceWorker: The user denied permission to use Service Worker.

The example I’ve put together here is relatively harmless. But it’s possible to imagine more extreme scenarios. Imagine there’s a publishing company that has 50 websites for 50 different publications. Each one of them could have an empty page waiting to be embedded via iframe from the other 49 sites. You only need to visit one page on one of those 50 sites to have 50 Service Workers spun up and caching assets in the background.
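
A sketch of how little code that would take—the domain names here are invented for illustration:

// On any one of the 50 sites: quietly embed the other publications
// so that each of their Service Workers gets registered in turn.
const siblingSites = [
  'https://publication-two.example.com',
  'https://publication-three.example.com'
  // …and so on, for all the sibling publications
];
siblingSites.forEach(url => {
  const iframe = document.createElement('iframe');
  iframe.src = url + '/iframe.html';
  iframe.style.cssText = 'width: 0; height: 0; border: 0';
  document.body.appendChild(iframe);
});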

There’s the potential here for a tragedy of the commons. I hope we’ll be sensible about how we use this power.

Just don’t tell the advertising industry about this.

The Progressive Web App Dev Summit

I was in Amsterdam again at the start of last week for the Progressive Web App Dev Summit, organised by Google. Most of the talks were given by Google employees, but not all—this wasn’t just a European version of Google I/O. Representatives from Opera, Mozilla, Samsung, and Microsoft were also there, and there were quite a few case studies from independent companies. That was very gratifying to see.

Almost all the talks were related to progressive web apps. I say, “almost all” because there were occasional outliers. There was a talk on web components, which don’t have anything directly to do with progressive web apps (and I hope there won’t be any attempts to suggest otherwise), and another on rendering performance that had good advice for anyone building any kind of website. Most of the talks were about the building blocks of progressive web apps: HTTPS, Service Workers, push notifications, and all that jazz.

I was very pleased to see that there was a move away from suggesting that single-page apps with the app-shell architecture were the only way of building progressive web apps.

There were lots of great examples of progressively enhancing existing sites into progressive web apps. Jeff Posnick’s talk was a step-by-step walkthrough of doing exactly that. Reading through the agenda, I was really happy to see this message repeated again and again:

In this session we’ll take an online-only site and turn it into a fully network-resilient, offline-first installable progressive web app. We’ll also break out of the app shell and look at approaches that better-suit traditional server-driven sites.

Progressive Web Apps should work everywhere for every user. But what happens when the technology and APIs are not available in your users’ browsers? In this talk we will show you how you can think about and build sites that work everywhere.

Progressive Web Apps should load fast, work great offline, and progressively enhance to a better experience in modern browsers.

How do you put the “progressive” into your current web app?

You can (and should!) build for the latest and greatest browsers, but through a collection of fallbacks and progressive enhancements you can bring a lot of tomorrow’s web to yesterday’s browsers.

I think this is a really smart move. It’s a lot easier to sell people on incremental changes than it is to convince them to rip everything out and start from scratch (another reason why I’m dubious about any association between web components and progressive web apps—but I’ll save that for another post).

The other angle that I really liked was the emphasis on emerging markets, not just wealthy westerners. Tal Oppenheimer’s talk Building for Billions was superb, and Alex kicked the whole thing off with some great facts and figures on mobile usage.

In my mind, these two threads are very much related. Progressive enhancement allows us to have our progressive web app cake and eat it too: we can make websites that can be accessed on devices with limited storage and slow networks, while at the same time ensuring those same sites take advantage of all the newest features in the latest and greatest browsers. I talked to a lot of Google devs about ways to measure the quality of a progressive web app, and I’m coming to the conclusion that a truly high-quality site is one that can still be accessed by a proxy browser like Opera Mini, while providing a turbo-charged experience in the latest version of Chrome. If you think that sounds naive or unrealistic, then I think you might want to dive deeper into all the technologies that make progressive web apps so powerful—responsive design, Service Workers, a manifest file, HTTPS, push notifications; all of those features can and should be used in a layered fashion.

Speaking of Opera, Andreas kind of stole the show, demoing the latest interface experiments in Opera Mobile.

That ambient badging that Alex was talking about? Opera is doing it. The importance of being able to access URLs that I’ve been ranting about? Opera is doing it.

Then we had the idea to somehow connect it to the “pull-to-refresh” spinner, as a secondary gesture to the left or right.

Nice! I’m looking forward to seeing what other browsers come up with. It’s genuinely exciting to see all these different browser makers in complete agreement on which standards they want to support, while at the same time differentiating their products by competing on user experience. Microsoft recently announced that progressive web apps will be indexed in their app store just like native apps—a really interesting move.

The Progressive Web App Dev Summit wrapped up with a closing panel, which I had the honour of hosting. I thought it was very brave of Paul to ask me to host this, considering my strident criticism of Google’s missteps.

Initially there were going to be six people on the panel. Then it became eight. Then I blinked and it suddenly became twelve. Less of a panel, more of a jury. Half the panelists were from Google and the other half were from Opera, Microsoft, Mozilla, and Samsung. Some of those representatives were a bit too media-trained for my liking: Ali from Microsoft tried to just give a spiel, and Alex Komoroske from Google wouldn’t give me a straight answer about whether he wants Android Instant apps to succeed—Jake was a bit more honest. I should have channelled my inner Paxman a bit more.

Needless to say, nobody from Apple was at the event. No surprise there. They’ve already promised to come to the next event. There won’t be an Apple representative on stage, obviously—that would be asking too much, wouldn’t it? But at least it looks like they’re finally making an effort to engage with the wider developer community.

All in all, the Progressive Web App Dev Summit was good fun. I found the event quite inspiring, although the sausage festiness of the attendees was depressing. It would be good if the marketing for these events reached a wider audience—I met a lot of developers who only found out about it a week or two before the event.

I really hope that people will come away with the message that they can get started with progressive web apps right now without having to re-architect their whole site. Right now the barrier to entry is having your site running on HTTPS. Once you’ve got that up and running, it’s pretty much a no-brainer to add a manifest file and a basic Service Worker—to boost performance if nothing else. From there, you’re in a great position to incrementally add more and more features—an offline-first approach with your Service Worker, perhaps? Or maybe start dabbling in push notifications. The great thing about all of these technologies (with the glaring exception of web components in their current state) is that you don’t need to bet the farm on any of them. Try them out. Use them as enhancements. You’ve literally got nothing to lose …and your users have everything to gain.

Amsterdam Brighton Amsterdam

I’m about to have a crazy few days that will see me bouncing between Brighton and Amsterdam.

It starts tomorrow. I’m flying to Amsterdam in the morning and speaking at this Icons event in the afternoon about digital preservation and long-term thinking.

Then, the next morning, I’ll be opening up the inaugural HTML Special, which is a new addition to the CSS Day conference. Each talk on Thursday will cover one HTML element. I am honoured to be speaking about the A element. Here’s the talk description:

The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…

Enquire within upon everything.

I’ve been working all out to get this talk done and I finally wrapped it up today. Right now, I feel pretty happy with it, but I bet I’ll change that opinion in the next 48 hours. I’m pretty sure that this will be one of those talks that people will either love or hate, kind of like my 2008 dConstruct talk, The System Of The World.

After CSS Day, I’ll be heading back to Brighton on Saturday, June 18th to play a Salter Cane gig in The Greys pub. If you’re around, you should definitely come along—not only is it free, but there will be some excellent support courtesy of Jon London, and Lucas and King.

Then, the next morning, I’ll be speaking at DrupalCamp Brighton, opening up day two of the event. I won’t be able to stick around long afterwards though, because I need to skedaddle to the airport to go back to Amsterdam!

Google are having their Progressive Web App Dev Summit there on Monday and Tuesday. I’ll be moderating a panel on the second day, so I’ll need to pay close attention to all the talks. I’ll be grilling representatives from Google, Samsung, Opera, Microsoft, and Mozilla. Considering my recent rants about some very bad decisions on the part of Google’s Chrome team, it’s very brave of them to ask me to be there, much less moderate a panel in public.

You can still register for the event, by the way. Like the Salter Cane gig, it’s free. But if you can’t make it along, I’d still like to know what you think I should be asking the panelists about.

Got a burning question for browser/device makers? Write it down, post it somewhere on the web with a link back to this post, and then send me a web mention (there’s a form for you to paste in the URL at the bottom of this post).

Progressive web app store

Remember when Chrome developers decided to remove the “add to home screen” prompt for progressive web apps that used display: browser in their manifest files? I wasn’t happy.

Alex wrote about their plans to offer URL access for all installed progressive web apps, regardless of what’s in the manifest file. I look forward to that. In the meantime, it makes no sense to punish the developers who want to give users access to URLs.

Alex has acknowledged the cart-before-horse-putting, and written a follow-up post called PWA Discovery: You Ain’t Seen Nothin Yet:

The browser’s goal is clear: create a hurdle tall enough that only sites that meet user expectations of “appyness” will be prompted for. Maybe Chrome’s version of this isn’t great! Feedback like Ada’s, Andrew’s, and Jeremy’s is helpful in letting us know how to improve. Thankfully, in most of the cases flagged so far, we’ve anticipated the concerns but maybe haven’t communicated our thinking as well as we should have. This is entirely my fault. This post is my penance.

It turns out that the home-screen prompt was just the first stab. There’s a really interesting idea Alex talks about called “ambient badging”:

Wouldn’t it be great if there were a button in the URL bar that appeared whenever you landed on a PWA that you could always tap to save it to your homescreen? A button that showed up in the top-level UI only when on a PWA? Something that didn’t require digging through menus and guessing about “is this thing going to work well when launched from the homescreen?”

I really, really like this idea. It kind of reminds me of when browsers would flag up whether or not a website had an RSS feed, and allow you to subscribe right then and there.

Hold that thought. Because if you remember the history of RSS, it ended up thriving and withering based on the fortunes of one single RSS reader.

Whenever the discoverability of progressive web apps comes up, the notion of an app store for the web is inevitably floated. Someone raised it as a question at one of the Google I/O panels: shouldn’t Google provide some kind of app store for progressive web apps? …to which Jake cheekily answered that yes, Google should create some kind of engine that would allow people to search for these web apps.

He’s got a point. Progressive web apps live on the web, so any existing discovery method on the web will work just fine. Remy came to a similar conclusion:

Progressive web apps allow users to truly “visit our URL to install our app”.

Also, I find it kind of odd that people think that it needs to be a company the size of Google that would need to build any kind of progressive web app store. It’s the web! Anybody can build whatever they want, without asking anyone else for permission.

So if you’re the entrepreneurial type, and you’re looking for the next Big Idea to make a startup out of, I’ve got one for you:

Build a directory of progressive web apps.

Call it a store if you want. Or a marketplace. Heck, you could even call it a portal, because, let’s face it, that’s kind of what app stores are.

Opera have already built you a prototype. It’s basic but it already has a bit of categorisation. As progressive web apps get more common though, what we’re really going to need is curation. Again, there’s no reason to wait for somebody else—Google, Opera, whoever—to build this.

Oh, I guess I should provide a business model too. Hmmm …let me think. Advertising masquerading as “featured apps”? I dunno—I haven’t really thought this through.

Anyway, you might be thinking, what will happen if someone beats you to it? Well, so what? People will come to your progressive web app directory because of your curation. It’s actually a good thing if they have alternatives. We don’t want a repeat of the Google Reader situation.

It’s hard to recall now, but there was a time when there wasn’t one dominant search engine. There’s nothing inevitable about Google “owning” search or Facebook “owning” social networking. In fact, they both came out of an environment of healthy competition, and crucially neither of them were first to market. If that mattered, we’d all still be using Yahoo and Friendster.

So go ahead and build that progressive web app store. I’m serious. It will, of course, need to be a progressive web app itself so that people can install it to their home screens and perhaps even peruse your curated collection when they’re offline. I could imagine that people might even end up with multiple progressive web app stores added to their home screens. It might even get out of control after a while. There’d need to be some kind of curation to help people figure out the best directory for them. Which brings me to my next business idea:

Build a directory of directories of progressive web apps…

A wager on the web

Jason has written a great post about progressive web apps. It’s also a post about whether fears of the death of the web are justified.

Lately, I vacillate on whether the web is endangered or poised for a massive growth due to the web’s new capabilities. Frankly, I think there are indicators both ways.

So he applies Pascal’s wager. The hypothesis is that the web is under threat and progressive web apps are a solution to fighting that threat.

  • If the hypothesis is incorrect and we don’t build progressive web apps, things continue as they are on the web (which is not great for users—they have to continue to put up with fragile, frustratingly slow sites).
  • If the hypothesis is incorrect and we do build progressive web apps, users get better websites.
  • If the hypothesis is correct and we do build progressive web apps, users get better websites and we save the web.
  • If the hypothesis is correct and we don’t build progressive web apps, the web ends up pining for the fjords.

Whether you see the web as threatened or see Chicken Little in people’s fears and whether you like progressive web apps or feel it is a stupid Google marketing thing, we can all agree that putting energy into improving the experience for the people using our sites is always a good thing.

Jason is absolutely correct. There are literally no downsides to us creating progressive web apps. Everybody wins.

But that isn’t the question that people have been tackling lately. None of these (excellent) blog posts disagree with the conclusion that building progressive web apps as originally defined would be a great move forward for the web.

The real question that comes out of those posts is whether it’s good or bad for the future of progressive web apps—and by extension, the web—to build stop-gap solutions that use some progressive web app technologies (Service Workers, for example) while failing to be progressive in other ways (only working on mobile devices, for example).

In this case, there are two competing hypotheses:

  1. In the short term, it’s okay to build so-called progressive web apps that have a fragile technology stack or only work on specific devices, because over time they’ll get improved and we’ll end up with proper progressive web apps in the long term.
  2. In the short term, we should build proper progressive web apps, and it’s a really bad idea to build so-called progressive web apps that have a fragile technology stack or only work on specific devices, because that encourages more people to build sub-par websites and progressive web apps become synonymous with door-slamming single-page apps in the long term.

The second hypothesis sounds pessimistic, and the first sounds optimistic. But the people arguing for the first hypothesis aren’t coming from a position of optimism. Take Christian’s post, for example, which I fundamentally disagree with:

End users deserve to have an amazing, form-factor specific experience. Let’s build those.

I think end users deserve to have an amazing experience regardless of the form-factor of their devices. Christian’s viewpoint—like Alex’s tweetstorm—is rooted in the hypothesis that the web is under threat and in danger. The conclusion that comes out of that—building mobile-only JavaScript-reliant progressive web apps is okay—is a conclusion reached through fear.

Never make any decision based on fear.

Regression toward being mean

I highly recommend Remy’s State Of The Gap post—it’s ace. He summarises it like this:

I strongly believe in the concepts behind progressive web apps and even though native hacks (Flash, PhoneGap, etc) will always be ahead, the web, always gets there. Now, today, is an incredibly exciting time to be building on the web.

I agree completely. That might sound odd after I wrote about Regressive Web Apps, but it’s precisely because I’m so excited by the technologies behind progressive web apps that I think it’s vital that we do them justice. As Remy says:

Without HTTPS and without service workers, you can’t add to homescreen. This is an intentionally high bar of entry with damn good reasons.

When the user installs a PWA, it has to work. It’s our job as web developers to provide the most excellent experience for our users.

It has to work.

That’s why I don’t agree with Dion’s metrics for what makes a progressive web app:

If you deliver an experience that only works on mobile is that a PWA? Yes.

I think it’s important to keep quality control high. Being responsive is literally the first item in the list of qualities that help define what a progressive web app is. That’s why I wrote about “regressive” web apps: sites that are supposed to showcase what we can do but instead take a step backwards into the bad old days of separate sites for separate device classes: washingtonpost.com/pwa, m.flipkart.com, lite.5milesapp.com, app.babe.co.id, m.aliexpress.com.

A lot of people on Twitter misinterpreted my post as saying “the current crop of progressive web apps are missing the mark, therefore progressive web apps suck”. What I was hoping to get across was “the current crop of progressive web apps are missing the mark, so let’s make better ones!”

Now, I totally understand that many of these examples are a first stab, a way of testing the waters. I absolutely want to encourage these first attempts and push them further. But I don’t think that waiving the qualifications for progressive web apps helps achieve that. As much as I want to acknowledge the hard work that people have done to create those device-specific examples, I don’t think we should settle for anything less than high-quality progressive web apps that are as much about the web as they are about apps.

Simply put, in this instance, I don’t think good intentions are enough.

Which brings me to the second part of Regressive Web Apps, the bit about Chrome refusing to show the “add to home screen” prompt for sites that want to have their URL still visible when launched from the home screen.

Alex was upset by what I wrote:

if you think the URL is going to get killed on my watch then you aren’t paying any attention whatsoever.

so, your choices are to think that I have a secret plan to kill URLs, or conclude I’m still Team Web.

I’m galled that anyone, particularly you @adactio, would think the former…but contrarianism uber alles?

I am very, very sorry that I upset Alex like this.

But I stand by my criticism of the actions of the Chrome team. Because good intentions are not enough.

I know that Alex is huge fan of URLs, and of the web. Heck, just about everybody I know that works on Chrome in some capacity are working for the web first and foremost: Alex, Jake, various and sundry Pauls. But that doesn’t mean I’m going to stay quiet when I see the Chrome team do something I think is bad for the web. If anything, it’s precisely because I hold them to a high standard that I’m going to sound the alarm when I see what I consider to be missteps.

I think that good people can make bad decisions with the best of intentions. Usually it involves long-term thinking—something I think is very important. “The ends justify the means” is a way of thinking that can create a lot of immediate pain, even if it means a better future overall. Balancing those concerns is front and centre of the Chromium project:

As browser implementers, we find that there’s often tension between (a) moving the web forward and (b) preserving compatibility. On one hand, the web platform API surface must evolve to stay relevant. On the other hand, the web’s primary strength is its reach, which is largely a function of interoperability.

For example, when Alex talks of the Web Component era as though it were an inevitability, I get nervous. Not for myself, but for the millions of Opera Mini users out there. How do we get to a better future without leaving anyone behind? Or do we sacrifice those people for the greater good? Do the needs of the many outweigh the needs of the few? Do the ends justify the means?

Now, I know for a fact that the end-game that Alex is pursuing with web components—and the extensible web manifesto in general—is a more declarative web: solutions that first get tackled as web components end up landing in browsers. But to get there, the solutions are first created using modern JavaScript that simply doesn’t work everywhere. Is that the price we’re going to have to pay for a better web?

I hope not. I hope we can find ways to have our accessible cake and eat it too. But it will be really, really hard.

Returning to progressive web apps, I was genuinely shocked and appalled at the way that the Chrome team altered the criteria for the “add to home screen” prompt to discourage exposing URLs. I was also surprised at how badly the change was communicated—it was buried in a bug report that five people contributed to before pushing the change. I only found out about it through a conversation with Paul Kinlan. Paul encouraged me to give feedback, and that’s what I did on my website, just like Stuart did on his.

Of course the Chrome team are working on ways of exposing URLs within progressive web apps that are launched from the home screen. Opera are working on it too. But it’s a really tricky problem to solve. It’s not enough to say “we’ll figure it out”. It’s not enough to say “trust us.”

I do trust the people I know working on Chrome. I also trust the people I know at Mozilla, Opera and Microsoft. That doesn’t mean I’m going to let their actions go unquestioned. Good intentions are not enough.

As Alex readily acknowledges, the harder problem (figuring out how to expose URLs) should have been solved first—then the change to the “add to home screen” metrics would be uncontentious. Putting the cart before the horse, discouraging display:browser now, while saying “trust us, we’ll figure it out”, is another example of saying the ends justify the means.

But the stakes are too high here to let this pass. Good intentions are not enough. Knowing that the people working on Chrome (or Firefox, or Opera, or Edge) are good people is not reason enough to passively accept every decision they make.

Alex called me out for not getting in touch with him directly about the Chrome team’s future plans with URLs, but again, that kind of rough consensus to do something is trumped by running code. Also, I did talk to Chrome people—this all came out of a discussion with Paul Kinlan. I don’t know who’s who in the company’s political hierarchy and I don’t think I should need an org chart to give feedback to Google (or Mozilla, or Opera, or Microsoft).

You’ll notice that I didn’t include Apple there. I don’t hold them to the same high standard. As it turns out, I know some very good people at Apple working on WebKit and Safari. As individuals, they care about the web. But as a company, Apple has shown indifference towards web developers. As Remy put it:

Even getting the hint of interest from Apple is a process of dumpster-diving the mailing lists scanning for the smallest hint of interest.

With that in mind, I completely understand Alex’s frustration with my post on “regressive” web apps. Although I intended it as a push towards making better progressive web apps, I can see how it could be taken as confirmation by those who think that progressive web apps aren’t worth investing in. Apple, for example. As it is, they’ll have to be carried kicking and screaming into adding support for Service Workers, manifest files, and other building blocks. From the reaction to my post from at least one WebKit developer on Twitter, not only did I fail to get across just how important the technologies behind progressive web apps are, I may have done more harm than good, giving ammunition to sceptics.

Still, I hope that most people took my words in the right spirit, like Addy:

We should push them to do much better. I’ll file bugs. Per @adactio post, can’t forget the ‘Progressive’ part of PWAs

Seeing that reaction makes me feel good …but seeing Alex’s reaction makes me feel bad. Very bad. I’m genuinely sorry that I made Alex feel that way. It wasn’t my intention but, well …good intentions are not enough.

I’ve been looking back at what I wrote, trying to see it through Alex’s eyes, looking for the parts that could be taken as a personal attack:

Chrome developers have decided that displaying URLs is not “best practice” … To declare that all users of all websites will be confused by seeing a URL is so presumptuous and arrogant that it beggars belief. … Withholding the “add to home screen” prompt like that has a whiff of blackmail about it. … This isn’t the first time that Chrome developers have made a move against the address bar. It’s starting to grind me down.

Some pretty strong words there. I stand by them, but the tone is definitely strident.

When we criticise something—a piece of software, a book, a website, a film, a piece of music—it’s all too easy to forget that there are real people behind it. But that isn’t the case here. I know that there are real people working on Chrome, because I know quite a few of those people. I also know that their intentions are good. That’s not a reason for me to remain silent—that’s a reason for me to speak up.

If I had known that my post was going to upset Alex, would I have still written it? That’s a tough one. On the one hand, this is a topic I care passionately about. I think it’s vital that we don’t compromise on the very things that make the web great. On the other hand, who knows if what I wrote will make the slightest bit of difference? In which case, I got the catharsis of getting it off my chest but at the price of upsetting somebody I respect. That price feels too high.

I love the fact that I can publish whatever I want on my own website. It can be a place for me to be enthusiastic about things that excite me, and a place for me to rant about things that upset me. I estimate that the enthusiastic stuff outnumbers the ranty stuff by about ten to one, but negativity casts a disproportionately large shadow.

I need to get better at tempering my words. Not that I’m going to stop criticising bad decisions when I see them, but I need to make my intentions clearer …because just having good intentions is not enough. Throughout this post, I’ve mentioned repeatedly how much I respect the people I know working on the Chrome team. I should have said that in my original post.

Regressive Web Apps

There were plenty of talks about building for the web at this year’s Google I/O event. That makes a nice change from previous years when the web barely got a look in and you’d be forgiven for thinking that Google I/O was an event for Android app developers.

This year’s event showed just how big Google is, and how it doesn’t have one party line when it comes to the web and native. At the same time as there were talks on Service Workers and performance for the web, there was also an unveiling of Android Instant Apps—a full-frontal assault on the web. If you thought it was annoying when websites door-slammed you with intrusive prompts to install their app, just wait until they don’t need to ask you anymore.

Peter has looked a bit closer at Android Instant Apps and I think he’s as puzzled as I am. Either they are sandboxed to have similar permission models to the web (in which case, why not just use the web?) or they allow more access to native APIs, in which case they’re a security nightmare waiting to happen. I’m guessing it’s probably the former.

Meanwhile, a different part of Google is fighting the web’s corner. The buzzword du jour is Progressive Web Apps, originally defined by Alex as:

  • Responsive
  • Connectivity independent
  • App-like-interactions
  • Fresh
  • Safe
  • Discoverable
  • Re-engageable
  • Installable
  • Linkable

A lot of those points are shared by good native apps, but the first and last points in that list are key features of the web: being responsive and linkable.

Alas, many of the current examples of so-called Progressive Web Apps are anything but. Flipkart and The Washington Post have made Progressive Web Apps that are getting lots of good press from Google, but are mobile-only.

Looking at most of the examples of Progressive Web Apps, there’s an even more worrying trend than the return to m-dot subdomains. It looks like most of them are concentrating so hard on the “app” part that they’re forgetting about the “web” bit. That means they’re assuming that modern JavaScript is available everywhere.

Alex pointed to shop.polymer-project.org as an example of a Progressive Web App that is responsive as well as being performant and resilient to network failures. It also requires JavaScript (specifically the Polymer polyfill for web components) to render some text and images in a browser. If you’re using the “wrong” browser—like, say, Opera Mini—you get nothing. That’s not progressive. That’s the opposite of progressive. The end result may feel very “app-like” if you’re using an approved browser, but throwing the users of other web browsers under the bus is the very antithesis of what makes the web great. What does it profit a website to gain app-like features if it loses its soul?

I’m getting very concerned that the success criterion for Progressive Web Apps is changing from “best practices on the web” to “feels like native.” That certainly seems to be how many of the current crop of Progressive Web Apps are approaching the architecture of their sites. I think that’s why the app-shell model is the one that so many people are settling on.

Personally, I’m not a fan of the app-shell model. I feel that it prioritises exactly the wrong stuff—the interface is rendered quickly while the content has to wait. It feels weirdly like a hangover from Appcache. I also notice it being used as a get-out-of-jail-free card, much like the ol’ “Single Page App” descriptor; “Ah, I can’t do progressive enhancement because I’m building an app shell/SPA, you see.”

But whatever. That’s just, like, my opinion, man. Other people can build their app-shelled SPAs and meanwhile I’m free to build websites that work everywhere, and still get to use all the great technologies that power Progressive Web Apps. That’s one of the reasons why I’ve been quite excited about them—all the technologies and methodologies they promote match perfectly with my progressive enhancement approach: responsive design, Service Workers, good performance, and all that good stuff.

I hope we’ll see more examples of Progressive Web Apps that don’t require JavaScript to render content, and don’t throw away responsiveness in favour of a return to device-specific silos. But I’m not holding my breath. People seem to be so caught up in the attempt to get native-like functionality that they’re willing to give up the very things that make the web great.

For example, I’ve seen people use a meta viewport declaration to disable pinch-zooming on their sites. As justification they point to the fact that you can’t pinch-zoom in most native apps, therefore this web-based app should also prohibit that action. The inability to pinch-zoom in native apps is a bug. By also removing that functionality from web products, people are reproducing unnecessary bugs. It feels like a cargo-cult approach to building for the web: slavishly copy whatever native is doing …because everyone knows that native apps are superior to websites, right?
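
For reference, this is the kind of meta viewport declaration I’m talking about—one line of markup that faithfully reproduces native’s bug by taking zooming away from the user:

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">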

Here’s another example of the cargo-cult imitation of native. In your manifest JSON file, you can declare a display property. You can set it to browser, standalone, or fullscreen. If you set it to standalone or fullscreen then, when the site is launched from the home screen, it won’t display the address bar. If you set the display property to browser, the address bar will be visible on launch. Now, personally I like to expose those kind of seams:

The idea of “seamlessness” as a desirable trait in what we design is one that bothers me. Technology has seams. By hiding those seams, we may think we are helping the end user, but we are also making a conscious choice to deceive them (or at least restrict what they can do).

Other people disagree. They think it makes more sense to hide the URL. They have a genuine concern that users will be confused by launching a website from the home screen in a browser (presumably because the user’s particular form of amnesia caused them to forget how that icon ended up on their home screen in the first place).

Fair enough. We’ll agree to differ. They can set their display property how they want, and I can set my display property how I want. It’s a big web after all. There’s no one right or wrong way to do this. That’s why there are multiple options for the values.
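
By way of illustration, here’s a minimal sketch of a manifest file that opts for the browser value—the name, colours, and icon are invented:

{
  "name": "Example Progressive Web App",
  "short_name": "Example",
  "start_url": "/",
  "display": "browser",
  "background_color": "#ffffff",
  "theme_color": "#bada55",
  "icons": [
    {
      "src": "/icon.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}

Swap in standalone or fullscreen and the same site hides the browser chrome when launched from the home screen. One property; your choice.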

Or, at least, that was the situation until recently…

Remember when I wrote about how Chrome on Android will show an “add to home screen” prompt if your Progressive Web App fulfils a few criteria?

  • It is served over HTTPS,
  • it has a manifest JSON file,
  • it has a Service Worker, and
  • the user visits it a few times.

Well, those goalposts have moved. There is now a new criterion:

  • Your manifest file must not contain a display value of browser.

Chrome developers have decided that displaying URLs is not “best practice”. It was filed as a bug.

A bug.

Displaying URLs.

A bug.

I’m somewhat flabbergasted by this. The killer feature of the web—URLs—is being treated as something undesirable because URLs aren’t part of native apps. That’s not a failure of the web; that’s a failure of native apps.

Now, don’t get me wrong. I’m not saying that everyone should be setting their display property to browser. That would be far too prescriptive. I’m saying that it should be a choice. It should depend on the website. It should depend on the expectations of the users of that particular website. To declare that all users of all websites will be confused by seeing a URL is so presumptuous and arrogant that it beggars belief.

I wouldn’t even have noticed this change of policy if it weren’t for the newly-released Lighthouse tool for testing Progressive Web Apps. The Session gets a good score but under “Best Practices” there was a red mark against the site for having display: browser. Turns out that’s the official party line from Chrome.

Just to clarify: you can have a site that has literally no HTML or turns away entire classes of devices, yet officially follows “best practices” and gets rewarded with an “add to home screen” prompt. But if you have a blazingly fast responsive site that works offline, you get nothing simply because you don’t want to hide URLs from your users:

I want people to be able to copy URLs. I want people to be able to hack URLs. I’m not ashamed of my URLs …I’m downright proud.

Stuart argues that this is a paternal decision:

The app manifest declares properties of the app, but the display property isn’t about the app; it’s about how the app’s developer wants it to be shown. Do they want to proudly declare that this app is on the web and of the web? Then they’ll add the URL bar. Do they want to conceal that this is actually a web app in order to look more like “native” apps? Then they’ll hide the URL bar.

I think there’s something to that, but digging deeper, developers and designers don’t make decisions like that in isolation. They’re generally thinking about what’s best for users. So, yes, absolutely, different apps will have different display properties, but that shouldn’t be down to the belief system of the developer; it should be down to the needs of the users …the specific needs of the specific users of that specific app. For the Chrome team to come down on one side or the other and arbitrarily declare that one decision is “correct” for every single Progressive Web App that is ever going to be built …that’s a political decision. It kinda feels like an abuse of power to me. Withholding the “add to home screen” prompt like that has a whiff of blackmail about it.

The other factors that contribute to the “add to home screen” prompt are pretty uncontroversial:

  • Sites should be served over a secure connection: that’s pretty hard to argue with.
  • Sites should be resilient to network outages: I don’t think anyone is going to say that’s a bad idea.
  • Sites should provide some metadata in a manifest file: okay, sure, it’s certainly not harmful.
  • Sites should obscure their URL …whoa! That feels like a very, very different requirement, one that imposes one particular opinion onto everyone who wants to participate.

This isn’t the first time that Chrome developers have made a move against the address bar. It’s starting to grind me down.

Up until now I’ve been a big fan of Progressive Web Apps. I understood them to be combining the best of the web (responsiveness, linkability) with the best of native (installable, connectivity independent). Now I see that balance shifting towards the native end of the scale at the expense of the web’s best features. I’d love to see that balance restored with a little less emphasis on the “Apps” and a little more emphasis on the “Web.” Now that would be progressive.

Home screen

Remy posted a screenshot to Twitter last week.

A screenshot of adactio.com on an Android device showing an Add To Home Screen prompt.

That “Add To Home Screen” dialogue is not something that Remy explicitly requested (though, of course, you can—and should—choose to add adactio.com to your home screen). That prompt appears in Chrome on Android as the result of a fairly simple algorithm based on a few factors:

  1. The website is served over HTTPS. My site is.
  2. The website has a manifest file. Here’s my JSON manifest file.
  3. The website has a Service Worker. Here’s my site’s Service Worker script (although a little birdie told me that the Service Worker script can be as basic as a blank file).
  4. The user visits the website a few times over the course of a few days.

I think that’s a reasonable set of circumstances. I particularly like that there is no way of forcing the prompt to appear.

There are some carrots in there: Want to have the user prompted to add your site to their home screen? Well, then you need to be serving on a secure connection, and you’d better get on board that Service Worker train.

Speaking of which, after I published a walkthrough of my first Service Worker, I got an email bemoaning the lack of browser support:

I was very much interested myself in this topic, until I checked on the “Can I use…” site the availability of this technology. In one word “limited”. Neither Safari nor IOS Safari support it, at least now, so I cannot use it for implementing mobile applications.

I don’t think this is the right way to think about Service Workers. You don’t build your site on top of a Service Worker—you add a Service Worker on top of your existing site. It has been explicitly designed that way: you can’t make it the bedrock of your site’s functionality; you can only add it as an enhancement.

I think that’s really, really smart. It means that you can start implementing Service Workers today and as more and more browsers add support, your site will appear to get better and better. My site worked fine for fifteen years before I added a Service Worker, and on the day I added that Service Worker, it had no ill effect on non-supporting browsers.
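
That’s because a Service Worker is only ever added through feature detection. A typical registration—a sketch; the script name is whatever you’ve chosen—looks something like this:

// Only register if the browser supports Service Workers, and wait
// until the page has loaded so registration doesn’t compete with
// rendering. Non-supporting browsers skip this entirely.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', function () {
    navigator.serviceWorker.register('/serviceworker.js');
  });
}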

Oh, and according to the WebKit five-year plan, Service Worker support is on its way. This doesn’t surprise me. I can’t imagine that Apple would let Google upstage them for too long with that nice “add to home screen” flow.

Alas, Mobile Safari’s glacial update cycle means that the earliest we’ll see improvements like Service Workers will probably be September or October of next year. In the age of evergreen browsers, Apple’s feast-or-famine approach to releasing updates is practically indistinguishable from stagnation.

Still, slowly but surely, game-changing technologies are landing in browsers. At the same time, the long-term problems with betting on native apps are starting to become clearer. Native apps are still ahead of what can be accomplished on the web, but it was ever thus:

The web will always be lagging behind some other technology. I’m okay with that. If anything, I see these other technologies as the research and development arm of the web. CD-ROMs, Flash, and now native apps show us what authors want to be able to do on the web. Slowly but surely, those abilities start becoming available in web browsers.

The pace of this standardisation can seem infuriatingly slow. Sometimes it is too slow. But it’s important that we get it right—the web should hold itself to a higher standard. And so the web plays the tortoise while other technologies race ahead as the hare.

It’s interesting to see how the web could take the desirable features of native—offline support, smooth animations, an icon on the home screen—without sacrificing the strengths of the web—linking, responsiveness, the lack of App Store gatekeepers. That kind of future is what Alex is calling progressive apps:

Critically, these apps can deliver an even better user experience than traditional web apps. Because it’s also possible to build this performance in as progressive enhancement, the tangible improvements make it worth building this way regardless of “appy” intent.

Flipkart recently launched something along those lines, although it’s somewhat lacking in the “enhancement” department; the core content is delivered via JavaScript—a fragile approach.

What excites me is the prospect of building services that work just fine on low-powered devices with basic browsers, but that also take advantage of all the great possibilities offered by the latest browsers running on the newest devices. Backwards compatible and future friendly.

And if that sounds like a naïve hope, then I humbly suggest that Service Workers are a textbook example of exactly that approach.

AMPed up

Apple has Apple News. Facebook has Instant Articles. Now Google has AMP: Accelerated Mobile Pages.

The big players sure are going to a lot of effort to reinvent RSS.

That may sound like a flippant remark, but it’s not too far from the truth. In the case of Apple News, its current incarnation appears to be quite literally an RSS reader, at least until the unveiling of the forthcoming Apple News Format.

Google’s AMP project looks a little bit different to the offerings from Facebook and Apple. Rather than creating a proprietary format from scratch, it mandates a subset of HTML …with some proprietary elements thrown in (or, to use the more diplomatic parlance of the extensible web, custom elements).

The idea is that alongside the regular HTML version of your document, you provide a corresponding AMP HTML version. Because the AMP HTML version will be leaner and meaner, user agents can then grab the AMP HTML version and present that to the end user for a faster browsing experience.

So if an RSS feed is an alternate representation of a homepage or a listing of articles, then an AMP document is an alternate representation of a single article.
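
The two versions point at each other with link elements—a sketch, with placeholder URLs:

<!-- In the head of the regular HTML version: -->
<link rel="amphtml" href="https://example.com/article.amp.html">

<!-- In the head of the AMP HTML version: -->
<link rel="canonical" href="https://example.com/article.html">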

Now, my own personal take on providing alternate representations of documents is “Sure. Why not?” Here on adactio.com I provide RSS feeds. On The Session I provide RSS, JSON, and XML. And on Huffduffer I provide RSS, Atom, JSON, and XSPF, adding:

If you would like to see another format supported, share your idea.

Also, each individual item on Huffduffer has a corresponding oEmbed version (and, in theory, an RDF version)—an alternate representation of that item …in principle, not that different from AMP. The big difference with AMP is that it’s using HTML (of sorts) for its format.

All of this sounds pretty reasonable: provide an alternate representation of your canonical HTML pages so that user-agents (Twitter, Google, browsers) can render a faster-loading version …much like an RSS reader.

So should you start providing AMP versions of your pages? My initial reaction is “Sure. Why not?”

The AMP Project website comes with a list of frequently asked questions which, of course, nobody has asked. My own list of invented frequently asked questions might look a little different.

Will this kill advertising?

We live in hope.

Alas, AMP pages will still be able to carry advertising, but in a restricted form. No more scripts that track your movement across the web …unless the script is from an authorised provider, like, say, Google.

But it looks like the worst performance offenders won’t be able to get their grubby little scripts into AMP pages. This is a good thing.

Won’t this kill journalism?

Of all the horrid myths currently in circulation, the two that piss me off the most are:

  1. Journalism requires advertising to survive.
  2. Advertising requires invasive JavaScript.

Put the two together and you get the gist of most of the chicken-littling articles currently in circulation: “Journalism requires invasive JavaScript to survive.”

I could argue against the first claim, but let’s leave that for another day. Let’s suppose for now that, sure, journalism requires advertising to survive. Fine.

It’s that second point that is fundamentally wrong. The idea that the current state of advertising is the only way of advertising is incredibly short-sighted and misguided. Invasive JavaScript is not a requirement for showing me an ad. Setting a cookie is not a requirement for showing me an ad. Knowing where I live, who my friends are, what my income level is, and where I’ve been on the web …none of these are requirements for showing me an ad.

It is entirely possible to advertise to me and treat me with respect at the same time. The Deck already does this.

And you know what? Ad networks had their chance. They had their chance to treat us with respect with the Do Not Track initiative. We asked them to respect our wishes. They told us to get screwed.

Now those same ad providers are crying because we’re installing ad blockers. They can get screwed.


It is entirely possible to advertise within AMP pages …just not using blocking JavaScript.

For a nicely nuanced take on what AMP could mean for journalism, see Joshua Benton’s article on Nieman Lab—Get AMP’d: Here’s what publishers need to know about Google’s new plan to speed up your website.

Why not just make faster web pages?

Excellent question!

For a site like adactio.com, the difference between the regular HTML version of an article and the corresponding AMP version of the same article is pretty small. It’s a shame that I can’t just say “Hey, the current version of the article is the AMP version”, but that would require that I only use a subset of HTML and that I add some required guff to my page (including an unnecessary JavaScript file).

But for most of the news sites out there, the difference between their regular HTML pages and the corresponding AMP versions will be pretty significant. That’s because the regular HTML versions are bloated with third-party scripts, oversized assets, and cruft around the actual content.

Now it is in theory possible for these news sites to get rid of all those things, and I sincerely hope that they will. But that’s a big political struggle. I am rooting for developers—like the good folks at VOX—who have to battle against bosses who honestly think that journalism requires invasive JavaScript. Best of luck.

Along comes Google saying “If you want to play in our sandbox, you’re going to have to abide by our rules.” Those rules include performance best practices (for the most part—I take issue with some of the requirements, and I’ll go into that in more detail in a moment).

Now when the boss says “Slap a three megabyte JavaScript library on it so we can show a carousel”, the developers can simply respond with “Google says No.”

When the boss says “Slap a ton of third-party trackers on it so we can monetise those eyeballs”, the developers can simply respond with “Google says No.”

Google have used their influence like this before and it has brought them accusations of monopolistic abuse. Some people got very upset when they began labelling (and later ranking) mobile-friendly pages. Personally, I’ve got no issue with that.

In this particular case, Google aren’t mandating what you can and can’t do on your regular HTML pages; only what you can and can’t do on the corresponding AMP page.

Which brings up another question…

Will the AMP web kill the open web?

If we all start creating AMP versions of our pages, and those pages are faster than our regular HTML versions, won’t everyone just see the AMP versions without ever seeing the “full” versions?

Tim articulates a legitimate concern:

This promise of improved distribution for pages using AMP HTML shifts the incentive. AMP isn’t encouraging better performance on the web; AMP is encouraging the use of their specific tool to build a version of a web page. It doesn’t feel like something helping the open web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the web.

That troubles me. Using a very specific tool to build a tailored version of my page in order to “reach everyone” doesn’t fit any definition of the “open web” that I’ve ever heard.

Fair point. But I also remember that a lot of people were upset by RSS. They didn’t like that users could go for months at a time without visiting the actual website, and yet they were reading every article. They were reading every article in non-browser user agents in a format that wasn’t HTML. On paper that sounds like the antithesis of the open web, but in practice there was always something very webby about RSS, and RSS feed readers—it put the power back in the hands of the end users.

Some people chose not to play ball. They only put snippets in their RSS feeds, not the full articles. Maybe some publishers will do the same with the AMP versions of their articles: “To read more, click here…”

But I remember what generally tended to happen to the publishers who refused to put the full content in their RSS feeds. We unsubscribed.

Still, I share the concern that any one company—whether it’s Facebook, Apple, or Google—should wield so much power over how we publish on the web. I don’t think you have to be a conspiracy theorist to view the AMP project as an attempt to replace the existing web with an alternate web, more tightly controlled by Google (albeit a faster, more performant, tightly-controlled web).

My hope is that the current will flow in both directions. As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.

I’ve been playing around with the AMP HTML spec. It has some issues. The good news is that it’s open source and the project owners seem receptive to feedback.

JavaScript

No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts. This is A-okay with me.

The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web. I’m happy to leave them at the door (although weirdly, web fonts—another big performance killer—are allowed in).

But then there’s a bit of an about-face. In order to have a valid AMP HTML page, you must include a piece of third-party JavaScript. In this case, the third party is Google and the JavaScript file is what handles the loading of assets.

This seems a bit strange to me; on the one hand claiming that third-party JavaScript is bad for performance and on the other, requiring some third-party JavaScript. As Justin says:

For me this is loading one thing too many… the AMP JS library. Surely the document itself is going to be faster than loading a library to try and make it load faster.

On the plus side, this third-party JavaScript is loaded asynchronously. It seems to mostly be there to handle the rendering of embedded content: images, videos, audio, etc.

Embedded content

If you want audio, video, or images on your page, you must use propriet… custom elements like amp-audio, amp-video, and amp-img. In the case of images, I can see how this is a way of getting around the browser’s lookahead pre-parser (although responsive images also solve this problem). In the case of audio and video, the standard audio and video elements already come with a way of specifying preloading behaviour using the preload attribute. Very odd.
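
For comparison, here’s all it takes to get that kind of control with the standard element (the file name is made up):

<video src="/videos/interview.mp4" preload="metadata" controls></video>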

Justin again:

I’m not sure if this is solving anything at the moment that we’re not already fixing with something like responsive images.

To use amp-img for images within the flow of a document, you’ll need to specify the dimensions of the image. This makes sense from a rendering point of view—knowing the width and height ahead of time avoids repaints and reflows. Alas, in many of the cases here on adactio.com, I don’t know the dimensions of the images I’m including. So any of my AMP HTML pages that include images will be invalid.
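
For illustration, this is roughly what the spec expects (the file name and dimensions here are invented, and the layout attribute is what gets you responsive scaling):

<amp-img src="/images/photo.jpg" width="600" height="400" layout="responsive"></amp-img>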

Overall, the way that AMP HTML handles embedded content looks like a whole lot of wheel reinvention. I like the idea of providing custom elements as an option for authors. I hate the idea of making them a requirement.

Metadata

If you want to provide metadata about your document, AMP HTML currently requires the use of Google’s Schema.org vocabulary. This has a big whiff of vendor lock-in to it. I’ve flagged this up as an issue and Aaron is pushing a change so hopefully this will be resolved soon.

Accessibility

In its initial release, the AMP HTML spec came with some nasty surprises for accessibility. The biggest is probably the requirement to include this in your viewport meta element:

user-scalable=no

Yowzers! That’s some slap in the face to decent web developers everywhere. Fortunately this has been flagged up and I’m hoping it will be fixed soon.

If it doesn’t get fixed, it’s quite a non-starter. It beggars belief that Google would mandate to authors that they must make their pages inaccessible to pinch/zoom. I would hope that many developers would rebel against such a draconian injunction. If that happens, it’ll be interesting to see what becomes of those theoretically badly-formed AMP HTML documents. Technically, they will fail validation, but for very good reason. Will those accessible documents be rejected?

Please get involved on this issue if this is important to you (hint: this should be important to you).

There are a few smaller issues. Initially the :focus pseudo-class was disallowed in author CSS, but that’s being fixed.

Currently AMP HTML documents must have this line:

<style>body {opacity: 0}</style><noscript><style>body {opacity: 1}</style></noscript>


That’s a horrible conflation of JavaScript availability and CSS. It’s being fixed though, and soon all the opacity jiggery-pokery will only happen via JavaScript, which will be a big improvement: it should either all happen in CSS or all happen in JavaScript, but not the current mixture of the two.

Links

The AMP HTML version of your page is not the canonical version. You can specify where the real HTML version of your document is by using rel="canonical". Great!

But how do you link from your canonical page out to the AMP HTML version? Currently you’re supposed to use rel="amphtml". No, they haven’t checked the registry. Again. I’ll go in and add it.

In the meantime, I’m also requesting that the amphtml value can be combined with the alternate value, seeing as rel values can be space separated:

rel="alternate amphtml" type="text/html"

See? Not that different to RSS:

rel="alternate" type="application/rss+xml"
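
Put together, the head of the regular version of a page could carry both links side by side. Here’s a sketch with made-up URLs, using the combined value I’m requesting (today you’d have to write plain rel="amphtml"):

<link rel="alternate" type="application/rss+xml" href="https://example.com/articles/rss">
<link rel="alternate amphtml" type="text/html" href="https://example.com/article/amp/">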

Syndication

When I publish something on adactio.com in HTML, it already gets syndicated to different places. This is the Indie Web idea of POSSE: Publish (on your) Own Site, Syndicate Elsewhere. As well as providing RSS feeds, I’ve also got Twitter bots that syndicate to Twitter. An If This, Then That script pushes posts to Facebook. And if I publish a photo, it goes to Flickr. Now that Medium is finally providing a publishing API, I’ll probably start syndicating articles there as well. The more, the merrier.

From that perspective, providing AMP HTML pages feels like just one more syndication option. If it were the only option, and I felt compelled to provide AMP versions of my content, I’d be very concerned. But for now, I’ll give it a whirl and see how it goes.

Here’s a bit of PHP I’m using to convert a regular piece of HTML into AMP HTML—it’s horrible code; it uses regular expressions on HTML which, as we all know, will summon the Elder Gods.

Security for all

Throughout the Brighton Digital Festival, Lighthouse Arts will be exhibiting a project from Julian Oliver and Danja Vasiliev called Newstweek. If you’re in town for dConstruct—and you should be—you ought to stop by and check it out.

It’s a mischievous little hardware hack intended for use in places with public WiFi. If you’ve got a Newstweek device, you can alter the content of web pages like, say, BBC News. Cheeky!

There’s one catch though. Newstweek works on http:// domains, not https://. This is exactly the scenario that Jake has been talking about:

SSL is also useful to ensure the data you’re receiving hasn’t been tampered with. It’s not just for user->server stuff

eg, when you visit http://www.theguardian.com/uk, you don’t really know it hasn’t been modified to tell a different story

There’s another good reason for switching to TLS. It would make life harder for GCHQ and the NSA—not impossible, but harder. It’s not a panacea, but it would help make our collectively-held network more secure, as per RFC 7258 from the Internet Engineering Task Force:

Pervasive monitoring is a technical attack that should be mitigated in the design of IETF protocols, where possible.

I’m all for using https:// instead of http:// but there’s a problem. It’s bloody difficult!

If you’re a sysadmin type that lives in the command line, then it’s probably not difficult at all. But for the rest of us mere mortals who just want to publish something on the web, it’s downright daunting.

Tim Bray says:

It’ll cost you <$100/yr plus a half-hour of server reconfiguration. I don’t see any excuse not to.

…but then, he also thought that anyone who can’t make a syndication feed that’s well-formed XML is an incompetent fool (whereas I ended up creating an entire service to save people from having to make RSS feeds by hand).

Google are now making SSL a ranking factor in their search results, which is their prerogative. If it results in worse search results, other search engines are available. But I don’t think it will have a significant impact. Jake again:

if two pages have equal ranking except one is served securely, which do you think should appear first in results?

Ashe Dryden disagrees:

Google will be promoting SSL sites above those without, effectively doing the exact same thing we’re upset about the lack of net neutrality.

I don’t think that’s quite fair: if Google were an ISP slowing down http:// requests, that would be extremely worrying, but tweaking its already-opaque search algorithm isn’t quite the same.

Mind you, I do like this suggestion:

I think if Google is going to penalize you for not having SSL they should become a CA and issue free certs.

I’m more concerned by the discussions at Chrome and Mozilla about flagging up http:// connections as unsafe. While the approach is technically correct, I fear it could have the opposite of its intended effect. With so many sites still served over http://, users would be bombarded with constant messages of unsafe connections. Before long they would develop security blindness in much the same way that we’ve all developed banner-ad blindness.

My main issue—apart from the fact that I personally don’t have the necessary smarts to enable TLS—is related to what Ashe is concerned about:

Businesses and individuals who both know about and can afford to have SSL in place will be ranked above those who don’t/can’t.

I strongly believe that anyone should be able to publish on the web. That’s one of the reasons why I don’t share my fellow developers’ zeal for moving everything to JavaScript; I want anybody—not just programmers—to be able to share what they know. Hence my preference for simpler declarative languages like HTML and CSS (and my belief that they should remain simple and learnable).

It’s already too damn complex to register a domain and host a website. Adding one more roadblock isn’t going to help that situation. Just ask Drew and Rachel what it’s like trying to make sure that their customers have a version of PHP from this decade.

I want a secure web. I’d really like the web to be https:// only. But until we get there, I really don’t like the thought of the web being divided into the haves and have-nots.


There is an enormous opportunity here, as John pointed out on a recent episode of The Web Ahead. Getting TLS set up is a pain point for a lot of people, not just me. Where there’s pain, there’s an opportunity to provide a service that removes the pain. Services like Squarespace are already taking the pain out of setting up a website. I’d like to see somebody provide a TLS valet service.

(And before you rush to tell me about the super-easy SSL-setup tutorial you know about, please stop and think about whether it’s actually more like this.)

I’m looking forward to switching my website over to https:// but I’m not going to do it until the potential pain level drops.

For all of you budding entrepreneurs looking for the next big thing to “disrupt”, please consider making your money not from the gold rush itself, but from providing the shovels.


When I was talking about Huffduffer, I mentioned that I don’t have any analytics set up for the site:

To be honest, I’m okay with that—one of the perks of having a personal project is that the only metric that really matters is your own satisfaction.

For a while, I did have Google Analytics set up on The Session. But I started to feel a bit uncomfortable about willingly opening up a wormhole between my site and the Google mothership. It bothered me that Adblock Plus would show that one ad had been blocked on the site. There are no ads on the site, but the presence of the Google Analytics code was providing valuable information to Google—and its advertiser customer base—so I can understand why it gets flagged up like any other unwanted tracking.

Theoretically, users have a way of opting out of this kind of tracking by switching on the Do Not Track header (if it isn’t switched on by default). Looking at the default JavaScript code that Google provides for setting up Google Analytics, I don’t see any mention of navigator.doNotTrack.

Now, it may well be that Google sniffs for that header (and abandons any tracking) when its server is pinged via the analytics code, but there’s no way to tell from this side of the Googleplex. I certainly don’t see any mention of it in the JavaScript that gets inserted into our web pages.

I was wondering whether it makes sense to explicitly check navigator.doNotTrack before opening up that connection to google-analytics.com via a generated script element.

So if the current code looks like:

// generate a script element and point it to google-analytics.com (simplified)
var ga = document.createElement('script');
ga.src = 'https://www.google-analytics.com/ga.js';
document.head.appendChild(ga);

Would it make sense to wrap it with some kind of test for navigator.doNotTrack:

// browsers disagree on the reported value: some say 'yes', others '1'
if (!navigator.doNotTrack || (navigator.doNotTrack != 'yes' && navigator.doNotTrack != '1')) {
  var ga = document.createElement('script');
  ga.src = 'https://www.google-analytics.com/ga.js';
  document.head.appendChild(ga);
}

For the love of mercy, don’t actually use that code—it’s completely untested and probably causes more harm than good. But you can see the idea that I’m trying to get at, right? Google Analytics most definitely counts as tracking so it seems like the ideal use-case for Do Not Track.

It raises a few questions:

  1. Is anyone doing this already? It might well be that the answer is “no”, not because of any reluctance to respect user preferences but because the doNotTrack spec is very much in flux.
  2. Would you consider doing this?
  3. If you were to do this, could you foresee getting pushback from within your own company?

Battle for the planet of the APIs

Back in 2006, I gave a talk at dConstruct called The Joy Of API. It basically involved me geeking out for 45 minutes about how much fun you could have with APIs. This was the era of the mashup—taking data from different sources and scrunching them together to make something new and interesting. It was a good time to be a geek.

Anil Dash did an excellent job of describing that time period in his post The Web We Lost. It’s well worth a read—and his talk at The Berkman Institute is well worth a listen. He described what the situation was like with APIs:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

Times have changed. These days, instead of seeing themselves as part of a wider web, online services see themselves as standalone entities.

So what happened?

Facebook happened.

I don’t mean that Facebook is the root of all evil. If anything, Facebook—a service that started out being based on exclusivity—has become more open over time. That’s the cause of many of its scandals; the mismatch in mental models that Facebook users have built up about how their data will be used versus Facebook’s plans to make that data more available.

No, I’m talking about Facebook as a role model; the template upon which new startups shape themselves.

In the web’s early days, AOL offered an alternative. “You don’t need that wild, chaotic lawless web”, it proclaimed. “We’ve got everything you need right here within our walled garden.”

Of course it didn’t work out for AOL. That proposition just didn’t scale, just like Yahoo’s initial model of maintaining a directory of websites just didn’t scale. The web grew so fast (and was so damn interesting) that no single company could possibly hope to compete with it. So companies stopped trying to compete with it. Instead they, quite rightly, saw themselves as being part of the web. That meant that they didn’t try to do everything. Instead, they built services that did one thing really well—sharing photos, managing links, blogging—and if they needed to provide their users with some extra functionality, they used the best service available for that, usually through someone else’s API …just as they provided their own API to others.

Then Facebook began to grow and grow. I remember the first time someone was showing me Facebook—it was Tantek of all people—I remember asking “But what is it for?” After all, Flickr was for photos, Delicious was for links, Dopplr was for travel. Facebook was for …everything …and nothing.

I just didn’t get it. It seemed crazy that a social network could grow so big just by offering …well, a big social network.

But it did grow. And grow. And grow. And suddenly the AOL business model didn’t seem so crazy anymore. It seemed ahead of its time.

Once Facebook had proven that it was possible to be the one-stop-shop for your user’s every need, that became the model to emulate. Startups stopped seeing themselves as just one part of a bigger web. Now they wanted to be the only service that their users would ever need …just like Facebook.

Seen from that perspective, the open flow of information via APIs—allowing data to flow porously between services—no longer seemed like such a good idea.

Not only have APIs been shut down—see, for example, Google’s shutdown of their Social Graph API—but even the simplest forms of representing structured data have been slashed and burned.

Twitter and Flickr used to mark up their user profile pages with microformats. Your profile page would be marked up with hCard and, if you had a link back to your own site, it included a rel="me" attribute. Not any more.
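
From memory, the pattern looked something like this, with hCard class names on the profile and the homepage link carrying rel="me" (the name and URL are placeholders):

<div class="vcard">
<a class="fn url" rel="me" href="https://example.com/">Jane Doe</a>
</div>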

Then there’s RSS.

During the Q&A of that 2006 dConstruct talk, somebody asked me about where they should start with providing an API; what’s the baseline? I pointed out that if they were already providing RSS feeds, they already had a kind of simple, read-only API.

Because there’s a standardised format—a list of items, each with a timestamp, a title, a description (maybe), and a link—once you can parse one RSS feed, you can parse them all. It’s kind of remarkable how many mashups can be created simply by using RSS. I remember at the first London Hackday, one of my favourite mashups simply took an RSS feed of the weather forecast for London and combined it with the RSS feed of upcoming ISS flypasts. The result: a Twitter bot that only tweeted when the International Space Station was overhead and the sky was clear. Brilliant!
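
That universality is easy to demonstrate. Assuming you’ve already fetched a feed’s XML as a string (the xml variable below), the same few lines of JavaScript will pull the items out of any RSS feed:

// parse the feed; this works the same way for any RSS feed
var doc = new DOMParser().parseFromString(xml, 'text/xml');
var items = doc.getElementsByTagName('item');
for (var i = 0; i < items.length; i++) {
  var item = items[i];
  var grab = function (name) {
    var el = item.getElementsByTagName(name)[0];
    return el ? el.textContent : '';
  };
  console.log(grab('title'), grab('pubDate'), grab('link'));
}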

Back then, anywhere you found a web page that listed a series of items, you’d expect to find a corresponding RSS feed: blog posts, uploaded photos, status updates, anything really.

That has changed.

Twitter used to provide an RSS feed that corresponded to my HTML timeline. Then they changed the URL of the RSS feed to make it part of the API (and therefore subject to the terms of use of the API). Then they removed RSS feeds entirely.

On the Salter Cane site, I want to display our band’s latest tweets. I used to be able to do that by just grabbing the corresponding RSS feed. Now I’d have to use the API, which is a lot more complex, involving all sorts of authentication gubbins. Even then, according to the terms of use, I wouldn’t be able to display my tweets the way I want to. Yes, how I want to display my own data on my own site is now dictated by Twitter.

Thanks to Jo Brodie I found an alternative service called Twitter RSS that gives me the RSS feed I need, ‘though it’s probably only a matter of time before that gets shut down by Twitter.

Jo’s feelings about Twitter’s anti-RSS policy mirror my own:

I feel a pang of disappointment at the fact that it was really quite easy to use if you knew little about coding, and now it might be a bit harder to do what you easily did before.

That’s the thing. It’s not like RSS is a great format—it isn’t. But it’s just good enough and just versatile enough to enable non-programmers to make something cool. In that respect, it’s kind of like HTML.

The official line from Twitter is that RSS is “infrequently used today.” That’s the same justification that Google has given for shutting down Google Reader. It reminds me of the joke about the shopkeeper responding to a request for something with “Oh, we don’t stock that—there’s no call for it. It’s funny though, you’re the fifth person to ask today.”

RSS is used a lot …but much of the usage is invisible:

RSS is plumbing. It’s used all over the place but you don’t notice it.

That’s from Brent Simmons, who penned a love letter to RSS:

If you subscribe to any podcasts, you use RSS. Flipboard and Twitter are RSS readers, even if it’s not obvious and they do other things besides.

He points out the many strengths of RSS, including its decentralisation:

It’s anti-monopolist. By design it creates a level playing field.

How foolish of us, therefore, that we ended up using Google Reader exclusively to power all our RSS consumption. We took something that was inherently decentralised and we locked it up into one provider. And now that provider is going to screw us over.

I hope we won’t make that mistake again. Because, believe me, RSS is far from dead just because Google and Twitter are threatened by it.

In a post called The True Web, Robin Sloan reiterates the strength of RSS:

It will dip and diminish, but will RSS ever go away? Nah. One of RSS’s weaknesses in its early days—its chaotic decentralized weirdness—has become, in its dotage, a surprising strength. RSS doesn’t route through a single leviathan’s servers. It lacks a kill switch.

I can understand why that power could be seen as a threat if what you are trying to do is force your users to consume their own data only the way that you see fit (and all in the name of “user experience”, I’m sure).

Returning to Anil’s description of the web we lost:

We get a generation of entrepreneurs encouraged to make more narrow-minded, web-hostile products like these because it continues to make a small number of wealthy people even more wealthy, instead of letting lots of people build innovative new opportunities for themselves on top of the web itself.

I think that the presence or absence of an RSS feed (whether I actually use it or not) is a good litmus test for how a service treats my data.

It might be that RSS is the canary in the coal mine for my data on the web.

If those services don’t trust me enough to give me an RSS feed, why should I trust them with my data?

Slow glass

The day that Opera announced that it was changing its browser to use the WebKit rendering engine, I was contacted by .net magazine for my opinion on the move. My response was:

I have no opinion on this right now.

Frankly, I’m always quite amazed at how others can form opinions so quickly. Sometimes opinions are formed and set on technologies before they’re even out and about in the world: little printers, Apple watches, Google glasses…

The case against Google Glass seemed to be a done deal after Mark Hurst published The Google Glass feature no one is talking about:

The key experiential question of Google Glass isn’t what it’s like to wear them, it’s what it’s like to be around someone else who’s wearing them.

It’s a very persuasive piece of writing and it certainly gave me food for thought. Then Eric wrote Glasshouse:

Our youngest tends to wake up fairly early in the morning, at least as compared to his sisters, and since I need less sleep than Kat I’m usually the one who gets up with him. This morning, he put away a box he’d just emptied of toys and I told him, “Well done!” He turned to me, stuck his hand up in the air, and said with glee, “Hive!”

I gave him the requested high-five, of course, and then another for being proactive. It was the first time he’d ever asked for one. He could not have looked more pleased with himself.

And I suddenly realized that I wanted to be able to say to my glasses, “Okay, dump the last 30 seconds of livestream to permanent storage.”

Now I’ve got another interesting, persuasive perspective on the yet-to-be-released product.

Just as we can be very quick to label websites and social networks as dead (see Flickr), I worry that we’re often too quick to look for the worst aspects in any new technology.

Natalia has written a great piece called No, let’s not stop the cyborgs in reaction to the over-the-top Luddism of the Stop The Cyborgs movement:

Healthy criticism and skepticism towards technologies and their impact on society is necessary, but framing it in a way that discredits all people with body and sense enhancing technologies is othering.

Now we get into the question of whether technology can be inherently “good” or “bad.” Kevin Kelly avoids such loaded terms, but he does ascribe some kind of biased trajectory to our tools in his book What Technology Wants.

Natalia writes:

It’s also important to remember that technologies themselves aren’t always ethically questionable. It’s what we do with them that can be positive or contribute to suffering and misery. Sometimes the same technology can be used to help people and to simultaneously ruin lives for profit.

A fair point, but one that is most commonly used by the pro-gun lobby—proponents of a technology that I personally find very hard to view as neutral.

But the point remains: we seem to have a natural impulse to immediately think of the worst that could happen with any new technology (though I’m just as impatient with techno-utopians as I am with techno-dystopians). I really enjoy watching Black Mirror but its central question grows wearisome after a while. That question is “What’s the worst that could happen?”

I am, once more, reminded of the danger of self-fulfilling prophesies when it comes to seeing the worst in technologies like Google Glass. As Matt Webb’s algorithm puts it:

It’s not the end of privacy because it’s all newly visible, it’s the end of privacy because it looks like it’s the end of privacy because it’s all newly visible.

I was chatting with fellow sci-fi fan Jon Tan about Kim Stanley Robinson, whose work I (shamefully) haven’t dived into yet. Jon told me that a good starting point would be the Three Californias trilogy. It consists of one utopia, one dystopia, and one apocalypse. I like the sound of that.

Those who take an anti-technology stance, or at least an overly-negative stance on technology, are often compared to the Amish. But as Stewart Brand is quick to point out, the Amish don’t reject technology—instead, they take their time in deciding whether a new technology will, on balance, be better or worse for their society in the long term:

The Amish seek to master technology rather than become its slave.

I think that techno-utopians and -dystopians alike can appreciate that.

When is a link not a link?

Google has a web page for its Chrome browser. This page provides information about the browser, but its primary purpose—its call-to-action, if you will—is to encourage you to download the browser. Hence the nice big blue button-like link that says “Download Chrome.”

Tech bloggy publication thingy The Next Web posted some words pointing out that, for a while there, the link wasn’t working. At all. There was no way to download Chrome from the page created for the purpose of letting you download Chrome.

The problem is that the link isn’t a real link. I mean, technically it’s an A element, and it does have an href attribute …but the value of that attribute isn’t a resource (like, say, an installer for a web browser, or terms of service for downloading a browser). Instead it uses the JavaScript pseudo-protocol—meaning: not actually a protocol—to point to void(0).

<a class="button eula-download-button" data-g-event="cta" data-g-label="download-chrome" href="javascript:void(0)">Download Chrome</a>

So when there was a problem with the JavaScript, the link stopped working:

Uncaught TypeError: Cannot read property 'Installer' of undefined

HTML has a very fault-tolerant way of handling errors: if it sees an element or attribute it doesn’t understand, it just ignores it—it doesn’t break the page, it just moves on to the next element. Likewise with CSS. Unknown selectors, properties, or values are simply ignored. Not so with JavaScript. A syntax error stops execution of the script. That’s actually quite handy when you’re trying to debug your code, but not so handy when it’s out on the web.
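
To see the difference, compare these three errors; the first two degrade gracefully, while the third takes everything after it down with it:

<!-- HTML: the unknown element is ignored; the text still renders -->
<foobar>Hello</foobar>

/* CSS: the misspelt property is skipped; the bold still applies */
p { colr: red; font-weight: bold; }

// JavaScript: the typo throws an error, and the statements after it never run
documnt.querySelector('a').textContent = 'Hello';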

Given the brittleness of JavaScript’s error-handling, it seems unwise to entrust the core functionality of your page/app/site/whatever to the most fragile part of the front-end stack …especially when that same functionality is provided by a native HTML element.

I don’t want to pick on Google in particular here—there are far too many other sites exhibiting the same kind of over-engineering:

<a href="javascript:void(0)">

<a href="#" onclick="...">

<span class="button" onclick="...">

<div class="link" onclick="...">

By all means add all the JavaScript whizzbangery to your site that you want. But please make sure you’re adding it on a solid base of working markup. Progressive enhancement is your friend. Just like any good friend, it will help you out when unexpected bad things happen.
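
A sketch of what that looks like in practice; the URL and the fancy function are made up, but the point is that the link keeps working whether or not the script ever arrives:

<a href="/download/chrome-installer.exe" class="download">Download Chrome</a>

<script>
// enhancement: only hijack the click if the script actually loaded and parsed
document.querySelector('.download').addEventListener('click', function (event) {
  event.preventDefault();
  fancyInstallerFlow(); // hypothetical whizzbangery; the href is the fallback
});
</script>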

South by CSS

South by Southwest has become a vast, sprawling festival with a preponderance of panels pitched at marketers, start-ups and people who use the words “social media” in their job title without irony. But there were also some great design and development talks if you looked for them.

Samantha gave a presentation on style tiles, which I unfortunately missed but I’ll be eagerly awaiting the release of the audio. I also missed some good meaty JavaScript talks but I did manage to make it along to Jen’s excellent introduction to HTML5 APIs.

Andy’s talk on CSS best practices was one of the best presentations I’ve ever seen. He did a fantastic job of tackling some really important topics. It’s a presentation (and a presenter) that deserves a wider audience, so if you’re involved in putting together the line-up for any front-end conferences, I highly recommend that you nab him.

Divya put together an absolutely killer panel called CSS.next, all about how CSS gets specced and shipped, and what’s coming down the line. All of the panelists were smart, articulate, and well-informed. The panel was very enlightening, as well as being thoroughly enjoyable.

And then there was the Browser Wars panel.

This is something of a SXSW tradition. Arun assembles a line-up of representatives from browser makers—Mozilla, Google, Microsoft, and Opera—and then peppers them with some hardball questions. Apple is invited to send a representative every year, and every year, Apple declines.

There was no shortage of contentious topics this year. The subject of Google Dart was raised (“Good luck with that,” said Brendan). There was also plenty of discussion about the recent DRM proposal submitted to the HTML working group. There was a disturbing level of agreement amongst all the panelists that some form of DRM for video was needed because, hey, that’s just the way things go…

As an aside, I must say I found the lack of imagination on display to be pretty disheartening. Two years ago, Chris was on the Browser Wars panel representing Microsoft, defending the EOT format because, hey, that’s just the way things go. Without some form of DRM, he argued, we couldn’t have fonts on the web. Well, the web found a way. Now Chris is representing Google but the argument remains the same. DRM, so the argument goes, is the only way we’ll get video on the web because that’s what the “rights holders” demand. And yet, if you are a photographer, no such special consideration is afforded to you. The img element has no DRM and people are managing just fine, thank you. Video, apparently, is a special case …just like fonts. ahem


The subject of vendor prefixes also came up. Specifically, the looming prospect of non-webkit browsers parsing -webkit prefixed properties was raised.

I saw a pattern amongst all three subjects: the DRM proposal, Dart, and browsers implementing another browser’s vendor prefix. All three proposals were made to address a genuine problem. The proposals all suffer from varying degrees of batshit craziness but they certainly galvanised a lot of discussion.

For example, Brendan said that while Google Dart may not stand a hope in hell of supplanting JavaScript, some of the ideas it contains may well end up influencing the development of ECMAScript.

Similarly, Mozilla’s plan for vendor-prefixing certainly caused all parties to admit the problem: the W3C was moving too slow; Apple should have submitted proprietary properties for standardisation sooner; Mozilla, Microsoft, and Opera should have been innovating faster; and web developers should have been treating vendor-prefixed properties as experimental features, not stable parts of a spec.

So the proposal to do something batshit crazy and implement -webkit-prefixed CSS properties has actually had some very positive effects …but there’s no reason to actually go ahead and do it!

I tried to make this point during the audience participation part of the panel, but it was like banging my head against a brick wall. Chaals kept repeating the problem case, but I wasn’t disputing the problem; I was trying to point out that the proposed solution wouldn’t fix the problem.

It was a classic case of the same kind of thinking we saw in the SOPA proposal:

  1. Something must be done!
  2. Implementing -webkit prefixes is something.
  3. Something has been done.

The problem is that it won’t work. Adding “like Webkit” to the user-agent string will probably have much more of an effect and frankly, I don’t care if any of the browsers do that. At this point, a little bit more pissing into the bloated cesspool of user-agent strings is hardly going to matter. A browser’s user-agent string isn’t an identifier, it’s a reverse-chronological history of the web. Why not update the history booklet to include the current predilection amongst developers for Webkit browsers on mobile?

But implementing -webkit vendor prefixes? Pointless! If a developer is only building and testing their sites for one class of device or browser, simply implementing that browser’s prefixed CSS is just putting a band aid on a decapitation.

So I was kind of hoping that Mozilla would just come right out and say that maybe they wouldn’t actually go ahead and do this but hey, look at all the great discussion it generated (just like Dart, just like the DRM proposal). But sadly, no. Brendan categorically stated that the proposal was not presented in order to foment discussion. And in follow-up tweets, he wrote that he actually expected it to level the mobile browser playing field. That’s an admirably optimistic viewpoint but it’s sadly self-delusional.

And what will happen when implementing -webkit prefixes fails to level the playing field? We’ll be left with deliberately broken browsers.

Once something ships in a browser, it’s very, very hard to ever remove it. During the Dart discussion, Chris talked about the possibility of removing Dart from Chrome if developers don’t take to it. Turning to the Microsoft representative he asked rhetorically, “I mean, do you guys still ship VBScript?”

The answer?

Yes. Yes, they do.

Digital Deathwatch

The Deathwatch page on the Archive Team website makes for depressing reading, filled as it is with an ongoing list of sites that are going to be—or have already been—shut down. There are a number of corporations that are clearly repeat offenders: Yahoo!, AOL, Microsoft. As Aaron said last year when speaking of Museums and the Web:

Whether or not they asked to be, entire communities are now assuming that those companies will not only preserve and protect the works they’ve entrusted or the comments and other metadata they’ve contributed, but also foster their growth and provide tools for connecting the threads.

These are not mandates that most businesses take up willingly, but many now find themselves being forced to embrace them because to do otherwise would be to invite a betrayal of the trust of their users, from which they might never recover.

But occasionally there is a glimmer of hope buried in the constant avalanche of shit from these deletionist third-party custodians of our collective culture. Take Google Video, for example.

Earlier this year, Google sent out emails to Google Video users telling them the service was going to be shut down and their videos deleted as of April 29th. There was an outcry from people who rightly felt that Google were betraying their stated goal to organize the world’s information and make it universally accessible and useful. Google backtracked:

Google Video users can rest assured that they won’t be losing any of their content and we are eliminating the April 29 deadline. We will be working to automatically migrate your Google Videos to YouTube. In the meantime, your videos hosted on Google Video will remain accessible on the web and existing links to Google Videos will remain accessible.

This gives me hope. If the BBC wish to remain true to their mission to enrich people’s lives with programmes and services that inform, educate and entertain, then they will have to abandon their plan to destroy 172 websites.

There has been a stony silence from the BBC on this issue for months now. Ian Hunter—who so proudly boasted of the planned destruction—hasn’t posted to the BBC blog since writing a follow-up “clarification” that did nothing to reassure any of us.

It could be that they’re just waiting for a nice quiet moment to carry out the demolition. Or maybe they’ve quietly decided to drop their plans. I sincerely hope that it’s the second scenario. But, just in case, I’ve begun to create my own archive of just some of the sites that are on the BBC’s death list.

By the way, if you’re interested in hearing more about the story of Archive Team, I recommend checking out these interviews and talks from Jason Scott that I’ve huffduffed.

Going Postel

I wrote a little while back about my feelings on hash-bang URLs:

I feel so disappointed and sad when I see previously-robust URLs swapped out for the fragile #! fragment identifiers. I find it hard to articulate my sadness…

Fortunately, Mike Davies is more articulate than I. He’s written a detailed account of breaking the web with hash-bangs.

It would appear that hash-bang usage is on the rise, despite the fact that it was never intended as a long-term solution. Instead, the pattern (or anti-pattern) was intended as a last resort for crawling Ajax-obfuscated content:

So the #! URL syntax was especially geared for sites that got the fundamental web development best practices horribly wrong, and gave them a lifeline to getting their content seen by Googlebot.

Mike goes into detail on the Gawker outage that was a direct result of its “sites” being little more than single pages that require JavaScript to access anything.

I’m always surprised when I come across a site that deliberately chooses to make its content harder to access.

Though it may not seem like it at times, we’re actually in a pretty great position when it comes to front-end development on the web. As long as we use progressive enhancement, the front-end stack of HTML, CSS, and JavaScript is remarkably resilient. Remove JavaScript and some behavioural enhancements will no longer function, but everything will still be addressable and accessible. Remove CSS and your lovely visual design will evaporate, but your content will still be addressable and accessible. There aren’t many other platforms that can offer such a robust level of fault tolerance.

This is no accident. The web stack is rooted in the robustness principle, also known as Postel’s Law: be conservative in what you send; be liberal in what you accept. If you serve an HTML document to a browser, and that document contains some tags or attributes that the browser doesn’t understand, the browser will simply ignore them and render the document as best it can. If you supply a style sheet that contains a selector or rule that the browser doesn’t recognise, it will simply pass it over and continue rendering.

In fact, the most brittle part of the stack is JavaScript. While it’s far looser and more forgiving than many other programming languages, it’s still a programming language and that means that a simple typo could potentially cause an entire script to fail in a browser.

That’s why I’m so surprised that any front-end engineer would knowingly choose to swap out a solid declarative foundation like HTML for a more brittle scripting language. Or, as Simon put it:

Gizmodo launches redesign, is no longer a website (try visiting with JS disabled): http://gizmodo.com/

Read Mike’s article, re-read this article on URL design and listen to what John Resig has to say in this interview.


In the latest ruckus around search results, a spokesman for Google proudly declaims that the company is sticking to its principles:

Google views the integrity of our search results as an extremely important priority.

Accordingly, we do not remove a page from our search results simply because its content is unpopular or because we receive complaints concerning it.

Oh, really?

If you change the word “complaint” to “DMCA takedown notice” then it’s a different story.

So the lesson is: complaining about the offensive nature of a search result will get you nowhere but sending a spurious legal claim will result in immediate action.

Burge Pitch Torrent

Hanlon’s razor entreats us to:

Never attribute to malice that which can be adequately explained by stupidity.

While Clarke’s third law states that:

Any sufficiently advanced technology is indistinguishable from magic.

Mash them up and that leaves us with Grey’s law:

Any sufficiently advanced incompetence is indistinguishable from malice.

I have experienced a textbook case of extremely advanced incompetence.

When I wrote about the shenanigans of Perfect Pitch, in which a DMCA claim was used to remove a perfectly innocuous discussion on The Session from Google’s search index, I pondered whether malice could be ascribed:

Could it be that the owner of perfectpitch.com sent a DMCA complaint to Google simply because another site was getting higher rankings for the phrase “perfect pitch”? If so, then that’s a whole new level of SEO snake-oilery.

It turns out that the real cause is incompetence at a stunning level. I got an email from Gary Boucherle at perfectpitch.com who explained:

Periodically we’ve contacted Google to submit the following complaint:

We believe our copyrighted works have been illegally copied and made available for free download at the web sites listed below …

The following URL was one of hundreds of URLS (mostly torrent sites) found with the Google search terms Burge+Pitch+Torrent:


While we try to check every URL to make sure it either contains torrents, or is a torrent file sharing site (not the case with this site), it was included with our complaint inadvertently.

So here’s what’s happening: A company is doing a search for a phrase on Google, making a list of all the URLs returned by that search and then submitting that list to Google as part of a DMCA claim. Google then removes all those URLs from its search index without verifying any infringement.

I subsequently had a phone conversation with Gary and he was quite contrite about his actions—although he did try to claim that the mere mention of torrents in an online discussion might be justification for a takedown (a completely indefensible attitude).

The more I talked to him, the more I realised that he simply had no idea about the DMCA. He was completely oblivious to the potential consequences of his actions were he to lose a counter-claim in court.

Gary Boucherle abused a piece of extremely poor legislation in a scattergun approach without even understanding what he was doing. It’s like putting guns into the hands of small children.

Well, Gary is very sorry now and promises he won’t do it again. He is going to contact Google and ask them to reinstate the discussion on The Session in the search index.

Here is an official statement of apology, sent by email for redistribution here or anywhere else (try to ignore the bits where The Session is referred to as “a blog”):

To All Readers:

Here at PerfectPitch.com we made a big mistake.

We instructed Google to block a blog site managed by Jeremy Keith, citing that they were in violation of the Digital Millennium Copyrights Act (DMCA). As per our request, Google did indeed remove this page from their search listings.

We wish to formally apologize to Mr. Keith and his bloggers for this mistake, for which we are deeply regretful.

Please understand that we had no intention whatsoever to suppress the speech on Mr. Keith’s page. Please know that we are ardent supporters and advocates of free speech for everyone.

We recognize this was a careless error, and there is really no excuse for this. Nevertheless, please permit us a moment to explain.

Here’s what happened:

We were actually submitting to Google a list of sites that were illegally distributing copies of our copyrighted intellectual property. We of course have every right to request that Google have these sites removed from their search engine results because we believe these sites violate the DMCA, which prohibits the illegal distribution of copyrighted materials over the internet.

To our shock and horror, an employee of ours mistakenly included Mr. Keith’s site in our list, merely because it made a reference to illegal copies of our course. Naturally, this is not grounds for removal of this page at Google. Our intention was only to remove actual pages where the course is being illegally distributed, and not any pages of free speech, such as Mr. Keith’s blog. This was a misjudgment and error on our employee’s side, and on behalf of our company, we sincerely apologize.

This event has never happened to us before when reporting illegal distribution of our materials. Please rest assured that we will redouble our efforts to ensure this never happens again.

We have requested that Google immediately reinstate this page in their search results, along with our apology to Google as well.

If we have offended any potential musicians who wished to purchase our best-selling, university verified ear training methods, again, we sincerely apologize. To make it up to you, we would invite you to try our courses at a substantial discount not offered to the general public, valid until the end of this month. Please go here to retrieve your special offer with our apologies:


Again, please accept our sincere regrets for this goof.

Happy blogging, everyone.

Sincerely yours,
Gary Boucherle

Contact: http://www.PerfectPitch.com/contactus.htm

Apology accepted. Now don’t do it again.

No word from Google on their “obey first, ask questions never” approach to DMCA claims.

And if you thought the DMCA was a bad piece of legislation, just wait till ACTA arrives.

Now, if you’ll excuse me, I’m off to change my email signature to “Burge Pitch Torrent.” It has a nice ring to it, don’t you think? Like “”.