Tags: urls



Regression toward being mean

I highly recommend Remy’s State Of The Gap post—it’s ace. He summarises it like this:

I strongly believe in the concepts behind progressive web apps and even though native hacks (Flash, PhoneGap, etc) will always be ahead, the web, always gets there. Now, today, is an incredibly exciting time to be building on the web.

I agree completely. That might sound odd after I wrote about Regressive Web Apps, but it’s precisely because I’m so excited by the technologies behind progressive web apps that I think it’s vital that we do them justice. As Remy says:

Without HTTPS and without service workers, you can’t add to homescreen. This is an intentionally high bar of entry with damn good reasons.

When the user installs a PWA, it has to work. It’s our job as web developers to provide the most excellent experience for our users.

It has to work.

That’s why I don’t agree with Dion’s metrics for what makes a progressive web app:

If you deliver an experience that only works on mobile is that a PWA? Yes.

I think it’s important to keep quality control high. Being responsive is literally the first item in the list of qualities that help define what a progressive web app is. That’s why I wrote about “regressive” web apps: sites that are supposed to showcase what we can do but instead take a step backwards into the bad old days of separate sites for separate device classes: washingtonpost.com/pwa, m.flipkart.com, lite.5milesapp.com, app.babe.co.id, m.aliexpress.com.

A lot of people on Twitter misinterpreted my post as saying “the current crop of progressive web apps are missing the mark, therefore progressive web apps suck”. What I was hoping to get across was “the current crop of progressive web apps are missing the mark, so let’s make better ones!”

Now, I totally understand that many of these examples are a first stab, a way of testing the waters. I absolutely want to encourage these first attempts and push them further. But I don’t think that waiving the qualifications for progressive web apps helps achieve that. As much as I want to acknowledge the hard work that people have done to create those device-specific examples, I don’t think we should settle for anything less than high-quality progressive web apps that are as much about the web as they are about apps.

Simply put, in this instance, I don’t think good intentions are enough.

Which brings me to the second part of Regressive Web Apps, the bit about Chrome refusing to show the “add to home screen” prompt for sites that want to have their URL still visible when launched from the home screen.

Alex was upset by what I wrote:

if you think the URL is going to get killed on my watch then you aren’t paying any attention whatsoever.

so, your choices are to think that I have a secret plan to kill URLs, or conclude I’m still Team Web.

I’m galled that anyone, particularly you @adactio, would think the former…but contrarianism uber alles?

I am very, very sorry that I upset Alex like this.

But I stand by my criticism of the actions of the Chrome team. Because good intentions are not enough.

I know that Alex is a huge fan of URLs, and of the web. Heck, just about everybody I know that works on Chrome in some capacity is working for the web first and foremost: Alex, Jake, various and sundry Pauls. But that doesn’t mean I’m going to stay quiet when I see the Chrome team do something I think is bad for the web. If anything, it’s precisely because I hold them to a high standard that I’m going to sound the alarm when I see what I consider to be missteps.

I think that good people can make bad decisions with the best of intentions. Usually it involves long-term thinking—something I think is very important. “The ends justify the means” is a way of thinking that can create a lot of immediate pain, even if it means a better future overall. Balancing those concerns is front and centre of the Chromium project:

As browser implementers, we find that there’s often tension between (a) moving the web forward and (b) preserving compatibility. On one hand, the web platform API surface must evolve to stay relevant. On the other hand, the web’s primary strength is its reach, which is largely a function of interoperability.

For example, when Alex talks of the Web Component era as though it were an inevitability, I get nervous. Not for myself, but for the millions of Opera Mini users out there. How do we get to a better future without leaving anyone behind? Or do we sacrifice those people for the greater good? Do the needs of the many outweigh the needs of the few? Do the ends justify the means?

Now, I know for a fact that the end-game that Alex is pursuing with web components—and the extensible web manifesto in general—is a more declarative web: solutions that first get tackled as web components end up landing in browsers. But to get there, the solutions are first created using modern JavaScript that simply doesn’t work everywhere. Is that the price we’re going to have to pay for a better web?

I hope not. I hope we can find ways to have our accessible cake and eat it too. But it will be really, really hard.

Returning to progressive web apps, I was genuinely shocked and appalled at the way that the Chrome team altered the criteria for the “add to home screen” prompt to discourage exposing URLs. I was also surprised at how badly the change was communicated—it was buried in a bug report that five people contributed to before pushing the change. I only found out about it through a conversation with Paul Kinlan. Paul encouraged me to give feedback, and that’s what I did on my website, just like Stuart did on his.

Of course the Chrome team are working on ways of exposing URLs within progressive web apps that are launched from the home screen. Opera are working on it too. But it’s a really tricky problem to solve. It’s not enough to say “we’ll figure it out”. It’s not enough to say “trust us.”

I do trust the people I know working on Chrome. I also trust the people I know at Mozilla, Opera and Microsoft. That doesn’t mean I’m going to let their actions go unquestioned. Good intentions are not enough.

As Alex readily acknowledges, the harder problem (figuring out how to expose URLs) should have been solved first—then the change to the “add to home screen” metrics would be uncontentious. Putting the cart before the horse, discouraging display:browser now, while saying “trust us, we’ll figure it out”, is another example of saying the ends justify the means.

But the stakes are too high here to let this pass. Good intentions are not enough. Knowing that the people working on Chrome (or Firefox, or Opera, or Edge) are good people is not reason enough to passively accept every decision they make.

Alex called me out for not getting in touch with him directly about the Chrome team’s future plans with URLs, but again, that kind of rough consensus to do something is trumped by running code. Also, I did talk to Chrome people—this all came out of a discussion with Paul Kinlan. I don’t know who’s who in the company’s political hierarchy and I don’t think I should need an org chart to give feedback to Google (or Mozilla, or Opera, or Microsoft).

You’ll notice that I didn’t include Apple there. I don’t hold them to the same high standard. As it turns out, I know some very good people at Apple working on WebKit and Safari. As individuals, they care about the web. But as a company, Apple has shown indifference towards web developers. As Remy put it:

Even getting the hint of interest from Apple is a process of dumpster-diving the mailing lists scanning for the smallest hint of interest.

With that in mind, I completely understand Alex’s frustration with my post on “regressive” web apps. Although I intended it as a push towards making better progressive web apps, I can see how it could be taken as confirmation by those who think that progressive web apps aren’t worth investing in. Apple, for example. As it is, they’ll have to be carried kicking and screaming into adding support for Service Workers, manifest files, and other building blocks. From the reaction to my post from at least one WebKit developer on Twitter, not only did I fail to get across just how important the technologies behind progressive web apps are, I may have done more harm than good, giving ammunition to sceptics.

Still, I hope that most people took my words in the right spirit, like Addy:

We should push them to do much better. I’ll file bugs. Per @adactio post, can’t forget the ‘Progressive’ part of PWAs

Seeing that reaction makes me feel good …but seeing Alex’s reaction makes me feel bad. Very bad. I’m genuinely sorry that I made Alex feel that way. It wasn’t my intention but, well …good intentions are not enough.

I’ve been looking back at what I wrote, trying to see it through Alex’s eyes, looking for the parts that could be taken as a personal attack:

Chrome developers have decided that displaying URLs is not “best practice” … To declare that all users of all websites will be confused by seeing a URL is so presumptuous and arrogant that it beggars belief. … Withholding the “add to home screen” prompt like that has a whiff of blackmail about it. … This isn’t the first time that Chrome developers have made a move against the address bar. It’s starting to grind me down.

Some pretty strong words there. I stand by them, but the tone is definitely strident.

When we criticise something—a piece of software, a book, a website, a film, a piece of music—it’s all too easy to forget that there are real people behind it. But that isn’t the case here. I know that there are real people working on Chrome, because I know quite a few of those people. I also know that their intentions are good. That’s not a reason for me to remain silent—that’s a reason for me to speak up.

If I had known that my post was going to upset Alex, would I have still written it? That’s a tough one. On the one hand, this is a topic I care passionately about. I think it’s vital that we don’t compromise on the very things that make the web great. On the other hand, who knows if what I wrote will make the slightest bit of difference? In which case, I got the catharsis of getting it off my chest but at the price of upsetting somebody I respect. That price feels too high.

I love the fact that I can publish whatever I want on my own website. It can be a place for me to be enthusiastic about things that excite me, and a place for me to rant about things that upset me. I estimate that the enthusiastic stuff outnumbers the ranty stuff by about ten to one, but negativity casts a disproportionately large shadow.

I need to get better at tempering my words. Not that I’m going to stop criticising bad decisions when I see them, but I need to make my intentions clearer …because just having good intentions is not enough. Throughout this post, I’ve mentioned repeatedly how much I respect the people I know working on the Chrome team. I should have said that in my original post.

Regressive Web Apps

There were plenty of talks about building for the web at this year’s Google I/O event. That makes a nice change from previous years when the web barely got a look in and you’d be forgiven for thinking that Google I/O was an event for Android app developers.

This year’s event showed just how big Google is, and how it doesn’t have one party line when it comes to the web and native. At the same time as there were talks on Service Workers and performance for the web, there was also an unveiling of Android Instant Apps—a full-frontal assault on the web. If you thought it was annoying when websites door-slammed you with intrusive prompts to install their app, just wait until they don’t need to ask you anymore.

Peter has looked a bit closer at Android Instant Apps and I think he’s as puzzled as I am. Either they are sandboxed to have similar permission models to the web (in which case, why not just use the web?) or they allow more access to native APIs in which case they’re a security nightmare waiting to happen. I’m guessing it’s probably the former.

Meanwhile, a different part of Google is fighting the web’s corner. The buzzword du jour is Progressive Web Apps, originally defined by Alex as:

  • Responsive
  • Connectivity independent
  • App-like-interactions
  • Fresh
  • Safe
  • Discoverable
  • Re-engageable
  • Installable
  • Linkable

A lot of those points are shared by good native apps, but the first and last points in that list are key features of the web: being responsive and linkable.

Alas many of the current examples of so-called Progressive Web Apps are anything but. Flipkart and The Washington Post have made Progressive Web Apps that are getting lots of good press from Google, but are mobile-only.

Looking at most of the examples of Progressive Web Apps, there’s an even more worrying trend than the return to m-dot subdomains. It looks like most of them are concentrating so hard on the “app” part that they’re forgetting about the “web” bit. That means they’re assuming that modern JavaScript is available everywhere.

Alex pointed to shop.polymer-project.org as an example of a Progressive Web App that is responsive as well as being performant and resilient to network failures. It also requires JavaScript (specifically the Polymer polyfill for web components) to render some text and images in a browser. If you’re using the “wrong” browser—like, say, Opera Mini—you get nothing. That’s not progressive. That’s the opposite of progressive. The end result may feel very “app-like” if you’re using an approved browser, but throwing the users of other web browsers under the bus is the very antithesis of what makes the web great. What does it profit a website to gain app-like features if it loses its soul?

I’m getting very concerned that the success criterion for Progressive Web Apps is changing from “best practices on the web” to “feels like native.” That certainly seems to be how many of the current crop of Progressive Web Apps are approaching the architecture of their sites. I think that’s why the app-shell model is the one that so many people are settling on.

Personally, I’m not a fan of the app-shell model. I feel that it prioritises exactly the wrong stuff—the interface is rendered quickly while the content has to wait. It feels weirdly like a hangover from Appcache. I also notice it being used as a get-out-of-jail-free card, much like the ol’ “Single Page App” descriptor; “Ah, I can’t do progressive enhancement because I’m building an app shell/SPA, you see.”

But whatever. That’s just, like, my opinion, man. Other people can build their app-shelled SPAs and meanwhile I’m free to build websites that work everywhere, and still get to use all the great technologies that power Progressive Web Apps. That’s one of the reasons why I’ve been quite excited about them—all the technologies and methodologies they promote match perfectly with my progressive enhancement approach: responsive design, Service Workers, good performance, and all that good stuff.

I hope we’ll see more examples of Progressive Web Apps that don’t require JavaScript to render content, and don’t throw away responsiveness in favour of a return to device-specific silos. But I’m not holding my breath. People seem to be so caught up in the attempt to get native-like functionality that they’re willing to give up the very things that make the web great.

For example, I’ve seen people use a meta viewport declaration to disable pinch-zooming on their sites. As justification they point to the fact that you can’t pinch-zoom in most native apps, therefore this web-based app should also prohibit that action. The inability to pinch-zoom in native apps is a bug. By also removing that functionality from web products, people are reproducing unnecessary bugs. It feels like a cargo-cult approach to building for the web: slavishly copy whatever native is doing …because everyone knows that native apps are superior to websites, right?
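For the record, this is the kind of declaration I'm talking about; it's the user-scalable=no (and maximum-scale=1) part that switches off pinch-zooming. Consider it an example of what not to do:

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">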

Here’s another example of the cargo-cult imitation of native. In your manifest JSON file, you can declare a display property. You can set it to browser, standalone, or fullscreen. If you set it to standalone or fullscreen then, when the site is launched from the home screen, it won’t display the address bar. If you set the display property to browser, the address bar will be visible on launch. Now, personally I like to expose those kind of seams:

The idea of “seamlessness” as a desirable trait in what we design is one that bothers me. Technology has seams. By hiding those seams, we may think we are helping the end user, but we are also making a conscious choice to deceive them (or at least restrict what they can do).

Other people disagree. They think it makes more sense to hide the URL. They have a genuine concern that users will be confused by launching a website from the home screen in a browser (presumably because the user’s particular form of amnesia caused them to forget how that icon ended up on their home screen in the first place).

Fair enough. We’ll agree to differ. They can set their display property how they want, and I can set my display property how I want. It’s a big web after all. There’s no one right or wrong way to do this. That’s why there are multiple options for the values.
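For illustration, a bare-bones manifest along these lines (the name, start_url and icon values here are just placeholders) is all it takes to state that preference:

{
  "name": "Example Site",
  "short_name": "Example",
  "start_url": "/",
  "display": "browser",
  "icons": [
    {
      "src": "/icon-192.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}

Swap that display value for standalone or fullscreen and the address bar disappears when the site is launched from the home screen.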

Or, at least, that was the situation until recently…

Remember when I wrote about how Chrome on Android will show an “add to home screen” prompt if your Progressive Web App fulfils a few criteria?

  • It is served over HTTPS,
  • it has a manifest JSON file,
  • it has a Service Worker, and
  • the user visits it a few times.
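Leaving aside the secure connection and the repeat visits, meeting the manifest and Service Worker criteria can be sketched in a couple of lines (the file names here are placeholders, and sw.js is assumed to contain your actual Service Worker logic):

<link rel="manifest" href="/manifest.json">
<script>
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}
</script>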

Well, those goalposts have moved. There is now a new criterion:

  • Your manifest file must not contain a display value of browser.

Chrome developers have decided that displaying URLs is not “best practice”. It was filed as a bug.

A bug.

Displaying URLs.

A bug.

I’m somewhat flabbergasted by this. The killer feature of the web—URLs—is being treated as something undesirable because URLs aren’t part of native apps. That’s not a failure of the web; that’s a failure of native apps.

Now, don’t get me wrong. I’m not saying that everyone should be setting their display property to browser. That would be far too prescriptive. I’m saying that it should be a choice. It should depend on the website. It should depend on the expectations of the users of that particular website. To declare that all users of all websites will be confused by seeing a URL is so presumptuous and arrogant that it beggars belief.

I wouldn’t even have noticed this change of policy if it weren’t for the newly-released Lighthouse tool for testing Progressive Web Apps. The Session gets a good score but under “Best Practices” there was a red mark against the site for having display: browser. Turns out that’s the official party line from Chrome.

Just to clarify: you can have a site that has literally no HTML or turns away entire classes of devices, yet officially follows “best practices” and gets rewarded with an “add to home screen” prompt. But if you have a blazingly fast responsive site that works offline, you get nothing simply because you don’t want to hide URLs from your users:

I want people to be able to copy URLs. I want people to be able to hack URLs. I’m not ashamed of my URLs …I’m downright proud.

Stuart argues that this is a paternal decision:

The app manifest declares properties of the app, but the display property isn’t about the app; it’s about how the app’s developer wants it to be shown. Do they want to proudly declare that this app is on the web and of the web? Then they’ll add the URL bar. Do they want to conceal that this is actually a web app in order to look more like “native” apps? Then they’ll hide the URL bar.

I think there’s something to that, but digging deeper, developers and designers don’t make decisions like that in isolation. They’re generally thinking about what’s best for users. So, yes, absolutely, different apps will have different display properties, but that shouldn’t be down to the belief system of the developer; it should be down to the needs of the users …the specific needs of the specific users of that specific app. For the Chrome team to come down on one side or the other and arbitrarily declare that one decision is “correct” for every single Progressive Web App that is ever going to be built …that’s a political decision. It kinda feels like an abuse of power to me. Withholding the “add to home screen” prompt like that has a whiff of blackmail about it.

The other factors that contribute to the “add to home screen” prompt are pretty uncontroversial:

  • Sites should be served over a secure connection: that’s pretty hard to argue with.
  • Sites should be resilient to network outages: I don’t think anyone is going to say that’s a bad idea.
  • Sites should provide some metadata in a manifest file: okay, sure, it’s certainly not harmful.
  • Sites should obscure their URL …whoa! That feels like a very, very different requirement, one that imposes one particular opinion onto everyone who wants to participate.

This isn’t the first time that Chrome developers have made a move against the address bar. It’s starting to grind me down.

Up until now I’ve been a big fan of Progressive Web Apps. I understood them to be combining the best of the web (responsiveness, linkability) with the best of native (installable, connectivity independent). Now I see that balance shifting towards the native end of the scale at the expense of the web’s best features. I’d love to see that balance restored with a little less emphasis on the “Apps” and a little more emphasis on the “Web.” Now that would be progressive.


John Gruber quite rightly skewers a paywall-protected “sky is falling” piece in the Wall Street Journal called The Web Is Dying; Apps Are Killing It, writing that native apps are part of the web:

They’re just superior clients to open Internet services.

This is something I wrote about earlier this year:

There’s a whole category of native apps that could just as easily be described as “artisanal web browsers” (and if someone wants to write a browser extension that replaces every mention of “native app” with “artisanal web browser” that would be just peachy).

Instagram’s native app is a web browser.

Facebook’s native app is a web browser.

Twitter’s native app is a web browser.

In that same piece, I try to define exactly what the web is:

Well, the unsexy definition I’ve used in the past is that the web consists of files (e.g. HTML, CSS, JavaScript), accessible at URLs, delivered over HTTP.

John also gives a definition of what the web is:

There are two big four-letter “H” acronyms that powered the web from the beginning: HTML (client), and HTTP (networking protocol). Native apps are just an alternative to HTML running in a web browser (and many native apps still use HTML web views embedded within the apps themselves to render parts of their interface). Almost all native apps use HTTP/S for networking, though.

Notice the difference? Whereas John talks about two things that define the web (HTTP/S and HTML), I talk about three: HTTP(S), HTML, and URLs:

But to be honest, I don’t think that the Hypertext Transfer Protocol is the important part of the web; it’s the URLs that really matter. It’s the addressability of the files that’s the killer app of the web in my opinion.

URLs are what give the web its reach, and that’s what’s still missing from native apps.

But John’s fundamental point that native apps and the web are not fundamentally opposed? I completely agree with that. They are complementary. Irakli Nadareishvili wrote about this false dichotomy recently in a post called Responsive Web Design or Native Mobile Apps?:

Native mobile applications are not going anywhere and the future of all websites is to be responsive. These two assertions are not mutually exclusive, they are complementary – don’t create apps when what you actually need is a website; but also don’t pretend webapps can completely replace native applications, because they can’t.

It’s also worth remembering that even if you’re using a native app—like, say, Facebook or Twitter—you’re still going to spend a lot of time following links and reading stuff that’s rendered in the app, but that lives out on the world wide web. And the reason why those apps can access those resources is because those resources have URLs.

URLs are not an implementation detail. The URI is the thing.


You can listen to an audio version of Seams.

“The function of science fiction,” said Ray Bradbury, “is not only to predict the future, but to prevent it.”

Dystopias are the default setting for science fiction. It’s rare to find utopian sci-fi, and when you do—as in the post-singularity Culture novels of Iain M. Banks—there’s always more than a germ of dystopia; the ustopias that Margaret Atwood speaks of.

You’ve got your political dystopias—1984 and all its imitators. Then there’s alien invasion dystopias, machine-intelligence dystopias, and a whole slew of post-apocalyptic dystopias: nuclear war, pandemic disease, environmental collapse, genetic engineering …take your pick. From the cosy catastrophes of John Wyndham to Cormac McCarthy’s The Road, this is the stock-in-trade of speculative fiction.

Of all these undesirable futures, one that troubles me more than any other is the Wall·E dystopia. I’m not talking about the environmental wasteland depicted on Earth. I mean the ustopia depicted aboard the generation starship The Axiom. Here, humanity’s every need is catered to without requiring any thought. And so humanity atrophies, becoming physically obese and intellectually lazy.

It’s not a new idea. H. G. Wells had already shown us a distant future like this in his classic novel The Time Machine. In the far future of that book’s timeline, humanity splits into two. The savagery of the cannibalistic Morlocks is contrasted with the docile passive stupidity of the Eloi, but as Jaron Lanier points out, both endpoints are equally horrific.

In Wall·E, the Eloi have advanced technology. Their technology has been designed according to a design principle enshrined in the title of a Dead Kennedys album: Give Me Convenience Or Give Me Death.

That’s the reason why the Wall·E dystopia disturbs me so much. It’s all too believable. For many years now, the rallying cry of digital designers has been epitomised by the title of Steve Krug’s terrific book, Don’t Make Me Think. But what happens when that rallying cry is taken too far? What happens when it stops being “don’t make me think while I’m trying to complete a task” to simply “don’t make me think” full stop?

Convenience. Ease of use. Seamlessness.

On the face of it, these all seem like desirable traits in digital and physical products alike. But they come at a price. When we design, we try to do the work so that the user doesn’t have to. We do the thinking so the user doesn’t have to. Don’t make the user think. But taken too far, that mindset becomes dangerous.

Marshall McLuhan said that every extension is also an amputation. As we augment the abilities of people to accomplish their tasks, we should be careful not to needlessly curtail what they can do:

Here we are, a society hell bent on extending our reach through phones, through computers, through “seamless integration” and yet all along the way we’re unwittingly losing perhaps as much as we gain. The mediums we create are built to carry out specific tasks efficiently, but by doing so they have a tendency to restrict our options for accomplishing that task by other means. We begin to learn the “One” way to do it, when in fact there are infinite ways. The medium begins to restrict our thinking, our imagination, our potential.

The idea of “seamlessness” as a desirable trait in what we design is one that bothers me. Technology has seams. By hiding those seams, we may think we are helping the end user, but we are also making a conscious choice to deceive them (or at least restrict what they can do).

I see this a lot in the world of web development. We’re constantly faced with challenges like dealing with users on slow networks or small screens. So we try to come up with solutions (bandwidth media queries, responsive images) that have at their heart an assumption that we know better than the end user what they should get.

I’m not saying that everything should be an option in a menu for the user to figure out—picking smart defaults is very much part of our job. But I do think there’s real value in giving the user the final choice.

I remember Jake giving a good example of this. If he’s travelling and he’s on a 3G network on his phone, or using shitty hotel WiFi on his laptop, and someone sends him a link to a video of some cats, he doesn’t mind if he gets the low-quality version as long as he gets to see the feline shenanigans in short order. But if he’s in the same situation and someone sends him a link to the just-released trailer for the new Star Trek movie, he’s willing to wait for hours so that he can watch in high-definition.

That’s a choice. All too often, these kinds of choices are pre-made by designers and developers instead of being offered to the end user. We probably mean well, but there’s a real danger in assuming that just because someone is using a particular device, we can infer what their context is:

Mind reading is no way to base fundamental content decisions.

My point is that while we don’t want to overwhelm the user with choice overload, we also need to be careful not to unintentionally remove valuable choices that can empower people. In our quest to make experiences seamless, we run the risk of also making those experiences rigid and inflexible.

The drive for a “seamless experience” has been used to justify some harsh amputations. When Twitter declared war on the very developers it used to champion, and changed its API and terms of service so that tweets had to be displayed the same way everywhere, it was done in the name of “a consistent user experience.” Twitter knows best.

The web is made up of parts and there are seams between those parts: HTML, HTTP, and URLs. The software that can expose or hide those seams is the web browser. Web browsers are made by human beings and it’s the mindset and assumptions of those human beings that determines whether web browsers are enabling or disabling users to make use of those seams.

“View source” is a seam that exposes the HTML lying beneath every web page. That kind of X-ray vision can be quite powerful. Clearly it’s not an important feature for most users, but it is directly responsible for showing people how web pages are made …and intimating that anyone can do it. In the introduction to my first book I thanked “view source” along with my other teachers like Jeff Veen, Steve Champeon, and Jeffrey Zeldman.

These days, browsers don’t like to expose “view source” as easily as they once did. It’s hidden amongst the developer tools. There’s an assumption there that it’s not intended for regular users. The browser makers know best.

There are seams between the technologies that make up a web page: HTML, CSS, and JavaScript. The ability to enable or disable those layers can be empowering. It has become harder and harder to disable JavaScript in the browser. Another little amputation. The browser makers know best.

The CSS that styles web pages can be over-ridden by the end user. This is not a bug. It is a very powerful feature. That feature is being removed:

I understand that vendors can do whatever they want to control how you experience the web, because it is their software, their product, but removing user stylesheets feels sooo un-web to me, which is irony. A browser’s largest responsibility is to give people access to the web. It’s like the web is this open hand, but software is this closed fist.

Then there’s the URL. The ultimate seam.

Historically, browsers have exposed this seam, but now—just as with “view source” and user stylesheets—the visibility of the URL is being relegated to being a power-user tool.

The ultimate amputation.

The irony here is that the justification for this change is not the usual mantra of providing “a more seamless user experience.” Instead, the justification is supposedly security.

This strikes me as really strange. Security is the one area where seamlessness is definitely not a desirable characteristic. A secure system requires people to be mindful and aware of their situation. This is certainly true on the web, as Tom points out:

Hiding information away makes me less able to make decisions: it makes me a less informed user.

The whole reason that phishing is a problem is because users don’t pay any bloody attention to what they see in their location bar. Putting less information in the location bar makes the location bar less useful and thus there’s less point paying any attention to it.

Tom has hit on the fundamental mismatch here. Chrome is a piece of software that wants to provide a good user experience—“don’t make me think!”—while at the same time trying to make users mindful of their surroundings:

Security requires educated, pro-active, informed thinking users.

Usability is about making the whole process of using the web seamless and thoughtless: a child should be able to do it.

So from the security standpoint, obfuscating the URL is exactly the wrong thing to do.

In order to actually stay safe online, you need to see the “seams” of the web, you need to pay attention, use your brain.

Chrome knows best.

Making it harder to “view source” might seem like an inconsequential decision. Removing the ability to apply user stylesheets might seem like an inconsequential decision. Heck, even hiding the URL might seem like an inconsequential decision. But each one of those decisions has repercussions. And each one of those decisions reflects an underlying viewpoint.

Make no mistake, all software is political. We talk about opinionated software but really, all software is opinionated, whether we like it or not. Seemingly inconsequential interface decisions are actually reflections of assumptions, biases and beliefs.

As Nat points out, like all political decisions, this is about power:

There’s been much debate about whether the URLs are ‘ugly’ or ‘beautiful’ and whether people really understand them. This debate misses the point.

The URLs are the cornerstone of the interconnected, decentralised web. Removing the URLs from the browser is an attempt to expand and consolidate centralised power.

If that’s the case, then it really doesn’t matter what we think about Chrome removing visible URLs. What appears to be a design decision about the user interface is in fact a manifestation of a much deeper vision. It’s a vision of a future where people can have everything their heart desires without having to expend needless thought. It’s a bright future filled with seamless experiences.

Welcome aboard The Axiom.

Buy n Large knows best.

URLy warning

I’m genuinely shocked that Jake thinks that Chrome hiding URLs is a good thing. On the one hand, he says:

The URL is the share button of the web, and it does that better than any other platform. Linkability and shareability is key to the web, we must never lose that…

I absolutely agree with him there. But I very much disagree when he says:

…and these changes do not lose that.

The method he describes for getting at a URL to share is this:

clicking the origin chip or hitting ⌘-L.

Your average user is no more likely to figure out how to do that than they are to figure out how to view source (something that Chrome buried as a “developer” feature some time ago).

Cennydd recently said of URLs:

I mostly agree with him. The protocol portion of the URL is pretty pointless, and the domain name and TLD are never what I would describe as “beautiful”. No, when I talk about beautiful URLs, I mean the path that comes after the protocol, domain name, and TLD gumpf …the very bit that Chrome is looking to hide.

URLs are universal. They work in Firefox, Chrome, Safari, Internet Explorer, cURL, wget, your iPhone, Android and even written down on sticky notes. They are the one universal syntax of the web. Don’t take that for granted.

URLs are for humans. Design them for humans.

Of course your average user probably won’t even know what a URL is, and nor should they. But they know what a link is. They know that, until now, they could copy the “link” from the top of their browser and paste it into an email, or a text message, or a word processing document.

If this Chrome experiment goes forward, we can kiss all that goodbye.

The security issue that Jake outlines is that browsers need to make the domain name portion of the URL clearly visible. I hope that the smart folks working on Chrome can figure out a way to do that without castrating the browser’s ability to easily share links.

It’s a classic case of:

  1. Something must be done!
  2. This (killing URLs) is something.
  3. Something has been done.

Technically, obfuscating the URL seems to solve the security issue. But technically, decapitation seems to solve a headache.

Scrollin’, scrollin’, scrollin’

A few weeks ago, when I changed how the front page of this site works, I wrote about “streams”, infinite scrolling, and the back button:

Anyway, you’ll notice that the new home page of adactio.com is still using pagination. That’s related to another issue and I suspect that this is the same reason that we haven’t seen search engines like Google introduce stream-like behaviour instead of pagination for search results: what happens when you’ve left a stream but you use the browser’s back button to return to it?

In all likelihood you won’t be returned to the same spot in the stream that you were in before. Instead you’re more likely to be dumped back at the default list view (the first ten or twenty items).

That’s exactly what Kyle Kneath is trying to solve with this nifty experiment with infinite scroll and the HTML5 History API. I should investigate this further. Although, like I said in my post, I would probably replace automatic infinite scrolling with an explicit user-initiated action:

Interpreting one action by a user—scrolling down the screen—as implicit permission to carry out another action—load more items—is a dangerous assumption.

Kyle’s code is available from GitHub (of course). As written, it relies on some library support—like jQuery—but with a little bit of tweaking, I’m sure it could be rewritten to remove any dependencies (hint, hint, lazy web).
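The general idea (and this is a sketch of my own, not Kyle's actual code) is to record how far down the stream you've scrolled in the URL itself, using replaceState, so that coming back to that URL can rebuild the list to the same depth:

// Hypothetical set-up: a #stream element and an items query parameter.
function appendToStream(htmlFragment, totalItemsNowLoaded) {
  document.querySelector('#stream').insertAdjacentHTML('beforeend', htmlFragment);
  // Update the address bar without adding a new history entry.
  history.replaceState({items: totalItemsNowLoaded}, '', '?items=' + totalItemsNowLoaded);
}

Returning to a URL like /?items=40, whether via the back button or a bookmark, the site could then render the first forty items instead of the default ten or twenty.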

URIslands in the stream

For over a decade the home page of this website has effectively been a splash screen.

Well …not exactly. I mean, it’s not as if it contained an animation and a “skip intro” link. But it has been very minimalist: a brief one-sentence explanation of what this website is and a brief one-sentence description of the latest post I’ve published.

Most blogs are standalone sites—i.e. the blog isn’t part of a larger site—so the home page and the latest blog posts are one and the same. My site is a bit different. The blog part—my journal—is just one piece. There’s also the links section and the articles section. That raises an interesting question: exactly what should the homepage contain?

I’m a great believer in well-designed URLs but oftentimes the home page is something of an exception (one of the reasons I advocate designing the home page last). The URL / doesn’t tell you anything about the resource. There are basically two different ways to go: keep it really, really minimal (like I’ve been doing for ten years) or make it a patchwork containing a little bit of everything (the way that most news websites work).

For the past few months I’ve been contemplating making the switch from the minimalist to the maximalist approach for the home page of this site. On the one hand, I think it’s RESTfully correct to have the URL adactio.com/ return a terse description of what the site is, with links to further resources. On the other hand …it’s a splash screen.

After much deliberation, I’ve decided to flip the switch.

It’s a bit of a shame. I quite enjoyed being one of the last people I know to have a quirky intro page. But the new home page alleviates a concern I’ve had for a while. I get the feeling that a lot of people have only been paying attention to what’s written in my journal without realising that I post multiple times a day to the links section. The new homepage shows everything—journal entries, links, and articles—grouped by date. It is, if you like, a stream.

It’s exactly the kind of stream that Anil Dash writes about in Stop Publishing Web Pages:

Most users on the web spend most of their time in apps. The most popular of those apps, like Facebook, Twitter, Gmail, Tumblr and others, are primarily focused on a single, simple stream that offers a river of news which users can easily scroll through, skim over, and click on to read in more depth.

Glossing over the lack of definition for “app”, there’s a good point here. The “stream” idea makes a lot of sense …in the right situation. That situation is the list view. If you think about the situations where the never-ending stream has been employed—Twitter, Facebook, Instagram, Pinterest—they are views of lists, usually reverse-chronologically ordered lists.

Going back to URL design, these kinds of list views are the ones that often have a single URL filtered with a query string. You’re much more likely to see /newest?page=2 or /popular?start=10 than you are to see /newest/page2 or /popular/10-20. There’s a good reason for that. While the kind of resource located at the URL remains unchanged—a list of items—the specifics are likely to change every day or hour or even minute—which items are in the list.

So traditionally list views have been paginated using query strings. The streams that Anil is talking about are an alternate way of navigating list views that does away with pagination and query strings. I think that this way of navigating a list view can work well but, as always, the devil is in the details.

First of all, there’s the issue of when to append to the stream. This could be triggered by the user with a link—“load more”—or you could assume that when the user gets to the current end of the list that they’ll automatically want to load more …the dreaded infinite scroll.

As Frank put it:

Quite apart from any psychological implications, there’s a usability issue here. Interpreting one action by a user—scrolling down the screen—as implicit permission to carry out another action—load more items—is a dangerous assumption. It’s similar to the tyranny of mouseover:

If I click on a link, I am initiating an action. If I fill in a form and press a submit button, I am initiating an action. But if I move my mouse over a page element, I am not initiating an action.

Oh, and if your site has a footer, please do not use infinite scroll. Think about it.

In case you hadn’t guessed, I’m a big proponent of allowing the user to explicitly request more items to be appended to a stream rather than using the infinite-scroll pattern.

That said, you could introduce a nice compromise. What if, when the user scrolls down the screen, you begin pre-fetching the next items in the list? That way, when the user explicitly requests more items they’ll load lickety-split.
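Here's a rough sketch of that compromise; the /newest?page= URL and the element IDs are made up for the sake of illustration:

let nextPage = 2;
let prefetched = null;

function prefetchNextPage() {
  if (!prefetched) {
    prefetched = fetch('/newest?page=' + nextPage).then(function (response) {
      return response.text();
    });
  }
}

// Start fetching quietly once the user gets near the end of the list…
window.addEventListener('scroll', function () {
  if (window.innerHeight + window.scrollY > document.body.offsetHeight - 500) {
    prefetchNextPage();
  }
});

// …but only append the new items when they explicitly ask for them.
document.querySelector('#load-more').addEventListener('click', function (event) {
  event.preventDefault();
  prefetchNextPage();
  prefetched.then(function (html) {
    document.querySelector('#stream').insertAdjacentHTML('beforeend', html);
    nextPage = nextPage + 1;
    prefetched = null;
  });
});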

Anyway, you’ll notice that the new home page of adactio.com is still using pagination. That’s related to another issue and I suspect that this is the same reason that we haven’t seen search engines like Google introduce stream-like behaviour instead of pagination for search results: what happens when you’ve left a stream but you use the browser’s back button to return to it?

In all likelihood you won’t be returned to the same spot in the stream that you were in before. Instead you’re more likely to be dumped back at the default list view (the first ten or twenty items).

If your stream is self-contained—like Instagram or Pinterest—then there’s no problem. Twitter attempts to get around the problem by showing you the linked content inline where possible (with some judicious use of oEmbed) and by opening external links in a new window or tab—not so much the cure that kills the patient as the cure that ignores the problem.

In my case, my home page stream is crammed full of hyperlinks. Until there’s a way to make unpaginated streams work nicely with the back button, I’ll stick with pagination.

Anyway, I hope you like the new home page. If you do, you may want to subscribe to the RSS equivalent of the combined stream of my journal, my links, and my articles.

Of Time and the Network and the Long Bet

When I went to Webstock, I prepared a new presentation called Of Time And The Network:

Our perception and measurement of time has changed as our civilisation has evolved. That change has been driven by networks, from trade routes to the internet.

I was pretty happy with how it turned out. It was a 40 minute talk that was pretty evenly split between the past and the future. The first 20 minutes spanned from 5,000 years ago to the present day. The second 20 minutes looked towards the future, first in years, then decades, and eventually in millennia. I was channeling my inner James Burke for the first half and my inner Jason Scott for the second half, when I went off on a digital preservation rant.

You can watch the video and I had the talk transcribed so you can read the whole thing.

It’s also on Huffduffer, if you’d rather listen to it.


During the talk, I pointed to my prediction on the Long Bets site:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

I made the prediction on February 22nd last year (a terrible day for New Zealand). The prediction will reach fruition on 02022-02-22 …I quite like the alliteration of that date.

Here’s how I justified the prediction:

“Cool URIs don’t change” wrote Tim Berners-Lee in 01999, but link rot is the entropy of the web. The probability of a web document surviving in its original location decreases greatly over time. I suspect that even a relatively short time period (eleven years) is too long for a resource to survive.

Well, during his excellent Webstock talk Matt announced that he would accept the challenge. He writes:

Though much of the web is ephemeral in nature, now that we have surpassed the 20 year mark since the web was created and gone through several booms and busts, technology and strategies have matured to the point where keeping a site going with a stable URI system is within reach of anyone with moderate technological knowledge.

The prediction has now officially been added to the list of bets.

We’re playing for $1000. If I win, that money goes to the Bletchley Park Trust. If Matt wins, it goes to The Internet Archive.

The sysadmin for the Long Bets site is watching this bet with great interest. I am, of course, treating this bet in much the same way that Paul Gilster is treating this optimistic prediction about interstellar travel: I would love to be proved wrong.

The detailed terms of the bet have been set as follows:

On February 22nd, 2022 from 00:01 UTC until 23:59 UTC, either entering the characters http://www.longbets.org/601 into the address bar of a web browser or command line tool (like curl), or using a web browser to follow a hyperlink that points to http://www.longbets.org/601, must return an HTML document that still contains the following text:
“The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.”
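For what it's worth, the check itself is easy to script. Something like this, run in a browser console or any JavaScript environment with fetch, would settle it (this is just my own sketch, not part of the official terms):

const url = 'http://www.longbets.org/601';
const text = 'The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.';

fetch(url)
  .then(function (response) { return response.text(); })
  .then(function (html) {
    console.log(html.includes(text) ? 'Matt wins: The Internet Archive' : 'Jeremy wins: Bletchley Park Trust');
  })
  .catch(function () {
    // If the URL can't be fetched at all, the prediction has come true.
    console.log('Jeremy wins: Bletchley Park Trust');
  });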

The suspense is killing me!

A matter of protocol

The web is made of sugar, spice and all things nice. On closer inspection, this is what most URLs on the web are made of:

  1. The protocol—e.g. http—followed by a colon and two slashes (for which Sir Tim apologises).
  2. The domain—e.g. adactio.com or huffduffer.com.
  3. The path—e.g. /journal/tags/nerdiness or /js/global.js.

(I’m leaving out the whole messy business of port numbers—which can be appended to the domain with a colon—because just about everything on the web is served over the default port 80.)

Most URLs on the web are either written in full as absolute URLs:

<a href="http://adactio.com/journal/tags/nerdiness">
<script src="https://huffduffer.com/js/global.js"></script>

Or else they’re written out relative to the domain, like this:

<a href="/journal/tags/nerdiness">
<script src="/js/global.js"></script>

It turns out that URLs can not only be written relative to the linking document’s domain, but they can also be written relative to the linking document’s protocol:

<a href="//adactio.com/journal/tags/nerdiness">
<script src="//huffduffer.com/js/global.js"></script>

If the linking document is being served over HTTP, then those URLs will point to http://adactio.com/journal/tags/nerdiness and http://huffduffer.com/js/global.js but if the linking document is being served over HTTP Secure, the URLs resolve to https://adactio.com/journal/tags/nerdiness and https://huffduffer.com/js/global.js.

Writing the src attribute relative to the linking document’s protocol is something that Remy is already doing with his HTML5 shim:

<!--[if lt IE 9]>
<script src="//html5shim.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->

If you have a site that is served over both HTTP and HTTPS, and you’re linking to a CDN-hosted JavaScript library—something I highly recommend—then you should probably get in the habit of writing protocol-relative URLs:

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"></script>

This is something that HTML5 Boilerplate does by default. HTML5 Boilerplate really is a great collection of fantastically useful tips and tricks …all wrapped in a terrible, terrible name.


Hashbangs. Yes, again. This is important, dammit!

When the topic first surfaced, prompted by Mike’s post on the subject, there was a lot of discussion. For a great impartial round-up, I highly recommend the two posts by James Aylett.

There seems to be a general consensus that hashbang URLs are bad. Even those defending the practice portray them as a necessary evil. That is, once a better solution is viable—like the HTML5 History API—then there will no longer be any need for #! in URLs. I’m certain that it’s a matter of when, not if, Twitter switches over.

But even then, that won’t be the end of the story.

Dan Webb has written a superb long-zoom view on the danger that hashbangs pose to the web:

There’s no such thing as a temporary fix when it comes to URLs. If you introduce a change to your URL scheme you are stuck with it for the foreseeable future. You may internally change your links to fit your new URL scheme but you have no control over the rest of the web that links to your content.

Therein lies the rub. Even if—nay when—Twitter switch over to proper URLs, there will still be many, many blog posts and other documents linking to individual tweets …and each of those links will contain #!. That means that Twitter must make sure that their home page maintains a client-side routing mechanism for inbound hashbang links (remember, the server sees nothing after the # character—the only way to maintain these redirects is with JavaScript).
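The mechanism itself is simple enough to sketch (this is purely illustrative, not Twitter's actual code); the point is that it has to stay on the home page forever:

// The server never sees anything after the #, so this can only run client-side.
if (location.hash.indexOf('#!') === 0) {
  // Turns /#!/simonw/status/25696723761 into /simonw/status/25696723761
  location.replace(location.hash.slice(2));
}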

As Paul put it in such a wonderfully pictorial way, the web is agreement. Hacks like hashbang URLs—and URL shorteners—weaken that agreement.

The long prep

The secret to a good war movie is not in the depiction of battle, but in the depiction of the preparation for battle. Whether the fight will be for Agincourt, Rorke’s Drift, Helm’s Deep or Hoth, it’s the build-up that draws you in and makes you care about the outcome of the upcoming struggle.

That’s what 2011 has felt like for me so far. I’m about to embark on a series of presentations and workshops in far-flung locations, and I’ve spent the first seven weeks of the year donning my armour and sharpening my rhetorical sword (so to speak). I’ll be talking about HTML5, responsive design, cultural preservation and one web; subjects that are firmly connected in my mind.

It all kicks off in Belgium. I’ll be taking a train that will go under the sea to get me to Ghent, location of the Phare conference. There I’ll be giving a talk called All Our Yesterdays.

This will be a non-technical talk, and I’ve been given carte blanche to get as high-falutin’ and pretentious as I like …though I don’t think it’ll be on quite the same level as my magnum opus from dConstruct 2008, The System Of The World.

Having spent the past month researching and preparing this talk, I’m looking forward to delivering it to a captive audience. I submitted the talk for consideration to South by Southwest also, but it was rejected so the presentation in Ghent will be a one-off. The SXSW rejection may have been because I didn’t whore myself out on Twitter asking for votes, or it may have been because I didn’t title the talk All Our Yesterdays: Ten Ways to Market Your Social Media App Through Digital Preservation.

Talking about the digital memory hole and the fragility of URLs is a permanently-relevant topic, but it seems particularly pertinent given the recent moves by the BBC. But I don’t want to just focus on what’s happening right now—I want to offer a long-zoom perspective on the web’s potential as a long-term storage medium.

To that end, I’ve put my money where my mouth is—$50 worth so far—and placed the following prediction on the Long Bets website:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

If you have faith in the Long Now foundation’s commitment to its URLs, you can challenge my prediction. We shall then agree the terms of the bet. Then, on February 22nd 2022, the charity nominated by the winner will receive the winnings. The minimum bet is $200.

If I win, it will be a pyrrhic victory, confirming my pessimistic assessment.

If I lose, my faith in the potential longevity of URLs will be somewhat restored.

Depending on whether you see the glass as half full or half empty, this means I’m either entering a win/win or lose/lose situation.

Care to place a wager?

Going Postel

I wrote a little while back about my feelings on hash-bang URLs:

I feel so disappointed and sad when I see previously-robust URLs swapped out for the fragile #! fragment identifiers. I find it hard to articulate my sadness…

Fortunately, Mike Davies is more articulate than I. He’s written a detailed account of breaking the web with hash-bangs.

It would appear that hash-bang usage is on the rise, despite the fact that it was never intended as a long-term solution. Instead, the pattern (or anti-pattern) was intended as a last resort for crawling Ajax-obfuscated content:

So the #! URL syntax was especially geared for sites that got the fundamental web development best practices horribly wrong, and gave them a lifeline to getting their content seen by Googlebot.

Mike goes into detail on the Gawker outage that was a direct result of its “sites” being little more than single pages that require JavaScript to access anything.

I’m always surprised when I come across a site that deliberately chooses to make its content harder to access.

Though it may not seem like it at times, we’re actually in a pretty great position when it comes to front-end development on the web. As long as we use progressive enhancement, the front-end stack of HTML, CSS, and JavaScript is remarkably resilient. Remove JavaScript and some behavioural enhancements will no longer function, but everything will still be addressable and accessible. Remove CSS and your lovely visual design will evaporate, but your content will still be addressable and accessible. There aren’t many other platforms that can offer such a robust level of fault tolerance.

This is no accident. The web stack is rooted in Postel’s Law: be liberal in what you accept. If you serve an HTML document to a browser, and that document contains some tags or attributes that the browser doesn’t understand, the browser will simply ignore them and render the document as best it can. If you supply a style sheet that contains a selector or rule that the browser doesn’t recognise, it will simply pass it over and continue rendering.

In fact, the most brittle part of the stack is JavaScript. While it’s far looser and more forgiving than many other programming languages, it’s still a programming language and that means that a simple typo could potentially cause an entire script to fail in a browser.
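To make the contrast concrete (this is just an illustration, not anybody's production code):

<!-- Markup the browser doesn't recognise gets ignored; the rest still renders. -->
<p>This paragraph still renders, <made-up-element>mystery contents and all</made-up-element>.</p>

<script>
// One deliberate typo ("fourEach" instead of "forEach") and this whole script
// throws an error, so nothing after that line will run.
document.querySelectorAll('p').fourEach(function (paragraph) {
  paragraph.classList.add('enhanced');
});
</script>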

That’s why I’m so surprised that any front-end engineer would knowingly choose to swap out a solid declarative foundation like HTML for a more brittle scripting language. Or, as Simon put it:

Gizmodo launches redesign, is no longer a website (try visiting with JS disabled): http://gizmodo.com/

Read Mike’s article, re-read this article on URL design and listen to what John Resig has to say in this interview.


Speaking of URLs

We were having a discussion in the Clearleft office recently about that perennially-tricky navigation pivot: tags. Specifically, we were discussing how to represent the interface for combinatorial tags i.e. displaying results of items that have been tagged with tag A and tag B.

I realised that this was functionality that I wasn’t even offering on Huffduffer, so I set to work on implementing it. I decided to dodge the interface question completely by only offering this functionality through the browser address bar. As a fairly niche, power-user feature, I’m not sure it warrants valuable interface real estate—though I may revisit that challenge later.

I can’t use the + symbol as a tag separator because Huffduffer allows spaces in tags (and spaces are converted to pluses in URLs), so I’ve settled on commas instead.

For example, there are plenty of items tagged with “music” (/tags/music) and plenty of items tagged with “science” (/tags/science) but there’s only a handful of items tagged with both “music” and “science” (/tags/music,science).

This being Huffduffer, where just about every page has corresponding JSON, RSS and Atom representations, you can also subscribe to the podcast of everything tagged with both “music” and “science” (/tags/music,science/rss).

There’s an OR operator as well; the vertical pipe symbol. You can view the 60 items tagged with “html5”, the 14 items tagged with “css3”, or the 66 items tagged with either “html5” or “css3” (/tags/html5|css3).

Wait a minute …66 items? But 60 plus 14 equals 74, not 66!

The discrepancy can be explained by the 8 items tagged with both “css3” and “html5” (/tags/html5,css3).

The AND and OR operators can be combined, so you can find items tagged with either “science” or “religion” that are also tagged with “politics” (/tags/science|religion,politics).
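To spell out how such a path might be interpreted (a sketch of the idea, not Huffduffer's actual implementation): commas act as ANDs between groups, and pipes act as ORs within a group.

function matchesTagExpression(expression, itemTags) {
  // e.g. matchesTagExpression('science|religion,politics', ['religion', 'politics']) === true
  return expression.split(',').every(function (group) {
    return group.split('|').some(function (tag) {
      // Pluses in the URL stand in for spaces in tag names.
      return itemTags.includes(tag.replace(/\+/g, ' '));
    });
  });
}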

While it’s fun to do this in the browser’s address bar, I think the real power is in the way that the corresponding podcast allows you to subscribe to precisely-tailored content. Find just the right combination of tags, click on the RSS link, and you’re basically telling iTunes to automatically download audio whenever there’s something new that matches your chosen criteria.

I’m sure there are plenty of intriguing combinations out there. Now I can use Huffduffer’s URLs to go spelunking for audio gems at the most promising intersections of tags.

The URI is the thing

Here’s what’s on my desk at work: an iMac (with keyboard, mouse and USB cup warmer), some paper, pens, a few books and an A4-sized copy of Paul Downey’s The URI Is The Thing—an intricately-detailed Boschian map of all things RESTful. It’s released under a Creative Commons license, so feel free to download the PDF from archive.org, print it out and keep it on your own desk.

I love good URL design. I found myself nodding vigorously in agreement with just about every point in this great piece on URL design:

URLs are universal. They work in Firefox, Chrome, Safari, Internet Explorer, cURL, wget, your iPhone, Android and even written down on sticky notes. They are the one universal syntax of the web. Don’t take that for granted.

That’s why I feel so disappointed and sad when I see previously-robust URLs swapped out for the fragile #! fragment identifiers. I find it hard to articulate my sadness, but it’s related to what Ben said in his comment to Nicholas’s article on how many users have JavaScript disabled:

The truth is that if site content doesn’t load through curl it’s broken.

Or, as Simon put it:

The Web for me is still URLs and HTML. I don’t want a Web which can only be understood by running a JavaScript interpreter against it.

If I copy and paste the URL of that tweet, I get http://twitter.com/#!/simonw/status/25696723761 …which requires a JavaScript interpreter to resolve.

Fortunately, those fragile Twitter URLs will be replaced with proper robust identifiers if this demo by Twitter engineer Ben Cherry is anything to go by. It’s an illustration of saner HTML5 history management using the history.pushState method.
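The gist of that approach, simplified here into a sketch of my own rather than Ben's actual demo (the data-ajax attribute and #content element are made up, and the server is assumed to return an HTML fragment for these requests), is to keep the Ajax-style page updates but put real URLs in the address bar:

document.addEventListener('click', function (event) {
  // data-ajax is a made-up attribute, just for the sake of this sketch.
  var link = event.target.closest('a[data-ajax]');
  if (!link) { return; }
  event.preventDefault();
  fetch(link.href)
    .then(function (response) { return response.text(); })
    .then(function (html) {
      document.querySelector('#content').innerHTML = html;
      // A real, robust URL in the address bar; no #! required.
      history.pushState(null, '', link.href);
    });
});

// Handle the back and forward buttons by re-fetching whatever URL we land on.
window.addEventListener('popstate', function () {
  fetch(location.href)
    .then(function (response) { return response.text(); })
    .then(function (html) {
      document.querySelector('#content').innerHTML = html;
    });
});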

Beautiful hackery

While I had Matthew in my clutches, I made him show me around the API for They Work For You. Who knew that so much fun could be derived from data about MPs?

First off, there’s Matthew’s game of MP Top Trumps, ‘though he had to call it MP Fab Farts to avoid getting a cease and desist letter.

Then there’s a text adventure built on the API. This is so good! Enter your postcode and you find yourself playing the part of your parliamentary representative with zero experience points and one hundred hit points. You must work your way across the country, doing battle with rival MPs, as you make your way towards Sedgefield, the lair of Blair.

You can play a Web version but for some real old-school fun, try the telnet version. This reminded me of how much I used to love text adventures back in the days of 8-bit computers. I even remember trying to write my own in BASIC.

For what it’s worth, Celia Barlow, MP for Hove, has excellent pesteredness points. I made it all the way up to Sedgefield and defeated Tony Blair in battle. My prize was the source code of the adventure game in Python.

Ah, what larks!

There’s another project that Matthew works on that I find extremely useful. He has created accessible UK train timetables using the data from the National Rail site, a scrAPI if you will. This is where I go whenever I need to plan a train journey.

The latest feature is something that warms the cockles of my heart: beautiful, hackable URLs. If I want a list of trains going from Brighton to London, I can just type:

http://traintimes.org.uk/brighton/london
It handles spaces (or pluses or underscores) too:

http://traintimes.org.uk/brighton/london victoria

The URL can also be extended with a departure time:

http://traintimes.org.uk/brighton/london victoria/14:00

My address bar is my command line. This is the kind of design that makes URL fetishists like Tom very happy.