


Progressive web app store

Remember when Chrome developers decided to remove the “add to home screen” prompt for progressive web apps that used display: browser in their manifest files? I wasn’t happy.
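
For context, that display value lives in the web app manifest—the JSON file that describes how a site should behave when installed. A minimal sketch, with a hypothetical name and icon path:

    {
      "name": "Example App",
      "short_name": "Example",
      "start_url": "/",
      "display": "browser",
      "icons": [
        { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
      ]
    }

A display value of standalone or fullscreen asks for an app-like window; browser says "keep the normal browser UI"—and that's the value Chrome decided not to prompt for.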

Alex wrote about their plans to offer URL access for all installed progressive web apps, regardless of what’s in the manifest file. I look forward to that. In the meantime, it makes no sense to punish the developers who want to give users access to URLs.

Alex has acknowledged the cart-before-horse-putting, and written a follow-up post called PWA Discovery: You Ain’t Seen Nothin Yet:

The browser’s goal is clear: create a hurdle tall enough that only sites that meet user expectations of “appyness” will be prompted for. Maybe Chrome’s version of this isn’t great! Feedback like Ada’s, Andrew’s, and Jeremy’s is helpful in letting us know how to improve. Thankfully, in most of the cases flagged so far, we’ve anticipated the concerns but maybe haven’t communicated our thinking as well as we should have. This is entirely my fault. This post is my penance.

It turns out that the home-screen prompt was just the first stab. There’s a really interesting idea Alex talks about called “ambient badging”:

Wouldn’t it be great if there were a button in the URL bar that appeared whenever you landed on a PWA that you could always tap to save it to your homescreen? A button that showed up in the top-level UI only when on a PWA? Something that didn’t require digging through menus and guessing about “is this thing going to work well when launched from the homescreen?”

I really, really like this idea. It kind of reminds me of when browsers would flag up whether or not a website had an RSS feed, and allow you to subscribe right then and there.

Hold that thought. Because if you remember the history of RSS, it ended up thriving and withering based on the fortunes of one single RSS reader.

Whenever the discoverability of progressive web apps comes up, the notion of an app store for the web is inevitably floated. Someone raised it as a question at one of the Google I/O panels: shouldn’t Google provide some kind of app store for progressive web apps? …to which Jake cheekily answered that yes, Google should create some kind of engine that would allow people to search for these web apps.

He’s got a point. Progressive web apps live on the web, so any existing discovery method on the web will work just fine. Remy came to a similar conclusion:

Progressive web apps allow users to truly “visit our URL to install our app”.

Also, I find it kind of odd that people think that it needs to be a company the size of Google that would need to build any kind of progressive web app store. It’s the web! Anybody can build whatever they want, without asking anyone else for permission.

So if you’re the entrepreneurial type, and you’re looking for the next Big Idea to make a startup out of, I’ve got one for you:

Build a directory of progressive web apps.

Call it a store if you want. Or a marketplace. Heck, you could even call it a portal, because, let’s face it, that’s kind of what app stores are.

Opera have already built you a prototype. It’s basic but it already has a bit of categorisation. As progressive web apps get more common though, what we’re really going to need is curation. Again, there’s no reason to wait for somebody else—Google, Opera, whoever—to build this.

Oh, I guess I should provide a business model too. Hmmm …let me think. Advertising masquerading as “featured apps”? I dunno—I haven’t really thought this through.

Anyway, you might be thinking, what will happen if someone beats you to it? Well, so what? People will come to your progressive web app directory because of your curation. It’s actually a good thing if they have alternatives. We don’t want a repeat of the Google Reader situation.

It’s hard to recall now, but there was a time when there wasn’t one dominant search engine. There’s nothing inevitable about Google “owning” search or Facebook “owning” social networking. In fact, they both came out of an environment of healthy competition, and crucially neither of them were first to market. If that mattered, we’d all still be using Yahoo and Friendster.

So go ahead and build that progressive web app store. I’m serious. It will, of course, need to be a progressive web app itself so that people can install it to their home screens and perhaps even peruse your curated collection when they’re offline. I could imagine that people might even end up with multiple progressive web app stores added to their home screens. It might even get out of control after a while. There’d need to be some kind of curation to help people figure out the best directory for them. Which brings me to my next business idea:

Build a directory of directories of progressive web apps…
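
Incidentally, the offline browsing I mentioned is exactly what a service worker buys you. A minimal, hypothetical sketch—the cache name and file list are made up:

    // sw.js — pre-cache the directory's core pages and serve them cache-first.
    const CACHE = 'pwa-directory-v1';
    const ASSETS = ['/', '/index.html', '/styles.css', '/apps.json'];

    addEventListener('install', (event) => {
      // While online, stash the core assets for later.
      event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
    });

    addEventListener('fetch', (event) => {
      // Serve from the cache first, falling back to the network.
      event.respondWith(
        caches.match(event.request).then((cached) => cached || fetch(event.request))
      );
    });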

On The Verge

Quite a few people have been linking to an article on The Verge with the inflammatory title The mobile web sucks. In it, Nilay Patel heaps blame upon mobile browsers, Safari in particular:

But man, the web browsers on phones are terrible. They are an abomination of bad user experience, poor performance, and overall disdain for the open web that kicked off the modern tech revolution.

Les Orchard says what we’re all thinking in his detailed response The Verge’s web sucks:

Calling out browser makers for the performance of sites like his? That’s a bit much.

Nilay does acknowledge that the Verge could do better:

Now, I happen to work at a media company, and I happen to run a website that can be bloated and slow. Some of this is our fault: The Verge is ultra-complicated, we have huge images, and we serve ads from our own direct sales and a variety of programmatic networks.

But still, it sounds like the buck is being passed along. The performance issues are being treated as Somebody Else’s Problem …ad networks, trackers, etc.

The developers at Vox Media take a different, and in my opinion, more correct view. They’re declaring performance bankruptcy:

I mean, let’s cut to the chase here… our sites are friggin’ slow, okay!

But I worry about how they can possibly reconcile their desire for a faster website with a culture that accepts enormously bloated ads and trackers as the inevitable price of doing business on the web.

I’m hearing an awful lot of false dichotomies here: either you can have a performant website or you have a business model based on advertising.

And here’s another: if the message coming down from above is that performance concerns and business concerns are fundamentally at odds, then I just don’t know how the developers are ever going to create a culture of performance (which is a real shame, because they sound like a great bunch). It’s a particularly bizarre false dichotomy to be foisting when you consider that all the evidence points to performance as being a key differentiator when it comes to making moolah.

It’s funny, but I take almost the opposite view that Nilay puts forth in his original article. Instead of thinking “Oh, why won’t these awful browsers improve to be better at delivering our websites?”, I tend to think “Oh, why won’t these awful websites improve to be better at taking advantage of our browsers?” After all, it doesn’t seem like that long ago that web browsers on mobile really were awful; incapable of rendering the “real” web, instead only able to deal with WAP.

As Maciej says in his magnificent presentation Web Design: The First 100 Years:

As soon as a system shows signs of performance, developers will add enough abstraction to make it borderline unusable. Software forever remains at the limits of what people will put up with. Developers and designers together create overweight systems in hopes that the hardware will catch up in time and cover their mistakes.

We complained for years that browsers couldn’t do layout and javascript consistently. As soon as that got fixed, we got busy writing libraries that reimplemented the browser within itself, only slower.

I fear that if Nilay got his wish and mobile browsers made a quantum leap in performance tomorrow, the result would be even more bloated JavaScript for even more ads and trackers on websites like The Verge.

If anything, browser makers might have to take more drastic steps to route around the damage of bloated websites with invasive tracking.

We’ve been here before. When JavaScript first landed in web browsers, it was quickly adopted for three primary use cases:

  1. swapping out images when the user moused over a link,
  2. doing really bad client-side form validation, and
  3. spawning pop-up windows.

The first use case was so popular, it was moved from a procedural language (JavaScript) to a declarative language (CSS). The second use case is still with us today. The third use case was solved by browsers. They added a preference to block unwanted pop-ups.
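To make that migration concrete, here’s a sketch of the old procedural rollover alongside its declarative replacement (the image file names are made up):

    <!-- Then: JavaScript swaps the image on mouseover. -->
    <a href="/home"
       onmouseover="document.getElementById('nav').src = 'nav-over.gif'"
       onmouseout="document.getElementById('nav').src = 'nav.gif'">
      <img id="nav" src="nav.gif" alt="Home">
    </a>

    <!-- Now: CSS describes both states declaratively. -->
    <style>
      .nav { background-image: url('nav.gif'); }
      .nav:hover { background-image: url('nav-over.gif'); }
    </style>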

Tracking and advertising scripts are today’s equivalent of pop-up windows. There are already plenty of tools out there to route around their damage: Ghostery, Adblock Plus, etc., along with tools like Instapaper, Readability, and Pocket.

I’m sure that business owners felt the same way about pop-up ads back in the late ’90s. Just the price of doing business. Shrug shoulders. Just the way things are. Nothing we can do to change that.

For such a young, supposedly-innovative industry, I’m often amazed at what people choose to treat as immovable, unchangeable, carved-in-stone issues. Bloated, invasive ad tracking isn’t a law of nature. It’s a choice. We can choose to change.

Every bloated advertising and tracking script on a website was added by a person. What if that person refused? I guess that person would be fired and another person would be told to add the script. What if that person refused? What if we had a web developer picket line that we collectively refused to cross?

That’s an unrealistic, drastic suggestion. But the way that the web is being destroyed by our collective culpability calls for drastic measures.

By the way, the pop-up ad was first created by Ethan Zuckerman. He has since apologised. What will you be apologising for in decades to come?

A map to build by

The fifth and final Build has just wrapped up in Belfast. As always, it delivered an excellent day of thought-provoking talks.

It felt like some themes emerged, not just from this year, but from the arc of the last five years. More than one speaker tapped into a feeling that I’ve had for a while that the web has changed. The web has grown up. Unfortunately, it has grown up to be kind of a dickhead.

There were many times during the day’s talks at Build that I was reminded of Anil Dash’s The Web We Lost. Both Jason and Frank pointed to the imbalance of power on the web, where the bottom line has become more important than the user. It’s a landscape dominated by The Stacks—Google, Facebook, et al.—and by fly-by-night companies who have no interest in being good web citizens, and even less interest in the data that they’re sucking from their users.

Don’t get me wrong: I’m not saying that companies shouldn’t be interested in making money—that’s what companies do. But prioritising profit above all else is not going to result in a stable society. And the web is very much part of the fabric of society now. Still, the web is young enough to have escaped the kind of regulation that “real world” companies would be subjected to. Again, don’t get me wrong: I don’t want top-down regulation. What I want is some common standards of decency amongst web companies. If the web ends up getting regulated because of repeated acts of abuse, it will be a tragedy of the commons on an unprecedented scale.

I realise that sounds very gloomy and doomy, and I don’t want to give the impression that Build was a downer—it really wasn’t. As the last ever speaker at Build, Frank ended on a note of optimism. Sure, the way we think about the web now is filled with negative connotations: it appears money-grabbing, shallow, and locked down. But that doesn’t mean that the web is inherently like that.

Harking back to Ethan’s fantastic talk at last year’s Build, Frank made the point that our map of the web makes it seem a grim place, but the territory of the web isn’t necessarily a lost cause. What we need is a better map. A map of openness, civility, and—something that’s gone missing from the web’s younger days—a touch of wildness.

I take comfort from that. I take comfort from that because we are the map makers. The worst thing that could happen would be for us to fatalistically accept the negative turn that the web has taken as inevitable, as “just the way things are.” If the web has grown up to be a dickhead, it’s because we shaped it that way, either through our own actions or inactions. But the web hasn’t finished growing. We can still shape it. We can make it less of a dickhead. At the very least, we can acknowledge that things can and should be better.

I’m not sure exactly how we go about making a better map for the web. I have a vague feeling that it involves tapping into the kind of spirit that informs places like CERN—the kind of spirit that motivated the creation of the web itself. I have a feeling that making a better map for the web doesn’t involve forming startups and taking venture capital. Neither do I think that a map for a better web will emerge from working at Google, Facebook, Twitter, or any of the current incumbents.

So where do we start? How do we begin to attempt to make a better web without getting overwhelmed by the enormity of the task?

Perhaps the answer comes from one of the other speakers at this year’s Build. In a beautifully-delivered presentation, Paul Soulellis spoke about resistance:

How do we, as an industry of creative professionals, reconcile the fact that so much of what we make is used to perpetuate the demands of a bloated marketplace? A monoculture?

He spoke about resisting the intangible nature of digital work with “thingness”, and resisting the breakneck speed of the network with slowness. Perhaps we need our own acts of resistance if we want to change the map of the web.

I don’t know what those acts of resistance are. Perhaps publishing on your own website is an act of resistance—one that’s more threatening to the big players than they’d like to admit. Perhaps engaging in civil discourse online is an act of resistance.

Like I said, I don’t know. But I really appreciate the way that this year’s Build has pushed me into asking these uncomfortable questions. Like the web, Build has grown up over the years. Unlike the web, Build turned out just fine.

Battle for the planet of the APIs

Back in 2006, I gave a talk at dConstruct called The Joy Of API. It basically involved me geeking out for 45 minutes about how much fun you could have with APIs. This was the era of the mashup—taking data from different sources and scrunching them together to make something new and interesting. It was a good time to be a geek.

Anil Dash did an excellent job of describing that time period in his post The Web We Lost. It’s well worth a read—and his talk at The Berkman Institute is well worth a listen. He described what the situation was like with APIs:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

Times have changed. These days, instead of seeing themselves as part of a wider web, online services see themselves as standalone entities.

So what happened?

Facebook happened.

I don’t mean that Facebook is the root of all evil. If anything, Facebook—a service that started out being based on exclusivity—has become more open over time. That’s the cause of many of its scandals: the mismatch in mental models that Facebook users have built up about how their data will be used versus Facebook’s plans to make that data more available.

No, I’m talking about Facebook as a role model; the template upon which new startups shape themselves.

In the web’s early days, AOL offered an alternative. “You don’t need that wild, chaotic lawless web”, it proclaimed. “We’ve got everything you need right here within our walled garden.”

Of course it didn’t work out for AOL. That proposition just didn’t scale, just like Yahoo’s initial model of maintaining a directory of websites just didn’t scale. The web grew so fast (and was so damn interesting) that no single company could possibly hope to compete with it. So companies stopped trying to compete with it. Instead they, quite rightly, saw themselves as being part of the web. That meant that they didn’t try to do everything. Instead, you built a service that did one thing really well—sharing photos, managing links, blogging—and if you needed to provide your users with some extra functionality, you used the best service available for that, usually through someone else’s API …just as you provided your API to them.

Then Facebook began to grow and grow. I remember the first time someone was showing me Facebook—it was Tantek of all people—I remember asking “But what is it for?” After all, Flickr was for photos, Delicious was for links, Dopplr was for travel. Facebook was for …everything …and nothing.

I just didn’t get it. It seemed crazy that a social network could grow so big just by offering …well, a big social network.

But it did grow. And grow. And grow. And suddenly the AOL business model didn’t seem so crazy anymore. It seemed ahead of its time.

Once Facebook had proven that it was possible to be the one-stop-shop for your users’ every need, that became the model to emulate. Startups stopped seeing themselves as just one part of a bigger web. Now they wanted to be the only service that their users would ever need …just like Facebook.

Seen from that perspective, the open flow of information via APIs—allowing data to flow porously between services—no longer seemed like such a good idea.

Not only have APIs been shut down—see, for example, Google’s shutdown of their Social Graph API—but even the simplest forms of representing structured data have been slashed and burned.

Twitter and Flickr used to mark up their user profile pages with microformats. Your profile page would be marked up with hCard and, if you had a link back to your own site, it would include a rel="me" attribute. Not any more.
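
For anyone who never saw that markup in the wild, it looked something like this—a sketch with hypothetical names and URLs:

    <!-- hCard: plain HTML plus agreed-upon class names. -->
    <div class="vcard">
      <a class="fn url" href="https://twitter.com/example">Example Person</a>
      <!-- rel="me" asserts that this other URL belongs to the same person. -->
      <a href="https://example.com/" rel="me">example.com</a>
    </div>

Any parser that understood the hCard vocabulary could extract a contact card from that, and rel="me" links let services verify site ownership—no API key required.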

Then there’s RSS.

During the Q&A of that 2006 dConstruct talk, somebody asked me about where they should start with providing an API; what’s the baseline? I pointed out that if they were already providing RSS feeds, they already had a kind of simple, read-only API.

Because there’s a standardised format—a list of items, each with a timestamp, a title, a description (maybe), and a link—once you can parse one RSS feed, you can parse them all. It’s kind of remarkable how many mashups can be created simply by using RSS. I remember at the first London Hackday, one of my favourite mashups simply took an RSS feed of the weather forecast for London and combined it with the RSS feed of upcoming ISS flypasts. The result: a Twitter bot that only tweeted when the International Space Station was overhead and the sky was clear. Brilliant!
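
That parse-one-and-you-can-parse-them-all quality is easy to demonstrate. A sketch in browser JavaScript, assuming any standard RSS 2.0 feed served with permissive CORS headers (the URL is a placeholder):

    // Reduce any RSS 2.0 feed to a list of plain objects,
    // using nothing but built-in browser APIs.
    async function parseFeed(url) {
      const xml = await (await fetch(url)).text();
      const doc = new DOMParser().parseFromString(xml, 'text/xml');
      const text = (item, tag) => item.querySelector(tag)?.textContent ?? '';
      return [...doc.querySelectorAll('item')].map((item) => ({
        title: text(item, 'title'),
        link: text(item, 'link'),
        pubDate: text(item, 'pubDate'),
        description: text(item, 'description'),
      }));
    }

    // The same dozen lines work for a blog, a photo stream, or a weather forecast.
    parseFeed('https://example.com/feed.rss').then(console.log);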

Back then, anywhere you found a web page that listed a series of items, you’d expect to find a corresponding RSS feed: blog posts, uploaded photos, status updates, anything really.

That has changed.

Twitter used to provide an RSS feed that corresponded to my HTML timeline. Then they changed the URL of the RSS feed to make it part of the API (and therefore subject to the terms of use of the API). Then they removed RSS feeds entirely.

On the Salter Cane site, I want to display our band’s latest tweets. I used to be able to do that by just grabbing the corresponding RSS feed. Now I’d have to use the API, which is a lot more complex, involving all sorts of authentication gubbins. Even then, according to the terms of use, I wouldn’t be able to display my tweets the way I want to. Yes, how I want to display my own data on my own site is now dictated by Twitter.

Thanks to Jo Brodie I found an alternative service called Twitter RSS that gives me the RSS feed I need, ‘though it’s probably only a matter of time before that gets shut down by Twitter.

Jo’s feelings about Twitter’s anti-RSS policy mirror my own:

I feel a pang of disappointment at the fact that it was really quite easy to use if you knew little about coding, and now it might be a bit harder to do what you easily did before.

That’s the thing. It’s not like RSS is a great format—it isn’t. But it’s just good enough and just versatile enough to enable non-programmers to make something cool. In that respect, it’s kind of like HTML.

The official line from Twitter is that RSS is “infrequently used today.” That’s the same justification that Google has given for shutting down Google Reader. It reminds me of the joke about the shopkeeper responding to a request for something with “Oh, we don’t stock that—there’s no call for it. It’s funny though, you’re the fifth person to ask today.”

RSS is used a lot …but much of the usage is invisible:

RSS is plumbing. It’s used all over the place but you don’t notice it.

That’s from Brent Simmons, who penned a love letter to RSS:

If you subscribe to any podcasts, you use RSS. Flipboard and Twitter are RSS readers, even if it’s not obvious and they do other things besides.

He points out the many strengths of RSS, including its decentralisation:

It’s anti-monopolist. By design it creates a level playing field.

How foolish of us, therefore, that we ended up using Google Reader exclusively to power all our RSS consumption. We took something that was inherently decentralised and we locked it up into one provider. And now that provider is going to screw us over.

I hope we won’t make that mistake again. Because, believe me, RSS is far from dead just because Google and Twitter are threatened by it.

In a post called The True Web, Robin Sloan reiterates the strength of RSS:

It will dip and diminish, but will RSS ever go away? Nah. One of RSS’s weaknesses in its early days—its chaotic decentralized weirdness—has become, in its dotage, a surprising strength. RSS doesn’t route through a single leviathan’s servers. It lacks a kill switch.

I can understand why that power could be seen as a threat if what you are trying to do is force your users to consume their own data only the way that you see fit (and all in the name of “user experience”, I’m sure).

Returning to Anil’s description of the web we lost:

We get a generation of entrepreneurs encouraged to make more narrow-minded, web-hostile products like these because it continues to make a small number of wealthy people even more wealthy, instead of letting lots of people build innovative new opportunities for themselves on top of the web itself.

I think that the presence or absence of an RSS feed (whether I actually use it or not) is a good litmus test for how a service treats my data.

It might be that RSS is the canary in the coal mine for my data on the web.

If those services don’t trust me enough to give me an RSS feed, why should I trust them with my data?

Getting ahead in advertising

One of the other speakers at this year’s Webstock was Matthew Inman. While he was in Wellington, he published a new Oatmeal comic called I tried to watch Game of Thrones and this is what happened.

I can relate to the frustration he describes. I watched most of Game of Thrones while I was in Arizona over Christmas. I say “most” because the final episode was shown on the same day that Jessica and I were flying back to the UK. Once we got back home, we tried to obtain that final episode by legal means. We failed. And so we torrented it …just as described in Matt’s comic.

Andy Ihnatko posted a rebuttal to the Oatmeal called Heavy Hangs The Bandwidth That Torrents The Crown in which he equates Matt’s sense of entitlement to that described by Louis C.K.:

The single least-attractive attribute of many of the people who download content illegally is their smug sense of entitlement.

As Marco Arment points out, Andy might be right but it’s not a very helpful approach to solving the real problem:

Relying solely on yelling about what’s right isn’t a pragmatic approach for the media industry to take. And it’s not working. It’s unrealistic and naïve to expect everyone to do the “right” thing when the alternative is so much easier, faster, cheaper, and better for so many of them.

The pragmatic approach is to address the demand.

I was reminded of this kind of stubborn insistence in defending the old way of doing things while I was thinking about …advertising.

Have a read of this wonderful anecdote called TV Is Broken, which describes the reaction of a young girl, thitherto only familiar with on-demand streaming of time-shifted content, when she is confronted with the experience of watching “regular” television:

“Did it break?”, she asks. It does sometimes happen at home that Flash or Silverlight implode, interrupt her show, and I have to fix it.

“No. It’s just a commercial.”

“What’s a commercial?”, she asks.

“It is like little shows where they tell you about other shows and toys and snacks.”, I explain.


“Well the TV people think you might like to know about this stuff.”

“This is boring! I want to watch Shrek.”

Andy Ihnatko might argue that the young girl needs to sit there and just take the adverts because, hey, that’s the way things have always worked in the past, dagnabbit. Advertising executives would agree. They would, of course, be completely and utterly wrong. Just because something has worked a certain way in the past doesn’t mean it should work that way in the future. If anything, it is the media companies and advertisers who are the ones debilitated by a sense of self-entitlement.

Advertising has always felt strange on the web. It’s an old-world approach that feels out of place bolted onto our new medium. It is being interpreted as damage and routed around. I’m not just talking about ad-blockers. Services like Instapaper and Readability—and, to a certain extent, RSS before them—are allowing people to circumvent the kind of disgustingly dehumanising advertising documented in Merlin’s Noise to Noise Ratio set of screenshots. Those tools are responding to the customers and readers.

There’s been a lot of talk about advertising in responsive design lately—it was one of the talking points at the recent Responsive Summit in London—and that’s great; it’s a thorny problem that needs to be addressed. But it’s one of those issues where, if you look at it deeply enough, keeping the user’s needs in mind, the inevitable conclusion is that it’s a fundamentally flawed approach to interacting with readers/viewers/users/ugly bags of mostly water.

This isn’t specific to responsive design, of course. Cennydd wrote about the fundamental disconnect between user experience and advertising:

Can UX designers make a difference in the advertising field? Possibly. But I see it as a quixotic endeavour, swimming against the tide of a value system that frequently causes the disempowerment of the user.

I realise that in pointing out that advertising is fundamentally shit, I’m not being very helpful and I’m not exactly offering much in the way of solutions or alternatives. But I rail against the idea that we need to accept intrusive online advertising just because “that’s the way things have always been.” There are many constructs—advertising, copyright—that we treat as if they are immutable laws of nature when in fact they may be outmoded business concepts more suited to the last century (if they ever really worked at all).

So when I see the new IAB Display Advertising Guidelines which consist of more of the same shit piled higher and deeper, my immediate reaction is:

“This is boring! I want to watch Shrek.”


Hashbangs. Yes, again. This is important, dammit!

When the topic first surfaced, prompted by Mike’s post on the subject, there was a lot of discussion. For a great impartial round-up, I highly recommend two posts by James Aylett.

There seems to be a general consensus that hashbang URLs are bad. Even those defending the practice portray them as a necessary evil. That is, once a better solution is viable—like the HTML5 History API—then there will no longer be any need for #! in URLs. I’m certain that it’s a matter of when, not if, Twitter switches over.
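
For reference, the History API gives you the same no-reload navigation that hashbangs were a workaround for, but with real URLs. A minimal sketch (the path and renderTweet function are hypothetical):

    // Update the address bar to a real URL without triggering a page load.
    history.pushState({ id: '12345' }, '', '/statuses/12345');

    // Re-render when the user navigates back or forward.
    window.addEventListener('popstate', function (event) {
      if (event.state) {
        renderTweet(event.state.id); // hypothetical rendering function
      }
    });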

But even then, that won’t be the end of the story.

Dan Webb has written a superb long-zoom view on the danger that hashbangs pose to the web:

There’s no such thing as a temporary fix when it comes to URLs. If you introduce a change to your URL scheme you are stuck with it for the foreseeable future. You may internally change your links to fit your new URL scheme but you have no control over the rest of the web that links to your content.

Therein lies the rub. Even if—nay when—Twitter switch over to proper URLs, there will still be many, many blog posts and other documents linking to individual tweets …and each of those links will contain #!. That means that Twitter must make sure that their home page maintains a client-side routing mechanism for inbound hashbang links (remember, the server sees nothing after the # character—the only way to maintain these redirects is with JavaScript).
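
And because the server never sees the fragment, that routing can only ever happen in JavaScript. A sketch of the kind of redirect shim Twitter’s home page would need to keep around indefinitely:

    // The server only ever receives a request for "/" — everything after
    // the # exists purely in the browser, so the rescue has to happen here.
    if (location.hash.indexOf('#!') === 0) {
      // Turn "/#!/foo/status/123" back into a real URL: "/foo/status/123".
      location.replace(location.hash.slice(2));
    }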

As Paul put it in such a wonderfully pictorial way, the web is agreement. Hacks like hashbang URLs—and URL shorteners—weaken that agreement.


In his talk at the Lift conference last year, Kevin Slavin talks about the emergent patterns in algorithmic trading—the bots that buy and sell with one another, occasionally resulting in flash crashes. It’s a great, slightly dark talk and I highly recommend you watch the video.

This is the same territory that Daniel Suarez explored in his book Daemon. The book is (science) fiction but, as Suarez explains in his Long Now seminar, the reality is that much of our day-to-day lives is already governed by algorithms. In fact, the more important the question—e.g. “Will my mortgage be approved?”—the more likely it is that the decision will not be made by a human being.

Daniel Suarez: Daemon: Bot-mediated Reality on Huffduffer

Kevin Slavin mentions that financial algorithms are operating at such a high rate that the speed of light can make a difference to a company’s fortunes, hence the increase in real-estate prices close to network hubs. Now a new paper entitled Relativistic Statistical Arbitrage by Alexander Wissner-Gross and Cameron Freer has gone one further in mapping out “optimal intermediate locations between trading centers,” based on the Earth’s geometry and the speed of light.
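
To get a feel for why geography matters at all, here’s a back-of-the-envelope calculation (distances approximate):

    \[
    t = \frac{d}{c}, \qquad
    t_{\text{London} \to \text{New York}} \approx
    \frac{5{,}570\ \text{km}}{3 \times 10^{5}\ \text{km/s}} \approx 18.6\ \text{ms}
    \]

When algorithms trade against each other in microseconds, a one-way delay measured in tens of milliseconds is an eternity—hence the premium on real estate next to the exchange.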

In his novel Accelerando, Charles Stross charts the evolution of both humans and algorithms before, during and after a technological singularity.

The 2020s:

A marginally intelligent voicemail virus masquerading as an IRS auditor has caused havoc throughout America, garnishing an estimated eighty billion dollars in confiscatory tax withholdings into a numbered Swiss bank account. A different virus is busy hijacking people’s bank accounts, sending ten percent of their assets to the previous victim, then mailing itself to everyone in the current mark’s address book: a self-propelled pyramid scheme in action. Oddly, nobody is complaining much.

The 2040s:

High in orbit around Amalthea, complex financial instruments breed and conjugate. Developed for the express purpose of facilitating trade with the alien intelligences believed to have been detected eight years earlier by SETI, they function equally well as fiscal gatekeepers for space colonies.

The 2060s:

The damnfool human species has finally succeeded in making itself obsolete. The proximate cause of its displacement from the pinnacle of creation (or the pinnacle of teleological self-congratulation, depending on your stance on evolutionary biology) is an attack of self-aware corporations. The phrase “smart money” has taken on a whole new meaning, for the collision between international business law and neurocomputing technology has given rise to a whole new family of species—fast-moving corporate carnivores in the Net.

Going Postel

I wrote a little while back about my feelings on hash-bang URLs:

I feel so disappointed and sad when I see previously-robust URLs swapped out for the fragile #! fragment identifiers. I find it hard to articulate my sadness…

Fortunately, Mike Davies is more articulate than I. He’s written a detailed account of breaking the web with hash-bangs.

It would appear that hash-bang usage is on the rise, despite the fact that it was never intended as a long-term solution. Instead, the pattern (or anti-pattern) was intended as a last resort for crawling Ajax-obfuscated content:

So the #! URL syntax was especially geared for sites that got the fundamental web development best practices horribly wrong, and gave them a lifeline to getting their content seen by Googlebot.

Mike goes into detail on the Gawker outage that was a direct result of its “sites” being little more than single pages that require JavaScript to access anything.

I’m always surprised when I come across a site that deliberately chooses to make its content harder to access.

Though it may not seem like it at times, we’re actually in a pretty great position when it comes to front-end development on the web. As long as we use progressive enhancement, the front-end stack of HTML, CSS, and JavaScript is remarkably resilient. Remove JavaScript and some behavioural enhancements will no longer function, but everything will still be addressable and accessible. Remove CSS and your lovely visual design will evaporate, but your content will still be addressable and accessible. There aren’t many other platforms that can offer such a robust level of fault tolerance.

This is no accident. The web stack is rooted in Postel’s law: be conservative in what you send, be liberal in what you accept. If you serve an HTML document to a browser, and that document contains some tags or attributes that the browser doesn’t understand, the browser will simply ignore them and render the document as best it can. If you supply a style sheet that contains a selector or rule that the browser doesn’t recognise, it will simply pass it over and continue rendering.
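
You can see that liberal error-handling for yourself with a deliberately bogus element and declaration (both invented for illustration):

    <!-- An unknown element: the browser ignores the tag but keeps the content. -->
    <fancy-widget>This text still renders in every browser.</fancy-widget>

    <style>
      p {
        color: #333;
        imaginary-property: rainbows; /* unknown declaration: silently skipped;
                                         the rest of the rule still applies */
      }
    </style>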

In fact, the most brittle part of the stack is JavaScript. While it’s far looser and more forgiving than many other programming languages, it’s still a programming language and that means that a simple typo could potentially cause an entire script to fail in a browser.

That’s why I’m so surprised that any front-end engineer would knowingly choose to swap out a solid declarative foundation like HTML for a more brittle scripting language. Or, as Simon put it:

Gizmodo launches redesign, is no longer a website (try visiting with JS disabled): http://gizmodo.com/

Read Mike’s article, re-read this article on URL design, and listen to what John Resig has to say in this interview.

The Future of Web Apps, day one

I’m spending more time in London than in Brighton this week. After BarCamp London 2 at the weekend I had one day to recover and now I’m back up for the Future of Web Apps conference.

Like last year, the event is being held in the salubrious surroundings of Kensington; normally the home turf of Sloane Rangers, now overrun by geeks. But the geeks here are generally of a different variety to those at BarCamp (although I’m seeing a lot of familiar faces from the weekend).

The emphasis of the conference this time is more on the business side of things than the techy side. It makes sense to focus the event this way, especially now that there’s a separate Future of Web Design conference in a few months. The thing is… I don’t have much of a head for business (to put it mildly) so a lot of the material isn’t really the kind of thing I’m interested in. That’s not to say that it isn’t objectively interesting but from my subjective viewpoint, words like “venture”, “investment” and “business model” tend to put me to sleep.

That said, the presentations today have been less soporific than I feared. There was some good geeky stuff from Werner Vogels of Amazon and Bradley Horowitz of Yahoo, as well as some plain-talkin’ community advice from Tara Hunt.

The big disappointment of the day has been WiFi. Despite the fact that Ryan paid £6,000—remember, he’s not afraid of announcing figures in public—nothin’s doin’. For all the kudos that BT deserve for hosting the second London BarCamp, they lose some karma points for this snafu.

The day ended with Kevin Rose giving the Digg annual report. He left time for some questions so I put this to him:

I see Digg as a technological success and a business success but I think it’s a social failure. That’s because when I read the comments attached to a story, people are behaving like assholes.

At this point, people started applauding. I was mortified! I wasn’t trying to get in a cheap shot at Digg; I had a point to make. So after informing the crowd that there was nothing to applaud, I continued:

This is probably because of the sheer size of the community on Digg. Contrast this to something like Flickr where there are lots and lots of separate groups. My question is; should you be trying to deliberately fragment Digg?

The answer was a resounding “Yes!” and it’s something that he touched on in his talk. Afterwards, I was talking to Daniel Burka and he reckoned that Digg could take a leaf out of Last.fm’s book. The guys from Last.fm had previously talked about all the great features they were able to roll out by mining the wealth of attention data that users are submitting every day. Digg has an equally rich vein of data; they just need to mine it.

Anyway, it was a good day all in all but I feel kind of bad for putting a sour note on the Digg presentation. Plenty of people told me “great question!” but I felt a bit ashamed for putting Kevin on the spot that way.

Still, it’s far preferable to make these points in meatspace. If I had just blogged my concerns, it would have been open to even more misinterpretation. That’s the great thing about conferences: regardless of whether the subject matter is my cup of tea or not, the opportunity to meet and chat with fellow geeks is worth the price of entry.

d.Construct travel news

If you’re planning to come down to d.Construct tomorrow morning on a train from London, you might want to rethink your travel plans:

A train drivers’ union will decide later whether to go ahead with a major strike which is set to cripple services across southern England.

There’s always the bus, though that takes considerably longer. Or you could just come down tonight and go to the pub.

Update: The strike has been called off! Praise Jeebus!

The ugly American

I’m sitting in a big room at XTech 2006 listening to Paul Graham talk about why there aren’t more start-ups in Europe. It’s essentially a Thatcherite screed about why businesses should be able to get away with doing anything they want and treat employees like slaves.

In comparing Europe to the US, Guru Graham points out that the US has a large domestic market. Fair point. The EU — designed to be one big domestic market — suffers, he feels, from the proliferation of languages. However, he also thinks that it won’t be long before Europe is all speaking one language — namely, his. In fact, he said:

Even French and German will go the way of Luxembourgish and Irish — spoken only in kitchens and by eccentric nationalists.

What. A. Wanker.

Update: Just to clarify for the Reddit geeks, here’s some context. I’m from Ireland. I speak Irish, albeit not fluently. I’m calling Paul Graham a wanker because I feel personally insulted by his inflammatory comment about speakers of the Irish language. I’m not insulted by his opinions on start-ups or economics or language death — although I may happen to disagree with him. I’m responding as part of the demographic he insulted. If he just said the Irish language will die out, I wouldn’t have got upset. He crossed a line by insulting a group of people — a group that happened to include someone in the audience he was addressing — instead of simply arguing a point or stating an opinion. In short, he crossed the line from simply being opinionated to being a wanker.