


Exploring web technologies

Last week, I had two really enjoyable experiences discussing completely opposite ends of the web technology stack.

Tuesday is Codebar day here in Brighton. Clearleft hosted it at 68 Middle Street last week. I really, really enjoy coaching at Codebar. I particularly like teaching the absolute basics of HTML and CSS. There’s something so rewarding about seeing the “a-ha!” moments when concepts click with people. I also love answering the inevitable questions that arise, like “why is it like that?”, or “how do I do this?”

Fantastic coding tonight! Great to see you all. Thanks for coming and thanks @68MiddleSt & @clearleft for having us.

Thursday was devoted to the opposite end of the spectrum. I ran a workshop at Clearleft with some developers from one of our clients. The whole day was dedicated to exploring and evaluating up-and-coming web technologies. Basically, it was a chance to geek out about all the stuff I’ve been linking to or writing about. During the workshop I ended up making a lot of use of my tagging system here on adactio.com:

Prioritising topics for discussion.

Web components and service workers ended up at the top of the list of technologies to tackle, which was fortuitous, given my recent thoughts on comparing the two:

First of all, ask the question “who benefits from this technology?” In the case of service workers, it’s the end users. They get faster websites that handle network failure better. In the case of web components, there are no direct end-user benefits. Web components exist to make developers’ lives easier. That’s absolutely fine, but any developer convenience gained by the use of web components can’t come at the expense of the user—that price is too high.

The next question we usually ask when we’re evaluating a technology is “how well does it work?” Personally, I think it’s just as important to ask “how well does it fail?”

Those two questions turned out to be a good framework for the whole workshop. The question of how to evaluate technologies is something I’ve been thinking about a lot lately. I’m pretty sure it will be what my next conference talk is going to be all about.

You can read more about the structure of the workshop over on the Clearleft site. I’m looking forward to running it again sometime. But I’m equally looking forward to getting back to the basics at the next Codebar.

dConstruct 2015 podcast: Brian David Johnson

The newest dConstruct podcast episode features the indefatigable and effervescent Brian David Johnson. Together we pick apart the futures we are collectively making, probe the algorithmic structures of science fiction narratives, and pay homage to Asimovian robotic legal codes.

Brian’s enthusiasm is infectious. I have a strong hunch that his dConstruct talk will be both thought-provoking and inspiring.

dConstruct 2015 is getting close now. Our future approaches. Interviewing the speakers ahead of time has only increased my excitement and anticipation. I think this is going to be a truly unmissable event. So, uh, don’t miss it.

Grab your ticket today and use the code ‘ansible’ to take advantage of the 10% discount for podcast listeners.


Cennydd points to an article by Ev Williams about the pendulum swing between open and closed technology stacks, and how that pendulum doesn’t always swing back towards openness. Cennydd writes:

We often hear the idea that “open platforms always win in the end”. I’d like that: the implicit values of the web speak to my own. But I don’t see clear evidence of this inevitable supremacy, only beliefs and proclamations.

It’s true. I catch myself saying things like “I believe the open web will win out.” Statements like that worry my inner empiricist. Faith-based outlooks scare me, and rightly so. I like being able to back up my claims with data.

Only time will tell what data emerges about the eventual fate of the web, open or closed. But we can look to previous technologies and draw comparisons. That’s exactly what Tim Wu did in his book The Master Switch and Jonathan Zittrain did in The Future Of The Internet—And How To Stop It. Both make for uncomfortable reading because they challenge my belief. Wu points to radio and television as examples of systems that began as egalitarian decentralised tools that became locked down over time in ever-constricting cycles. Cennydd adds:

I’d argue this becomes something of a one-way valve: once systems become closed, profit potential tends to grow, and profit is a heavy entropy to reverse.

Of course there is always the possibility that this time is different. It may well be that fundamental architectural decisions in the design of the internet and the workings of the web mean that this particular technology has an inherent bias towards openness. There is some data to support this (and it’s an appealing thought), but again, only time will tell. For now it’s just one more supposition.

The real question—when confronted with uncomfortable ideas that challenge what you’d like to believe is true—is what do you do about it? Do you look for evidence to support your beliefs or do you discard your beliefs entirely? That second option looks like the most logical course of action, and it’s certainly one that I would endorse if there were proven facts to be acknowledged (like gravity, evolution, or vaccination). But I worry about mistaking an argument that is still being discussed for an argument that has already been decided.

When I wrote about the dangers of apparently self-evident truisms, I said:

These statements aren’t true. But they are repeated so often, as if they were truisms, that we run the risk of believing them and thus, fulfilling their promise.

That’s my fear. Only time will tell whether the closed or open forces will win the battle for the soul of the internet. But if we believe that centralised, proprietary, capitalistic forces are inherently unstoppable, then our belief will help make them so.

I hope that openness will prevail. Hope sounds like such a wishy-washy word, like “faith” or “belief”, but it carries with it a seed of resistance. Hope, faith, and belief all carry connotations of optimism, but where faith and belief sound passive, even downright complacent, hope carries the promise of action.

Margaret Atwood was asked about the futility of having hope in the face of climate change. She responded:

If we abandon hope, we’re cooked. If we rely on nothing but hope, we’re cooked. So I would say judicious hope is necessary.

Judicious hope. I like that. It feels like a good phrase to balance empiricism with optimism; data with faith.

The alternative is to give up. And if we give up too soon, we bring into being the very endgame we feared.

Cennydd finishes:

Ultimately, I vote for whichever technology most enriches humanity. If that’s the web, great. A closed OS? Sure, so long as it’s a fair value exchange, genuinely beneficial to company and user alike.

This is where we differ. Today’s fair value exchange is tomorrow’s monopoly, just as today’s revolutionary is tomorrow’s tyrant. I will fight against that future.

To side with whatever’s best for the end user sounds like an eminently sensible metric to judge a technology. But I’ve written before about where that mindset can lead us. I can easily imagine Asimov’s three laws of robotics rewritten to reflect the ethos of user-centred design, especially that first and most important principle:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

…rephrased as:

A product or interface may not injure a user or, through inaction, allow a user to come to harm.

Whether the technology driving the system behind that interface is open or closed doesn’t come into it. What matters is the interaction.

But in his later years Asimov revealed the zeroth law, overriding even the first:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

It may sound grandiose to apply this thinking to the trivial interfaces we’re building with today’s technologies, but I think it’s important to keep drilling down and asking uncomfortable questions (even if they challenge our beliefs).

That’s why I think openness matters. It isn’t enough to use whatever technology works right now to deliver the best user experience. If that short-term gain comes with a long-term price tag for our society, it’s not worth it.

I would much rather have an imperfect open system than a perfect proprietary one.

I have hope in an open web …judicious hope.

Star wheels

This list has been making the rounds lately. It’s the list of (probably apocryphal) rules underlying the world of Road Runner and Wile E. Coyote. Design principles if you will. Like “The Road Runner cannot harm the Coyote except by going ‘meep, meep’” and “All tools, weapons, or mechanical conveniences must be obtained from the Acme Corporation.”

These are patterns that we are all subconsciously aware of anyway, but there’s something about seeing them enumerated that makes us go “oh, yeah” in recognition.

This reminds me of a silly idea I had when I was younger. It’s about Star Wars (of course). Specifically it’s about a possible rule—or design principle—underlying the kitbashed used-universe design of that galaxy far, far away.

Now I know this is going to sound crazy, but hear me out…

What if the wheel has never been invented in the world of Star Wars?

It’s probably not a deliberate omission, but we never actually see a single wheel in the original trilogy (the prequels, as always, are another matter entirely). Sure, there are wheels implied under the imperial mouse droid or under R2-D2’s legs but you never actually see them. Even the sandcrawler, which uses tracks, hides its internal workings.

Instead, this is a universe where everything travels via some kind of maglev antigravity even when it seems completely unnecessary—couldn’t you just slap a carbonite Han Solo on a gurney? Whenever a spaceship extends its landing gear we see …skids. Always skids. Never wheels. And what kind of mechanical engineer would actually design something like an AT-AT if it weren’t for a prohibition on wheels?

I know you’re probably thinking “this is so stupid”, but I bet you’re also trying to think of an explicit instance of a wheel in the original trilogy. You may also be feeling a growing urge to watch the films again. And whenever you do end up watching the trilogy again, and you find yourself looking at the undercarriage of every vehicle, you’ll realise that I’ve planted this idea Inception-like in your head.

Anyway, like I said, the prequels put paid to my little theory. I was genuinely disappointed when those droidekas rolled down that corridor. Remember that feeling of “oh, please!” when R2-D2 used his thrusters to fly in Attack Of The Clones? You felt cheated, right? The film was breaking the rules of its own universe. Well, a little part of me felt that way when my silly theory was squashed.

But just go with it here for a minute. Suppose the wheel had never been invented. Would it be possible for a space-faring civilisation to evolve? It’s generally assumed that you’d need to at least invent fire to achieve any kind of mechanical advances, but what about the wheel?

Imagine if George Lucas had actually been playing a design fiction long con. My younger self liked to imagine that lists of instructions were passed around ILM, along the same lines as those Road Runner rules. And one of those instructions would’ve been the cryptic injunction against showing wheels in any vehicle designs. Then imagine what it would have been like if, decades later, Lucas casually dropped the bombshell that the wheel was never invented in this galaxy far, far away. It would’ve blown. Our. Minds.

Ah, but it was just a dream. A crazy, apopheniac dream.

Security for all

Throughout the Brighton Digital Festival, Lighthouse Arts will be exhibiting a project from Julian Oliver and Danja Vasiliev called Newstweek. If you’re in town for dConstruct—and you should be—you ought to stop by and check it out.

It’s a mischievous little hardware hack intended for use in places with public WiFi. If you’ve got a Newstweek device, you can alter the content of web pages like, say, BBC News. Cheeky!

There’s one catch though. Newstweek works on http:// domains, not https://. This is exactly the scenario that Jake has been talking about:

SSL is also useful to ensure the data you’re receiving hasn’t been tampered with. It’s not just for user->server stuff

eg, when you visit http://www.theguardian.com/uk , you don’t really know it hasn’t been modified to tell a different story

There’s another good reason for switching to TLS. It would make life harder for GCHQ and the NSA—not impossible, but harder. It’s not a panacea, but it would help make our collectively-held network more secure, as per RFC 7258 from the Internet Engineering Task Force:

Pervasive monitoring is a technical attack that should be mitigated in the design of IETF protocols, where possible.

I’m all for using https:// instead of http:// but there’s a problem. It’s bloody difficult!

If you’re a sysadmin type that lives in the command line, then it’s probably not difficult at all. But for the rest of us mere mortals who just want to publish something on the web, it’s intimidatingly daunting.

Tim Bray says:

It’ll cost you <$100/yr plus a half-hour of server reconfiguration. I don’t see any excuse not to.

…but then, he also thought that anyone who can’t make a syndication feed that’s well-formed XML is an incompetent fool (whereas I ended up creating an entire service to save people from having to make RSS feeds by hand).

Google are now making SSL a ranking factor in their search results, which is their prerogative. If it results in worse search results, other search engines are available. But I don’t think it will have significant impact. Jake again:

if two pages have equal ranking except one is served securely, which do you think should appear first in results?

Ashe Dryden disagrees:

Google will be promoting SSL sites above those without, effectively doing the exact same thing we’re upset about the lack of net neutrality.

I don’t think that’s quite fair: if Google were an ISP slowing down http:// requests, that would be extremely worrying, but tweaking its already-opaque search algorithm isn’t quite the same.

Mind you, I do like this suggestion:

I think if Google is going to penalize you for not having SSL they should become a CA and issue free certs.

I’m more concerned by the discussions at Chrome and Mozilla about flagging up http:// connections as unsafe. While the approach is technically correct, I fear it could have the opposite of its intended effect. With so many sites still served over http://, users would be bombarded with constant messages of unsafe connections. Before long they would develop security blindness in much the same way that we’ve all developed banner-ad blindness.

My main issue—apart from the fact that I personally don’t have the necessary smarts to enable TLS—is related to what Ashe is concerned about:

Businesses and individuals who both know about and can afford to have SSL in place will be ranked above those who don’t/can’t.

I strongly believe that anyone should be able to publish on the web. That’s one of the reasons why I don’t share my fellow developers’ zeal for moving everything to JavaScript; I want anybody—not just programmers—to be able to share what they know. Hence my preference for simpler declarative languages like HTML and CSS (and my belief that they should remain simple and learnable).

It’s already too damn complex to register a domain and host a website. Adding one more roadblock isn’t going to help that situation. Just ask Drew and Rachel what it’s like trying to just make sure that their customers have a version of PHP from this decade.

I want a secure web. I’d really like the web to be https:// only. But until we get there, I really don’t like the thought of the web being divided into the haves and have-nots.


There is an enormous opportunity here, as John pointed out on a recent episode of The Web Ahead. Getting TLS set up is a pain point for a lot of people, not just me. Where there’s pain, there’s an opportunity to provide a service that removes the pain. Services like Squarespace are already taking the pain out of setting up a website. I’d like to see somebody provide a TLS valet service.

(And before you rush to tell me about the super-easy SSL-setup tutorial you know about, please stop and think about whether it’s actually more like this.)

I’m looking forward to switching my website over to https:// but I’m not going to do it until the potential pain level drops.

For all of you budding entrepreneurs looking for the next big thing to “disrupt”, please consider making your money not from the gold rush itself, but from providing the shovels.


You can listen to an audio version of Seams.

“The function of science fiction,” said Ray Bradbury, “is not only to predict the future, but to prevent it.”

Dystopias are the default setting for science fiction. It’s rare to find utopian sci-fi, and when you do—as in the post-singularity Culture novels of Iain M. Banks—there’s always more than a germ of dystopia: the ustopias that Margaret Atwood speaks of.

You’ve got your political dystopias—1984 and all its imitators. Then there are alien invasion dystopias, machine-intelligence dystopias, and a whole slew of post-apocalyptic dystopias: nuclear war, pandemic disease, environmental collapse, genetic engineering …take your pick. From the cosy catastrophes of John Wyndham to Cormac McCarthy’s The Road, this is the stock in trade of speculative fiction.

Of all these undesirable futures, the one that troubles me more than any other is the Wall·E dystopia. I’m not talking about the environmental wasteland depicted on Earth. I mean the ustopia depicted aboard the generation starship The Axiom. Here, humanity’s every need is catered to without requiring any thought. And so humanity atrophies, becoming physically obese and intellectually lazy.

It’s not a new idea. H. G. Wells had already shown us a distant future like this in his classic novel The Time Machine. In the far future of that book’s timeline, humanity splits into two. The savagery of the cannibalistic Morlocks is contrasted with the docile, passive stupidity of the Eloi, but as Jaron Lanier points out, both endpoints are equally horrific.

In Wall·E, the Eloi have advanced technology. Their technology has been designed according to a design principle enshrined in the title of a Dead Kennedys album: Give Me Convenience Or Give Me Death.

That’s the reason why the Wall·E dystopia disturbs me so much. It’s all too believable. For many years now, the rallying cry of digital designers has been epitomised by the title of Steve Krug’s terrific book, Don’t Make Me Think. But what happens when that rallying cry is taken too far? What happens when it stops being “don’t make me think while I’m trying to complete a task” and becomes simply “don’t make me think”, full stop?

Convenience. Ease of use. Seamlessness.

On the face of it, these all seem like desirable traits in digital and physical products alike. But they come at a price. When we design, we try to do the work so that the user doesn’t have to. We do the thinking so the user doesn’t have to. Don’t make the user think. But taken too far, that mindset becomes dangerous.

Marshall McLuhan said that every extension is also an amputation. As we augment the abilities of people to accomplish their tasks, we should be careful not to needlessly curtail what they can do:

Here we are, a society hell bent on extending our reach through phones, through computers, through “seamless integration” and yet all along the way we’re unwittingly losing perhaps as much as we gain. The mediums we create are built to carry out specific tasks efficiently, but by doing so they have a tendency to restrict our options for accomplishing that task by other means. We begin to learn the “One” way to do it, when in fact there are infinite ways. The medium begins to restrict our thinking, our imagination, our potential.

The idea of “seamlessness” as a desirable trait in what we design is one that bothers me. Technology has seams. By hiding those seams, we may think we are helping the end user, but we are also making a conscious choice to deceive them (or at least restrict what they can do).

I see this a lot in the world of web development. We’re constantly faced with challenges like dealing with users on slow networks or small screens. So we try to come up with solutions (bandwidth media queries, responsive images) that have at their heart an assumption that we know better than the end user what they should get.

I’m not saying that everything should be an option in a menu for the user to figure out—picking smart defaults is very much part of our job. But I do think there’s real value in giving the user the final choice.

I remember Jake giving a good example of this. If he’s travelling and he’s on a 3G network on his phone, or using shitty hotel WiFi on his laptop, and someone sends him a link to a video of some cats, he doesn’t mind if he gets the low-quality version as long as he gets to see the feline shenanigans in short order. But if he’s in the same situation and someone sends him a link to the just-released trailer for the new Star Trek movie, he’s willing to wait for hours so that he can watch in high-definition.

That’s a choice. All too often, these kinds of choices are pre-made by designers and developers instead of being offered to the end user. We probably mean well, but there’s a real danger in assuming that just because someone is using a particular device, we can infer what their context is:

Mind reading is no way to base fundamental content decisions.

My point is that while we don’t want to overwhelm the user with choice overload, we also need to be careful not to unintentionally remove valuable choices that can empower people. In our quest to make experiences seamless, we run the risk of also making those experiences rigid and inflexible.

The drive for a “seamless experience” has been used to justify some harsh amputations. When Twitter declared war on the very developers it used to champion, and changed its API and terms of service so that tweets had to be displayed the same way everywhere, it was done in the name of “a consistent user experience.” Twitter knows best.

The web is made up of parts and there are seams between those parts: HTML, HTTP, and URLs. The software that can expose or hide those seams is the web browser. Web browsers are made by human beings, and it’s the mindset and assumptions of those human beings that determine whether web browsers enable users to make use of those seams or prevent them from doing so.

“View source” is a seam that exposes the HTML lying beneath every web page. That kind of X-ray vision can be quite powerful. Clearly it’s not an important feature for most users, but it is directly responsible for showing people how web pages are made …and intimating that anyone can do it. In the introduction to my first book I thanked “view source” along with my other teachers like Jeff Veen, Steve Champeon, and Jeffrey Zeldman.

These days, browsers don’t like to expose “view source” as easily as they once did. It’s hidden amongst the developer tools. There’s an assumption there that it’s not intended for regular users. The browser makers know best.

There are seams between the technologies that make up a web page: HTML, CSS, and JavaScript. The ability to enable or disable those layers can be empowering. It has become harder and harder to disable JavaScript in the browser. Another little amputation. The browser makers know best.

The CSS that styles web pages can be over-ridden by the end user. This is not a bug. It is a very powerful feature. That feature is being removed:

I understand that vendors can do whatever they want to control how you experience the web, because it is their software, their product, but removing user stylesheets feels sooo un-web to me, which is irony. A browser’s largest responsibility is to give people access to the web. It’s like the web is this open hand, but software is this closed fist.
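For anyone who never used the feature being mourned there: a user stylesheet is nothing more exotic than a short CSS file that the browser applies on top of every site’s own styles. A minimal sketch (the values are invented examples):

    /* user stylesheet: because these declarations are marked !important,
       the cascade gives them precedence over the site's own CSS */
    body {
      font-size: 120% !important;
      background-color: #fffff0 !important;
    }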

Then there’s the URL. The ultimate seam.

Historically, browsers have exposed this seam, but now—just as with “view source” and user stylesheets—the visibility of the URL is being relegated to being a power-user tool.

The ultimate amputation.

The irony here is that the justification for this change is not the usual mantra of providing “a more seamless user experience.” Instead, the justification is supposedly security.

This strikes me as really strange. Security is the one area where seamlessness is definitely not a desirable characteristic. A secure system requires people to be mindful and aware of their situation. This is certainly true on the web, as Tom points out:

Hiding information away makes me less able to make decisions: it makes me a less informed user.

The whole reason that phishing is a problem is because users don’t pay any bloody attention to what they see in their location bar. Putting less information in the location bar makes the location bar less useful and thus there’s less point paying any attention to it.

Tom has hit on the fundamental mismatch here. Chrome is a piece of software that wants to provide a good user experience—“don’t make me think!”—while at the same time trying to make users mindful of their surroundings:

Security requires educated, pro-active, informed thinking users.

Usability is about making the whole process of using the web seamless and thoughtless: a child should be able to do it.

So from the security standpoint, obfuscating the URL is exactly the wrong thing to do.

In order to actually stay safe online, you need to see the “seams” of the web, you need to pay attention, use your brain.

Chrome knows best.

Making it harder to “view source” might seem like an inconsequential decision. Removing the ability to apply user stylesheets might seem like an inconsequential decision. Heck, even hiding the URL might seem like an inconsequential decision. But each one of those decisions has repercussions. And each one of those decisions reflects an underlying viewpoint.

Make no mistake, all software is political. We talk about opinionated software but really, all software is opinionated, whether we like it or not. Seemingly inconsequential interface decisions are actually reflections of assumptions, biases and beliefs.

As Nat points out, like all political decisions, this is about power:

There’s been much debate about whether the URLs are ‘ugly’ or ‘beautiful’ and whether people really understand them. This debate misses the point.

The URLs are the cornerstone of the interconnected, decentralised web. Removing the URLs from the browser is an attempt to expand and consolidate centralised power.

If that’s the case, then it really doesn’t matter what we think about Chrome removing visible URLs. What appears to be a design decision about the user interface is in fact a manifestation of a much deeper vision. It’s a vision of a future where people can have everything their heart desires without having to expend needless thought. It’s a bright future filled with seamless experiences.

Welcome aboard The Axiom.

Buy n Large knows best.

Battle for the planet of the APIs

Back in 2006, I gave a talk at dConstruct called The Joy Of API. It basically involved me geeking out for 45 minutes about how much fun you could have with APIs. This was the era of the mashup—taking data from different sources and scrunching them together to make something new and interesting. It was a good time to be a geek.

Anil Dash did an excellent job of describing that time period in his post The Web We Lost. It’s well worth a read—and his talk at the Berkman Institute is well worth a listen. He described what the situation was like with APIs:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

Times have changed. These days, instead of seeing themselves as part of a wider web, online services see themselves as standalone entities.

So what happened?

Facebook happened.

I don’t mean that Facebook is the root of all evil. If anything, Facebook—a service that started out being based on exclusivity—has become more open over time. That’s the cause of many of its scandals: the mismatch in mental models that Facebook users have built up about how their data will be used versus Facebook’s plans to make that data more available.

No, I’m talking about Facebook as a role model; the template upon which new startups shape themselves.

In the web’s early days, AOL offered an alternative. “You don’t need that wild, chaotic lawless web”, it proclaimed. “We’ve got everything you need right here within our walled garden.”

Of course it didn’t work out for AOL. That proposition just didn’t scale, just like Yahoo’s initial model of maintaining a directory of websites just didn’t scale. The web grew so fast (and was so damn interesting) that no single company could possibly hope to compete with it. So companies stopped trying to compete with it. Instead they, quite rightly, saw themselves as being part of the web. That meant that they didn’t try to do everything. Instead, you built a service that did one thing really well—sharing photos, managing links, blogging—and if you needed to provide your users with some extra functionality, you used the best service available for that, usually through someone else’s API …just as you provided your API to them.

Then Facebook began to grow and grow. I remember the first time someone was showing me Facebook—it was Tantek of all people—I remember asking “But what is it for?” After all, Flickr was for photos, Delicious was for links, Dopplr was for travel. Facebook was for …everything …and nothing.

I just didn’t get it. It seemed crazy that a social network could grow so big just by offering …well, a big social network.

But it did grow. And grow. And grow. And suddenly the AOL business model didn’t seem so crazy anymore. It seemed ahead of its time.

Once Facebook had proven that it was possible to be the one-stop-shop for your user’s every need, that became the model to emulate. Startups stopped seeing themselves as just one part of a bigger web. Now they wanted to be the only service that their users would ever need …just like Facebook.

Seen from that perspective, the open flow of information via APIs—allowing data to flow porously between services—no longer seemed like such a good idea.

Not only have APIs been shut down—see, for example, Google’s shutdown of their Social Graph API—but even the simplest forms of representing structured data have been slashed and burned.

Twitter and Flickr used to mark up their user profile pages with microformats. Your profile page would be marked up with hCard and, if you had a link back to your own site, it included a rel=”me” attribute. Not any more.
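For anyone who never viewed source on one of those profile pages, the microformats layer was just a few extra class names and a rel attribute sprinkled onto ordinary HTML. A rough sketch (the name and URLs are made up):

    <!-- hCard: the vcard, fn, and url class names carry the structured data;
         rel="me" declares that the link points to another page about the same person -->
    <div class="vcard">
      <a class="url fn" href="https://twitter.com/example">Example Person</a>
      <a href="https://example.com/" rel="me">example.com</a>
    </div>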

Then there’s RSS.

During the Q&A of that 2006 dConstruct talk, somebody asked me about where they should start with providing an API; what’s the baseline? I pointed out that if they were already providing RSS feeds, they already had a kind of simple, read-only API.

Because there’s a standardised format—a list of items, each with a timestamp, a title, a description (maybe), and a link—once you can parse one RSS feed, you can parse them all. It’s kind of remarkable how many mashups can be created simply by using RSS. I remember at the first London Hackday, one of my favourite mashups simply took an RSS feed of the weather forecast for London and combined it with the RSS feed of upcoming ISS flypasts. The result: a Twitter bot that only tweeted when the International Space Station was overhead and the sky was clear. Brilliant!
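That standardised structure is worth spelling out. Here’s a rough sketch of a minimal RSS feed—the titles and URLs are invented for illustration—showing the handful of elements that every feed shares:

    <?xml version="1.0" encoding="UTF-8"?>
    <rss version="2.0">
      <channel>
        <title>Example weather forecasts</title>
        <link>https://example.com/weather</link>
        <description>An invented feed for illustration</description>
        <item>
          <title>Clear skies over London tonight</title>
          <link>https://example.com/weather/london</link>
          <description>Good viewing conditions after 9pm.</description>
          <pubDate>Sat, 15 Jun 2013 18:00:00 GMT</pubDate>
        </item>
      </channel>
    </rss>

Parse that and you’ve parsed them all: a list of items, each with a title, a link, an optional description, and a timestamp.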

Back then, anywhere you found a web page that listed a series of items, you’d expect to find a corresponding RSS feed: blog posts, uploaded photos, status updates, anything really.

That has changed.

Twitter used to provide an RSS feed that corresponded to my HTML timeline. Then they changed the URL of the RSS feed to make it part of the API (and therefore subject to the terms of use of the API). Then they removed RSS feeds entirely.

On the Salter Cane site, I want to display our band’s latest tweets. I used to be able to do that by just grabbing the corresponding RSS feed. Now I’d have to use the API, which is a lot more complex, involving all sorts of authentication gubbins. Even then, according to the terms of use, I wouldn’t be able to display my tweets the way I want to. Yes, how I want to display my own data on my own site is now dictated by Twitter.

Thanks to Jo Brodie I found an alternative service called Twitter RSS that gives me the RSS feed I need, ‘though it’s probably only a matter of time before that gets shut down by Twitter.

Jo’s feelings about Twitter’s anti-RSS policy mirror my own:

I feel a pang of disappointment at the fact that it was really quite easy to use if you knew little about coding, and now it might be a bit harder to do what you easily did before.

That’s the thing. It’s not like RSS is a great format—it isn’t. But it’s just good enough and just versatile enough to enable non-programmers to make something cool. In that respect, it’s kind of like HTML.

The official line from Twitter is that RSS is “infrequently used today.” That’s the same justification that Google has given for shutting down Google Reader. It reminds me of the joke about the shopkeeper responding to a request for something with “Oh, we don’t stock that—there’s no call for it. It’s funny though, you’re the fifth person to ask today.”

RSS is used a lot …but much of the usage is invisible:

RSS is plumbing. It’s used all over the place but you don’t notice it.

That’s from Brent Simmons, who penned a love letter to RSS:

If you subscribe to any podcasts, you use RSS. Flipboard and Twitter are RSS readers, even if it’s not obvious and they do other things besides.

He points out the many strengths of RSS, including its decentralisation:

It’s anti-monopolist. By design it creates a level playing field.

How foolish of us, therefore, that we ended up using Google Reader exclusively to power all our RSS consumption. We took something that was inherently decentralised and we locked it up into one provider. And now that provider is going to screw us over.

I hope we won’t make that mistake again. Because, believe me, RSS is far from dead just because Google and Twitter are threatened by it.

In a post called The True Web, Robin Sloan reiterates the strength of RSS:

It will dip and diminish, but will RSS ever go away? Nah. One of RSS’s weaknesses in its early days—its chaotic decentralized weirdness—has become, in its dotage, a surprising strength. RSS doesn’t route through a single leviathan’s servers. It lacks a kill switch.

I can understand why that power could be seen as a threat if what you are trying to do is force your users to consume their own data only the way that you see fit (and all in the name of “user experience”, I’m sure).

Returning to Anil’s description of the web we lost:

We get a generation of entrepreneurs encouraged to make more narrow-minded, web-hostile products like these because it continues to make a small number of wealthy people even more wealthy, instead of letting lots of people build innovative new opportunities for themselves on top of the web itself.

I think that the presence or absence of an RSS feed (whether I actually use it or not) is a good litmus test for how a service treats my data.

It might be that RSS is the canary in the coal mine for my data on the web.

If those services don’t trust me enough to give me an RSS feed, why should I trust them with my data?

Slow glass

The day that Opera announced that it was changing its browser to use the WebKit rendering engine, I was contacted by .net magazine for my opinion on the move. My response was:

I have no opinion on this right now.

Frankly, I’m always quite amazed at how others can form opinions so quickly. Sometimes opinions are formed and set on technologies before they’re even out and about in the world: little printers, Apple watches, Google glasses…

The case against Google Glass seemed to be a done deal after Mark Hurst published The Google Glass feature no one is talking about:

The key experiential question of Google Glass isn’t what it’s like to wear them, it’s what it’s like to be around someone else who’s wearing them.

It’s a very persuasive piece of writing and it certainly gave me food for thought. Then Eric wrote Glasshouse:

Our youngest tends to wake up fairly early in the morning, at least as compared to his sisters, and since I need less sleep than Kat I’m usually the one who gets up with him. This morning, he put away a box he’d just emptied of toys and I told him, “Well done!” He turned to me, stuck his hand up in the air, and said with glee, “Hive!”

I gave him the requested high-five, of course, and then another for being proactive. It was the first time he’d ever asked for one. He could not have looked more pleased with himself.

And I suddenly realized that I wanted to be able to say to my glasses, “Okay, dump the last 30 seconds of livestream to permanent storage.”

Now I’ve got another interesting, persuasive perspective on the yet-to-be-released product.

Just as we can be very quick to label websites and social networks as dead (see Flickr), I worry that we’re often too quick to look for the worst aspects in any new technology.

Natalia has written a great piece called No, let’s not stop the cyborgs in reaction to the over-the-top Luddism of the Stop The Cyborgs movement:

Healthy criticism and skepticism towards technologies and their impact on society is necessary, but framing it in a way that discredits all people with body and sense enhancing technologies is othering.

Now we get into the question of whether technology can be inherently “good” or “bad.” Kevin Kelly avoids such loaded terms, but he does ascribe some kind of biased trajectory to our tools in his book What Technology Wants.

Natalia writes:

It’s also important to remember that technologies themselves aren’t always ethically questionable. It’s what we do with them that can be positive or contribute to suffering and misery. Sometimes the same technology can be used to help people and to simultaneously ruin lives for profit.

A fair point, but one that is most commonly used by the pro-gun lobby—proponents of a technology that I personally find very hard to view as neutral.

But the point remains: we seem to have a natural impulse to immediately think of the worst that could happen with any new technology (though I’m just as impatient with techno-utopians as I am with techno-dystopians). I really enjoy watching Black Mirror but its central question grows wearisome after a while. That question is “What’s the worst that could happen?”

I am, once more, reminded of the danger of self-fulfilling prophecies when it comes to seeing the worst in technologies like Google Glass. As Matt Webb’s algorithm puts it:

It’s not the end of privacy because it’s all newly visible, it’s the end of privacy because it looks like it’s the end of privacy because it’s all newly visible.

I was chatting with fellow sci-fi fan Jon Tan about Kim Stanley Robinson, whose work I (shamefully) haven’t dived into yet. Jon told me that a good starting point would be the Three Californias trilogy. It consists of one utopia, one dystopia, and one apocalypse. I like the sound of that.

Those who take an anti-technology stance, or at least an overly-negative stance on technology, are often compared to the Amish. But as Stewart Brand is quick to point out, the Amish don’t reject technology—instead, they take their time in deciding whether a new technology will, on balance, be better or worse for their society in the long term:

The Amish seek to master technology rather than become its slave.

I think that techno-utopians and -dystopians alike can appreciate that.


In 2005 I went to South by Southwest for the first time. It was quite an experience. Not only did I get to meet lots of people with whom I had previously only interacted online, but I also got to meet lots and lots of new people. Many of my strongest friendships today started in Austin that year.

Back before it got completely unmanageable, Southby was a great opportunity to mix up planned gatherings with serendipitous encounters. Lunchtime, for example, was often a chaotic event filled with happenstance: you could try to organise a small group to go to a specific place, but it would inevitably spiral into a much larger group going to wherever could seat that many people.

One lunchtime I found myself sitting next to a very nice gentleman and we got on to the subject of network theory. Back then I was very obsessed with small-world networks, the strength of weak ties, and all that stuff. I’m still obsessed with all that stuff today, but I managed to exorcise a lot of my thoughts when I gave my 2008 dConstruct talk, The System Of The World. After giving that magnum opus, I felt like I had got a lot of network-related stuff off my chest (and off my brain).

Anyway, back in 2005 I was still voraciously reading books on the subject and I remember recommending a book to that nice man at that lunchtime gathering. I can’t even remember which book it was now—maybe Nexus by Mark Buchanan or Critical Mass by Philip Ball. In any case, I remember this guy making a note of the book for future reference.

It was only later that I realised that that “guy” was David Isenberg. Yes, that David Isenberg, author of the seminal Rise of the Stupid Network, one of the most important papers ever published about telecommunications networks in the twentieth century (you can watch—and huffduff—a talk he gave called Who will run the Internet? at the Oxford Internet Institute a few years back).

I was reminded of that lunchtime encounter from seven years ago when I was putting together a readlist of visionary articles today. The list contains:

  1. As We May Think by Vannevar Bush
  2. Information Management: A Proposal by Tim Berners-Lee (vague but exciting!)
  3. Rise of the Stupid Network by David Isenberg
  4. There’s Plenty of Room at the Bottom by Richard Feynman
  5. The Coming Technological Singularity: How to Survive in the Post-Human Era by Vernor Vinge

There are others that should be included on that list but these are the ones I could find in plain text or HTML rather than PDF.

Feel free to download the epub file of those five articles together and catch up on some technology history on your Kindle, iPad, iPhone or other device of your choosing.


After speaking at Go Beyond Pixels in St. John’s, I had some time to explore Newfoundland a little bit. Geri was kind enough to drive me to a place I really wanted to visit: the cable station at Heart’s Content.

Heart's Content Cable Station

I’ve wanted to visit Heart’s Content (and Porthcurno in Cornwall) ever since reading The Victorian Internet, a magnificent book by Tom Standage that conveys the truly world-changing nature of the telegraph. Heart’s Content plays a pivotal role in the story: the landing site of the transatlantic cable, spooled out by the Brunel-designed Great Eastern, the largest ship in the world at the time.

Recently I was sent an advance reading copy of Tubes by Andrew Blum. It makes a great companion piece to Standage’s book as Blum explores the geography of the internet:

For all the talk of the placelessness of our digital age, the Internet is as fixed in real, physical places as any railroad or telephone system ever was.

There’s an interview with Andrew Blum on PopTech, a review of Tubes on Brain Pickings, and I’ve huffduffed a recent talk by Andrew Blum in Philadelphia.

Andrew Blum | Tubes: A Journey to the Center of the Internet - Free Library Podcast on Huffduffer

Now there are more places I want to visit: the nexus points on TeleGeography’s Submarine Cable Map; the hubs of Hibernia Atlantic, whose about page reads like a viral marketing campaign for some soon-to-be-released near-future Hollywood cyberpunk thriller.

I’ve got the kind of travel bug described by Neal Stephenson in his classic 1996 Wired piece Mother Earth Mother Board:

In which the hacker tourist ventures forth across the wide and wondrous meatspace of three continents, acquainting himself with the customs and dialects of the exotic Manhole Villagers of Thailand, the U-Turn Tunnelers of the Nile Delta, the Cable Nomads of Lan tao Island, the Slack Control Wizards of Chelmsford, the Subterranean Ex-Telegraphers of Cornwall, and other previously unknown and unchronicled folk; also, biographical sketches of the two long-dead Supreme Ninja Hacker Mage Lords of global telecommunications, and other material pertaining to the business and technology of Undersea Fiber-Optic Cables, as well as an account of the laying of the longest wire on Earth, which should not be without interest to the readers of Wired.

Maybe one day I’ll get to visit the places being designed by Sheehan Partners, currently only inhabited by render ghosts on their website (which feels like it’s part of the same subversive viral marketing campaign as the Hibernia Atlantic site).

Perhaps I can find a reason to stop off in Ashburn, Virginia or The Dalles, Oregon, once infamous as the site of a cult-induced piece of lo-tech bioterrorism, now the site of Google’s Project 02. Not that there’s much chance of being allowed in, given Google’s condescending attitude when it comes to what they do with our data: “we know what’s best, don’t you trouble your little head about it.”

It’s that same attitude that lurks behind that most poisonous of bullshit marketing terms…

The cloud.

What a crock of shit.

The cloud is a lie

Whereas other bullshit marketing terms once had a defined meaning that has eroded over time due to repeated use and abuse—Ajax, Web 2.0, HTML5, UX—“the cloud” is a term that sets out to deceive from the outset, imbued with the same Lakoffian toxicity as “downsizing” or “friendly fire.” It is the internet equivalent of miasma theory.

Death to the cloud! Long live the New Flesh of servers, routers, wires and cables.

What technology wants

Technology enabled Sarah Churman to hear for the first time.

Technology enabled her to capture that moment.

Networked technology enabled her to share that moment with the world.

Networked technology enabled me to share it with you.

Reading the street

Like many others, I was the grateful recipient of a Kindle this Christmas. I’m enjoying having such a lightweight reading device and I’m really enjoying the near-ubiquitous free connectivity that comes with the 3G version.

I can’t quite bring myself to go on a spending spree for overpriced DRM’d books with shoddy layout and character encoding, so I’ve been getting into the swing of things with the freely-available works of Cory Doctorow. I thoroughly enjoyed For The Win—actually, I read that one on my iPod Touch—and I just finished Makers on the Kindle.

The plot rambles somewhat but it’s still an entertaining near-future scenario of hardware hackers creating and destroying entire business models through the ever-decreasing cost and ever-increasing power of street-level technology.

Cracking open the case of a particularly convincing handset, he offers advice on identifying a fake: a hologram stuck on the phone’s battery is usually a good indication that the product is genuine. Two minutes later, Chipchase approaches another stall. The shopkeeper, a middle-aged woman, leans forward and offers an enormous roll of hologram stickers.

Chipchase, mouth agape, takes out the Canon 5D camera that he uses to catalogue almost everything he sees. “What are these for?” he asks, firing off a dozen photographs in quick succession. “You stick them on batteries to make them look real,” she says, with a shrug. Chipchase smiles, revelling in the discovery. “I love this!” he yelps in delight, and thanks the shopkeeper before heading off to examine the next stall.

That isn’t a passage from Makers. That’s from a Wired magazine article by Bobbie: a profile of Jan Chipchase and his predilection for counterfeit electronic goods on the streets of Shanghai …not unlike the Bambook Kindle clone.

Analogue Inception

I don’t usually get all that excited about forthcoming films, but ever since seeing the trailer for Inception I’ve been like a kid at Christmas time. Everything about it looked like it was going to press all my buttons.

I went to see it on its first day of release at the lovely Duke of York’s cinema. It didn’t disappoint. If anything, it exceeded my ludicrously high expectations.

The structure of the film is that of a heist movie, but if the film were to be slotted into a genre, that genre would have to be science fiction. Personally, I would say it’s cyberpunk. But it’s a strange kind of cyberpunk where the emphasis is less on technology and more on the film-noir mood and transcendental possibilities of the genre.

In fact, technology in Inception is notable by its absence. There is a piece of hardware to enable the central premise of the film, but it’s of no more importance than the hardware used in Eternal Sunshine Of The Spotless Mind—the last great science fiction film to cover similar territory.

Both films also avoid making any reference to specific dates. We assume that the narrative plays out in the very near future but we’re never explicitly told that. It strikes me that both films are attempting to place the action in a kind of continuous present.

Inception is particularly adept at avoiding anything that would date the film. Nothing dates a story quite like technology. William Gibson has remarked on numerous occasions that the glaring omission of cell phones in Neuromancer dates the book to the 1980s …although younger people assume that the omission is a deliberate plot point.

Computers make no appearance in Inception. The unstoppable momentum of Moore’s Law means that this year’s cutting-edge laptop may appear laughably out of date by the time the film is available on DVD (and my reference to a specific storage medium like DVD dates these words).

Christopher Nolan goes further and avoids the use of digital input and output devices: the mouse, the keyboard, the screen (either LCD or cathode ray) …all of these things anchor a narrative to a specific period. Instead, there is almost a fetishisation of the analogue. When we see people planning and prototyping in Inception, it is with paper and cardboard rather than any computer-aided design tools.

It’s slightly jarring when the occasional piece of technology appears on the screen, such as an electronic key card for a hotel room door, or the electronic fingerprinting device used at American airports.

Analogue objects age too, of course, but the rate of ageing is slower. To borrow a term from architecture—and boy, is Inception a fun film from that perspective—the analogue and the digital are different shearing layers:

The Shearing layers concept views buildings as a set of components that evolve in different timescales.

Sound familiar? It’s a concept that’s at the heart of Inception’s dream logic: the idea that the passage of time slows down within a dream, allowing a far longer narrative to play out in a dream world than in the faster-moving “reality” of the dreamer.

Inception takes pains to use the medium- to long-term obsolescence of physical objects: trains, planes, cars, guns and—above all—buildings. The film neatly sidesteps the inevitable timestamp that electronic technology would impart on the narrative.

Inception is a film that will stand the test of time remarkably well. The phrase “timeless classic” is one that gets bandied about far too freely, but in this case it could well turn out to be the literal truth.

Update: Adrian Sevitz points out that Inception is also remarkably lacking in product placement, or branded products in general. It’s true: I can’t recall seeing a single logo in the film. That’s something that has dogged Blade Runner with its unfortunate choice of brand extrapolation: Pan-am, Atari, Bell…

Further reading on the nanotechnology of the semantic web

For those of you who attended my XTech talk yesterday (and, indeed, for those of you who didn’t), here are a few jumping off points I mentioned: