Tags: culture



Forgetting again

In an article entitled The future of loneliness, Olivia Laing writes about the promises and disappointments provided by the internet as a means of sharing and communicating. This isn’t particularly new ground and she readily acknowledges the work of Sherry Turkle in this area. The article is the vanguard of a forthcoming book called The Lonely City. I’m hopeful that the book won’t be just another baseless luddite reactionary moral panic as exemplified by the likes of Andrew Keen and Susan Greenfield.

But there’s one section of the article where Laing stops providing any data (or even anecdotal evidence) and presents a supposition as though it were unquestionably fact:

With this has come the slowly dawning realisation that our digital traces will long outlive us.

Citation needed.

I recently wrote a short list of three things that are not true, but are constantly presented as if they were beyond question:

  1. Personal publishing is dead.
  2. JavaScript is ubiquitous.
  3. Privacy is dead.

But I didn’t include the most pernicious and widespread lie of all:

The internet never forgets.

This truism is so pervasive that it can be presented as a fait accompli, without any data to back it up. If you were to seek out the data to back up the claim, you would find that the opposite is true—the internet is in a constant state of forgetting.

Laing writes:

Faced with the knowledge that nothing we say, no matter how trivial or silly, will ever be completely erased, we find it hard to take the risks that togetherness entails.

Really? Suppose I said my trivial and silly thing on FriendFeed. Everything that was ever posted to FriendFeed disappeared three days ago:

You will be able to view your posts, messages, and photos until April 9th. On April 9th, we’ll be shutting down FriendFeed and it will no longer be available.

What if I shared on Posterous? Or Vox (back when that domain name was a social network hosting 6 million URLs)? What about Pownce? Geocities?

These aren’t the exceptions—this is routine. And yet somehow, despite all the evidence to the contrary, we still keep a completely straight face and say “Be careful what you post online; it’ll be there forever!”

The problem here is a mismatch of expectations. We expect everything that we post online, no matter how trivial or silly, to remain forever. When instead it is callously destroyed, our expectation—which was fed by the “knowledge” that the internet never forgets—is turned upside down. That’s where the anger comes from; the mismatch between expected behaviour and the reality of this digital dark age.

Being frightened of an internet that never forgets is like being frightened of zombies or vampires. These things do indeed sound frightening, and there’s something within us that readily responds to them, but they bear no resemblance to reality.

If you want to imagine a truly frightening scenario, imagine an entire world in which people entrust their thoughts, their work, and pictures of their family to online services in the mistaken belief that the internet never forgets. Imagine the devastation when all of those trivial, silly, precious moments are wiped out. For some reason we have a hard time imagining that dystopia even though it has already played out time and time again.

I am far more frightened by an internet that never remembers than I am by an internet that never forgets.

And worst of all, by propagating the myth that the internet never forgets, we are encouraging people to focus in exactly the wrong area. Nobody worries about preserving what they put online. Why should they? They’re constantly being told that it will be there forever. The result is that their history is taken from them:

If we lose the past, we will live in an Orwellian world of the perpetual present, where anybody that controls what’s currently being put out there will be able to say what is true and what is not. This is a dreadful world. We don’t want to live in this world.

Brewster Kahle


Here in the UK, there’s a “newspaper”—and I use the term advisedly—called The Sun. In longstanding tradition, page 3 of The Sun always features a photograph of a topless woman.

To anyone outside the UK, this is absolutely bizarre. Frankly, it’s pretty bizarre to most people in the UK as well. Hence the No More Page 3 campaign, which seeks to put pressure on the editor of The Sun to ditch their vestigial ’70s sexism and get with the 21st century.

Note that the campaign is not attempting to make the publication of topless models in a daily newspaper illegal. Note that the campaign is not calling for top-down censorship from press regulators. Instead the campaign asks only that the people responsible reassess their thinking and recognise the effects of having topless women displayed in what is supposedly a family newspaper.

Laura Bates of the Everyday Sexism project has gathered together just some examples of the destructive effects of The Sun’s page 3. And sure, in this age of instant access to porn via the internet, an image of a pair of breasts might seem harmless and innocuous, but it’s the setting for that image that wreaks the damage:

Being in a national newspaper lends these images public presence and, more harmfully for young people, the perception of mainstream cultural approval. Our society, through Page 3, tells both girls and boys ‘that’s what women are’.

Simply put, having this kind of objectification in a freely-available national newspaper normalises it. When it’s socially acceptable to have a publication like The Sun in a workplace, then it’s socially acceptable for that same workplace to have the accompanying air of sexism.

That same kind of normalisation happens in online communities. When bad behaviour is tolerated, bad behaviour is normalised.

There are obvious examples of online communities where bad behaviour is tolerated, or even encouraged: 4Chan, Something Awful. But as long as I can remember, there have also been online communities that normalise abhorrent attitudes, and yet still get a free pass (usually because the site in question would deliver bucketloads of traffic …as though that were the only metric that mattered).

It used to be Slashdot. Then it was Digg. Now it’s Reddit and Hacker News.

In each case, the defence of the bad behaviour was always explained by the sheer size of the community. “Hey, that’s just the way it is. There’s nothing that can be done about it.” To put it another way …it’s normal.

But normality isn’t an external phenomenon that exists in isolation. Normality is created. If something is perceived as normal—whether that’s topless women in a national newspaper or threatening remarks in an online forum—that perception is fuelled by what we collectively accept to be “normal”.

Last year, Relly wrote about her experience at a conference:

Then there was the one comment I saw in a live irc style backchannel at an event, just after I came off stage. I wish I’d had the forethought to screenshot it or something but I was so shocked, I dropped my laptop on the table and immediately went and called home, to check on my kids.


Because the comment said (paraphrasing) “This talk was so pointless. After she mentioned her kids at the beginning I started thinking of ways to hunt them down and punish her for wasting my time here.”

That’s a horrible thing for anyone to say. But I can understand how someone would think nothing of making a remark like that …if they began their day by reading Reddit or Hacker News. If you make a remark like that there, nobody bats an eyelid. It’s normal.

So what do we do about that? Do we simply accept it? Do we shrug our shoulders and say “Oh, well”? Do we treat it like some kind of unchangeable, immovable force of nature: that once you have a large online community, bad behaviour should be accepted as the default mode of discourse?


It’s hard work. I get that. Heck, I run an online community myself and I know just how hard it is to maintain civility (and I’ve done a pretty terrible job of it in the past). But it’s not impossible. Metafilter is a testament to that.

The other defence of sites like Reddit and Hacker News is that it’s unfair to judge the whole entity based purely on their worst episodes. I don’t buy that. The economic well-being of a country shouldn’t be based on the wealth of its richest citizens—or even the wealth of its average citizens—but on that of its poorest.

That defence was precisely how Rebecca Watson was shouted down when she tried to address Reddit’s problems on a panel at South by Southwest last year:

Does the good, no matter if it’s a fundraiser for a kid with cancer or a Secret Santa gift exchange, negate the bigotry?

Like I said, running an online community is hard (Derek’s book was waaaay ahead of its time), but it’s not impossible. If we treat awful behaviour as some kind of unstoppable force that can’t be dealt with, then what’s the point in trying to have any kind of community at all?

Just as with the No More Page 3 campaign, I’m not advocating legal action or legislative control. Instead, I just want some awareness that what we think of as normal is what we collectively decide is normal.

I try not to be a judgemental person. But if I see someone in public with a copy of The Sun, I’m going to judge them. And no, it’s not a class thing: I just don’t consider misogyny to be socially acceptable. And if you participate in Reddit or Hacker News …well, I’m afraid I’m going to judge you too. I don’t consider it socially acceptable.

Of course my judgemental opinion of someone doesn’t make a blind bit of difference to anybody. But if enough of us made our feelings clear, then maybe slowly but surely, there might be a shift in feeling. There might just be a small movement of the needle that calibrates what we think of as normal in our online communities.

A map to build by

The fifth and final Build has just wrapped up in Belfast. As always, it delivered an excellent day of thought-provoking talks.

It felt like some themes emerged, not just from this year, but from the arc of the last five years. More than one speaker tapped into a feeling that I’ve had for a while that the web has changed. The web has grown up. Unfortunately, it has grown up to be kind of a dickhead.

There were many times during the day’s talks at Build that I was reminded of Anil Dash’s The Web We Lost. Both Jason and Frank pointed to the imbalance of power on the web, where the bottom line has become more important than the user. It’s a landscape dominated by The Stacks—Google, Facebook, et al.—and by fly-by-night companies who have no interest in being good web citizens, and even less interest in the data that they’re sucking from their users.

Don’t get me wrong: I’m not saying that companies shouldn’t be interested in making money—that’s what companies do. But prioritising profit above all else is not going to result in a stable society. And the web is very much part of the fabric of society now. Still, the web is young enough to have escaped the kind of regulation that “real world” companies would be subjected to. Again, don’t get me wrong: I don’t want top-down regulation. What I want is some common standards of decency amongst web companies. If the web ends up getting regulated because of repeated acts of abuse, it will be a tragedy of the commons on an unprecedented scale.

I realise that sounds very gloomy and doomy, and I don’t want to give the impression that Build was a downer—it really wasn’t. As the last ever speaker at Build, Frank ended on a note of optimism. Sure, the way we think about the web now is filled with negative connotations: it appears money-grabbing, shallow, and locked down. But that doesn’t mean that the web is inherently like that.

Harking back to Ethan’s fantastic talk at last year’s Build, Frank made the point that our map of the web makes it seem a grim place, but the territory of the web isn’t necessarily a lost cause. What we need is a better map. A map of openness, civility, and—something that’s gone missing from the web’s younger days—a touch of wildness.

I take comfort from that. I take comfort from that because we are the map makers. The worst thing that could happen would be for us to fatalistically accept the negative turn that the web has taken as inevitable, as “just the way things are.” If the web has grown up to be a dickhead, it’s because we shaped it that way, either through our own actions or inactions. But the web hasn’t finished growing. We can still shape it. We can make it less of a dickhead. At the very least, we can acknowledge that things can and should be better.

I’m not sure exactly how we go about making a better map for the web. I have a vague feeling that it involves tapping into the kind of spirit that informs places like CERN—the kind of spirit that motivated the creation of the web itself. I have a feeling that making a better map for the web doesn’t involve forming startups and taking venture capital. Neither do I think that a map for a better web will emerge from working at Google, Facebook, Twitter, or any of the current incumbents.

So where do we start? How do we begin to attempt to make a better web without getting overwhelmed by the enormity of the task?

Perhaps the answer comes from one of the other speakers at this year’s Build. In a beautifully-delivered presentation, Paul Soulellis spoke about resistance:

How do we, as an industry of creative professionals, reconcile the fact that so much of what we make is used to perpetuate the demands of a bloated marketplace? A monoculture?

He spoke about resisting the intangible nature of digital work with “thingness”, and resisting the breakneck speed of the network with slowness. Perhaps we need our own acts of resistance if we want to change the map of the web.

I don’t know what those acts of resistance are. Perhaps publishing on your own website is an act of resistance—one that’s more threatening to the big players than they’d like to admit. Perhaps engaging in civil discourse online is an act of resistance.

Like I said, I don’t know. But I really appreciate the way that this year’s Build has pushed me into asking these uncomfortable questions. Like the web, Build has grown up over the years. Unlike the web, Build turned out just fine.

Battle for the planet of the APIs

Back in 2006, I gave a talk at dConstruct called The Joy Of API. It basically involved me geeking out for 45 minutes about how much fun you could have with APIs. This was the era of the mashup—taking data from different sources and scrunching them together to make something new and interesting. It was a good time to be a geek.

Anil Dash did an excellent job of describing that time period in his post The Web We Lost. It’s well worth a read—and his talk at the Berkman Institute is well worth a listen. He described what the situation was like with APIs:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

Times have changed. These days, instead of seeing themselves as part of a wider web, online services see themselves as standalone entities.

So what happened?

Facebook happened.

I don’t mean that Facebook is the root of all evil. If anything, Facebook—a service that started out being based on exclusivity—has become more open over time. That’s the cause of many of its scandals; the mismatch in mental models that Facebook users have built up about how their data will be used versus Facebook’s plans to make that data more available.

No, I’m talking about Facebook as a role model; the template upon which new startups shape themselves.

In the web’s early days, AOL offered an alternative. “You don’t need that wild, chaotic lawless web”, it proclaimed. “We’ve got everything you need right here within our walled garden.”

Of course it didn’t work out for AOL. That proposition didn’t scale, just as Yahoo’s initial model of maintaining a directory of websites didn’t scale. The web grew so fast (and was so damn interesting) that no single company could possibly hope to compete with it. So companies stopped trying to compete with it. Instead they, quite rightly, saw themselves as being part of the web. That meant that they didn’t try to do everything. Instead, you built a service that did one thing really well—sharing photos, managing links, blogging—and if you needed to provide your users with some extra functionality, you used the best service available for that, usually through someone else’s API …just as you provided your API to them.

Then Facebook began to grow and grow. I remember the first time someone was showing me Facebook—it was Tantek of all people—I remember asking “But what is it for?” After all, Flickr was for photos, Delicious was for links, Dopplr was for travel. Facebook was for …everything …and nothing.

I just didn’t get it. It seemed crazy that a social network could grow so big just by offering …well, a big social network.

But it did grow. And grow. And grow. And suddenly the AOL business model didn’t seem so crazy anymore. It seemed ahead of its time.

Once Facebook had proven that it was possible to be the one-stop-shop for your users’ every need, that became the model to emulate. Startups stopped seeing themselves as just one part of a bigger web. Now they wanted to be the only service that their users would ever need …just like Facebook.

Seen from that perspective, the open flow of information via APIs—allowing data to flow porously between services—no longer seemed like such a good idea.

Not only have APIs been shut down—see, for example, Google’s shutdown of their Social Graph API—but even the simplest forms of representing structured data have been slashed and burned.

Twitter and Flickr used to mark up their user profile pages with microformats. Your profile page would be marked up with hCard and, if you had a link back to your own site, it included a rel="me" attribute. Not any more.
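To give an idea of what that markup made possible, here’s a minimal sketch of a consumer of it: a parser that pulls rel="me" links out of a profile page. The profile snippet is entirely hypothetical (it isn’t real Twitter or Flickr markup), but the hCard class names and the rel="me" attribute are the same building blocks those services used to expose.

```python
from html.parser import HTMLParser

class RelMeFinder(HTMLParser):
    """Collect the href of every <a> element carrying rel="me"."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        # rel can hold multiple space-separated values, e.g. rel="me nofollow"
        if tag == "a" and "me" in attributes.get("rel", "").split():
            self.links.append(attributes.get("href"))

# A hypothetical hCard-marked-up profile page fragment.
PROFILE = (
    '<div class="vcard">'
    '<span class="fn">Jeremy</span> '
    '<a class="url" rel="me" href="https://adactio.com/">my website</a>'
    '</div>'
)

finder = RelMeFinder()
finder.feed(PROFILE)
print(finder.links)  # → ['https://adactio.com/']
```

Because the attribute was right there in the public HTML, anyone could do this kind of identity-consolidation without an API key or a business-development deal.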

Then there’s RSS.

During the Q&A of that 2006 dConstruct talk, somebody asked me about where they should start with providing an API; what’s the baseline? I pointed out that if they were already providing RSS feeds, they already had a kind of simple, read-only API.

Because there’s a standardised format—a list of items, each with a timestamp, a title, a description (maybe), and a link—once you can parse one RSS feed, you can parse them all. It’s kind of remarkable how many mashups can be created simply by using RSS. I remember at the first London Hackday, one of my favourite mashups simply took an RSS feed of the weather forecast for London and combined it with the RSS feed of upcoming ISS flypasts. The result: a Twitter bot that only tweeted when the International Space Station was overhead and the sky was clear. Brilliant!
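That “parse one feed, parse them all” quality is easy to demonstrate. Here’s a minimal sketch using only Python’s standard library; the feed content is invented for illustration, but the element names (channel, item, title, link, pubDate, description) are the standard RSS 2.0 ones, so the same function works on any conforming feed.

```python
import xml.etree.ElementTree as ET

# A made-up but structurally standard RSS 2.0 feed: a channel of items,
# each with a title, a link, a timestamp, and (maybe) a description.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example journal</title>
    <item>
      <title>Hello world</title>
      <link>https://example.com/posts/1</link>
      <pubDate>Mon, 01 Apr 2013 12:00:00 GMT</pubDate>
      <description>First post.</description>
    </item>
    <item>
      <title>Second post</title>
      <link>https://example.com/posts/2</link>
      <pubDate>Tue, 02 Apr 2013 12:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def parse_rss(xml_text):
    """Return the feed's items as dicts; works on any RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            "date": item.findtext("pubDate"),
            # description is optional in RSS, so this may be None
            "description": item.findtext("description"),
        })
    return items

for entry in parse_rss(FEED):
    print(entry["date"], "-", entry["title"], "-", entry["link"])
```

Point that function at a weather feed and a satellite-flypast feed and you have the raw material for exactly the kind of mashup described above.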

Back then, anywhere you found a web page that listed a series of items, you’d expect to find a corresponding RSS feed: blog posts, uploaded photos, status updates, anything really.

That has changed.

Twitter used to provide an RSS feed that corresponded to my HTML timeline. Then they changed the URL of the RSS feed to make it part of the API (and therefore subject to the terms of use of the API). Then they removed RSS feeds entirely.

On the Salter Cane site, I want to display our band’s latest tweets. I used to be able to do that by just grabbing the corresponding RSS feed. Now I’d have to use the API, which is a lot more complex, involving all sorts of authentication gubbins. Even then, according to the terms of use, I wouldn’t be able to display my tweets the way I want to. Yes, how I want to display my own data on my own site is now dictated by Twitter.

Thanks to Jo Brodie I found an alternative service called Twitter RSS that gives me the RSS feed I need, ’though it’s probably only a matter of time before that gets shut down by Twitter.

Jo’s feelings about Twitter’s anti-RSS policy mirror my own:

I feel a pang of disappointment at the fact that it was really quite easy to use if you knew little about coding, and now it might be a bit harder to do what you easily did before.

That’s the thing. It’s not like RSS is a great format—it isn’t. But it’s just good enough and just versatile enough to enable non-programmers to make something cool. In that respect, it’s kind of like HTML.

The official line from Twitter is that RSS is “infrequently used today.” That’s the same justification that Google has given for shutting down Google Reader. It reminds me of the joke about the shopkeeper responding to a request for something with “Oh, we don’t stock that—there’s no call for it. It’s funny though, you’re the fifth person to ask today.”

RSS is used a lot …but much of the usage is invisible:

RSS is plumbing. It’s used all over the place but you don’t notice it.

That’s from Brent Simmons, who penned a love letter to RSS:

If you subscribe to any podcasts, you use RSS. Flipboard and Twitter are RSS readers, even if it’s not obvious and they do other things besides.

He points out the many strengths of RSS, including its decentralisation:

It’s anti-monopolist. By design it creates a level playing field.

How foolish of us, therefore, that we ended up using Google Reader exclusively to power all our RSS consumption. We took something that was inherently decentralised and we locked it up into one provider. And now that provider is going to screw us over.

I hope we won’t make that mistake again. Because, believe me, RSS is far from dead just because Google and Twitter are threatened by it.

In a post called The True Web, Robin Sloan reiterates the strength of RSS:

It will dip and diminish, but will RSS ever go away? Nah. One of RSS’s weaknesses in its early days—its chaotic decentralized weirdness—has become, in its dotage, a surprising strength. RSS doesn’t route through a single leviathan’s servers. It lacks a kill switch.

I can understand why that power could be seen as a threat if what you are trying to do is force your users to consume their own data only the way that you see fit (and all in the name of “user experience”, I’m sure).

Returning to Anil’s description of the web we lost:

We get a generation of entrepreneurs encouraged to make more narrow-minded, web-hostile products like these because it continues to make a small number of wealthy people even more wealthy, instead of letting lots of people build innovative new opportunities for themselves on top of the web itself.

I think that the presence or absence of an RSS feed (whether I actually use it or not) is a good litmus test for how a service treats my data.

It might be that RSS is the canary in the coal mine for my data on the web.

If those services don’t trust me enough to give me an RSS feed, why should I trust them with my data?

Slow glass

The day that Opera announced that it was changing its browser to use the WebKit rendering engine, I was contacted by .net magazine for my opinion on the move. My response was:

I have no opinion on this right now.

Frankly, I’m always quite amazed at how others can form opinions so quickly. Sometimes opinions are formed and set on technologies before they’re even out and about in the world: little printers, Apple watches, Google glasses…

The case against Google Glass seemed to be a done deal after Mark Hurst published The Google Glass feature no one is talking about:

The key experiential question of Google Glass isn’t what it’s like to wear them, it’s what it’s like to be around someone else who’s wearing them.

It’s a very persuasive piece of writing and it certainly gave me food for thought. Then Eric wrote Glasshouse:

Our youngest tends to wake up fairly early in the morning, at least as compared to his sisters, and since I need less sleep than Kat I’m usually the one who gets up with him. This morning, he put away a box he’d just emptied of toys and I told him, “Well done!” He turned to me, stuck his hand up in the air, and said with glee, “Hive!”

I gave him the requested high-five, of course, and then another for being proactive. It was the first time he’d ever asked for one. He could not have looked more pleased with himself.

And I suddenly realized that I wanted to be able to say to my glasses, “Okay, dump the last 30 seconds of livestream to permanent storage.”

Now I’ve got another interesting, persuasive perspective on the yet-to-be-released product.

Just as we can be very quick to label websites and social networks as dead (see Flickr), I worry that we’re often too quick to look for the worst aspects in any new technology.

Natalia has written a great piece called No, let’s not stop the cyborgs in reaction to the over-the-top Luddism of the Stop The Cyborgs movement:

Healthy criticism and skepticism towards technologies and their impact on society is necessary, but framing it in a way that discredits all people with body and sense enhancing technologies is othering.

Now we get into the question of whether technology can be inherently “good” or “bad.” Kevin Kelly avoids such loaded terms, but he does ascribe some kind of biased trajectory to our tools in his book What Technology Wants.

Natalia writes:

It’s also important to remember that technologies themselves aren’t always ethically questionable. It’s what we do with them that can be positive or contribute to suffering and misery. Sometimes the same technology can be used to help people and to simultaneously ruin lives for profit.

A fair point, but one that is most commonly used by the pro-gun lobby—proponents of a technology that I personally find very hard to view as neutral.

But the point remains: we seem to have a natural impulse to immediately think of the worst that could happen with any new technology (though I’m just as impatient with techno-utopians as I am with techno-dystopians). I really enjoy watching Black Mirror but its central question grows wearisome after a while. That question is “What’s the worst that could happen?”

I am, once more, reminded of the danger of self-fulfilling prophecies when it comes to seeing the worst in technologies like Google Glass. As Matt Webb’s algorithm puts it:

It’s not the end of privacy because it’s all newly visible, it’s the end of privacy because it looks like it’s the end of privacy because it’s all newly visible.

I was chatting with fellow sci-fi fan Jon Tan about Kim Stanley Robinson, whose work I (shamefully) haven’t dived into yet. Jon told me that a good starting point would be the Three Californias trilogy. It consists of one utopia, one dystopia, and one apocalypse. I like the sound of that.

Those who take an anti-technology stance, or at least an overly-negative stance on technology, are often compared to the Amish. But as Stewart Brand is quick to point out, the Amish don’t reject technology—instead, they take their time in deciding whether a new technology will, on balance, be better or worse for their society in the long term:

The Amish seek to master technology rather than become its slave.

I think that techno-utopians and -dystopians alike can appreciate that.

To CERN with love

I went to Switzerland yesterday. More specifically, Geneva. More specifically, CERN. More specifically, ATLAS. Tireless Communications Officer Claudia Marcelloni went out of her way to make sure that I had a truly grand tour of life at CERN.

Claudia at the Globe Control room

CERN is the ultimate area of overlap in the Venn diagram of geek interests: the place where the World Wide Web was invented while people were working on cracking the secrets of the universe.

I saw the world’s first web server—Tim Berners-Lee’s NeXT machine. I saw the original proposal for the World Wide Web, complete with the note scribbled across the top “vague but exciting.”

The first web server
Information Management: A Proposal

But I understand what James meant when he described the whole web thing as a sideshow to the main event:

Because, you know the web is cool and all, but when you’re trying to understand the fundamental building blocks of the universe and constructing the single greatest scientific instrument of ours and perhaps any civilisation, the whole modern internet is a happy side effect, it is a nice to have.

The highlight of my day was listening to Christoph Rembser geek out about his work: hunting for signs of elusive dark matter by measuring missing momentum when smashing particles together near the speed of light in a 27 kilometre wide massive structure 100 metres underneath France and Switzerland, resulting in incredible amounts of data being captured and stored within an unimaginably short timescale. Awesome. Literally, awesome.

Christoph geeking out
Dr. Christoph Rembser

But what really surprised me at CERN wasn’t learning about the creation of the web or learning about the incredible scientific work being done there. As a true-blooded web/science nerd, I had already read plenty about both. No, what really took me by surprise was the social structure at CERN.

According to most established social and economic theory, nothing should ever get done at CERN. It’s a collection of thousands of physics nerds—a mixture of theorists (the ones with blackboards) and experimentalists (the ones with computers). When someone wants to get something done, they present their ideas and ask for help from anyone with specific fields of expertise. Those people, if they like the sound of the idea, say “Okay” and a new collaboration is born.

That’s it. That’s how stuff gets done. It’s like a massive multiplayer hackday. It’s like the ultimate open source project (and yes, everything, absolutely everything, done at CERN is released publicly). It is the cathedral and it is the bazaar. It is also the tower of Babel: people from everywhere in the world come to this place and collaborate, communicating any way they can. In the canteen, where Nobel prize winners sit with students, you can hear a multitude of accents and languages.

CERN is an amazing place. These thousands of people might be working on completely different projects, but there’s a shared understanding and a shared ethos amongst every one of them. That might sound like a flimsy basis for any undertaking, but it works. It works really, really well. And this isn’t just any old undertaking—they’re not making apps or shipping consumer products—they’re working on the most important questions that humans have ever attempted to answer. And they’re doing it all within a framework that, according to conventional wisdom, just shouldn’t work. But it does work. And that, in its own way, is also literally awesome.

Christoph described what it was like for him to come to CERN from Bonn, the then-capital of West Germany. It was 1989, a momentous year (and not just because Tim Berners-Lee wrote Information Management: A Proposal). Students were demonstrating and dying in Tiananmen Square. The Berlin wall was coming down (only later did I realise that my visit to CERN took place on October 3rd, Tag der Deutschen Einheit). At CERN, Christoph met Chinese students, Russian scientists, people from all over the world transcending their political differences to collaborate on truly fundamental questions. And he said that when people returned to their own countries, they surely carried with them some of that spirit that they had experienced together at CERN.

Compared to the actual work going on at CERN, that idea is a small one. It may not be literally awesome …but it really resonated with me.

I think I understand a little better now where the web comes from.

I approve of this message


It’s hard to believe that it’s been half a decade since The Show from Ze Frank graced our tubes with its daily updates. Five years ago to the day, he recorded the greatest three minutes of speech ever committed to video.

In the midst of his challenge to find the ugliest MySpace page ever, he received this comment:

Having an ugly Myspace contest is like having a contest to see who can eat the most cheeseburgers in 24 hours… You’re mocking people who, for the most part, have no taste or artistic training.

Ze’s response is a manifesto to the democratic, transformative, disruptive power of the web. It is magnificent.

In Myspace, millions of people have opted out of pre-made templates that “work” in exchange for ugly. Ugly when compared to pre-existing notions of taste is a bummer. But ugly as a representation of mass experimentation and learning is pretty damn cool.

Regardless of what you might think, the actions you take to make your Myspace page ugly are pretty sophisticated. Over time as consumer-created media engulfs the other kind, it’s possible that completely new norms develop around the notions of talent and artistic ability.

Spot on.

That’s one of the reasons why I dread the inevitable GeoCities-style shutdown of MySpace. Let’s face it, it’s only a matter of time. And when it does get shut down, we will forever lose a treasure trove of self-expression on a scale never seen before in the history of the planet. That’s so much more important than whether it’s ugly or not. As Phil wrote about the ugly and neglected fragments of Geocities:

GeoCities is an awful, ugly, decrepit mess. And this is why it will be sorely missed. It’s not only a fine example of the amateur web vernacular but much of it is an increasingly rare example of a period web vernacular. GeoCities sites show what normal, non-designer, people will create if given the tools available around the turn of the millennium.

Substitute MySpace for GeoCities and you get an idea of the loss we are facing.

Let’s not make the same mistake twice.

Voice of the Beeb hive

Ian Hunter at the BBC has written a follow-up post to his initial announcement of the plans to axe 172 websites. The post is intended to clarify and reassure. It certainly clarifies, but it is anything but reassuring.

He clarifies that, yes, these websites will be taken offline. But, he reassures us, they will be stored …offline. Not on the web. Without URLs. Basically, they’ll be put in a hole in the ground. But it’s okay; it’s a hole in the ground operated by the BBC, so that’s alright then.

The most important question in all of this is why the sites are being removed at all. As I said, the BBC’s online mothballing policy has—up till now—been superb. Well, now we have an answer. Here it is:

But there still may come a time when people interested in the site are better served by careful offline storage.

There may be a parallel universe where that sentence makes sense, but it would have to be one in which the English language is used very differently.

As an aside, the use of language in the “explanation” is quite fascinating. The post is filled with the kind of mealy-mouthed filler words intended to appease those of us who are concerned that this is a terrible mistake. For example, the phrase “we need to explore a range of options including offline storage” can be read as “the sites are going offline; live with it.”

That’s one of the most heartbreaking aspects of all of this: the way that it is being presented as a fait accompli: these sites are going to be ripped from the fabric of the network to be tossed into a single offline point of failure and there’s nothing that we—the licence-payers—can do about it.

I know that there are many people within the BBC who do not share this vision. I’ve received some emails from people who worked on some of the sites scheduled for deletion and needless to say, they’re not happy. I was contacted by an archivist at the BBC, for whom this plan was unwelcome news that he first heard about here on adactio.com. The subsequent reaction was:

It was OK to put a videotape on a shelf, but putting web pages offline isn’t OK.

I hope that those within the BBC who disagree with the planned destruction will make their voices heard. For those of us outside the BBC, it isn’t clear how we can best voice our concerns. You could make a complaint to the BBC, though that seems to be intended more for complaints about programme content.

In the meantime, you can download all or some of the 172 sites and plop them elsewhere on the web. That’s not an ideal solution—ideally, the BBC shouldn’t be practising a deliberate policy of link rot—but it allows us to prepare for the worst.
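Mirroring a site of that vintage is straightforward to script. Here’s a minimal, illustrative sketch in Python—the URL-to-path mapping is the important part; a real effort would recurse through links, or simply use a tool like wget in mirror mode. The URLs and directory names here are hypothetical:

```python
import os
import urllib.request
from urllib.parse import urlparse

def local_path_for(url, root="mirror"):
    """Map a URL to a local file path so the site's structure survives.
    Directory-style URLs get an index.html, mirroring common server behaviour."""
    parts = urlparse(url)
    path = parts.path.lstrip("/") or "index.html"
    if path.endswith("/"):
        path += "index.html"
    return os.path.join(root, parts.netloc, path)

def save_page(url, root="mirror"):
    """Fetch a single page and write it to disk under `root`."""
    destination = local_path_for(url, root)
    os.makedirs(os.path.dirname(destination), exist_ok=True)
    with urllib.request.urlopen(url) as response:
        with open(destination, "wb") as f:
            f.write(response.read())
    return destination
```

Saving the bytes is the easy part, of course; the inbound links pointing at the original URLs are another matter.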

I hope that whoever at the BBC has responsibility for this decision will listen to reason. Failing that, I hope that we can get a genuine explanation as to why this is happening, because what’s currently being offered up simply doesn’t cut it. Perhaps the truth behind this decision lies not so much with the BBC, but with their technology partner, Siemens, who have a notorious track record for shafting the BBC, charging ludicrous amounts of money to execute the most trivial of technical changes.

If this decision is being taken for political reasons, I would hope that someone at the BBC would have the honesty to say so rather than simply churning out more mealy-mouthed blog posts devoid of any genuine explanation.


Yesterday’s account of the BBC’s decision to cull 172 websites caused quite a stir on Twitter.

Most people were as saddened as I was, although Emma described my post as being “anti-BBC.” For the record, I’m a big fan of the BBC—hence my disappointment at this decision. And, also for the record, I believe anyone should be allowed to voice their criticism of an organisational decision without being labelled “anti” said organisation …just as anyone should be allowed to criticise a politician without being labelled unpatriotic.

It didn’t take long for people to start discussing an archiving effort, which was heartening. I started to think about the best way to coordinate such an effort; probably a wiki. As well as listing handy archiving tools, it could serve as a place for people to claim which sites they want to adopt, and point to their mirrors once they’re up and running. Marko already has a head start. Let’s do this!

But something didn’t feel quite right.

I reached out to Jason Scott for advice on coordinating an effort like this. He has plenty of experience. He’s currently trying to figure out how to save the more than 500,000 videos that Yahoo is going to delete on March 15th. He’s more than willing to chat, but he had some choice words about the British public’s relationship to the BBC:

This is the case of a government-funded media group deleting. In other words, this is something for The People, and by The People I mean The Media and the British and the rest to go HEY BBC STOP

He’s right.

Yes, we can and should mirror the content of those 172 sites—lots of copies keep stuff safe—but fundamentally what we want is to keep the fabric of the web intact. Cool URIs don’t change.

The BBC has always been an excellent citizen of the web. Their own policy on handling outdated content explains the situation beautifully:

We don’t want to delete pages which users may have bookmarked or linked to in other ways.

Moving a site to a different domain will save the content but it won’t preserve the inbound connections; the hyperlinks that weave the tapestry of the web together.
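Keeping those inbound connections alive is, technically, trivial: the original server only needs to answer requests for the old URLs with permanent redirects to wherever the content now lives. A minimal sketch (the paths and the destination domain are hypothetical):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical map from retired paths to wherever the content now lives.
REDIRECTS = {
    "/retired-site/": "https://mirror.example.org/retired-site/",
}

class RedirectHandler(BaseHTTPRequestHandler):
    """Answer requests for retired pages with a 301 so that inbound
    links keep resolving instead of rotting."""

    def do_GET(self):
        destination = REDIRECTS.get(self.path)
        if destination:
            self.send_response(301)  # moved permanently
            self.send_header("Location", destination)
        else:
            self.send_response(404)  # genuinely gone
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

def serve(port=8000):
    """Run the redirect server (blocks until interrupted)."""
    HTTPServer(("", port), RedirectHandler).serve_forever()
```

In practice this would be a couple of lines of web-server configuration rather than a standalone program; the point is that the cost of not breaking the web is tiny.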

Don’t get me wrong: I love the Internet Archive. I think it is doing fantastic work. But let’s face it: once a site only exists in the archive, it is effectively no longer a part of the living web. Yet, whenever a site is threatened with closure, we invoke the Internet Archive as a panacea.

So, yes, let’s make and host copies of the 172 sites scheduled for termination, but let’s not get distracted from the main goal here. What we are fighting against is link rot.

I don’t want the BBC to take any particular action. Quite the opposite: I want them to continue with their existing policy. It will probably take more effort for them to remove the sites than to simply let them sit there. And let’s face it, it’s not like the bandwidth costs are going to be a factor for these sites.

Instead, many believe that the BBC’s decision is politically motivated: the need to be seen to “cut” top level directories, as though cutting content equated to cutting costs. I can’t comment on that. I just know how I feel about the decision:

I don’t want them to archive it. I just want them to leave it the fuck alone.

“What do we want?” “Inaction!”

“When do we want it?” “Continuously!”

Facing the future

There is much hand-wringing in the media about the impending death of journalism, usually blamed on the rise of the web or more specifically bloggers. I’m sympathetic to their plight, but sometimes journalists are their own worst enemy, especially when they publish badly-researched articles that fuel moral panic with little regard for facts (if you’ve ever been in a newspaper article yourself, you’ll know that you’re lucky if they manage to spell your name right).

Exhibit A: an article published in The Guardian called How I became a Foursquare cyberstalker. Actually, the article isn’t nearly as bad as the comments, which take ignorance and narrow-mindedness to a new level.

Fortunately Ben is on hand to set the record straight. He wrote Concerning Foursquare and communicating privacy. Far from being a lesser form of writing, this blog post is more accurate than the article it is referencing, helping to balance the situation with a different perspective …and a nice big dollop of facts and research. Ben is actually quite kind to The Guardian article but, in my opinion, his own piece is more interesting and thoughtful.

Exhibit B: an article by Jeffrey Rosen in The New York Times called The Web Means the End of Forgetting. That’s a bold title. It’s also completely unsupported by the contents of the article. The article contains anecdotes about people getting into trouble about something they put on the web, and—even though the consequences for that action played out in the present—he talks about the permanent memory bank of the Web and writes:

The fact that the Internet never seems to forget is threatening, at an almost existential level, our ability to control our identities.

Bollocks. Or, to use the terminology of Wikipedia, citation needed.

Scott Rosenberg provides the necessary slapdown, asking Does the Web remember too much — or too little?:

Rosen presents his premise — that information once posted to the Web is permanent and indelible — as a given. But it’s highly debatable. In the near future, we are, I’d argue, far more likely to find ourselves trying to cope with the opposite problem: the Web “forgets” far too easily.

Exactly! I get irate whenever I hear the truism that the web never forgets presented without any supporting data. It’s right up there with Eskimos have fifty words for snow and people in the Middle Ages thought that the world was flat. These falsehoods are irritating at best. At worst, as is the case with the myth of the never-forgetting web, the lie is downright dangerous. As Rosenberg puts it:

I’m a lot less worried about the Web that never forgets than I am about the Web that can’t remember.

That’s a real problem. And yet there’s no moral panic about the very real threat that, once digitised, our culture could be in even greater danger of being destroyed. I guess that story doesn’t sell papers.

This problem has a number of thorns. At the most basic level, there’s the issue of link rot. I love the fact that the web makes it so easy for people to publish anything they want. I love that anybody else can easily link to what has been published. I hope that the people doing the publishing consider the commitment they are making by putting a linkable resource on the web.

As I’ve said before, a big part of this problem lies with the DNS system:

Domain names aren’t bought, they are rented. Nobody owns domain names, except ICANN.

I’m not saying that we should ditch domain names. But there’s something fundamentally flawed about a system that thinks about domain names in time periods as short as a year or two.

Then there’s the fact that so much of our data is entrusted to third-party sites. There’s no guarantee that those third-party sites give a rat’s ass about the long-term future of our data. Quite the opposite. The callous destruction of Geocities by Yahoo is a testament to how little our hopes and dreams mean to a company concerned with the bottom line.

We can host our own data but that isn’t quite as easy as it should be. And even with the best of intentions, it’s possible to have the canonical copies wiped from the web by accident. I’m very happy to see services like Vaultpress come on the scene:

Your WordPress site or blog is your connection to the world. But hosting issues, server errors, and hackers can wipe out in seconds what took years to build. VaultPress is here to protect what’s most important to you.

The Internet Archive is also doing a great job but Brewster Kahle shouldn’t have to shoulder the entire burden. Dave Winer has written about the idea of future-safe archives:

We need one or more institutions that can manage electronic trusts over very long periods of time.

The institutions need to be long-lived and have the technical know-how to manage static archives. The organizations should need the service themselves, so they would be likely to advance the art over time. And the cost should be minimized, so that the most people could do it.

The Library of Congress has its Digital Preservation effort. Dan Gillmor reports on the recent three-day gathering of the institution’s partners:

It’s what my technology friends call a non-trivial task, for all kinds of technical, social and legal reasons. But it’s about as important for our future as anything I can imagine. We are creating vast amounts of information, and a lot of it is not just worth preserving but downright essential to save.

There’s an even longer-term problem with digital preservation. The very formats that we use to store our most treasured memories can become obsolete over time. This goes to the very heart of why standards such as HTML—the format I’m betting on—are so important.

Mark Pilgrim wrote about the problem of format obsolescence back in 2006. I found his experiences echoed more recently by Paul Gilster, author of the superb Centauri Dreams, one of my favourite websites. He usually concerns himself with challenges on an even longer timescale, like the construction of a feasible means of interstellar travel, but he gives a welcome long zoom perspective on digital preservation in Burying the Digital Genome, pointing to a project called PLANETS: Preservation and Long-term Access Through Networked Services.

Their plan involves the storage, not just of data, but of data formats such as JPEG and PDF: the equivalent of a Rosetta stone for our current age. A box containing format-decoding documentation has been buried in a bunker under the Swiss Alps. That’s a good start.

David Eagleman recently gave a talk for The Long Now Foundation entitled Six Easy Steps to Avert the Collapse of Civilization. Step two is Don’t lose things:

As proved by the destruction of the Alexandria Library and of the literature of Mayans and Minoans, “knowledge is hard won but easily lost.”

Long Now: Six Easy Steps to Avert the Collapse of Civilization on Huffduffer

I’m worried that we’re spending less and less time thinking about the long-term future of our data, our culture, and ultimately, our civilisation. Currently we are preoccupied with the real-time web: Twitter, Foursquare, Facebook …all services concerned with what’s happening right here, right now. The Long Now Foundation and Tau Zero Foundation offer a much-needed sense of perspective.

As with that other great challenge of our time—the alteration of our biosphere through climate change—the first step to confronting the destruction of our collective digital knowledge must be to think in terms greater than the local and the present.

Debatable act

I took part in an event held last week in the same building as the Clearleft office. It was called Debating the Digital Economy Act …except it wasn’t really a debate. Everyone was in agreement that the legislation is dreadful and that the way it has been rushed through parliament was a travesty.

Nonetheless, it was still a valuable gathering. Instead of debating the pros and cons of something that has no redeeming qualities, we tried to tackle the issue of what we can do about it.

One of the best points of the night came from Pete who pointed out how important it was that we de-geekify the discussion lest we get brushed aside as pitchfork-wielding digerati.

The event began with opening remarks from each of the white, middle-aged panellists. My own remarks were definitely on the geeky side: a long-zoom perspective entitled Fear is the Mind-killer. You can read it now or add it to Instapaper to read later.

I made an audio recording of my opening remarks. You can listen to it now or you can huffduff it to listen to it later.

Fear Is the Mind-killer by Jeremy Keith on Huffduffer

It’s licensed under a Creative Commons attribution license. Do with it as you wish.

Beautiful truth

I’ve tried to articulate my feelings about data preservation, digital decay and the loss of our collective culture down the memory hole. I’ve written about Tears in the Rain, Magnoliloss and Linkrot. I’ve spoken about Open Data, The Long Web and All Our Yesterdays.

But all of my words are naught compared to a single piece of writing by Joel Johnson on Gizmodo. It’s called Raiding Eternity. From the memories stored on Flickr, past the seed bank of Svalbard, out to the Voyager golden record, it sweeps and soars in scope …but always with a single moment at its centre, a single life, a single death.

Please read it. It is beautiful and it is truthful.

When old age shall this generation waste,
Thou shalt remain, in midst of other woe
Than ours, a friend to man, to whom thou say’st,
Beauty is truth, truth beauty,—that is all
Ye know on earth, and all ye need to know.

John Keats


The past

These talking machines are going to ruin the artistic development of music in this country. When I was a boy…in front of every house in the summer evenings, you would find young people together singing the songs of the day or old songs. Today you hear these infernal machines going night and day. We will not have a vocal cord left. The vocal cord will be eliminated by a process of evolution, as was the tail of man when he came from the ape.

John Philip Sousa

The present

Slicing the profit pie

Mark Thomas talks about the Digital Economy Bill

The future

The International Convention on Performing Rights is holding a third round of crisis talks in an attempt to stave off the final collapse of the WIPO music licensing regime. On the one hand, hard-liners representing the Copyright Control Association of America are pressing for restrictions on duplicating the altered emotional states associated with specific media performances: As a demonstration that they mean business, two “software engineers” in California have been kneecapped, tarred, feathered, and left for dead under placards accusing them of reverse-engineering movie plot lines using avatars of dead and out-of-copyright stars.

On the opposite side of the fence, the Association of Free Artists are demanding the right to perform music in public without a recording contract, and are denouncing the CCAA as being a tool of Mafiya apparachiks who have bought it from the moribund music industry in an attempt to go legit. FBI Director Leonid Kuibyshev responds by denying that the Mafiya is a significant presence in the United States. But the music biz’s position isn’t strengthened by the near collapse of the legitimate American entertainment industry, which has been accelerating ever since the nasty noughties.

Accelerando by Charles Stross

Tears in the rain

When I first heard that Yahoo were planning to bulldoze Geocities, I was livid. After I blogged in anger, I was taken to task for jumping the gun. Give ‘em a chance, I was told. They may yet do something to save all that history.

They did fuck all. They told Archive.org what URLs to spider and left it up to them to do the best they could with preserving internet history. Meanwhile, Jason Scott continued his crusade to save as much as he could:

This is fifteen years and decades of man-hours of work that you’re destroying, blowing away because it looks better on the bottom line.

We are losing a piece of internet history. We are losing the destinations of millions of inbound links. But most importantly we are losing people’s dreams and memories.

Geocities dies today. This is a bad day for the internet. This is a bad day for our collective culture. In my opinion, this is also a bad day for Yahoo. I, for one, will find it a lot harder to trust a company that finds this to be acceptable behaviour …despite the very cool and powerful APIs produced by the very smart and passionate developers within the same company.

I hope that my friends who work at Yahoo understand that when I pour vitriol upon their company, I am not aiming at them. Yahoo has no shortage of clever people. But clearly they are down in the trenches doing development, not in the upper echelons making the decision to butcher Geocities. It’s those people, the decision makers, that I refer to as twunts. Fuckwits. Cockbadgers. Pisstards.

The Death and Life of Geocities

They’re trying to keep it quiet but Yahoo are planning to destroy their Geocities property. All those URLs, all that content, all those memories will be lost …like tears in the rain.

Jason Scott is mobilising but he needs help:

I can’t do this alone. I’m going to be pulling data from these twitching, blood-in-mouth websites for weeks, in the background. I could use help, even if we end up being redundant. More is better. We’re in #archiveteam on EFnet. Stop by. Bring bandwidth and disks. Help me save Geocities. Not because we love it. We hate it. But if you only save the things you love, your archive is a very poor reflection indeed.

I’m seething with anger. I hope I can tap into that anger to do something productive. This situation cannot stand. It reinforces my previously-stated opinion that Yahoo is behaving like a dribbling moronic company.

You may not care about Geocities. Keep in mind that this is the same company that owns Flickr, Upcoming, Delicious and Fire Eagle. It is no longer clear to me why I should entrust my data to silos owned by a company behaving in such an irresponsible, callous, cold-hearted way.

What would Steven Pemberton do?

Update: As numerous Yahoo employees are pointing out on Twitter, no data has been destroyed yet; no links have rotted. My toys-from-pram-throwage may yet prove to be completely unfounded. Jim sees parallels with amazonfail, so overblown is my moral outrage. Fair point. I should give Yahoo time to prove themselves worthy guardians. As a customer of Yahoo’s other services, and as someone who cares about online history, I’ll be watching to see how Yahoo deals with this situation and I hope they deal with it well (archiving data, redirecting links).

Like I said above, I hope I can turn my anger into something productive. Clearly I’m not doing a very good job of that right now.

All Our Yesterdays

I’m back from spending a weekend in Cornwall at the inaugural Bamboo Juice conference, held in the inspiring surroundings of the Eden Project.

I opened up proceedings with a talk entitled All Our Yesterdays. I know it’s the title of a Star Trek episode, but I actually had Shakespeare in mind:

To-morrow, and to-morrow, and to-morrow,
Creeps in this petty pace from day to day,
To the last syllable of recorded time;
And all our yesterdays have lighted fools
The way to dusty death. Out, out, brief candle!

Usually my presentations follow a linear narrative but this was a rambling, self-indulgent affair. So I used a non-linear presentation tool this time: the Flash-based Prezi. You can view the presentation at prezi.com/35967.

I can’t really summarise the presentation—you kinda had to be there—but there were two main points:

  1. Think about what you would put on the golden record attached to Voyager; now publish that material online.
  2. Use web standards so that we can build a .

Along the way I took in the history of writing from the Rosetta stone to the Gutenberg press via the Book of Kells, potted bios of Leibniz, Babbage and Turing, the alternative hypertext systems of Vannevar Bush and Ted Nelson, and a fairly emotional rant about the ludicrous state of affairs in the world of copyright and so-called intellectual property. There’s a bibliography of further reading tucked into the corner of the presentation:

URLs mentioned during the presentation include:

These are some of the historically important geographical locations I mentioned:

There were three video excerpts in the presentation:

My disjointed ravings on cultural preservation and space exploration would have seemed far-fetched in any other setting but after the talk, when I was wandering through the buildings of the Eden Project, they seemed positively tame.

If you were at Bamboo Juice, I hope you liked the talk. If you weren’t there, sorry; you missed a beautiful day at the geodesic domes.

Blast from the past

In preparing for my talk for the Bamboo Juice conference at the Eden Project in Cornwall next week, I find I’m doing a lot of WWILFing. After spending far too long reading about and , and editing footage of a Von Braun-inspired orbital habitat, I got completely sidetracked into trying to figure out the storage capacity of the golden record attached to Voyager 1.

I still haven’t found an answer—I’ve asked Voyager’s cousin for help—but I did stumble across a gem of a document from 1995. It’s by Simon Pockley and it’s called Lest We Forget or Why I chose the World Wide Web as a repository for archival material. Written in the infancy of the web, it makes for fascinating reading. It’s like a seedling of the semantic web. Some of the projections were way off but some of them were eerily prescient. Here’s my favourite passage:

Technological obsolescence is only a part of the problem in the preservation of digital information. The World Wide Web is a flexible carrier of digital material across both hardware and software. Its ability to disseminate this material globally, combined with its inherent flexibility, allow it to accommodate evolving standards of encoding and markup. Survival of significant material on-line is dependent on use and use is related to ease of access.

The document contains a number of hyperlinks to related material, all of which are collected into footnotes at the end. What’s heartbreaking is to discover how many of those links no longer resolve. Just a handful from the original list remain:

Four fifths of those links resolve to a single domain, that of the National Library of Australia. So much for our distributed repository of archival material.
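Checking which of a document’s links have rotted is easy to automate. Here’s a rough sketch in Python, using HEAD requests and blunt error handling (a real checker would fall back to GET for old servers that mishandle HEAD, and would retry before declaring a link dead):

```python
import urllib.error
import urllib.request

def link_is_alive(url, timeout=10):
    """Return True if the URL still resolves to a successful response.
    DNS failures, refused connections, 404s and 410s all count as rot."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 300
    except (OSError, ValueError):  # URLError is a subclass of OSError
        return False

def rot_report(urls):
    """Partition a list of URLs into the living and the dead."""
    alive = [url for url in urls if link_is_alive(url)]
    dead = [url for url in urls if url not in alive]
    return alive, dead
```

Run against the footnotes of a document from 1995, a script like this tells the whole sad story in seconds.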


The newest book from Iain M Banks is called Matter. The middle M in the author’s name is a dead giveaway that this is a science-fiction novel and, as with most of Banks’ sci-fi material, Matter is set in the milieu of the Culture.

The Culture novels aren’t great books. The writing isn’t noteworthy. The plots and subplots tend to be rambling, disconnected affairs. But despite all that, I enjoy reading them immensely. That’s because the Culture is such a fascinating place to visit. Life in the Culture is the kind of post-singularity world that Bruce Sterling claims is impossible to write about because no information can be retrieved from beyond the event horizon of a singularity (‘though Cory did a pretty great job of it in Down and Out in the Magic Kingdom).

The enjoyment of the Culture comes from being immersed in this (literally) alien society, catching glimpses of its inner workings. If glimpses aren’t enough, then I highly recommend reading this newsgroup posting from 1994 which reads like a digital for Banks’ imagined world:

The Culture, in its history and its on-going form, is an expression of the idea that the nature of space itself determines the type of civilisations which will thrive there.

The thought processes of a tribe, a clan, a country or a nation-state are essentially two-dimensional, and the nature of their power depends on the same flatness. Territory is all-important; resources, living-space, lines of communication; all are determined by the nature of the plane (that the plane is in fact a sphere is irrelevant here); that surface, and the fact the species concerned are bound to it during their evolution, determines the mind-set of a ground-living species. The mind-set of an aquatic or avian species is, of course, rather different.

Essentially, the contention is that our currently dominant power systems cannot long survive in space; beyond a certain technological level a degree of anarchy is arguably inevitable and anyway preferable.

There’s more of this kind of stuff and it’s all pretty fascinating: sex, law and politics all get covered. But it’s the socioeconomic situation that I find most interesting, rooted as it is in a belief of Banks’ that coincides with my own. Stick this in your Libertarian pipe and smoke it:

Let me state here a personal conviction that appears, right now, to be profoundly unfashionable; which is that a planned economy can be more productive — and more morally desirable — than one left to market forces.

The market is a good example of evolution in action; the try-everything-and-see-what-works approach. This might provide a perfectly morally satisfactory resource-management system so long as there was absolutely no question of any sentient creature ever being treated purely as one of those resources. The market, for all its (profoundly inelegant) complexities, remains a crude and essentially blind system, and is - without the sort of drastic amendments liable to cripple the economic efficacy which is its greatest claimed asset - intrinsically incapable of distinguishing between simple non-use of matter resulting from processal superfluity and the acute, prolonged and wide-spread suffering of conscious beings.

It is, arguably, in the elevation of this profoundly mechanistic (and in that sense perversely innocent) system to a position above all other moral, philosophical and political values and considerations that humankind displays most convincingly both its present intellectual immaturity and — through grossly pursued selfishness rather than the applied hatred of others — a kind of synthetic evil.

That probably makes both myself and Banks pinko commies but I’d rather see a future society like the Culture than one based on aggressive .

My fellow Brightonians can see Iain M Banks reading at The Old Market on February 25th. I won’t be able to make it but it promises to be an entertaining discussion of an anarcho-utopian science-fiction society.

Common people

George just announced a wonderful new initiative. It’s a collaboration between Flickr and the Library of Congress called simply The Commons.

The library has a lot of wonderful historic images. Flickr has a lot of wonderful people who enjoy tagging pictures. Put the two together and let’s see what happens.

I think this is a great idea. They get access to the collective intelligence of our parallel-processing distributed mechanical Turk. We get access to wonderful collections of old pictures. And when I say access, I don’t just mean that we get to look at them. These pictures have an interesting new license: no known copyright restrictions. This covers the situation for photos that once had copyright that wasn’t renewed.

The naysayers might not approve of putting metadata in the hands of the masses but I think it will work out very well indeed. Sure, there might be some superfluous tags but they will be vastly outweighed by the valuable additions. The proportion will be at least which, let’s face it, is a lot better than 0/0. That’s something I’ve learned personally from opening up my own photos to be tagged by anyone: any inconvenience with deleting “bad” tags is massively outweighed by the benefits of all the valuable tags that my pictures have accrued. If you haven’t yet opened up your photos to tagging by any Flickr user, I strongly suggest you do so.

Now set aside some time to browse the cornucopia of pictures in The Commons. And if at any stage you feel compelled to annotate a picture with some appropriate tags, go for it.

I really hope that other institutions will see the value in this project. This could be just the start of a whole new chapter in collaborative culture.

Mixed signals

When I attended Reboot 8 earlier this year, it was my first time visiting Denmark. From the moment I left the airport in Copenhagen, I was struck by how smoothly everything seemed to work.

On the train journey into town, Tom and I found all sorts of nice usability features in our carriage. You can tell a lot about a country from its public transport system and, based on my experiences, Denmark was like a country that had been designed by Apple.

One week previously, I had been in Manchester delivering an Ajax workshop. There I saw a shockingly badly designed object.

Red pedestrian signal

I had heard about these new pedestrian signals but nothing could have prepared me for how awful they are.

Most pedestrian signals around the world work much the same way. The signal is positioned across the road from the user above head height. The control for the signal is on the same side of the road as the user. The exact design of the signal and the control can vary enormously from place to place but the basic principle is the same.

When the signal changes (red to green, “don’t walk” to “walk”, etc.), the pedestrian moves towards the signal. Because the signal is placed in the location that the user is trying to reach, it serves a dual purpose. It acts as an indicator of safety and as a goal.

The pedestrian signals I saw in Manchester are placed at waist height. As soon as two or more people are waiting to cross the road, the signal is blocked.

Green pedestrian signal

Worst of all, the signal and the control share the same space. Once the pedestrian begins walking, there is no safety indicator. When you’re halfway across the road, you have no idea whether or not it is safe.

Oh, and there’s no audio signal either. That’s a feature built in to most of the older pedestrian signals in England that has been removed from these newer models. If you’re visually impaired, you are well and truly screwed. Even if you’re not, you’re missing a valuable safety cue. As is so often the case, accessibility features end up benefiting everyone.

I cannot understand how these pedestrian signals made it off the drawing board, much less on to the streets of Manchester and other towns in the UK. It’s not just bad design, it’s dangerous design.

Richard once told me about a risk assessment from his previous incarnation as an engineer. He had to determine whether workers on a pipeline above the arctic circle would be safe from polar bear attacks. The results showed that there was a chance that 1.5 people could be killed every thousand years. That was deemed unsafe. Human life is valuable.

These pedestrian signals have clearly not been assessed for risks or tested for usability.

Let’s be clear about this. These signals are new. They are inferior to the old signals. It costs money to remove the old pedestrian signals and replace them with the newer, more craptacular ones.

It beggars belief.

Kathy Sierra wrote recently about differences between US and European design. This is something I’ve written about before. I don’t necessarily believe that design is better or worse on either continent, just that cultural differences underpin what is considered good design. It’s clear to me now that the design differences within Europe itself might be wider than the Atlantic Ocean.

The attitude towards design in the UK seems to reflect the attitude towards life: a grumbling acceptance that putting up with inconvenience is all part of the human condition. Perhaps secretly it’s the grumbling that we enjoy. The weather may be beyond human control, but the queuing, the public transport and the quality of beer aren’t.

The Design of Everyday Things by Donald A. Norman would be a much shorter book had he never lived in England. Almost all of the examples of bad design are drawn from everyday life in this country, including the infamous slam-door trains.

Terry Gilliam’s Brazil is a wonderful dystopian vision extrapolated from the England of today. As well as the usual repressive regime of all Orwellian futures, it depicts a life filled with bureaucracy, inconvenience and unusable design.

Ray Bradbury once said of science-fiction:

We do this not to predict the future but to prevent it.

I want to find out who is responsible for designing the new pedestrian signals, who is responsible for — forgive the pun — giving them the green light, and who is responsible for deciding where they are implemented. I don’t want to see these things on the streets of Brighton.