Tags: open



Ice cold in Copenhagen

I went to Copenhagen last week for the Coldfront conference. It was lovely to be back in Denmark’s capital. I used to go over there every year when the Reboot conference was running, but that wrapped up a few years back so it’s been quite a while since I had the opportunity to savour Copenhagen’s architecture, culture, coffee, food, and beer.

Coldfront was fun. Kenneth has modelled the format of the event on Remy’s Full Frontal conference—one day of a single track of front-end dev talks in a comfy cinema.

Going to a focused conference like this is a great way of getting a short sharp shock of what’s hot—like a State of the Union address for the web. At Coldfront there were some very clear themes around building for resilience, and specifically routing around the damage of inconsistent connectivity. There was a very clear message—from Paul, Alex, and Patrick (blog imminent)—that the network is not always on our side. Making our sites work offline should be much more of a priority than it currently is.

On a related note, the technology that was mentioned the most was Service Workers …and Jake wasn’t even there! Heck, even I mentioned it in glowing terms in my own little presentation. I was admiring the way it has been designed specifically to be used in a progressive enhancement kind of way.
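What that progressive enhancement looks like in practice is nothing more than a feature check before registering. Here’s a minimal sketch, assuming a worker script living at /serviceworker.js (the path is just an example); browsers that don’t understand service workers simply carry on as before:

<script>
// a minimal sketch: only register a service worker if the browser supports it
// (the path to the worker script is hypothetical)
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}
</script>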

So if I were Mr. McGuire in The Graduate, my line to a web developer equivalent of Dustin Hoffman would be “I want to say one word to you, just one word. Are you listening? …Service Workers.”

100 words 076

The Open Market is situated just off London Road in Brighton. For the longest time, it—along with London Road itself—was the kind of place you really didn’t want to venture into. It was sort of seedy, and not a little bit off-putting.

But in the past few years The Open Market has had a complete overhaul. Now it’s downright pleasant. There are funky stalls, a great butcher shop, a good fishmonger, multiple fruit’n’veg places, and a truly excellent Greek café.

The atmosphere on the weekends is particularly convivial. If this is gentrification, then bring it on I say.

This is for everyone with a certificate

Mozilla—like Google before them—have announced their plans for deprecating HTTP in favour of HTTPS. I’m all in favour of moving to HTTPS. I’ve done it myself here on adactio.com, on thesession.org, and on huffduffer.com. I have some concerns about the potential linkrot involved in the move to TLS everywhere—as outlined by Tim Berners-Lee—but still, anything that makes the work of GCHQ and the NSA more difficult is alright by me.

But I have a big, big problem with Mozilla’s plan to “encourage” the move to HTTPS:

Gradually phasing out access to browser features.

Requiring HTTPS for certain browser features makes total sense, given the security implications. Service Workers, for example, are quite correctly only available over HTTPS. Any API that has access to a device sensor—or that could be used for fingerprinting in any way—should only be available over HTTPS. In retrospect, Geolocation should have been HTTPS-only from the beginning.

But to deny access to APIs where there are no security concerns, where it is merely a stick to beat people with …that’s just wrong.

This is for everyone. Not just those smart enough to figure out how to add HTTPS to their site. And yes, I know, the theory is that it’s going to get easier and easier, but so far the steps towards making HTTPS easier are just vapourware. That makes Mozilla’s plan look like something drafted by underwear gnomes.

The issue here is timing. Let’s make HTTPS easy first. Then we can start to talk about ways of encouraging adoption. Hopefully we can figure out a way that doesn’t require Mozilla or Google as gatekeepers.

Sven Slootweg outlines the problems with Mozilla’s forced SSL. I highly recommend reading Yoav’s post on deprecating HTTP too. Ben Klemens has written about HTTPS: the end of an era …that era being the one in which anyone could make a website without having to ask permission from an app store, a certificate authority, or a browser manufacturer.

On the other hand, Eric Mill wrote We’re Deprecating HTTP And It’s Going To Be Okay. It makes for an extremely infuriating read because it outlines all the ways in which HTTPS is a good thing (all of which I agree with) without once addressing the issue at hand—a browser that deliberately cripples its feature set for political reasons.

100 words 034

It was a busy week with lots of commuting up and down to London, so I’ve been looking forward to a weekend of unwinding.

Jessica and I like to spend our Saturday afternoons doing our shopping for the weekend, planning out some nice leisurely meals. Today we went down to the Open Market, a recently-renovated collection of stalls under one roof selling local produce and goods. The market is also home to an excellent Greek café, where we had lunch.

I guess it’s part of the gentrification of the London Road area. If this is gentrification, bring it on.


Cennydd points to an article by Ev Williams about the pendulum swing between open and closed technology stacks, and how that pendulum doesn’t always swing back towards openness. Cennydd writes:

We often hear the idea that “open platforms always win in the end”. I’d like that: the implicit values of the web speak to my own. But I don’t see clear evidence of this inevitable supremacy, only beliefs and proclamations.

It’s true. I catch myself saying things like “I believe the open web will win out.” Statements like that worry my inner empiricist. Faith-based outlooks scare me, and rightly so. I like being able to back up my claims with data.

Only time will tell what data emerges about the eventual fate of the web, open or closed. But we can look to previous technologies and draw comparisons. That’s exactly what Tim Wu did in his book The Master Switch and Jonathan Zittrain did in The Future Of The Internet—And How To Stop It. Both make for uncomfortable reading because they challenge my belief. Wu points to radio and television as examples of systems that began as egalitarian decentralised tools that became locked down over time in ever-constricting cycles. Cennydd adds:

I’d argue this becomes something of a one-way valve: once systems become closed, profit potential tends to grow, and profit is a heavy entropy to reverse.

Of course there is always the possibility that this time is different. It may well be that fundamental architectural decisions in the design of the internet and the workings of the web mean that this particular technology has an inherent bias towards openness. There is some data to support this (and it’s an appealing thought), but again: only time will tell. For now it’s just one more supposition.

The real question—when confronted with uncomfortable ideas that challenge what you’d like to believe is true—is what do you do about it? Do you look for evidence to support your beliefs or do you discard your beliefs entirely? That second option looks like the most logical course of action, and it’s certainly one that I would endorse if there were proven facts to be acknowledged (like gravity, evolution, or vaccination). But I worry about mistaking an argument that is still being discussed for an argument that has already been decided.

When I wrote about the dangers of apparently self-evident truisms, I said:

These statements aren’t true. But they are repeated so often, as if they were truisms, that we run the risk of believing them and thus, fulfilling their promise.

That’s my fear. Only time will tell whether the closed or open forces will win the battle for the soul of the internet. But if we believe that centralised, proprietary, capitalistic forces are inherently unstoppable, then our belief will help make them so.

I hope that openness will prevail. Hope sounds like such a wishy-washy word, like “faith” or “belief”, but it carries with it a seed of resistance. Hope, faith, and belief all carry connotations of optimism, but where faith and belief sound passive, even downright complacent, hope carries the promise of action.

Margaret Atwood was asked about the futility of having hope in the face of climate change. She responded:

If we abandon hope, we’re cooked. If we rely on nothing but hope, we’re cooked. So I would say judicious hope is necessary.

Judicious hope. I like that. It feels like a good phrase to balance empiricism with optimism; data with faith.

The alternative is to give up. And if we give up too soon, we bring into being the very endgame we feared.

Cennydd finishes:

Ultimately, I vote for whichever technology most enriches humanity. If that’s the web, great. A closed OS? Sure, so long as it’s a fair value exchange, genuinely beneficial to company and user alike.

This is where we differ. Today’s fair value exchange is tomorrow’s monopoly, just as today’s revolutionary is tomorrow’s tyrant. I will fight against that future.

To side with whatever’s best for the end user sounds like an eminently sensible metric to judge a technology. But I’ve written before about where that mindset can lead us. I can easily imagine Asimov’s three laws of robotics rewritten to reflect the ethos of user-centred design, especially that first and most important principle:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

…rephrased as:

A product or interface may not injure a user or, through inaction, allow a user to come to harm.

Whether the technology driving the system behind that interface is open or closed doesn’t come into it. What matters is the interaction.

But in his later years Asimov revealed the zeroeth law, overriding even the first:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

It may sound grandiose to apply this thinking to the trivial interfaces we’re building with today’s technologies, but I think it’s important to keep drilling down and asking uncomfortable questions (even if they challenge our beliefs).

That’s why I think openness matters. It isn’t enough to use whatever technology works right now to deliver the best user experience. If that short-term gain comes with a long-term price tag for our society, it’s not worth it.

I would much rather have an imperfect open system than a perfect proprietary one.

I have hope in an open web …judicious hope.

Battle for the planet of the APIs

Back in 2006, I gave a talk at dConstruct called The Joy Of API. It basically involved me geeking out for 45 minutes about how much fun you could have with APIs. This was the era of the mashup—taking data from different sources and scrunching them together to make something new and interesting. It was a good time to be a geek.

Anil Dash did an excellent job of describing that time period in his post The Web We Lost. It’s well worth a read—and his talk at The Berkman Institute is well worth a listen. He described what the situation was like with APIs:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

Times have changed. These days, instead of seeing themselves as part of a wider web, online services see themselves as standalone entities.

So what happened?

Facebook happened.

I don’t mean that Facebook is the root of all evil. If anything, Facebook—a service that started out being based on exclusivity—has become more open over time. That’s the cause of many of its scandals: the mismatch in mental models that Facebook users have built up about how their data will be used versus Facebook’s plans to make that data more available.

No, I’m talking about Facebook as a role model; the template upon which new startups shape themselves.

In the web’s early days, AOL offered an alternative. “You don’t need that wild, chaotic lawless web”, it proclaimed. “We’ve got everything you need right here within our walled garden.”

Of course it didn’t work out for AOL. That proposition just didn’t scale, just like Yahoo’s initial model of maintaining a directory of websites just didn’t scale. The web grew so fast (and was so damn interesting) that no single company could possibly hope to compete with it. So companies stopped trying to compete with it. Instead they, quite rightly, saw themselves as being part of the web. That meant that they didn’t try to do everything. Instead, you built a service that did one thing really well—sharing photos, managing links, blogging—and if you needed to provide your users with some extra functionality, you used the best service available for that, usually through someone else’s API …just as you provided your API to them.

Then Facebook began to grow and grow. I remember the first time someone was showing me Facebook—it was Tantek of all people—I remember asking “But what is it for?” After all, Flickr was for photos, Delicious was for links, Dopplr was for travel. Facebook was for …everything …and nothing.

I just didn’t get it. It seemed crazy that a social network could grow so big just by offering …well, a big social network.

But it did grow. And grow. And grow. And suddenly the AOL business model didn’t seem so crazy anymore. It seemed ahead of its time.

Once Facebook had proven that it was possible to be the one-stop-shop for your users’ every need, that became the model to emulate. Startups stopped seeing themselves as just one part of a bigger web. Now they wanted to be the only service that their users would ever need …just like Facebook.

Seen from that perspective, the open flow of information via APIs—allowing data to flow porously between services—no longer seemed like such a good idea.

Not only have APIs been shut down—see, for example, Google’s shutdown of their Social Graph API—but even the simplest forms of representing structured data have been slashed and burned.

Twitter and Flickr used to mark up their user profile pages with microformats. Your profile page would be marked up with hCard and, if you had a link back to your own site, it would include a rel="me" attribute. Not any more.
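For anyone who never saw that markup in the wild, it was wonderfully lightweight. A hypothetical profile page (the names and URLs here are made up) might have looked something like this:

<!-- illustrative hCard with an XFN rel="me" link; not Twitter’s or Flickr’s actual markup -->
<div class="vcard">
  <a class="fn url" href="https://twitter.com/example">Example User</a>
  <a rel="me" href="https://example.com/">example.com</a>
</div>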

Then there’s RSS.

During the Q&A of that 2006 dConstruct talk, somebody asked me about where they should start with providing an API; what’s the baseline? I pointed out that if they were already providing RSS feeds, they already had a kind of simple, read-only API.

Because there’s a standardised format—a list of items, each with a timestamp, a title, a description (maybe), and a link—once you can parse one RSS feed, you can parse them all. It’s kind of remarkable how many mashups can be created simply by using RSS. I remember at the first London Hackday, one of my favourite mashups simply took an RSS feed of the weather forecast for London and combined it with the RSS feed of upcoming ISS flypasts. The result: a Twitter bot that only tweeted when the International Space Station was overhead and the sky was clear. Brilliant!
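The baseline format really is that small. A single item in a hypothetical RSS 2.0 feed (the title, URL and date here are invented) boils down to something like this:

<!-- an invented example item -->
<item>
  <title>ISS flypast over London</title>
  <link>http://example.com/flypasts/123</link>
  <description>Visible for four minutes, low in the north-west.</description>
  <pubDate>Sat, 21 Jun 2008 22:30:00 GMT</pubDate>
</item>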

Back then, anywhere you found a web page that listed a series of items, you’d expect to find a corresponding RSS feed: blog posts, uploaded photos, status updates, anything really.

That has changed.

Twitter used to provide an RSS feed that corresponded to my HTML timeline. Then they changed the URL of the RSS feed to make it part of the API (and therefore subject to the terms of use of the API). Then they removed RSS feeds entirely.

On the Salter Cane site, I want to display our band’s latest tweets. I used to be able to do that by just grabbing the corresponding RSS feed. Now I’d have to use the API, which is a lot more complex, involving all sorts of authentication gubbins. Even then, according to the terms of use, I wouldn’t be able to display my tweets the way I want to. Yes, how I want to display my own data on my own site is now dictated by Twitter.

Thanks to Jo Brodie I found an alternative service called Twitter RSS that gives me the RSS feed I need, ‘though it’s probably only a matter of time before that gets shut down by Twitter.

Jo’s feelings about Twitter’s anti-RSS policy mirror my own:

I feel a pang of disappointment at the fact that it was really quite easy to use if you knew little about coding, and now it might be a bit harder to do what you easily did before.

That’s the thing. It’s not like RSS is a great format—it isn’t. But it’s just good enough and just versatile enough to enable non-programmers to make something cool. In that respect, it’s kind of like HTML.

The official line from Twitter is that RSS is “infrequently used today.” That’s the same justification that Google has given for shutting down Google Reader. It reminds me of the joke about the shopkeeper responding to a request for something with “Oh, we don’t stock that—there’s no call for it. It’s funny though, you’re the fifth person to ask today.”

RSS is used a lot …but much of the usage is invisible:

RSS is plumbing. It’s used all over the place but you don’t notice it.

That’s from Brent Simmons, who penned a love letter to RSS:

If you subscribe to any podcasts, you use RSS. Flipboard and Twitter are RSS readers, even if it’s not obvious and they do other things besides.

He points out the many strengths of RSS, including its decentralisation:

It’s anti-monopolist. By design it creates a level playing field.

How foolish of us, therefore, that we ended up using Google Reader exclusively to power all our RSS consumption. We took something that was inherently decentralised and we locked it up into one provider. And now that provider is going to screw us over.

I hope we won’t make that mistake again. Because, believe me, RSS is far from dead just because Google and Twitter are threatened by it.

In a post called The True Web, Robin Sloan reiterates the strength of RSS:

It will dip and diminish, but will RSS ever go away? Nah. One of RSS’s weaknesses in its early days—its chaotic decentralized weirdness—has become, in its dotage, a surprising strength. RSS doesn’t route through a single leviathan’s servers. It lacks a kill switch.

I can understand why that power could be seen as a threat if what you are trying to do is force your users to consume their own data only the way that you see fit (and all in the name of “user experience”, I’m sure).

Returning to Anil’s description of the web we lost:

We get a generation of entrepreneurs encouraged to make more narrow-minded, web-hostile products like these because it continues to make a small number of wealthy people even more wealthy, instead of letting lots of people build innovative new opportunities for themselves on top of the web itself.

I think that the presence or absence of an RSS feed (whether I actually use it or not) is a good litmus test for how a service treats my data.

It might be that RSS is the canary in the coal mine for my data on the web.

If those services don’t trust me enough to give me an RSS feed, why should I trust them with my data?


A funny thing happened when I was in Berlin two weekends ago. I was walking down the street that my AirBnB apartment was on when I heard someone say “Jeremy Keith?” It turned out it was Andre Jay Meissner, one of the founders of the excellent Open Device Lab website. We had emailed but never met before. Small world!

The Twitter account for the open device lab in Nuremberg pointed out that it’s been one year since I wrote a blog post about the open device lab I set up:

Much as I’d love to take credit for the idea of an open device lab, it simply isn’t true. Jason and Lyza had been working on setting up the open device lab in Portland for quite a while when I flung open the doors of the Clearleft test lab. But I will take credit for the “Ah, fuck it!” attitude that I introduced to the idea of sharing test devices with the community. Partly because I had seen how long it was taking the Portland device lab to get off the ground while they did everything by the book, I decided to just wait for the worst to happen instead of planning for it:

There are potential pitfalls to opening up a testing suite like this. What about the insurance? What about theft? What about breakage? But the thing about potential pitfalls is that they’re just that: potential. I’m treating all of them as YAGNI issues. I’ll address any problems if and when they occur rather than planning for worst-case scenarios.

It proved to be a great policy. So far, nothing has gone wrong. And it also served as an example to other people thinking about opening up device labs at their companies: “don’t sweat it; I didn’t!”

But as far as anniversaries go, the one-year birthday of the Clearleft device lab is not the most significant event of April 30th. Today is the twentieth anniversary of the publication of one of the most important documents in technological history: the document that officially put the World Wide Web into the public domain.

Open device labs are a small, small part of working on the web but I like to think that they represent the same kind of spirit of openness and sharing that Tim Berners-Lee and his colleagues demonstrated at CERN:

I really, really like the way that communal device labs have taken off. It’s like a physical manifestation of the sharing and openness that has imbued the practice of web design and development right from the start. View source, mailing lists, blog posts, Stack Overflow, and Github are made of bits; device labs are made of atoms. But they are all open for you to use and contribute to.

At UX London I had dinner with a Swiss entrepreneur who was showing off his proprietary native app on his phone and proudly declaring that he had been granted a patent. He seemed like a nice chap, but his attitude kind of made my skin crawl. It seemed so antithetical to the spirit of sharing and openness that I’m used to from the web.

James Gleick once described the web as the patent that never was:

Tim Berners-Lee invented the Web and the Web browser — that is, the world as we now know it — pretty much single-handedly, starting in about 1989, when he was working as a software engineer at CERN, the particle-physics laboratory in Geneva. He didn’t patent it, or any part of it. On the contrary, he has labored tirelessly to keep cyberspace open and nonproprietary.

We are all reaping the benefits of Sir Tim’s kindness and generosity.

Command lines

Here’s a nifty piece of in-browser behaviour…

Fire up Chrome and in the address field type “huffduffer.com” followed by a space and BOOM! …the address field transforms into a site-specific search field: “Search Huffduffer.”

That’s thanks to this XML file that’s been on Huffduffer since day one. It’s the most straightforward example of the OpenSearch format.
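I won’t reproduce Huffduffer’s file verbatim here, but a bare-bones OpenSearch description document looks roughly like this (the search URL template is a guess for illustration rather than Huffduffer’s actual endpoint):

<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Huffduffer</ShortName>
  <Description>Search Huffduffer</Description>
  <!-- the template URL below is illustrative -->
  <Url type="text/html" template="https://huffduffer.com/search?q={searchTerms}"/>
</OpenSearchDescription>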

It’s the same format that powers Firefox’s search providers. If you visit Huffduffer with Firefox and click in the browser’s search field, you’ll see the option to “Add Huffduffer” to the list of search engines.

I can’t imagine many people will actually do that but still, no harm in providing the option.

Another option that’s been added to Huffduffer recently (thanks to Andy’s hacking) is the ability to access the API through YQL. Feel free to open up the console and have a play around.

Scandinavian sojourn

I’ve been on a little trip to Copenhagen. Usually when I go to Denmark, it’s for Reboot but alas, there is no Reboot this year. Instead, I was there for Drupalcon.

I have to admit, it was quite a surprise to be asked to speak at a Drupal event. After all, I don’t use the Drupal framework. To be fair, I don’t use any framework—though I did dabble with Django once. Clearleft is a backend-agnostic company: we do UX, IA, front-end, but we’ve deliberately avoided committing to one particular server-side solution.

Anyway, I was kinda nervous about addressing a large group of programmers devoted to a PHP framework that I’m not that familiar with. I needn’t have worried. Everyone was incredibly welcoming and I got a very warm reception.

I had been asked along to speak about HTML5 but rather than just run through a whole bunch of features in the spec, I thought it would be more interesting to talk about why features have been added to HTML5. So I concentrated on the design principles driving the development of the specification.

I’m pretty pleased with how it turned out. The whole thing was streamed live and it’s all been recorded and posted online.

The Drupal community is clearly very vibrant: the 1000+ people gathered in Copenhagen were very enthusiastic about their chosen platform. That said, I did sense some frustration from the theming community—it isn’t always the easiest to change the markup and CSS that’s output by Drupal. This is something that Dries acknowledged in his keynote and people like Jen Simmons are fighting the good fight to improve Drupal’s front-end output.

The Drupal community also know how to party. This was the first conference I’ve been to that had its own beer; the rather excellent Awesomesauce from the world-renowned Mikkeller.

All in all, it was a thoroughly enjoyable experience. And I had enough time before my flight back to Blighty to nip across to Malmö in Sweden, where Emil showed me the sights.

Then it was time to catch a ludicrously comfy train across a remarkable bridge, past a stunning wind farm, to the snazzily-designed airport for my flight home.

97 Bottles

Almost two years ago, I was having a drink with Keith in The Alembic Bar on my last night in San Francisco:

Keith buys me a beer from the local microbrewery. I opt for an amber ale while he plumps for an IPA. After the first sips, we compare tasting notes and once again speculate about a beer-oriented version of Cork’d.

Well, like Tom says, everyone’s got ideas. It’s following through that counts. So that’s exactly what Blue Flavor have done. They built 97 Bottles:

97 bottles is a totally free, new service that lets you review, recommend, and learn about all kinds of beers. Use it to keep track of your favorite brews, find drinking buddies, and more.

It’s currently in private beta or, as they put it in their nicely-crafted copy, in the cask fermenting. But here’s the twist: if you have an OpenID you can sign in with that without waiting to be added to the list of beta testers. That’s smart. As Jeff put it:

It’s not a bug, it’s a feature. It’s OpenID evangelism!

So head on over and sign in with your OpenID. I’ve been kicking the tyres, adding some new beers to the site and reviewing existing ones. It’s a well-crafted social network: fun and useful.

All it needs is a little sprinkling of microformats.

Open Tech 2008

Open Tech was fun. It was like a more structured version of BarCamp: the schedule was planned in advance and there was a nominal entrance fee of £5 but apart from that, it was pretty much OpenCamp. Most of the talks were twenty minutes long, grouped into hour-long thematically linked trilogies.

Things kicked off with a three way attack by Kim Plowright, Simon Wardley and Matt Webb. I particularly enjoyed Matt’s stroll down the memory lane of the birth of cybernetics. Alas, the fact that I stayed to enjoy this history lesson meant that I missed David Hayes’s introduction to Edenbee. But I did stick around for the next set of environment-related talks including a demo of the Wattson from DIY Kyoto and the always-excellent Gavin Starks of AMEE fame.

After a pub lunch spent being entertained by Ewan Spence’s thoroughly researched plan for a muppet remake of Star Wars, I made it back in time for a well-connected burst of talks from Simon, Gavin and Paul. Simon pimped OpenID. Gavin delivered a healthy dose of perspective from the h’internet. Paul ranted about the technologies depicted in his wonderful illustration entitled The Web is Agreement.

I made sure to catch the state of the nation address from Open Street Map. It was, as expected, inspiring. It’s quite amazing how far the project has come since the last Open Tech in 2005. Hearing about the wealth of data available gave me the kick up the arse to update the dConstruct location page to use Open Street Map tiles. It turned out to be a simple process involving the addition of just a few more lines of JavaScript.

My talk at Open Tech was a reprise of my XTech presentation, Creating Portable Social Networks With Microformats, although the title on the schedule was Publishing With Microformats. I figured that the Open Tech audience would be fairly advanced so I decided against my original plan of doing an introductory level talk. The social network portability angle also tied in with quite a few other talks on the day.

I shared my slot with Jeni Tennison who gave a hands-on look at RDFa at the London Gazette. The two talks complemented each other well… just like microformats and RDFa. As Jeni said, microformats are great for doing the easy stuff—the low-hanging fruit—and deliberately avoid more complex data structures: they hit 80% of the use cases with 20% of the effort. RDFa, on the other hand, can handle greater complexity but with a higher learning curve. RDFa covers the other 20% of use cases but with 80% effort. Jeni’s case study was the perfect example. Whereas I had been showing the simple patterns of user profiles and relationships on social networks (easily encoded with hCard and XFN), she was dealing with a very specific data set that required its own ontology.

I was chatting with Dan at the start of Open Tech about this relationship. We’re both pretty fed up with the technologies being set up as somehow being rivals. Personally, I’m very happy that RDFa covers the kind of data structures that microformats don’t touch. When someone comes to the microformats community with an idea for a complex data format, it’s handy to have another technology to point them to. If you’re dealing with simple, common structures that have aggregate benefit like contact details, events and reviews, microformats are the perfect fit. But if you’re dealing with more complex structures—and I’m thinking here about museum collections, libraries and laboratories—chances are that some flavour of RDF is going to be more suitable.

Jeni and I briefly discussed whether we should set up our talks as a kind of mock battle. But that kind of rivalry, even when it’s done in a jokey fashion, is unnecessary and frankly, more than a little bit dispiriting. It’s more constructive to talk about real-world use cases. On that basis, I think our Open Tech presentations hit the right note.

Open Tech schedule

I’ll be heading up to the University of London tomorrow for Open Tech 2008. The last Open Tech was in 2005 which was, by all accounts, a legendary affair—it led directly to the creation of the ORG.

I’ll be speaking about microformats, probably reworking some of the things I was talking about at XTech. It looks like there’ll be quite a lot of discussion around social networks, portability and privacy so I’m going to concentrate on XFN and hCard. Speaking of which, be sure to read Ben’s excellent article on Digital Web and then check out David’s superb implementation of the Social Graph API: what a productive pair of flatmates!

I put together an hCalendar schedule for Open Tech so if you’re going along, you might want to subscribe. I recommend subscribing over downloading as the schedule is likely to change. I’ll do my best to update the hCalendar document accordingly. Depending on the WiFi situation and how knackered I am after the early start from Brighton, I may try to do some liveblogging.
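If you’ve never looked at hCalendar, each talk in a schedule like that is just an event marked up in HTML. A rough sketch (the time and venue details here are illustrative, not taken from the real schedule) looks like this:

<!-- illustrative hCalendar event; time and venue details invented -->
<div class="vevent">
  <abbr class="dtstart" title="2008-07-05T10:00:00+01:00">10am</abbr>
  <span class="summary">Publishing With Microformats</span>,
  <span class="location">University of London</span>
</div>

Run that through an hCalendar-to-iCal converter and you’ve got something you can subscribe to in your calendar application.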

The Copenhagen Report

Reboot 10 was everything I thought it would be: chaotic, stimulating, frustrating and fun. It’s an odd conference, pitched somewhere between TED and a BarCamp, carried off with a distinctly European flair.

The speakers delivered the goods on a wide range of subject matter. Howard Rheingold was as thought-provoking and interesting as you would expect, Jyri shared his thoughts on social interaction online and I thoroughly enjoyed listening to David Weinberger riffing. With at least three tracks of simultaneous talks at any one time, and with plenty of catching up to do in the corridor, I didn’t get to see all the talks but a superb round of micro-presentations gave me the opportunity to get the quick versions of talks I missed.

My presentation seemed to go down fairly well although I thought I was just rambling on. Maybe the fact that I was accompanying myself on mandolin meant that the audience was more forgiving. I didn’t really have slides, just a few hyperlinked documents to tie the narrative together.

The theme of this year’s Reboot was Free. Fittingly, my presentation resulted in my receiving two free gifts. Michael Rose, a local piano player, gave me a CD on which he accompanies a series of Irish tunes. Nikolai—who was introducing the speakers and taking care of the sound in the room where I was presenting—was reminded by my mention of Lawrence Lessig that he had boxes full of The Future of Ideas that were originally destined for the Danish parliament. They were distributed amongst the attendees of Reboot instead.



I’m off to Copenhagen for Reboot 10. Reboot is always a fun gathering. It might not be the most useful event but as part of a balanced conference diet, it’s got a unique European flavour.

As usual, I’m going to use the opportunity to talk about something a bit different to my usual web development spiels. This time I’ll be talking about The Transmission of Tradition, a subject I’ve already road-tested at BarCamp London 3:

This talk will look at the past, present and future of transmitting traditional Irish music from the dance to the digital, punctuated with some examples of the tunes. This will serve as a starting point for a discussion of ideas such as the public domain, copyright and the emergence of a reputation economy on the Web.

At the very least, it will give me a chance to debut that mandolin I picked up in Nashville.

This will be my third Reboot. My previous talks were:

I recently discovered the video of that last presentation. Jessica was kind enough to transcribe the whole thing. She also transcribed my talk from this year’s XTech. Go ahead and read through them if you have the time.

If you don’t have the time, you can always mark them for later reading using Instapaper. I love that app. It does one simple little thing but does it really well. Hit a bookmarklet labelled “read later” and you’re done.

Here’s a little sampling of documents I’ve marked for later reading:

Maybe I should fire them up in multiple tabs and read them on the flight to Denmark. Or I could spend the time brushing up on my Danish.

If you’re headed to Reboot, I’ll see you there. Otherwise …Farvel!

City Hopping

Now that I’m done travelling for pleasure, it’s time for me to travel for business again. I’m heading out to San Francisco for the Supernova conference. Tantek has roped me into moderating a panel called Bottom-Up Distributed Openness.

I’ll be showing up in SF next Friday. Until then, I’ll be in Nashville for the somewhat embarrassingly-titled Voices That Matter conference where I’ll be delivering a half-day workshop on Ajax and a presentation on microformats.

While I’m in the heartland, I’m planning to treat myself to a new mandolin. Then I can bring that mandolin with me when I go to Copenhagen at the end of the month for Reboot 10 where, if my proposal is accepted, I’ll be talking on The Transmission of Tradition. The video of my talk from last year, the pretentiously-titled Soul, is available for your viewing pleasure. I’ll see about getting it transcribed and added to the articles section here.

All that’s ahead of me. Right now I need to prepare myself for the long and tedious trip across the Atlantic. See you in Nashville, San Francisco or Copenhagen.

App Engines of Creation

At last night’s £5 App gathering, after Glenn entertained us with the story of setting up Madgex, Paul and Simon unveiled a little thing they’ve been working on called WalRSS. In a nutshell, you point it at a URL and it makes a nice iPhone/iPod Touch version by styling the associated RSS feed.

Simon talked about all the headaches involved in such a seemingly simple concept. All the problems boiled down to the fact that the app needs to consume/parse/scrape third-party content. It turns out that consuming/parsing/scraping HTML and RSS is an order of magnitude hairier and scarier than pointing at a nice shiny RESTful API. In a textbook example of Postel’s Law, Simon needed to be ultra-paranoid about malicious users potentially taking down his server while being overly-generous in the kind of malformed, invalid RSS/Atom he accepted because, as it turns out, a helluva lot of feeds out there are bozo-compliant. With all sorts of clever server-side solutions at his disposal to handle polling, load balancing, caching and message queueing, he quickly came to realise that he was becoming more of a sysadmin than a web developer.

Ironically, just the day before the £5 App meetup, Google announced their App Engine. WalRSS is almost exactly the kind of app that the App Engine is designed for: it’s written in Django and it needs to do the kind of processing that Google’s infrastructure was made to handle.

I for one welcome our new App Engine overlords. I quite like to dabble in the occasional bit of backend coding but I have no desire to delve into the domain of systems administration.

There’s already an OpenID provider built on Google App Engine. This means that anybody with a Google account potentially has an OpenID URL. You’d have to log in through the app first but from then on, you could use http://openid-provider.appspot.com/[your username] as your login.

Right now I’m using http://adactio.com/ as my OpenID URL, delegating to http://adactio.myopenid.com/:

<link rel="openid.delegate" href="http://adactio.myopenid.com/" />

Should I ever tire of MyOpenID, I guess I could potentially update my delegate link to use Google:

<link rel="openid.delegate" href="http://openid-provider.appspot.com/adactio" />

I’d still need to update my openid.server link though.
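For completeness, delegation involves a pair of link elements: one pointing at the provider’s endpoint and one naming the identity being delegated. Mine currently look something like this (the MyOpenID server URL is from memory, so treat it as illustrative):

<!-- the openid.server URL is quoted from memory -->
<link rel="openid.server" href="http://www.myopenid.com/server" />
<link rel="openid.delegate" href="http://adactio.myopenid.com/" />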

Oh, you think this is geeky stuff? You should have heard Simon last night. Actually, you could have if you tuned into Ribot’s live broadcast on Qik.

Brighton, mapped

Today I travelled from home to work, from work to band practice, from band practice to an educational celebration: OpenStreetMap Brighton 1.0.

Ever since the mapping workshop after dConstruct 2006, Mikel and others have been out and about improving the mapping data for Brighton from the ground up. While a map can never be truly finished—it is, after all, a representation of a changing, evolving place—the data is now remarkably complete.

There’s a natural tendency for us to think in our own domains of experience so I usually only see the potential for OpenStreetMap data in web applications and mashups. But the launch event showed some wonderful use-cases in the real world: local councils, public transport… these are organisations that would otherwise have to pay very large sums (of taxpayer’s money) to the Ordnance Survey just to display a map.

OpenStreetMap is one of those applications of technology, like Wikipedia or BarCamp, that fills me with hope. On paper, the concepts sound crazy. In reality, they don’t just compete with commercial services, they surpass them.

I really need to get myself a GPS device.


The nerdier nether-regions of blogland have been burning through the night with the news of the OpenSocial initiative spearheaded by Google and supported by what Chris so aptly calls the coalition of the willing.

Like Simon, I’ve been trying to get my head around exactly what OpenSocial is all about ever since reading Brady Forrest’s announcement. Here’s what I think is going on:

Facebook has an API that allows third parties to put applications on Facebook profile pages (substitute the word “widget” for “application” for a more accurate picture). Developers have embraced Facebook applications because, well, Facebook is so damn big. But developing an app/widget for Facebook is time-consuming enough that the prospect of rewriting the same app for a dozen other social networking sites is unappealing. That’s where OpenSocial comes in. It’s a set of conventions. If you develop to these conventions, your app can live on any of the social networking sites that support OpenSocial: LinkedIn, MySpace, Plaxo and many more.

Some of the best explanations of OpenSocial are somewhat biased, coming as they do from the people who are supporting this initiative, but they are still well worth reading:

There’s no doubt that this set of conventions built upon open standards—HTML and JavaScript—is very good for developers. They no longer have to choose what “platforms” they want to support when they’re building widgets.

That’s all well and good but frankly, I’m not very interested in making widgets, apps or whatever you want to call them. I’m interested in portable social networks.

At first glance, it looks like OpenSocial might provide a way of exporting social network relationships. From the documentation:

The People and Friends data API allows client applications to view and update People Profiles and Friend relationships using AtomPub GData APIs with a Google data schema. Your client application can request a list of a user’s Friends and query the content in an existing Profile.

But it looks like these API calls are intended for applications sitting on the host platform rather than separate sites hoping to extract contact information. As David Emery points out, this is a missed opportunity:

The problem is, however, that OpenSocial is coming at completely the wrong end of the closed-social-network problem. By far and away the biggest problem in social networking is fatigue, that to join yet another site you have to sign-up again, fill in all your likes and dislikes again and—most importantly—find all your friends again. OpenSocial doesn’t solve this, but if it had it could be truly revolutionary; if Google had gone after opening up the social graph (a term I’m not a fan of, but it seems to have stuck) then Facebook would have become much more of an irrelevance—people could go to whatever site they wanted to use, and still preserve all the interactions with their friends (the bit that really matters).

While OpenSocial is, like OAuth, a technology for developers rather than end users, it does foster a healthy atmosphere of openness that encourages social network portability. Tantek has put together a handy little table to explain how all these technologies fit together:

portability         | technology        | primary beneficiary
social application  | OAuth, OpenSocial | developers
social profile      | hCard             | users
friends list        | XFN               | users
login               | OpenID            | users

I was initially excited that OpenSocial might be a magic bullet for portable social networks but after some research, it doesn’t look like that’s the case—it’s all about portable social widgets.

But like I said, I’m not entirely sure that I’ve really got a handle on OpenSocial so I’ll be digging deeper. I was hoping to see Patrick Chanezon talk about it at the Web 2.0 Expo in Berlin next week but, wouldn’t you know it, I’m scheduled to give a talk at exactly the same time. I hope there’ll be plenty of livebloggers taking copious notes.


Tom Morris was in town today so I invited him to take shelter from the miserable weather ‘round at Clearleft Towers. Which reminds me of something Tom showed me at BarCamp Brighton that I’ve been meaning to share for a while now.

Tom has an hCard on his blog. By default the information provided is fairly basic: an email address, a URL and a vague physical address. Right by the hCard, there’s a simple form that allows you to log on using OpenID. If you log on and you’re on a white list of Tom’s friends, the hCard is updated to reveal more information: telephone numbers and a complete physical address.

That’s pretty clever. And when you consider that OpenID is a URL-based authentication system and XFN is also based around URLs, it would be pretty easy to have the white list correspond to an XFN list on the same page as the hCard.
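In other words, the white list could simply be the people you’ve already marked up with XFN. A hypothetical friends list (the URLs are made up) might look like this:

<!-- a made-up XFN list; matching one of these URLs to the visitor’s OpenID would unlock the full hCard -->
<ul>
  <li><a rel="friend met" href="https://example.com/alice">Alice</a></li>
  <li><a rel="friend" href="https://example.org/bob">Bob</a></li>
</ul>

If the OpenID you log in with matches one of those URLs, you get the full contact details.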

hCard | OpenID | XFN… it’s like Unix pipes for the Web: small pieces, loosely joined.


It sounds like Mashup Camp was a hive of very productive activity. Kevin Lawver gave a presentation on portable social networks but instead of just talking about it, he wrote some Ruby code. Kevin is using OpenID for log in, followed by hCard parsing and XFN spidering (see also: Gavin Bell’s work). Superb stuff!

Meanwhile, Plaxo is now supporting OpenID and microformats thanks to the efforts of Kaliya and Chris.

And just in case you think that this is still a niche geek thing, here are the job details for Program Manager of Internet Explorer over at Microsoft:

Does the idea of redefining the role of the Internet browser appeal to you? Do the terms HTTP, RSS, Microformats, and OpenID, excite you? If so, then this just might be the opportunity for you.