Tags: api



Sonic sparklines

I’ve seen some lovely examples of the Web Audio API recently.

At the Material conference, Halldór Eldjárn demoed his Poco Apollo project. It generates music on the fly in the browser to match a random image from NASA’s Apollo archive on Flickr. Brian Eno, eat your heart out!

At Codebar Brighton a little while back, local developer Luke Twyman demoed some of his audio-visual work, including the gorgeous Solarbeat—an audio orrery.

The latest issue of the Clearleft newsletter has some links on sound design in interfaces.

I saw Ruth give a fantastic talk on the Web Audio API at CSS Day this year. It had just the right mixture of code and inspiration. I decided there and then that I’d have to find some opportunity to play around with web audio.

As ever, my own website is the perfect playground. I added an audio Easter egg to adactio.com a while back, and so far, no one has noticed. That’s good. It’s a very, very silly use of sound.

In her talk, Ruth emphasised that the Web Audio API is basically just about dealing with numbers. Lots of the examples of nice usage are the audio equivalent of data visualisation. Data sonification, if you will.

I’ve got little bits of dataviz on my website: sparklines. Each one is a self-contained SVG file. I added a script element to the SVG with a little bit of JavaScript that converts numbers into sound (I kind of wish that the script were scoped to the containing SVG but that’s not the way JavaScript in SVG works—it’s no different to putting a script element directly in the body). Clicking on the sparkline triggers the sound-playing function.
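The general approach is simple enough. Here’s a minimal sketch of that kind of data sonification—not the exact code in my SVGs; the frequency range and note length here are illustrative:

    // Map each value to a pitch and schedule it on a single oscillator
    function playSparkline(values) {
        var audioContext = new (window.AudioContext || window.webkitAudioContext)();
        var oscillator = audioContext.createOscillator();
        oscillator.connect(audioContext.destination);
        var max = Math.max.apply(null, values) || 1;
        var noteLength = 0.1; // seconds per data point
        values.forEach(function (value, i) {
            // Squeeze the value into an audible range, e.g. 100Hz to 2000Hz
            var frequency = 100 + (value / max) * 1900;
            oscillator.frequency.setValueAtTime(frequency, audioContext.currentTime + i * noteLength);
        });
        oscillator.start();
        oscillator.stop(audioContext.currentTime + values.length * noteLength);
    }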

It sounds terrible. It’s like a theremin with hiccups.

Still, I kind of like it. I mean, I wish it sounded nicer (and I’m open to suggestions on how to achieve that—feel free to fork the code), but there’s something endearing about hearing a month’s worth of activity turned into a wobbling wave of sound. And it’s kind of fun to hear how a particular tag is used more frequently over time.

Anyway, it’s just a silly little thing, but anywhere you spot a sparkline on my site, you can tap it to hear it translated into sound.

Web! What is it good for?


I have a blind spot. It’s the web.

I just can’t get excited about the prospect of building something for any particular operating system, be it desktop or mobile. I think about the potential lifespan of what would be built and end up asking myself “why bother?” If something isn’t on the web—and of the web—I find it hard to get excited about it. I’m somewhat jealous of people who can get equally excited about the web, native, hardware, print …in my mind, if it hasn’t got a URL, it’s missing some vital spark.

I know that this is a problem, but I can’t help it. At the very least, I have enough presence of mind to recognise it as being my problem.

Given these unreasonable feelings of attachment towards the web, you might expect me to wish it to become the one technology to rule them all. But I’ve never felt that any such victory condition would make sense. If anything, I’ve always been grateful for alternative avenues of experimentation and expression.

When Flash was a thriving ecosystem for artists to push the boundaries of what was possible to deliver to a web browser, I never felt threatened by it. I never wished for web technologies to emulate those creations. Don’t get me wrong: I’m happy that we’ve got nice smooth animations in CSS, but I never thought the lack of animation was crippling the web’s potential.

Now we have native technologies that can do more than the web can do. iOS and Android apps can access device APIs that web browsers can’t (yet). And, once again, while I look forward to the day that websites will be able to do all the things that native apps can do today, I don’t think that the lack of those capabilities is dooming the web to irrelevance.

There will always be some alternative that is technologically more advanced than the web. First there were CD-ROMs. Then we had Flash. Now we have native apps. Each one of those platforms offered more power and functionality than you could get from a web browser. And yet the web persists. That’s because none of the individual creations made with those technologies could compete with the collective power of all of the web, hyperlinked together. A single native app will “beat” a single website every time …but an app store pales when compared to the incredible reach and scope of the entire World Wide Web.

The web will always be lagging behind some other technology. I’m okay with that. If anything, I see these other technologies as the research and development arm of the web. CD-ROMs, Flash, and now native apps show us what authors want to be able to do on the web. Slowly but surely, those abilities start becoming available in web browsers.

The pace of this standardisation can seem infuriatingly slow. Sometimes it is too slow. But it’s important that we get it right—the web should hold itself to a higher standard. And so the web plays the tortoise while other technologies race ahead as the hare.

Like I said, I’m okay with that. I’m okay with the web not being as advanced as some other technology stack at any particular moment. I can wait.

In fact, as PPK points out, we could do real damage to the web by attempting to make it mimic some platform that’s currently in the ascendant. I disagree with his framing of it as a battle—rather than conceding defeat, I see it more like waiting out a siege—but I agree completely with this assessment:

The web cannot emulate native perfectly, and it never will.

If we accept that, then we can play to the web’s strengths (while at the same time, playing a slow game of catch-up behind the scenes). The danger comes when we try to emulate the capabilities of something that isn’t the web:

Emulating native leads to bad UX (or, at least, a UX that’s clearly a sub-optimal copy of native UX).

Whenever a website tries to emulate something from an operating system—be it desktop or mobile—the result is invariably something that gets really, really close …but falls just a little bit short. It feels like entering an uncanny valley of interaction design.

Frank sums this up nicely:

I think you make what I call “bicycle bear websites.” Why? Because my response to both is the same.

“Listen bub,” I say, “it is very impressive that you can teach a bear to ride a bicycle, and it is fascinating and novel. But perhaps it’s cruel? Because that’s not what bears are supposed to do. And look, pal, that bear will never actually be good at riding a bicycle.”

This is how I feel about so many of the fancy websites I see. “It is fascinating that you can do that, but it’s really not what a website is supposed to do.”

Enough is enough, says PPK:

It’s time to recognise that this is the wrong approach. We shouldn’t try to compete with native apps in terms set by the native apps. Instead, we should concentrate on the unique web selling points: its reach, which, more or less by definition, encompasses all native platforms, URLs, which are fantastically useful and don’t work in a native environment, and its hassle-free quality.

This is something that Cennydd talked about recently on an episode of the Design Details podcast. The web, he argues, is great for the sharing of information, but not so great for applications.

I think PPK, Cennydd, and I are all in broad agreement, but we almost certainly differ in the details. PPK, for example, argues that maybe news sites should be native apps instead, but for me, those are exactly the kind of sites that benefit from belonging to no particular platform. And when Cennydd talks about applications on the web, it raises the whole issue of what constitutes a web app anyway. If we’re talking about having access to device APIs—cameras, microphones, accelerometers—then yes, native is the way to go. But if we’re talking about interface elements and motion design, then I think the web can hold its own …sometimes.

Of course not every web browser can match the capabilities of a native app—that’s why it’s so important to approach web development through the lens of progressive enhancement rather than treating it as software development no different from that of native platforms. The web is not a platform—that’s the whole point of the web; it’s cross-platform. As Baldur put it:

Treating the web like another app platform makes sense if app platforms are all you’re used to. But doing so means losing the reach, adaptability, and flexibility that makes the web peerless in both the modern media and software industries.

The price we pay for that incredible cross-platform reach is that features on the web will always be lagging behind, and even when they do arrive, they won’t be available in all web browsers.

To paraphrase William Gibson: capabilities on the web will always be here, but they will never be evenly distributed.

But let’s take a step back from the surface-level differences between web and native. Just as happened with CD-ROMs and Flash, the web is catching up with native when it comes to motion design, visual feedback, and gestures like swiping and dragging. I don’t think those are where the fundamental differences lie. I don’t even think the fundamental differences lie in accessing device APIs like cameras, microphones, and offline storage—the web is (slowly) catching up in those areas too.

What if the fundamental differences lie deeper than the technical implementation? What if the web is suited to some things more than others, not because of technical limitations, but because of philosophical mismatches?

The web was born at CERN, an amazing environment that’s free of many of the economic and hierarchical pressures that shape technology decisions elsewhere. The web’s heritage as a hypertext document sharing system for pure scientific research is often treated as a handicap, something that must be overcome in this age of applications and monetisation. But I see this heritage as a feature, not a bug. It promotes ideals of universal access above individual convenience, creation above consumption, and sharing above financial gain.

In yet another great article by Baldur, called The new age of HTML: the web is being torn apart, he opens with this:

For web development to grow as a craft and as an industry, we have to follow the money. Without money the craft becomes a hobby and unmaintained software begins to rot.

But I think there’s a danger here. If we allow the web to be led by money-making, we may end up changing the fundamental nature of the web, and not for the better.

Now, personally, I believe that it’s entirely possible to run a profitable business on the web. There are plenty of them out there. But suppose we allow that other avenues are more profitable. Let’s assume that there’s more friction in making money on the web than there is in, say, making money on iOS (or Android, or Facebook, or some other monolithic stack). If that were the case …would that be so bad?

Suppose, to use PPK’s phrase, we “concede defeat” to Apple, Google, Microsoft, and Facebook. When you think about it, it makes sense that platforms born from profit-driven companies are going to be better at generating profit than something created by a bunch of idealistic scientists trying to improve the knowledge of the human race. Suppose we acknowledged that the web isn’t that well-suited to capitalism.

I think I’d be okay with that.

Would the web become little more than a hobbyist’s playground? A place for amateurs rather than professional businesses?


I’d be okay with that too.

Y’see, what attracted me to the web—to the point where I have this blind spot—wasn’t the opportunity to make money. What attracted me to the web was its remarkable ability to allow anyone to share anything, not just for the here and now, but for the future too.

If you’ve been reading my journal or following my links for any time, you’ll be aware that two of my biggest interests are progressive enhancement and digital preservation. In my mind, these two things are closely intertwingled.

For me, progressive enhancement is a means of practicing universal design, a way of providing access to as many people as possible. That includes access across time, hence the crossover with digital preservation. I’ve noticed again and again that what’s good for accessibility is also good for longevity, and vice versa.

Bret Victor writes:

Whenever the ephemerality of the web is mentioned, two opposing responses tend to surface. Some people see the web as a conversational medium, and consider ephemerality to be a virtue. And some people see the web as a publication medium, and want to build a “permanent web” where nothing can ever disappear.

I don’t want a web where “nothing can ever disappear” but I also don’t want the default lifespan of a resource on the web to be ephemeral. I think that whoever published that resource should get to decide how long or short its lifespan is. The problem, as Maciej points out, is in the mismatch of expectations:

I’ve come to believe that a lot of what’s wrong with the Internet has to do with memory. The Internet somehow contrives to remember too much and too little at the same time, and it maps poorly on our concepts of how memory should work.

I completely agree with Bret’s woeful assessment of the web when it comes to link rot:

It is this common record of public thought — the “great conversation” — whose stability and persistence is crucial, both for us alive today and for those who will come after.

I believe we can and should do better. But I completely and utterly disagree with him when he says:

Photos from your friend’s party are not part of the common record.

Nor are most casual conversations. Nor are search histories, commercial transactions, “friend networks”, or most things that might be labeled “personal data”. These are not deliberate publications like a bound book; they are not intended to be lasting contributions to the public discourse.

We can agree when it comes to search histories and commercial transactions, but it makes no sense to lump those in with the ordinary plenty that I’ve written about before:

My words might not be as important as the great works of print that have survived thus far, but because they are digital, and because they are online, they can and should be preserved …along with all the millions of other words by millions of other historical nobodies like me out there on the web.

For me, this lies at the heart of what the web does. The web removes the need for tastemakers who get to decide what gets published. The web removes the need for gatekeepers who get to decide what gets saved.

Other avenues of expressions will always be more powerful than the web in the short term: CD-ROMs, Flash, and now native. But they all come with gatekeepers. The collective output of the human race—from the most important scholarly papers to the most trivial blog post—is too important to put in the hands of the gatekeepers of today who may not even be around tomorrow: Apple, Google, Microsoft, et al.

The web has no gatekeepers. The web has no quality control. The web is a mess. The web is for everyone.

I have a blind spot. It’s the web.

Just what is it that you want to do?

The supersmart Scott Jenson just gave a talk at The Web Is in Cardiff, which was, by all accounts, excellent. I wish I could have seen it, but I’m currently chilling out in Florida and I haven’t mastered the art of bilocation.

Last week, Scott wrote a blog post called My Issue with Progressive Enhancement (he wrote it on Google+, which is why you might not have seen it).

In it, he takes to task the idea that—through progressive enhancement—you should be able to offer all functionality to all browsers, thereby foregoing the use of newer technologies that aren’t universally supported.

If that were what progressive enhancement meant, I’d be with him all the way. But progressive enhancement is not about offering all functionality; progressive enhancement is about making sure that your core functionality is available to everyone. Everything after that is, well, an enhancement (the clue is in the name).

The trick to doing this well is figuring out what is core functionality, and what is an enhancement. There are no hard and fast rules.

Sometimes it’s really obvious. Web fonts? They’re an enhancement. Rounded corners? An enhancement. Gradients? An enhancement. Actually, come to think of it, all of your CSS is an enhancement. Your content, on the other hand, is not. That should be available to everyone. And in the case of task-based web thangs, that means the fundamental tasks should be available to everyone …but you can still layer more tasks on top.

If you’re building an e-commerce site, then being able to add items to a shopping cart and being able to check out are your core tasks. Once you’ve got that working with good ol’ HTML form elements, then you can go crazy with your enhancements: animating, transitioning, swiping, dragging, dropping …the sky’s the limit.
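That baseline needn’t be anything fancy. As a sketch (the URL and field names here are made up):

    <form action="/cart/add" method="post">
        <input type="hidden" name="product" value="widget-123">
        <button type="submit">Add to cart</button>
    </form>

Everything beyond that plain form submission is an enhancement.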

This is exactly what Orde Saunders describes:

I’m not suggesting that you try and replicate all your JavaScript functionality when it’s disabled, above all that’s just not practical. What you should be aiming for is being able to complete the basics - for example adding a product to a shopping cart and then checking out. This is necessarily going to be clunky as judged by current standards and I suggest you don’t spend much time on optimising this process.

Scott asked about building a camera app with progressive enhancement.

Here again, the real question to ask is “what is the core functionality?” Building a camera app is a means to an end, not the end itself. You need to ask what the end goal is. Perhaps it’s “enable people to share photos with their friends.” Going back to good ol’ HTML, you can accomplish that task with:

<input type="file" accept="image/*">

Now that you’ve got that out of the way, you can spend the majority of your time making the best damn camera app you can, using all the latest browser technologies. (Perhaps WebRTC? Maybe use a canvas element to display the captured image data and apply CSS filters on top?)
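As a sketch of that enhancement layer—assuming getUserMedia support, and with hypothetical video, canvas, and button elements in the page:

    // Feature-detect first; browsers without support still get the file input
    if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
        navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
            var video = document.querySelector('video');
            video.srcObject = stream;
            video.play();
            document.querySelector('#capture').addEventListener('click', function () {
                var canvas = document.querySelector('canvas');
                canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
                canvas.classList.add('sepia'); // CSS filters applied via a class
            });
        });
    }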

Scott says:

My point is that not everything devolves to content. Sometimes the functionality is the point.

I agree wholeheartedly. In fact, I would say that even in the case of “content” sites, functionality is still the point—the functionality would be reading/hearing/accessing content. But I think that Scott is misunderstanding progressive enhancement if he thinks it means providing all the functionality that one can possibly provide.

Mat recently pointed out that there are plenty of enhancements on the Boston Globe site that require JavaScript, but the core functionality is available to everyone.

Scott again:

What I’m chaffing at is the belief that when a page is offering specific functionality, let’s say a camera app or a chat app, what does it mean to progressively enhance it?

Again, a realtime chat app is a means to an end. What is it enabling? The ability for people to talk to each other over the web? Okay, we can do that using good ol’ HTML—text and form elements—with full page refreshes. That won’t be realtime. That’s okay. The realtime part is an enhancement. Use Web Sockets and WebRTC (in the browsers that support them) to provide the realtime experience. But everyone gets the core functionality.
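A sketch of that layering, with a hypothetical endpoint and markup:

    // The baseline: a plain form that POSTs and triggers a full page refresh.
    // If Web Sockets are available, intercept the form and go realtime instead.
    var form = document.querySelector('form');
    if ('WebSocket' in window) {
        var socket = new WebSocket('wss://example.com/chat');
        socket.addEventListener('message', function (event) {
            var item = document.createElement('li');
            item.textContent = event.data;
            document.querySelector('#messages').appendChild(item);
        });
        form.addEventListener('submit', function (event) {
            event.preventDefault(); // skip the full page refresh
            socket.send(form.elements.message.value);
            form.reset();
        });
    }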

Like I said, the trick is figuring out what’s core functionality and what’s an enhancement.

Ethan provides another example. Let’s say you’re building a browser-based rich text editor that uses JavaScript to do all sorts of formatting on the fly. The core functionality is not the formatting on the fly; the core functionality is being able to edit text.

If progressive enhancement truly meant making all functionality available to everyone, then it would be unworkable. I think that’s a common misconception around progressive enhancement; there’s this idea that using progressive enhancement means that you’re going to spend all your time making stuff work in older browsers. In fact, it’s the exact opposite. As long as you spend a little bit of time at the start making sure that the core functionality works with good ol’ fashioned HTML, then you can spend most of your time trying out the latest and greatest browser technologies.

As Orde put it:

What you are going to be spending the majority of your time and effort on is the enhanced JavaScript version as that is how the majority of your customers will be experiencing your site.

The other Scott—Scott Jehl—wrote a while back:

For us, building with Progressive Enhancement moves almost all of our development time and costs to newer browsers, not older ones.

Progressive Enhancement frees us to focus on the costs of building features for modern browsers, without worrying much about leaving anyone out. With a strongly qualified codebase, older browser support comes nearly for free.

Approaching browser support this way requires a different way of thinking. For everything you’re building, you need to ask “is this core functionality, or is it an enhancement?” and build accordingly. It takes a bit of getting used to, but it gets easier the more you do it (until, after a while, it becomes second nature).

But if you’re thinking about progressive enhancement as “devolving” down—as Scott Jenson describes in his post—then I think you’re on the wrong track. Instead it’s about taking care of the core functionality quickly and then spending your time “enhancing” up.

Scott asks:

Shouldn’t we be allowed to experiment? Isn’t it reasonable to build things that push the envelope?

Absolutely! And the best and safest way to do that is to make sure that you’re providing your core functionality for everyone. Once you do that, you can go nuts with the latest and greatest experimental envelope-pushing technologies, secure in the knowledge that you don’t even need to worry about the fact that they don’t work in older browsers. Geolocation! Offline storage! Device APIs! Anything you can think of, you can use as a powerful enhancement on top of your core tasks.

Once you realise this, it’s immensely liberating to use progressive enhancement. You can have the best of both worlds: universal access to core functionality, combined with all the latest cutting-edge technology too.

Battle for the planet of the APIs

Back in 2006, I gave a talk at dConstruct called The Joy Of API. It basically involved me geeking out for 45 minutes about how much fun you could have with APIs. This was the era of the mashup—taking data from different sources and scrunching them together to make something new and interesting. It was a good time to be a geek.

Anil Dash did an excellent job of describing that time period in his post The Web We Lost. It’s well worth a read—and his talk at The Berkman Institute is well worth a listen. He described what the situation was like with APIs:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

Times have changed. These days, instead of seeing themselves as part of a wider web, online services see themselves as standalone entities.

So what happened?

Facebook happened.

I don’t mean that Facebook is the root of all evil. If anything, Facebook—a service that started out being based on exclusivity—has become more open over time. That’s the cause of many of its scandals: the mismatch in mental models that Facebook users have built up about how their data will be used versus Facebook’s plans to make that data more available.

No, I’m talking about Facebook as a role model; the template upon which new startups shape themselves.

In the web’s early days, AOL offered an alternative. “You don’t need that wild, chaotic lawless web”, it proclaimed. “We’ve got everything you need right here within our walled garden.”

Of course it didn’t work out for AOL. That proposition just didn’t scale, just like Yahoo’s initial model of maintaining a directory of websites just didn’t scale. The web grew so fast (and was so damn interesting) that no single company could possibly hope to compete with it. So companies stopped trying to compete with it. Instead they, quite rightly, saw themselves as being part of the web. That meant that they didn’t try to do everything. Instead, you built a service that did one thing really well—sharing photos, managing links, blogging—and if you needed to provide your users with some extra functionality, you used the best service available for that, usually through someone else’s API …just as you provided your API to them.

Then Facebook began to grow and grow. I remember the first time someone showed me Facebook—it was Tantek, of all people—and I remember asking “But what is it for?” After all, Flickr was for photos, Delicious was for links, Dopplr was for travel. Facebook was for …everything …and nothing.

I just didn’t get it. It seemed crazy that a social network could grow so big just by offering …well, a big social network.

But it did grow. And grow. And grow. And suddenly the AOL business model didn’t seem so crazy anymore. It seemed ahead of its time.

Once Facebook had proven that it was possible to be the one-stop-shop for your user’s every need, that became the model to emulate. Startups stopped seeing themselves as just one part of a bigger web. Now they wanted to be the only service that their users would ever need …just like Facebook.

Seen from that perspective, the open flow of information via APIs—allowing data to flow porously between services—no longer seemed like such a good idea.

Not only have APIs been shut down—see, for example, Google’s shutdown of their Social Graph API—but even the simplest forms of representing structured data have been slashed and burned.

Twitter and Flickr used to mark up their user profile pages with microformats. Your profile page would be marked up with hCard and, if you had a link back to your own site, it would include a rel="me" attribute. Not any more.
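For the record, that markup was as lightweight as something like this (a sketch along the lines of what those profile pages used to contain):

    <div class="vcard">
        <a class="url fn" href="http://twitter.com/adactio">Jeremy Keith</a>
        <a href="http://adactio.com/" rel="me">adactio.com</a>
    </div>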

Then there’s RSS.

During the Q&A of that 2006 dConstruct talk, somebody asked me about where they should start with providing an API; what’s the baseline? I pointed out that if they were already providing RSS feeds, they already had a kind of simple, read-only API.

Because there’s a standardised format—a list of items, each with a timestamp, a title, a description (maybe), and a link—once you can parse one RSS feed, you can parse them all. It’s kind of remarkable how many mashups can be created simply by using RSS. I remember at the first London Hackday, one of my favourite mashups simply took an RSS feed of the weather forecast for London and combined it with the RSS feed of upcoming ISS flypasts. The result: a Twitter bot that only tweeted when the International Space Station was overhead and the sky was clear. Brilliant!
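That’s the power of a standardised format: the parsing code doesn’t care whose feed it’s reading. A browser-based sketch:

    // Parse any RSS feed into a list of items; the same function works
    // for weather forecasts, ISS flypasts, blog posts, whatever
    function parseRSS(xmlString) {
        var doc = new DOMParser().parseFromString(xmlString, 'text/xml');
        return Array.prototype.map.call(doc.querySelectorAll('item'), function (item) {
            function text(name) {
                var el = item.querySelector(name);
                return el ? el.textContent : null;
            }
            return {
                title: text('title'),
                link: text('link'),
                description: text('description'),
                pubDate: text('pubDate')
            };
        });
    }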

Back then, anywhere you found a web page that listed a series of items, you’d expect to find a corresponding RSS feed: blog posts, uploaded photos, status updates, anything really.

That has changed.

Twitter used to provide an RSS feed that corresponded to my HTML timeline. Then they changed the URL of the RSS feed to make it part of the API (and therefore subject to the terms of use of the API). Then they removed RSS feeds entirely.

On the Salter Cane site, I want to display our band’s latest tweets. I used to be able to do that by just grabbing the corresponding RSS feed. Now I’d have to use the API, which is a lot more complex, involving all sorts of authentication gubbins. Even then, according to the terms of use, I wouldn’t be able to display my tweets the way I want to. Yes, how I want to display my own data on my own site is now dictated by Twitter.

Thanks to Jo Brodie I found an alternative service called Twitter RSS that gives me the RSS feed I need, ’though it’s probably only a matter of time before that gets shut down by Twitter.

Jo’s feelings about Twitter’s anti-RSS policy mirror my own:

I feel a pang of disappointment at the fact that it was really quite easy to use if you knew little about coding, and now it might be a bit harder to do what you easily did before.

That’s the thing. It’s not like RSS is a great format—it isn’t. But it’s just good enough and just versatile enough to enable non-programmers to make something cool. In that respect, it’s kind of like HTML.

The official line from Twitter is that RSS is “infrequently used today.” That’s the same justification that Google has given for shutting down Google Reader. It reminds me of the joke about the shopkeeper responding to a request for something with “Oh, we don’t stock that—there’s no call for it. It’s funny though, you’re the fifth person to ask today.”

RSS is used a lot …but much of the usage is invisible:

RSS is plumbing. It’s used all over the place but you don’t notice it.

That’s from Brent Simmons, who penned a love letter to RSS:

If you subscribe to any podcasts, you use RSS. Flipboard and Twitter are RSS readers, even if it’s not obvious and they do other things besides.

He points out the many strengths of RSS, including its decentralisation:

It’s anti-monopolist. By design it creates a level playing field.

How foolish of us, therefore, that we ended up using Google Reader exclusively to power all our RSS consumption. We took something that was inherently decentralised and we locked it up into one provider. And now that provider is going to screw us over.

I hope we won’t make that mistake again. Because, believe me, RSS is far from dead just because Google and Twitter are threatened by it.

In a post called The True Web, Robin Sloan reiterates the strength of RSS:

It will dip and diminish, but will RSS ever go away? Nah. One of RSS’s weaknesses in its early days—its chaotic decentralized weirdness—has become, in its dotage, a surprising strength. RSS doesn’t route through a single leviathan’s servers. It lacks a kill switch.

I can understand why that power could be seen as a threat if what you are trying to do is force your users to consume their own data only the way that you see fit (and all in the name of “user experience”, I’m sure).

Returning to Anil’s description of the web we lost:

We get a generation of entrepreneurs encouraged to make more narrow-minded, web-hostile products like these because it continues to make a small number of wealthy people even more wealthy, instead of letting lots of people build innovative new opportunities for themselves on top of the web itself.

I think that the presence or absence of an RSS feed (whether I actually use it or not) is a good litmus test for how a service treats my data.

It might be that RSS is the canary in the coal mine for my data on the web.

If those services don’t trust me enough to give me an RSS feed, why should I trust them with my data?

Twitter permissions

Twitter has come in for a lot of (justifiable) criticism for changes to its API that make it somewhat developer-hostile. But it has to be said that developers don’t always behave responsibly when they’re using the API.

The classic example of this is the granting of permissions. James summed it up nicely: it’s just plain rude to ask for write-access to my Twitter account before I’ve even started to use your service. I could understand it if the service needed to post to my timeline, but most of the time these services claim that they want me to sign up via Twitter so that I can find my friends who are also using the service — that doesn’t require write access. Quite often, these requests to authenticate are accompanied by reassurances like “we’ll never tweet without your permission” …in which case, why ask for write-access in the first place?

To be fair, it used to be a lot harder to separate out read and write permissions for Twitter authentication. But now it’s actually not that bad, although it’s still not as granular as it could be.

One of the services that used to require write-access to my Twitter account was Lanyrd. I gave it permission, but only because I knew the people behind the service (a decision-making process that doesn’t scale very well). I always felt uneasy that Lanyrd had write-access to my timeline. Eventually I decided that I couldn’t in good conscience allow the lovely Lanyrd people to be an exception just because I knew where they lived. Fortunately, they concurred with my unease. They changed their log-in system so that it only requires read-access. If and when they need write-access, that’s the point at which they ask for it:

We now ask for read-only permission the first time you sign in, and only ask to upgrade to write access later on when you do something that needs it; for example following someone on Twitter from our attendee directory.

Far too many services ask for write-access up front, without providing a justification. When asked for an explanation, I’m sure most of them would say “well, that’s how everyone else does it”, and they would, alas, be correct.

What’s worse is that users grant write-access so freely. I was somewhat shocked by the number of tech-savvy friends who unwittingly spammed my timeline with automated tweets from a service called Twitter Counter. Their reactions ranged from sheepish to embarrassed to angry.

I urge you to go through your Twitter settings and prune any services that currently have write-access that don’t actually need it. You may be surprised by the sheer volume of apps that can post to Twitter on your behalf. Do you trust them all? Are you certain that they won’t be bought up by a different, less trustworthy company?

If a service asks me to sign up but insists on having write-access to my Twitter account, it feels like being asked out on a date while insisting I sign a pre-nuptial agreement. Not only is it somewhat premature, it shows a certain lack of respect.

Not every service behaves so ungallantly. Done Not Done, 1001 Beers, and Mapalong all use Twitter for log-in, but none of them require write-access up-front.

Branch and Medium are typical examples of bad actors in this regard. The core functionality of these sites has nothing to do with posting to Twitter, but both sites want write-access so that they can potentially post to Twitter on my behalf later on. I know that I won’t ever want either service to do that. I can either trust them, or not use the service at all. Signing up without granting write-access to my Twitter account isn’t an option.

I sent some feedback to Branch and part of their response was to say the problem was with the way Twitter lumps permissions together. That used to be true, but Lanyrd’s exemplary use of Twitter for log-in makes that argument somewhat hollow.

In the case of Branch, Medium, and many other services, Twitter authentication is the only way to sign up and start using the service. Using a username and password isn’t an option. On the face of it, requiring Twitter for authentication doesn’t sound all that different to requiring an email address for authentication. But demanding write-access to Twitter is the equivalent of demanding the ability to send emails from your email address.

The way that so many services unnecessarily ask for write-access to Twitter—and the way that so many users unquestioningly grant it—reminds me of the password anti-pattern all over again. Because this rude behaviour is so prevalent, it has now become the norm. If we want this situation to change, we need to demand more respect.

The next time that a service demands unwarranted write-access to your Twitter account, refuse to grant it. Then tell the people behind that service why you’re refusing to sign up.

And please take a moment to go through the services you’ve already authorised.

Command lines

Here’s a nifty piece of in-browser behaviour…

Fire up Chrome and in the address field type “huffduffer.com” followed by a space and BOOM! …the address field transforms into a site-specific search field: “Search Huffduffer.”

That’s thanks to this XML file that’s been on Huffduffer since day one. It’s the most straightforward example of the OpenSearch format.
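The file itself is tiny. From memory it looks more or less like this, so treat the details as illustrative:

    <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
        <ShortName>Huffduffer</ShortName>
        <Description>Search Huffduffer</Description>
        <Url type="text/html" template="https://huffduffer.com/search?q={searchTerms}"/>
    </OpenSearchDescription>

Point to it from the head of your pages with a link element of type application/opensearchdescription+xml and the browser does the rest.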

It’s the same format that powers Firefox’s search providers. If you visit Huffduffer with Firefox and click in the browser’s search field, you’ll see the option to “Add Huffduffer” to the list of search engines.

I can’t imagine many people will actually do that but still, no harm in providing the option.

Another option that’s been added to Huffduffer recently (thanks to Andy’s hacking) is the ability to access the API through YQL. Feel free to open up the console and have a play around.

Get excited and make things with science

There are many reasons to go to South by Southwest Interactive: meeting up with friends old and new being the primary one. Then there’s the motivational factor. I always end up feeling very inspired by what I see.

This year, that feeling of inspiration was front and centre. First off, I tried to impart some of it on the How to Rawk SXSW panel, which was a lot of fun. Mind you, I did throw some shit at the fan by demonstrating how wasteful the overstuffed schwag bags are. I hope I didn’t get MJ into trouble.

My other public appearance was on The Heather Gold Show which was bags of fun. With a theme of Get Excited and Make Things, the topic of inspiration was bandied about a lot. It was a blast. Heather is a superb host and the other guests were truly inspirational. I discovered a kindred spirit in fellow excitable geek, Gina Trapani.

The actual panels and presentations at SXSW are the usual mixture of hit and miss, although the Cooking For Geeks presentation was really terrific. Any presenter who hacks the audience’s taste buds during a presentation is alright with me.

But by far the most inspirational thing I’ve seen was a panel hosted by Tantek on Open Science. The subject matter was utterly compelling and the panelists were ludicrously articulate and knowledgeable.

The URLs were flying thick and fast: the Signtific thought experiment game, the collaborative Galaxy Zoo—now joined by Moon Zoo—and the excellent Spacehack directory.

I was struck by the sheer volume of scientific data and APIs out there now. And yet, we aren’t really making use of it. Why aren’t we making mashups using Google Mars? Why haven’t I built a Farmville-style game with Google Moon?

Halfway through the panel, I turned to Riccardo and whispered, “We should organise a Science Hack Day.”

I’m serious. It would probably be somewhere in London. I have no idea where or when. I have no idea how to get a venue or sponsors. But maybe you do.

What do you think? Everyone I’ve mentioned the idea to so far seems pretty excited about it. I’ll try to set up a wiki for brainstorming venues, sponsors, APIs, datasets and all that stuff. In the meantime, feel free to leave a comment here.

I got excited. Now I want to make things …with science! Are you with me?


We have some new location-centric toys to play with. Let the hacking commence.

Flickr has released its shapefiles dataset for free (as in beer, as in it would be nice if you mentioned where you got the free beer). These shapefiles are bounding boxes that have been generated by the action of humans correcting suggested place names for geotagged photos. Tom put this data to good use with his neighbourhood boundaries app.

Speaking of excellent location-driven creations by Tom, be sure to check out his little OS X app that updates your FireEagle location every five minutes by triangulating your position with Skyhook.

Meanwhile, in another part of Yahoo, Placemaker has been released in Beta form. It looks very nifty indeed. Pass it some human-readable text and it will try to figure out what physical locations are mentioned in the text. You can help it along by using structured data like microformats, but it seems to be pretty good at natural language parsing. Christian has put together some good examples to illustrate his JavaScript Placemaker/YQL mashup.

Slowly but surely we’re heading towards a future where everything is geotagged.

Making contact

Today Yahoo announced the release of their Address Book API — previously only available internally and to selected partners. It follows on from Google’s Contacts Data API and I hope that this is one more nail in the coffin of the password anti-pattern.

Chris has expressed disappointment with the proprietary nature of the response formats and Dave also wishes there were more consistent APIs. It’s a fair point but the situation is still immeasurably better than logging in and scraping the individual address book services.

A number of sites have already abandoned the anti-pattern in favour of best practices.

Meanwhile, a whole bunch of otherwise-great services are still encouraging people to get phished.

The clock is ticking. There really isn’t any excuse any more for asking for my Yahoo Mail or GMail username and password.

Loosely joined

The mighty Zeldman has written a thought-provoking piece called The Vanishing Personal Site which chronicles the changing nature of personal publishing. Where once we had a central URL that defined our online presence, people are increasingly publishing in fragments distributed across services like Twitter, Pownce, Flickr and Magnolia. It was this fragmentation that spurred my first dabblings with APIs to produce Adactio Elsewhere which I did three years ago to the day.

Jeff takes a different approach by incorporating all of those other publishing points directly back into his site rather than a separate aggregation area. This approach seems to be gaining ground.

One of the comments to Jeffrey’s post points to the newly launched website of the architect Denna Jones built in part by Jon Tan who describes the thinking behind it. The site is driven entirely by third-party services like Tumblr, Del.icio.us and Flickr. Jon, by contrast, has his third-party publishing aggregated on a page called Asides, similar to Adactio Elsewhere.

I think most people, even if they are micro-publishing in many places, still have one URL that they consider as their online representation. It might be a blog, it might be a Flickr profile, or for many people, it might be a Facebook account.

It will be interesting to watch these trends develop. Something else I’m going to watch is Jon Tan’s website. It’s dripping with gorgeous typography wrapped in an elastic layout. How is it that I haven’t come across this site before? Why wasn’t I informed?


I’ve used del.icio.us for quite a while now. I’m storing 1159 bookmarks, each one of them tagged. It works just fine but it also feels a little, I don’t know …stale. There is supposedly a redesign in the works but I’m not sure that I want to wait around any longer to find out if they’re finally going to put some microformats in the markup.

Instead, I’m moving over to Magnolia. I’ve had a Magnolia account for years but I’ve never really used it. I didn’t see the point while I had a del.icio.us account. But whereas del.icio.us appears stagnant, Magnolia seems to be constantly innovating. Also, it uses microformats. There’s also the fact that I know Larry and I’ve briefly met Todd (lovely gents, both) but I don’t know Joshua Schachter. That shouldn’t matter but it kind of does.

Moving from del.icio.us to Magnolia is very straightforward. But that alone wasn’t going to be enough for me. I’m also accessing my del.icio.us bookmarks through the API. It turns out that Magnolia provides an ingenious way to ease my pain. As well as providing its own API, Magnolia also provides mirrored versions of the del.icio.us API. All I had to do was change some URL endpoints and I had Adactio Elsewhere switched over in no time. Other services take note: providing mirrored versions of your competitors’ APIs eases the pain of migration.

I’ve updated my FeedBurner RSS feed to point at my Magnolia links instead of my del.icio.us links. If you were subscribed to my del.icio.us feed separately, you’ll probably want to update your feedreader to point to my Magnolia links instead.

It remains to be seen whether I’ll stay at Magnolia. Even though it is functionally and cosmetically superior to del.icio.us, that might not be enough. After all, Jaiku is superior to Twitter in almost every way—design, markup, reliability—but Twitter still wins. That’s mostly because that’s where all my friends are. Right now my bookmarking friends are split fairly evenly between del.icio.us and Magnolia. Then again, I’ve never really made much use of the “social” part of “social bookmarking”.

So who knows? Maybe I’ll end up moving back to del.icio.us at some stage. It’s reassuring to know that moving my data around between these services is pretty straightforward: I can export from Magnolia and import into del.icio.us any time I want.

Anti-pattern begone

Last week Google announced something called the Contacts Data API. This makes me a very happy camper indeed.

I’ve made no secret of my loathing for the password anti-pattern. Asking for a GMail username and password on a third-party website is just plain wrong. I’ve said it before and I’ll say it again: it teaches users how to be phished. I spoke about this at the Social Graph Foo Camp, naming and shaming implementors of the anti-pattern:

Brave representatives from Facebook, Plaxo, Twitter, LinkedIn, Dopplr and Pownce showed up to be named and shamed (though most of the shame was reserved for Google in not providing an API for contacts).

To be honest, the impression I got from Google was that I shouldn’t hold my breath but now that they’ve stepped up to the plate and provided an API, there’s really no excuse for websites to ask users to enter their GMail username and password. The API uses AuthSub now but Kevin announced at SXSW that it will support OAuth at some unspecified future date.

Within 24 hours of the Contact Data API’s release, Dopplr had already removed the username/password form and implemented the handshake authentication instead. Bravo, Matt!

So who’s going to be next? Place your bets now. Here are my nominations for the next contenders:

  1. Pownce
  2. LinkedIn
  3. Digg
  4. Twitter
  5. Last.fm

C’mon Leah, don’t let me down.

I’m starting to see a pattern. Whenever I bitch and moan about something, it seems to get fixed.

I think I might be suffering from some sort of reverse paranoia. The whole world seems to be out to help me.

Update: Well, shame on me for not including Flickr in the list of contestants. They’ve implemented a superb import feature that never once asks for a third-party password. Bravo, Flickrites!

Foo camping

The day that I was flying to San Francisco, Simon and Nat were flying to New Zealand for Kiwi Foo and Webstock so we shared a bus to Heathrow. They both looked knackered because they had attempted to “get on New Zealand time” by staying up all night. We parted at the airport: “See you in Austin,” I said. “Good luck decentralising the social graph,” he replied.

Since arriving in San Francisco, I’ve spent most of my time trying to meet up with as many people as possible. A hastily-convened microformats/geek dinner helped to accomplish that.

Now I’m in Sebastopol for the SG Foo Camp. The letters SG stand for Social Graph, which is unfortunate—I’m not a big fan of that particularly techy-sounding term. That said, I’m really looking forward to hearing more from Brad Fitzpatrick about the new Social Graph API from Google. It isn’t the first XFN parser but it’s the only one with Google’s infrastructure. The data returned from spidering my XFN links is impressive but the fact that it can also return results with inbound links is very impressive, although it takes significantly longer to return results and often times out.

For most people, today’s big news was Microsoft licking its lips at Yahoo but that was completely eclipsed by the new API for me. While I was waiting at Tantek’s for Larry and Chris to drive by and pick us up, I spent my time gleefully looking through the reams of information returned from entering just one URL into the API. Just now, I was chatting with John Musser from Programmable Web and we were thinking up all the potential mashups that this could open up.

I’m not going to build anything just yet though. I’m far too tired. I need to find a nice quiet corner of the O’Reilly office to unroll my sleeping bag.

Lock up your data

There have been a number of experiments carried out to investigate the effects of video on communication. I recall hearing about one experiment done with mothers and babies. The mothers were placed in one room with a video camera and the babies were placed in another room with a monitor showing a video feed from the mother. The babies interacted just fine with the video representations of their mothers. Then a one second lag was introduced. The babies freaked out.

I was reminded of this during the closing panel on day two of Fundamentos Web. Tim Berners-Lee dialed in via iChat to join a phalanx of panelists in meatspace. Alas, the signal wasn’t particularly strong. Add to that the problem of simultaneous translation, which isn’t really simultaneous, and you’ve got a gap of quite a few seconds between Asturias and Sir Tim’s secret lair. The resultant communication was, therefore, not really much of a conversation. It was still fascinating though.

Some of the most interesting perspectives came from George and Hannah—the people who are working at the coalface of social media. George asked Sir Tim for advice on the cultural side-effects of open data—how to educate people that publishing on sites like Flickr means that your pictures can and will be viewed in other contexts. Interestingly, Sir Tim’s response indicated that he was more concerned with educating people in how to keep their data private.

This difference in perspective might be an indication of a generation gap. The assumption amongst, say, teenagers is that everything is public except what they explicitly want to keep private. The default assumption amongst older folks (such as my generation) is the exact opposite: data is private except when it is explicitly made public. The first position matches the sensibilities of Flickr and Last.fm. The second position is more in line with Facebook’s walled garden approach.

I was really glad that George raised this issue. It’s something that has been occupying my mind lately, particularly in reference to Flickr.

Flickr provides a range of ways of accessing your photos; the website, RSS, KML, LOL… and of course, the API. It’s a wonderful API, certainly the best one that I’ve played with. I had a blast putting together the Flickr portion of Adactio Elsewhere.

Using the API, I was able to put together my own interface onto my photos and the latest photos from my contacts. There’s nothing particularly remarkable about that—there are literally hundreds, if not thousands, of third-party sites that use the Flickr API to do the same thing. However, a lot of those sites use Flash or non-degrading Ajax. But I use Hijax. That means that, even though I’ve built an Ajax interface, the fundamental interaction is RESTful with good ol’ fashioned URLs. As a result—and this is just one of the benefits of Hijax—the Googlebot can spider all possible states of my application.
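If you’re unfamiliar with the pattern, here’s a rough sketch in today’s JavaScript—the selectors and the fragment-returning convention are hypothetical:

    // Hijax in a nutshell: real links to real URLs, hijacked by JavaScript
    document.querySelectorAll('a.gallery').forEach(function (link) {
        link.addEventListener('click', function (event) {
            event.preventDefault();
            fetch(link.href, { headers: { 'X-Requested-With': 'XMLHttpRequest' } })
                .then(function (response) { return response.text(); })
                .then(function (html) {
                    document.querySelector('#content').innerHTML = html;
                    history.pushState(null, '', link.href); // the URL stays addressable
                });
        });
    });
    // Without JavaScript, the links simply work as links—which is
    // why a search engine spider can crawl every state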

You can probably see where this is going. It’s a similar situation to what happened with my pirate-speak page converter. Even though I’m not providing a direct interface onto anyone’s pictures, Google is listing deep links in its search results.

This has resulted in a shitstorm on the Flickr forum. Reading through the reactions on that thread has been illuminating. In a nutshell, I’m getting penalised for having search-engine friendly pages. I, along with some other people on that thread, have tried to explain that Adactio Elsewhere is just one example of public Flickr data appearing beyond the bounds of Flickr’s domain—an issue tangentially related to intellectual property rights.

In this particular situation, I was able to take some steps to soothe the injured parties by creating a PHP array called $stroppy_users. I also added a meta element instructing searchbots not to index Adactio Elsewhere which, I believe, will prevent any future grievances. As I said in the forum:

If a tree falls in the forest and Google doesn’t index it, does it make a noise?

I think the outburst of moral panic on the Flickr forum is symptomatic of a larger trend that has accompanied the growth of the site’s user base. Two years ago, Flickr was not your father’s photo sharing website. Now, especially with the migration from Yahoo Photos, it is. If you look at some of the frightened reactions to Flickr’s pirate day shenanigans you’ll see even more signs of this growth (Tom has a great in-depth look at the furore).

As sites like Flickr and Last.fm move from a user base of early adopters into the mainstream, this issue becomes more important. What isn’t clear is how the moral responsibility should be distributed. Should Flickr provide clearer rules for API use? Should Google index less? Should the people publishing photos take more care in choosing when to mark photos as public and when to mark photos as private? Should developers (like myself) be more cautious in what we allow our applications to do with the API?

I don’t know the answers but I’m fairly certain that we’re not dealing with a technological issue here; this is a cultural matter.

Help me at Hackday

Hackday is almost upon us. Tomorrow, I—along with hundreds of other geeks—will be converging on Alexandra Palace in North London for two days of dev fun.

I’ve got an idea for what I want to do but I think I’ll need lots of help. At XTech, Reboot, @media and other recent geek gatherings I’ve been asking who’s coming and who fancies helping me out. I’ve managed to elicit some interest from some very smart people so I’m hoping that we can hack something fun together.

Here’s the elevator pitch for my idea: online publishing is hacking and slaying.

Inspired by Justin Hall’s idea of Passively Multiplayer Online Games and Gavin Bell’s musings on provenance, I want to treat online publishing as an ongoing way of building up a character. In Dungeons and Dragons or World of Warcraft, you acquire attributes like stamina, strength, dexterity and skill over time. Online, you publish Flickr pictures, del.icio.us links, Twitter updates and blog posts over time. All of this published material contributes to your online character and I think you should be rewarded for this behaviour.

It’s tangentially related to the idea of a lifestream which uses RSS to create a snapshot of your activity. By using APIs, I’m hoping to be able to build up a much more accurate, long-term portrait.

I’m going to need a lot of clever hackers to help me come up with the algorithms to figure out what makes one person a more powerful Flickrer or Twitterer than another. Once the characteristics have all been figured out, we can then think about pitching people against each other. Maybe this will involve a twenty-sided die, maybe it will be more like Top Trumps, or maybe it could even happen inside Second Life or some other environment that has persistent presence (the stateless nature of the Web makes it difficult to have battles on a Web site). I have a feeling that good designers and information architects would be able to help me figure out some other fun ways of representing and using the accumulated data. Perhaps we can use geo data to initiate battles between warriors in the same geographical area.

Sound like fun? Fancy joining in? Seek me out on the day or get in touch through my backnetwork profile.

Of course, if you want to do something really cool at hackday, you’ll probably be dabbling with arduino kits, blubber bots and other automata. When I was in San Francisco a few weeks ago, nosing around the Flickr offices, Cal asked me what I was planning for Hackday. “Well” I said, “it involves using APIs to…” “Pah!” he interrupted, “APIs are passé. Hardware is where it’s at.”

Machine Tags of Loving Grace

One of the highlights of Refresh Edinburgh for me was listening to Dan Champion give a presentation on his new site, Revish. He talked through the motivation, planning and production of the site. This was an absolute joy to listen to and it was filled with very valuable practical advice.

Revish is a book review site with a heavy dollop of social interaction. Even in its not-quite-finished state, it’s pushing all the right buttons with me:

  • The markup is clean, semantic and valid.
  • The layout is uncluttered and flexible.
  • The URL structure is logical.
  • The data is available through microformats, RSS and an API.

There’s some really smart stuff going on with the sign-up process. If your chosen username matches a Flickr username, it automatically grabs the buddy icon. At the sign-up stage you also have the option of globally disabling any Ajax on the site—an accessibility option that I advocate in my book. Truth be told, there isn’t yet any Ajax on the site but the availability of this option shows a lot of forethought.
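
I’d guess the buddy icon trick works something like this sketch. The two API methods and the icon URL pattern are the ones Flickr documents; the key is a placeholder you’d register for, and Revish’s actual code may well differ:

    import requests

    API = "https://api.flickr.com/services/rest/"
    COMMON = {"api_key": "YOUR_FLICKR_API_KEY",  # placeholder
              "format": "json", "nojsoncallback": 1}

    def buddy_icon_url(username):
        # Does the username exist on Flickr?
        found = requests.get(API, params=dict(COMMON,
            method="flickr.people.findByUsername", username=username)).json()
        if found.get("stat") != "ok":
            return None  # no such Flickr user
        nsid = found["user"]["nsid"]
        # Look up which icon server (if any) holds their buddy icon.
        person = requests.get(API, params=dict(COMMON,
            method="flickr.people.getInfo", user_id=nsid)).json()["person"]
        if int(person["iconserver"]) > 0:
            return (f"https://farm{person['iconfarm']}.staticflickr.com/"
                    f"{person['iconserver']}/buddyicons/{nsid}.jpg")
        return None  # user has the default buddy icon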

Also at the sign-up stage, there’s a quick’n’dirty auto-discovery of contacts wherever there’s overlap with Revish usernames and your Flickr contacts. This is very cool—one small step toward portable social networks.

One of the features dovetails nicely with Richard’s recent discussion about machine-tagging ISBNs. If you tag a picture of a book on Flickr with book:isbn=[ISBN], that picture will then show up on the corresponding Revish page. You can see it in action on the page for Bulletproof Ajax.

Oh, and don’t worry about whether a book has any reviews on Revish yet: the site uses Amazon’s API to pull in the basic book info. As long as a book has an ISBN, it has a page on Revish. So the Revish page for a book can effectively become a mashup of Amazon details and Flickr pictures (just take a look at the page for John’s new microformats book).

I like this format for machine tagging information related to books. As pointed out in a comment on Richard’s post, this opens up the way for plenty of other tagging like book:title="[book title]" and book:author="[author name]".

I’ve started to implement this machine tag format here. If you look at my last post—which has a whole list of books—you’ll see that I’ve tagged the post with a bunch of machine tags in the book:isbn format. By making a quick call to Amazon, I can pull in some information on each book. For now I’m just displaying a small cover image with a link through to the Amazon page.
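
The round trip looks roughly like this sketch: pluck the ISBNs out of a post’s machine tags, then ask Amazon for details. The ECS parameters are as I remember them (an unsigned, 2007-style request), the access key is a placeholder, and picking the title and cover image out of the returned XML is left as an exercise:

    import re
    import requests

    ISBN_TAG = re.compile(r"^book:isbn=(\d{9}[\dXx]|\d{13})$")

    def isbns_from_tags(tags):
        # Pull the value out of any book:isbn machine tags.
        matches = (ISBN_TAG.match(tag) for tag in tags)
        return [m.group(1) for m in matches if m]

    def amazon_lookup(isbn):
        # Returns XML containing title, author and cover image URLs.
        return requests.get("http://webservices.amazon.com/onca/xml", params={
            "Service": "AWSECommerceService",
            "AWSAccessKeyId": "YOUR_AWS_KEY",  # placeholder
            "Operation": "ItemLookup",
            "IdType": "ISBN",
            "SearchIndex": "Books",            # required when IdType is ISBN
            "ItemId": isbn,
            "ResponseGroup": "Small,Images",
        }).text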

That last entry is a bit of an extreme example; I’m assuming that most of the time I’ll be adding at most one book machine tag to a post, probably to accompany a review.

Machine tags (or triple tags) are still a relatively young idea. Most of the structures so far have been emergent, like Upcoming and Last.fm’s event tags and my own blog post machine tags. There’s now a site dedicated to standardising on some namespaces: MachineTags.org has a blog, a wiki and a mailing list. Right now, the wiki has pages for existing conventions like geo tagging and drafts for events and book tagging. This will be an interesting space to watch.

Ghost in the Machine Tags

Richard has some very nifty ideas up his sleeve for the next iteration of his site. Some of these are design-related and some are technical. He just gave a peek into the technical side of things by explaining how he’s using tags to tie content together. Not just any old tags, mind: machine tags.

You may remember that Flickr rolled out machine tags a while back. That’s their name for what are basically triple tags: tags that take the form of namespace:predicate=value. There’s some tight integration between Upcoming and Flickr using the machine tag upcoming:event=[ID]. You can see a looser coupling (one-way rather than bi-directional) in the recently-updated events section of Last.fm, which uses lastfm:event=[ID]. As an example, take a look at the page for a Low Lows concert I went to and took pictures of.
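
Part of the format’s appeal is how little machinery it needs. A minimal parser (the event IDs here are placeholders):

    def parse_machine_tag(tag):
        # Split namespace:predicate=value; return None for ordinary tags.
        head, eq, value = tag.partition("=")
        namespace, colon, predicate = head.partition(":")
        if not (eq and colon and namespace and predicate and value):
            return None
        return namespace, predicate, value

    assert parse_machine_tag("upcoming:event=123456") == ("upcoming", "event", "123456")
    assert parse_machine_tag("lastfm:event=654321") == ("lastfm", "event", "654321")
    assert parse_machine_tag("brighton") is None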

Richard is making use of machine tagging to associate his Flickr pictures with his blog posts. He’s also planning to use Amazon’s API to associate ISBNs with blog posts, raising the question of which namespace to use:

We therefore need a triple-tag version of the ISBN tag, and here’s my suggestion: iso:isbn=0713998393. ISBN is a standard recognised by the International Organisation for Standardization (ISO) so I thought it made a certain sense for ISO to be the namespace. Other standardised entities could be tagged in a similar way, such as iso:issn=15340295.

Seems like a sound idea to me. I might experiment with machine tagging reviews here in that way and then pulling in complementary information from Amazon.

But that’s for another day. For now, I’ve gone ahead and integrated Flickr machine tagging here… but this works from the opposite direction. Instead of tagging my blog posts with flickr:photo=[ID], I’m pulling in any photos on Flickr tagged with adactio:post=[ID].

Now, I’ve already been integrating Flickr pictures with my blog posts using regular “human” tags, but this is a bit different. For a start, to see the associations using the regular tags, you need to click a link (then the Hijax-y goodness takes over and shows any of my tagged photos without a page refresh). Also, this searches specifically for any of my photos that share a tag with my blog post. If I were to run a search on everyone’s photos, the amount of false positives would get really high. That’s not a bug; it’s a feature of the gloriously emergent nature of human tagging.

For the machine tagging, I can be a bit more confident. If a picture is tagged with adactio:post=1245, I can be pretty confident that it should be associated with http://adactio.com/journal/1245. If any matches are found, thumbnails of the photos are shown right after the blog post: no click required.

I’m not restricting the search to just my photos, either. Any photos tagged with adactio:post=[ID] will show up on http://adactio.com/journal/[ID]. In a way, I’m enabling comments on all my posts. But instead of text comments, anyone now has the ability to add photos that they think are related to a blog post of mine. Remember, it doesn’t even need to be your Flickr picture that you’re machine tagging: you can also machine tag photos from your contacts or anyone else who is allowing their pictures to be tagged.
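
Under the hood it’s a single call to flickr.photos.search, which really does take a machine_tags parameter. Something like this sketch (the API key is a placeholder; note the deliberate absence of a user_id):

    import requests

    def photos_for_post(post_id):
        # Anyone's photos machine-tagged adactio:post=[ID], not just mine.
        response = requests.get("https://api.flickr.com/services/rest/", params={
            "method": "flickr.photos.search",
            "api_key": "YOUR_FLICKR_API_KEY",  # placeholder
            "machine_tags": f"adactio:post={post_id}",
            "format": "json",
            "nojsoncallback": 1,
        }).json()
        # Each photo dict carries the farm/server/id/secret needed
        # to build thumbnail URLs.
        return response["photos"]["photo"]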

I realise that I’m opening myself up for a whole new kind of spam. But any kind of spam that requires namespaced tagging on a third-party site is pretty dedicated. If someone actually goes to that much effort to put a thumbnail of an inappropriate image at the end of one of my blog posts, I probably wrote something particularly inflammatory in that post—which would make the associated thumbnail a valid comment, I guess.

Here are some examples of posts I’ve been machine tagging on Flickr:

Once again, like Upcoming and Last.fm, these are event-based. But the machine tagging would work equally well for location-based posts. So when I go up to Scotland next week and blog about it, I (or you or anybody) can then go to Flickr, find some nice pictures of Edinburgh and using the adactio namespace, associate the pictures with the blog post.

It’s a strange mixture of RESTful URLs here and taggable objects there.

If nothing else, this will be an interesting experiment. Machine tags don’t have the low barrier to entry of regular tagging but they aren’t as complex as something like RDF. It might be that they hit the sweet spot between accuracy and ease of use.

Oh, and if you find any Flickr pictures related to this blog post, tag them with adactio:post=1274.

Beautiful hackery

While I had Matthew in my clutches, I made him show me around the API for They Work For You. Who knew that so much fun could be derived from data about MPs?

First off, there’s Matthew’s game of MP Top Trumps, though he had to call it MP Fab Farts to avoid getting a cease-and-desist letter.

Then there’s a text adventure built on the API. This is so good! Enter your postcode and you find yourself playing the part of your parliamentary representative with zero experience points and one hundred hit points. You must work your way across the country, doing battle with rival MPs, as you make your way towards Sedgefield, the lair of Blair.

You can play a Web version but for some real old-school fun, try the telnet version. This reminded me of how much I used to love text adventures back in the days of 8-bit computers. I even remember trying to write my own in BASIC.

For what it’s worth, Celia Barlow, MP for Hove, has excellent pesteredness points. I made it all the way up to Sedgefield and defeated Tony Blair in battle. My prize was the source code of the adventure game in Python.

Ah, what larks!

There’s another project that Matthew works on that I find extremely useful. He has created accessible UK train timetables using the data from the National Rail site, a scrAPI if you will. This is where I go whenever I need to plan a train journey.

The latest feature is something that warms the cockles of my heart: beautiful, hackable URLs. If I want a list of trains going from Brighton to London, I can just type:

http://traintimes.org.uk/brighton/london
It handles spaces (or pluses or underscores) too:

http://traintimes.org.uk/brighton/london victoria

The URL can also be extended with a departure time:

http://traintimes.org.uk/brighton/london victoria/14:00
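
This isn’t Matthew’s code, just a sketch of how little it takes to compose these URLs yourself:

    from urllib.parse import quote

    def traintimes_url(origin, destination, depart=None):
        # Station names go straight into the path; spaces survive as %20.
        parts = [quote(origin.lower()), quote(destination.lower())]
        if depart:
            parts.append(depart)  # e.g. "14:00"
        return "http://traintimes.org.uk/" + "/".join(parts)

    print(traintimes_url("Brighton", "London Victoria", "14:00"))
    # http://traintimes.org.uk/brighton/london%20victoria/14:00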

My address bar is my command line. This is the kind of design that makes URL fetishists like Tom very happy.

Taking back the Web

I’m at an event called Take Back The Web. It’s a cosy little unconference aimed at non-profits and activist groups.

There’s been plenty of education and discussion going on all day, mostly around things like blogs, wikis, RSS and podcasting. I followed up the RSS talk with a little spiel about APIs and how they can be used to pull in data from other places on the web.

I’m used to attending geekier events where everyone is fairly tech-savvy, but the crowd here is mostly made up of people on the ground who want to be able to use technology but who aren’t necessarily from a technological background. It really brought home to me just how far we have to go in making this stuff less geeky and scary-sounding.

Just about everyone gets blogs, and it’s pretty easy to get started with them. Wikis are a little bit trickier, but still attainable. RSS becomes harder again: it’s still too hard to subscribe, and even the term “subscribe” is itself misleading, implying payment. As for APIs, that’s still all pretty much rocket science so I just gave a basic overview of the benefits without really discussing the nitty-gritty of programming.

Notice how the terms change in complexity along that scale: from the word blog to the term API. We’re using way too many acronyms and technobabble for this stuff. Of course, we can’t change the names without upsetting the geeky programmers.

I got a lot of food for thought from the day so far, even though I already know about the technologies. It’s been fascinating to see how people are using the web now and also how much more they could be doing.

The guys from mySociety/They Work For You are talking through their services now and I’ve just found out about this nifty API. I’ll have a play around with that. I’ll quiz Matthew about it later; he’s staying over with me. More grist for the bedroll.

Pictorial Ajaxitagging

I talked a while back about how I was attempting to add some extra context to my posts by pulling in corresponding tag results from del.icio.us and Technorati, and then displaying them together through the magic of Ajax.

It struck me that there was another tag space that I had completely forgotten about: Flickr. Now at the end of any post that’s been tagged, you’ll find links entreating you to pull in any of my Flickr pics that have been likewise tagged.

This is all possible thanks to a single method of Flickr’s API. I’m reusing the same method to search for other pictures too…

I had a little epiphany in the pub the other night, chatting after the WSG meetup. I was talking about geotagging and I mentioned that it probably won’t be too long before just about every file will be geotagged in the same way that just about every file already has a time stamp. Then I realised, “hey, all my blog posts have time stamps and so do all my Flickr pics!”

So I added an extra link. You can search for any pictures of mine that were taken on the same day as a journal entry. I like the extra context that provides.
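
Once again it’s flickr.photos.search doing the work; its min_taken_date and max_taken_date parameters bracket the day in question. A sketch, with a placeholder key and user ID:

    import requests

    def photos_taken_on(date):  # date as "YYYY-MM-DD"
        # My photos whose taken-date falls on the given day.
        response = requests.get("https://api.flickr.com/services/rest/", params={
            "method": "flickr.photos.search",
            "api_key": "YOUR_FLICKR_API_KEY",  # placeholder
            "user_id": "MY_FLICKR_NSID",       # placeholder: just my photos
            "min_taken_date": f"{date} 00:00:00",
            "max_taken_date": f"{date} 23:59:59",
            "format": "json",
            "nojsoncallback": 1,
        }).json()
        return response["photos"]["photo"]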

While I was testing this new functionality, I couldn’t figure out why some pictures weren’t being pulled in. Looking at the post from the Opera event written on Tuesday, I expected to be able to view the pictures I took on the same night. They weren’t showing up and I couldn’t understand why not. I assumed I was doing something wrong in the code. As it turned out, the problem was with my camera. I never reset the date and time when I came back from Australia, so all the pictures I’ve taken in the last couple of weeks have been off by a few hours.

Keep your camera’s clock updated, kids. It’s valuable metadata.

Hmmm… I guess I should take a picture today to illustrate the new functionality. In the meantime, check out this older post from BarCamp to see the Ajaxitagging in action.