Monday, June 11th, 2018
Sunday, May 13th, 2018
I’ve made no secret of my admiration for Jocelyn Bell Burnell, and of how Peter Saville’s iconic cover design for Joy Division’s Unknown Pleasures always reminds me of her.
There are many, many memetic variations of that design.
I assumed that somebody somewhere at some time must have made a suitable tribute to the discoverer of those pulses, but I’ve never come across any Jocelyn-themed variation of the Joy Division album art.
The test order I did just showed up, and it’s looking pretty nice (although be warned that the sizes run small—I ordered a large, and I probably should’ve gone for extra large). If your music/radio-astronomy Venn diagram overlaps like mine, then you too might enjoy being the proud bearer of this wearable tribute to Dame Jocelyn Bell Burnell.
Sunday, May 8th, 2016
Thursday, April 7th, 2016
Mistakes on a plane
I’m in Seattle. An Event Apart just wrapped up here and it was excellent as always. The venue was great and the audience even greater so I was able to thoroughly enjoy myself when it was time for me to give my talk.
I’m going to hang out here for another few days before it’s time for the long flight back to the UK. The flight over was a four-film affair—that’s how I measure the duration of airplane journeys. I watched:
- Steve Jobs,
- The Big Short,
- Spectre, and
- Joy.
I was very glad that I watched Joy after three back-to-back Bechdel failures. Spectre in particular seems to have been written by a teenage boy, and I couldn’t get past the way that The Big Short used women as narrative props.
I did enjoy Steve Jobs. No surprise there—I enjoy most of Danny Boyle’s films. But there was a moment that took me out of the narrative flow…
The middle portion of the film centres around the launch of the NeXT cube. In one scene, Michael Fassbender’s Jobs refers to another character as “Rain Man”. I immediately started to wonder if that was an anachronistic comment. “When was Rain Man released?” I thought to myself.
It turns out that Rain Man was released in 1988, and the NeXT introduction was also in 1988. But according to IMDb, Rain Man was released in December …and the NeXT introduction was in October.
The jig is up, Sorkin!
Sunday, October 21st, 2012
Peter Saville talks about the enduring appeal of his cover for Unknown Pleasures.
I like to think of all the variations and mashups as not just tributes to Joy Division, but tributes to Jocelyn Bell Burnell too.
Wednesday, June 15th, 2011
Testing James Joyce: this is like the Seven Bridges of Königsberg puzzle but with Guinness.
Wednesday, February 9th, 2011
Hooky never looked so good.
Thursday, June 21st, 2007
Peter Saville is releasing some of his fonts for free. I'm grabbing the beautiful serif typeface used on the front of Joy Division's "Closer"; it's gorgeous.
Sunday, October 29th, 2006
The song Lay Low by My Morning Jacket is the eighth track on the album Z. When the song starts, it seems like your typical My Morning Jacket song, ‘though perhaps a bit more upbeat than most. For the first few minutes, Jim James sings away in his usual style.
At precisely three minutes and three seconds, the vocals cease and the purely instrumental portion of the song begins. As one guitar continues to play the melody line, a second guitar begins its solo.
It starts like something from Wayne’s World: a cheesy little figure tapped out quickly on the fretboard. But then it begins to soar. Far from being cheesy, it quickly becomes clear that what I’m hearing is the sound of joy articulated through the manipulation of steel strings stretched over a piece of wood, amplified by electricity.
As the lead guitar settles into a repeated motif, the guitar that was previously maintaining the melody line switches over. At exactly three minutes and thirty seconds, it starts repeating a mantra of notes that are infectiously simple.
For a short while, the two guitars play their separate parts until, at three minutes and thirty three seconds, they meet. The mantra, the riff — call it what you will — is now being played in unison, raising my spirits and pushing the song forward.
The guitars remain in unison until just after four minutes into the song. Now they begin to really let loose, each soaring in its own direction as the rest of the band increase the intensity of the backing.
Two seconds before the five minute mark, the guitar parts are once again reunited, but this time in harmony rather than unison. At five minutes and nineteen seconds, a piano — that was always there but I just hadn’t noticed until now — begins to pick out a delicate tinkling melody in a high register. It sounds impossibly fragile surrounded by a whirlwind of guitars, drums, and bass, but it cuts right through. And it is beautiful.
At five minutes and twenty six seconds, as the piano continues to play, one of the guitars drops down low and starts growling out its solo. From there, everything tumbles inevitably to the end of the song.
The band stops playing at five minutes and fifty seconds, but we’re given another twenty seconds to hear the notes fade to silence. The instrumental break has lasted three minutes. It is the most uplifting and joyous three minutes that has ever been captured in a recording studio.
Now, divine air! now is his soul ravish’d! Is it not strange that sheep’s guts should hale souls out of men’s bodies?
Friday, October 6th, 2006
OK. Yeah, I’m not going to be talking about big business benefits or, you know, business models or anything like that. This is going to be a much more personal look at APIs. I am not from Yahoo!, I’m not from Amazon, I’m not from a big company. I’m just a lone developer and I want to talk about my experiences. Specifically, I want to talk about the fun, the happiness that you get from APIs, because you can get a lot of joy out of them. Mostly, though, I just want to talk about me. So that’s what I will be doing for the next 45 minutes.
So, joy: where do we get joy from? Ignoring the obvious answers to that question, let’s focus specifically on where joy and technology intersect, and I am going to cast my mind back to the first time I remember getting joy out of technology. Did anyone else have a ZX81? Yeah, OK, cool. 1K of memory was what you had to work with, so you had to be very creative. But it was a lot of fun. Everyone has done it:
10 PRINT, 20 GOTO 10. OK, we’ve all done it. It’s just fun. It’s fun playing around. Later on, we all got our Spectrums and started playing adventure games and stuff like that (Thorin sings of gold).
Many years later I left computers aside. I sort of went off wandering, hitchhiking and busing across Europe for a while. Then I came across the Web, and I didn’t get it at first. I was surfing around, looking at all these websites, thinking “I don’t get it. I don’t get why the Web is big.” Then I came across Fray.com, which was years ahead of its time, and I got it, because I had an emotional reaction. What it was, was stories. People were telling stories, and the design was always driven by the story. Story drives design, which is something I’ve held onto to this day. Fray made me want to learn HTML, and that was where I found joy: with HTML. This idea of marking stuff up with tags: I loved it. I really got into it and I decided I wanted to play around with this.
Let me just show you that. Warning: code ahead. But it’s sort of pseudo-code and it’s not going to get very complex; it’s more just the idea of it. How do you use these web services — and really nitty gritty — how do you even start to use them? You need some kind of server-side language to do this. I was using PHP, so that’s what I’m using here. So first begin with the URL — the URL to the particular method on the Amazon web services. Start with a URL and you start adding all of these parameters to it: my Amazon associates ID, so I will make some money off of these sales; my API key. Most APIs make you sign up first — they give you a unique key and you have to use that in every request you make, so that’s mine: don’t use it. The important one in red there: I’m pointing to an XSL file. I’m saying don’t send me back raw XML; before you send the result back to me, run it through that XSL file. So I put all of those parameters onto the URL and finally I just spit out what’s left. So I’m using very simple PHP functions to do that.
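The talk’s example was in PHP on a slide we can’t see; as a rough sketch of the same idea in Python, building a RESTful request URL by piling parameters onto an endpoint (the endpoint, associate tag, and key here are all placeholders, not Amazon’s real current API):

```python
from urllib.parse import urlencode

# Hypothetical product-search endpoint (placeholder, not a real service).
BASE_URL = "https://webservices.example.com/onca/xml"

def build_request_url(keywords):
    """Build a RESTful request URL by appending query parameters."""
    params = {
        "Operation": "ItemSearch",       # which method we want to call
        "AssociateTag": "mytag-21",      # placeholder associates ID
        "AWSAccessKeyId": "MY-API-KEY",  # placeholder API key: don't use it
        "Keywords": keywords,
        # Ask the service to run the result through an XSL file
        # before sending it back, rather than returning raw XML.
        "Style": "https://example.com/transform.xsl",
    }
    return BASE_URL + "?" + urlencode(params)

print(build_request_url("joy division"))
```

The whole request is just a URL, which is the point: there’s nothing to it beyond string-building.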
Here’s what the Amazon web service would send back, roughly. This is a very simplified version, but it’s an XML file full of stuff, and some of the stuff in there is the information I want to get out. So there’s the name of the band and the name of the album: that’s what I want to get out, and then I want to mark it up. So I run that through an XSLT file to do the transformation. XSL is an interesting one because, as you can see, it’s tag-based, like XML. It’s a declarative language, but it’s also got programmatic aspects to it, because you can select things and you can do loops. It’s a strange mixture of programming and markup, and a lot of people have a love/hate relationship with it. It’s actually pretty powerful and it’s a good skill to have. And the nice thing is that you can mix it up with regular markup, so I’m mixing it up with HTML. It’s like a template language: it will insert the relevant data, and the result is I get markup with the empty spots filled in with the data I wanted, and then I can display that on my site. And that was a lot of fun. That was actually joyful. I got a lot of fun out of doing that. And, you know, I make a bit of money when people click through, but that wasn’t really my reason for doing it.
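XSLT itself isn’t shown in the transcript, but the transformation step it performed can be sketched another way: parse the XML, pull out the fields, and drop them into a markup template. A minimal Python sketch, with made-up element names standing in for the real response format:

```python
import xml.etree.ElementTree as ET

# A simplified stand-in for the XML a product API might return.
xml_response = """
<ItemSearchResponse>
  <Item>
    <Artist>Joy Division</Artist>
    <Album>Unknown Pleasures</Album>
  </Item>
</ItemSearchResponse>
"""

def to_html(xml_text):
    """Transform the XML response into an HTML fragment,
    much as the XSL stylesheet did in the talk."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.findall("Item"):
        artist = item.findtext("Artist")
        album = item.findtext("Album")
        items.append(f"<li><cite>{album}</cite> by {artist}</li>")
    return "<ul>\n" + "\n".join(items) + "\n</ul>"

print(to_html(xml_response))
```

The result is markup with the empty spots filled in with the data, ready to display on a page.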
This is all possible because of this idea called REST, which supposedly stands for Representational State Transfer. It’s more an idea of how to build these services. There’s a number of different schools of thinking behind what makes something RESTful, but basically what it comes down to is that you have these resources that are uniquely addressable, usually through a URL, and it’s stateless: there is no keeping track of who is logged in and who’s not. And if this kind of thing looks familiar — this idea of stateless, uniquely addressable resources — that’s because that’s the Web. That’s how the World Wide Web was built: using a RESTful methodology, really. It’s pretty easy, because once you get your head around that, you realize all you do is put together a URL, grab that, and there’s your information from a third-party service.
The other idea, the sort of opposite to REST, is something called SOAP, which is an acronym, S.O.A.P., and I think we all know what that stands for. I’m not going to go into much detail about SOAP because it is far too complex — unnecessarily complex, really. I have played around with SOAP. I had to, because one of the other things I wanted to put on that site was Google search. I wanted people to be able to do everything from my site; I didn’t want them to have to go off to Google just to do a search. And you can use Google’s API to get searches on your own site, but it’s only SOAP: you have to use SOAP. I’m not even going into the technical details. Basically, you have to create an XML file, send that off, and then you get an XML file back. Way too complex, not much joy to be had. It would be nice if Google would provide a RESTful interface. I did do it, and I got this interface on my site into Google’s search facility, which was nice, but not much fun.
Where I did get joy was when I started using Flickr. I love Flickr. I started using it when I came back from South by Southwest last year. Everybody was using Flickr and I decided I needed to check it out. I started taking pictures and putting them up there. I was reluctant at first, because I already had image galleries on my own site and I didn’t really want a third party hosting my images, but they did such a good job I decided to go with them. Then I started exploring other services to host my stuff: Del.icio.us for hosting bookmarks, and then Upcoming for keeping track of events and stuff like that. So a lot of my data started going outwards, with other services providing the infrastructure. I have my own website, and all around that, all over the web, there were all these bits of me. Amazon would have my wish list, Flickr is where I’m keeping my pictures, Del.icio.us is where my bookmarks are, and Upcoming is where all of my events are. They’re scattered all over the web, and I wanted to be able to draw them into that central location, which was my own site, and that was possible because of APIs. Flickr provides loads and loads of methods so that you can do just about anything. Del.icio.us also provides an API — a little trickier, it must be said, because it requires secure authentication — but you can still get all your bookmarks. With Upcoming you can also get events and stuff; again, not quite as friendly as Flickr’s, but pretty powerful.
getElementsByTagName — that’s one of the nice things about the DOM, you get to recycle your knowledge — and then I can spit it out onto the screen however I want.
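As an aside, that same DOM call exists outside the browser too. A sketch in Python, whose standard-library xml.dom.minidom module offers the very same getElementsByTagName method (the XML here is a made-up stand-in for what a photo API might return):

```python
from xml.dom.minidom import parseString

# A cut-down stand-in for the XML a photo-sharing API might return.
xml_response = """
<photos>
  <photo title="Brighton beach"/>
  <photo title="Corn Exchange"/>
</photos>
"""

doc = parseString(xml_response)
# getElementsByTagName works just as it does in browser-side DOM scripting:
# you get to recycle your knowledge.
titles = [p.getAttribute("title") for p in doc.getElementsByTagName("photo")]
for title in titles:
    print(title)
```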
My original idea was I would bring this into my blog. I have a side bar down the side and I thought Oh just pull in, you know, what lots of other people are doing, a couple of Flickr pics and the next few events I will be going to on Upcoming and maybe a couple of Del.icio.us links but it grew and grew and grew into this sort of monster where I ended up creating a whole sub domain, Adactio Elsewhere to hold all of this stuff.
So I didn’t just put in my Flickr pics. I put in my Flickr pics, the last few pics from my contacts, and also the list of all my contacts. And you don’t just have to look at my pictures: you can click on anybody’s name and then you will see their pictures and their contacts’ pictures. One of the most fun things about Flickr is following these trails: here are my pictures; this person’s a friend, here are their pictures; here are the pictures of their friends. You end up going through this whole network of pictures. Great fun.
Amazon: as well as the search I was doing on the other site (I already had the code for that, so I just threw it in there), I also have my wish list here, so I can have easy access to it. Del.icio.us: pulling in the latest links, and also being able to search my links straight from this one place. And Upcoming: just grabbing the newest events. All on one page, all in an AJAXy sort of interface. That was something else I was learning about at the time. So, pretty good fun.
Let’s see how they stacked up. Let’s rate the APIs of these various services.
I am going to rate them by the amount of power you get from them, by documentation, and by just plain joy: that indefinable something. So, first of all: the power. Flickr provides APIs for just about everything. I think the only thing you can’t do with the Flickr API is get comments: you can get the number of comments on a photograph, but you can’t get the actual comments. Pretty much everything else you can do yourself. You can create an almost one-to-one copy of Flickr just using their APIs, and that is probably because they use the APIs themselves. They eat their own dog food.
Now Amazon is very powerful as well. Again, you can create an entire Amazon of your own pretty much and put a nicer interface on it, put nicer markup in there. [Laughter] At some point you do have to go out and go to the checkout process. You can’t quite go all the way through but pretty close.
Del.icio.us and Upcoming: yeah, a little tricky, I’ve got to say. They make you jump through a lot of hoops. With Upcoming, you can’t just grab all the information about an event; you have to grab the event id, then grab the location id, and then call different methods for each one of those to get the details. With Del.icio.us, the thing that is kind of tricky is that you have a lot of access to your own bookmarks, but not so much access to other people’s. So there’s kind of limited room for mashups involving other people’s bookmarks there.
Documentation: let’s see how they did. Pretty good. Both Flickr and Amazon scored well. They have good online documentation on their sites, but almost more importantly, they have good communities. There are mailing lists, and of course there’s a Flickr group. A good place to go and discuss is the Flickr mailing list; the Flickr developers themselves hang out there, so you can get answers pretty quickly from people who really know their stuff. For Amazon, there’s a lot of documentation on the site, and it’s improving all the time; it’s something they are putting a lot of effort into. Del.icio.us and Upcoming, less so, but you have to remember where they came from. They were small startups, and they were only bought by Yahoo! fairly recently. Each of those services was basically a couple of guys in their bedroom, so their documentation is understandably a little thinner on the ground.
So, let’s see, the final scores please. How did the APIs do? The winner there is Flickr for joy, and it is partly down to the subject matter: there is just so much fun in messing around with photographs. Amazon is good. Amazon came second, but you are dealing with products and shopping and stuff like that, and while there is a lot of fun to be had there, there is just something sexier about photographs, I think, so Flickr has a bit of an unfair advantage. And the others did pretty well. So there’s the score card. Those are just four services. There are hundreds if not thousands more APIs out there. This site keeps track of all the different services being provided, from the very big to the very small.
There are a lot of APIs out there, and what this cornucopia of APIs creates is a parallel Web. Instead of being a Web of documents, it’s like a Web of data. The kind of data you get there is identity, for one: my Flickr photographs, my Del.icio.us links, all of these things that are mine. The ultimate identity API would be where you can safely store user names, passwords, and credit cards with a trusted third party, but that’s still future talk; right now, it’s my things. Events, obviously: Upcoming, Eventful — there are a whole bunch of services based around events. Relationships: that’s where all the social networking stuff comes into it. There are a lot of social networking sites out there, all built pretty much the same way, and most of them have APIs. And here’s one that I haven’t even touched on: location. Location is a really big one. Like I said about Flickr being fun because of the subject matter, location is really fun because of the subject matter.
Dealing with maps is an awful lot of fun. I remember the joy I had when Google Maps came out. Does anyone remember what it used to be like when you would have to try and browse a map and all you had was Mapquest? These days, if I come across a Mapquest image on a website, I try to drag it. I forget that it’s just an image. I have become so used to Google Maps and all the great maps out there now that I expect to be able to pan around a map. I had so much fun with Google Maps when it came out, not doing anything useful, just browsing the world, looking at landmarks. It was very enjoyable.
I have seen others that I really like. Eric Meyer did this one, the High Yield Detonation Simulator [Laughter], after he had an argument one evening at dinner about whether New Jersey would get blown off the map if New York got hit by a bomb. You can put in a location and the amount of kilotons. [Laughter] So this is a 100 kiloton bomb dropping on Brighton; dropping, actually, on the Corn Exchange. It’s OK. The university survives. That’s a nice one.
My absolute favorite mashup with Google Maps is this thing called Overplot. Has anyone heard of Overheard in New York? It’s this blog where people submit little snippets of conversation they overheard on the street. They submit the little snippet, but also where they heard it: corner of 5th and Broadway or whatever. Well, this guy took the entire archive of that site, by just going through Google Reader, the RSS reader, and mashed it up with the maps to create this map of New York with all the conversations plotted onto it. You can zoom in and see all the conversations on each street corner, and you can start eavesdropping, and you get the context. That’s the really great thing about maps. It’s one thing to read that this [Laughter] occurred on a street corner; it’s another thing when you can actually see the surrounding area. You will love it. It’s a complete time sink. If you go to this site, be prepared to spend hours [Laughter], I mean hours, looking around. It’s great. That’s the great thing: context. Context is something that maps do really well.
Something similar to this is something that Gawker did. They put together the Gawker Stalker: they used to have people call in celebrity sightings. “Hey, George Clooney is having lunch at this restaurant downtown.” Well, they took all these sightings and mashed them up with location. George Clooney hates this site. He really doesn’t like it. So you can see celebrities being stalked all over the place. And I decided I wanted to try messing around with this. I wanted to check out the APIs.
So I found an opportunity to mess with Google Maps this year at South by Southwest. There were a lot of parties going on. I had gathered together all the parties, written them all up in an HTML page, marked them all up, and I wanted to mix it up with Google Maps so that I could see how far apart these parties were, and then decide which ones to go to and which ones to avoid. So you’d click on a particular place and see that’s where it is, right there, right where the big beer logo is, and you click on other places and decide, well, that party is too far. You get a lot more context with maps. It’s really good fun.
This is an idea I recycled for d.Construct, same thing. I had all this information about places we can go grab some lunch, pubs that have free wi-fi and instead of just providing that in text form, mash it up with a map and then you can see how far away — there, that is how far away the best Japanese restaurant in Brighton is from the Corn Exchange — then you can make a decision based on that.
Parallel to this web of data, there is something else going on: the live Web. Data tends to be fairly static. Amazon products, Flickr pictures — they’re there, you can go to them anytime. But there is a lot of stuff going on the Web now that’s like drinking from a fire hose. There is just so much going on that you need some way of keeping track of it, and RSS is the best way of getting access to this stuff, because it is so fast.
And the other way of making sense of it is tagging. A site that’s really done the best job of combining these two things — keeping track of all these conversations — well, there are lots of sites doing this, but Technorati is particularly interesting for keeping track of the live Web, and one of the reasons I find it interesting is the fact that it provides an API. Again, RESTful: you just point your code at the URL, send it the parameters, and you get back XML that you have to parse.
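Whatever the service, that RESTful pattern is the same each time: build a URL with the parameters, fetch it, parse the XML that comes back. A generic sketch in Python — the endpoint, parameter names, and element names here are all placeholders, not Technorati’s actual API:

```python
import urllib.request
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

def parse_titles(xml_text):
    """Pull the <title> of every <item> out of an XML response."""
    root = ET.fromstring(xml_text)
    return [item.findtext("title") for item in root.findall(".//item")]

def fetch_tagged_items(base_url, tag):
    """Point the code at a URL, send it the parameters,
    and parse the XML you get back. Endpoint is hypothetical."""
    url = base_url + "?" + urlencode({"tag": tag, "format": "xml"})
    with urllib.request.urlopen(url) as response:
        return parse_titles(response.read().decode("utf-8"))
```

Every REST call in the talk — Amazon, Flickr, Del.icio.us, Technorati — is a variation on this one round trip.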
So I decided to use the power of the live web on my own site. I started tagging my own posts a while back, and then I added in calls to the APIs: to Del.icio.us to get my own links, and to Technorati to see what other people are saying about the same tags, and then I display that on the same page. So, first of all: who is linking to this site? That’s really useful; it’s the kind of thing you’d check your stats for, but here I get it inline. Who is talking about what I’ve just blogged, right now, this minute? I can get that information from Technorati. And who is using the same tags? In a very meta move, this is who’s tagging with “tagging”. So I can keep track of all of that in one place. And it was good fun. The API is pretty straightforward for Technorati. Sadly, it’s pretty flaky, it must be said. Technorati could do with some new servers, I think, because it tends to drop out a fair bit. That’s just my own personal experience (everything I am saying is my own personal experience), but that was mine with the Technorati API: potentially fun, a little bit flaky, wouldn’t want to rely on it too much.
What that means is that mashing around with APIs and doing all this fun stuff is limited to the alpha geeks, the people who know this stuff, and like I said, it’s good fun to play around with, but it’s a bit of a shame that everyone can’t join in the fun. Now, some people are trying to change that. There are some services out there that are trying to bring APIs to the masses. There is a service called Ning, the idea being that you take an existing mashup, you clone it, you mess around with it, you do your own thing. I’m not sure how well this works, really. I don’t think it does a very good job of describing how it works and who it is aimed at, but the idea is good: to free APIs up for everyone, not just the geek developers.
Another interesting one is called Dapper where you point it at some pages. Let’s say there’s a site that doesn’t have an API, and you want to mess with that site’s data. Well you start surfing that site through that Dapper browser and you say, okay, this is a headline and that’s a description, okay, and on that page that’s a headline, that’s a description. And you show it a few pages and it sort of gets the idea. It says, Oh, okay, I see how this site is built, and then it can construct an API for you to use. Really, it’s like a very clever form of screen scraping. But, that’s what people are going to do anyway. If you don’t provide an API, if they really want the data, they’re going to get the data anyway by screen scraping. So that’s an interesting way of trying to open up every website to have APIs, but I don’t know how well that’s going to work.
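Screen scraping — extracting data from pages never meant as an API — can be sketched with Python’s standard-library HTML parser. This is a crude, hand-written stand-in for the pattern-learning a tool like Dapper automates; a real scraper would fetch live pages and be far more robust:

```python
from html.parser import HTMLParser

class HeadlineScraper(HTMLParser):
    """Pull out the text inside <h2> tags: a crude stand-in for
    the 'this is a headline' patterns a tool like Dapper learns."""
    def __init__(self):
        super().__init__()
        self.in_headline = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.in_headline = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_headline = False

    def handle_data(self, data):
        if self.in_headline:
            self.headlines.append(data.strip())

page = "<html><body><h2>First story</h2><p>...</p><h2>Second story</h2></body></html>"
scraper = HeadlineScraper()
scraper.feed(page)
print(scraper.headlines)
```

The fragility is obvious: change the site’s markup and the scraper breaks, which is exactly why a real API is preferable when one exists.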
And now, I would like to hear your thoughts about other things that I’ve talked about here: APIs, mashups, anything like that. If you want any of the URLs that I’ve talked about today, I’ve put them all up online at this URL here: adactio.com/extras/joyofapi. You’ve been very gracious listening to me babble on about myself today, so thank you very much.
Okay, hands up for questions. Oh, we’ve got bingo! Oh great, okay. Should we give out the prize now or the prize later?
Jeremy Keith: Okay, hands up for questions. Surely somebody has a question. Oh, there on the side, there’s one. Oh sorry, there’s one over here. Hang on a second.
David Barrett: Sorry, Jeremy. You say that it’s a shame that the alpha geeks are the only people that can use these APIs. But can you think of a use case where someone who wasn’t a geek would even want to use these?
Jeremy Keith: Yeah. You’ve got a blog. You don’t need to be a geek to have a blog. You don’t need to be a geek to have a LiveJournal account, or a TypePad account, or your own website, or a website about your favorite pets. That was the great thing about the web. What made the web explode when it was first introduced was that anyone could do it. Now, of course, what that meant was that it was a pretty sloppy web: browsers were very forgiving in the markup that they allowed people to use, and tried to do the best job that they could anyway. But because anybody could create a website once they knew a bit of HTML, the web exploded. And there’s no reason why those same people should be excluded from getting the benefits of things like maps and events and all this other data. You don’t need to be a geek to have a website, so you shouldn’t have to be a geek to use an API. That is the spirit of the original web, really.
We had one over on the side here, I think.
Jeremy Keith: Yes, it’ll be good to see more of these libraries of libraries, these meta-APIs that let you just switch between data providers at your call. But as Paul pointed out, that’s the great thing about a lot of these APIs. The first one to market gets to dictate the scene, and other APIs, if they want to compete, they had better follow the same sort of structure in their API or nobody’s going to use it. That’s the great thing.
One of the other things I want to mention: what if these services go away? What happens to my data? That’s actually one of the benefits of APIs. For instance, with Flickr: why should I trust Flickr with my photographs? Well, because of the API, you know you’ll be able to get your photographs out any time you want. And I’m not just talking about the alpha geeks who can code up something to get their data out; there are third-party providers who will burn all your Flickr pics to DVD. You authorize them (you give them your Flickr user name and password) and they will do the burning for you, using the API.
So APIs actually encourage trust. They say don’t worry, your data is safe with us, look, we’ve got an API, any time you want you’ll be able to take your data with you.
Another question? I think the first hand might have been down here, over here.
Paul Boag: Hi. You already mentioned that, on your own site, you’ve experienced some flakiness where things have gone down. Surely the more of these APIs you include, the more calls you’re making to different services and different servers, and the more unstable you’re making your own site, really. I mean, are there ways around that, to kind of mitigate that risk?
Jeremy Keith: Well, something like Mapstraction, like I was saying: if you had something where you could easily flick between providers and say, oh, Google Maps is being really flaky today, I’m switching over to Yahoo! Maps. The more providers there are, the more often you have that option. But it’s also about the way you build things: you don’t want to make these things mission critical, really, I suppose. It’s like Unix pipes: you have lots of these different things going on at once.
But this idea of relying on other services isn’t anything new. Unless you’re hosting your site yourself, you’re probably relying on a third-party hosting provider. Your stats package is probably sitting off somewhere, you’re relying on some other third party for that. So this idea of relying on third parties for, sometimes, very mission-critical stuff, isn’t really new. Most of us rely on third parties for email, which is about as critical as it gets. This idea of trust and trusting data providers isn’t that new, really, it’s just taking it to the next level.
Patrick Lauke: Hi, Jeremy. Just on the flakiness of services: one thing that I’ve been doing on my own site is caching, so that you don’t always try to hit the latest live data. If you can afford to have ten or fifteen minutes of stale data, that kind of helps overcome problems if the server just happens to be down or is overly busy.
Jeremy Keith: Yes, that’s the savvy thing to do. I’m not tech-savvy enough to do that. Glenn, when he built the backnetwork (he is tech-savvy), isn’t calling Flickr every time; he set a polling interval. A few weeks ago he was just polling Flickr once a day; as it comes up to d.Construct he’s polling more and more often, getting closer to a live feed on the day of d.Construct itself. But yes, caching is generally a good idea. If I was smart I would do that more, but I’m not really clever enough.
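The caching Patrick describes can be as simple as remembering the last response and when you got it. A minimal sketch; the `fetch` argument here is a placeholder for any function that actually hits an API:

```python
import time

CACHE_TTL = 15 * 60  # happy to serve data up to fifteen minutes stale
_cache = {}          # url -> (timestamp, data)

def cached_fetch(url, fetch):
    """Return a cached response if it is still fresh; otherwise call
    `fetch` (any function that actually hits the API) and cache it."""
    entry = _cache.get(url)
    if entry and time.time() - entry[0] < CACHE_TTL:
        return entry[1]  # fresh enough: don't bother the API again
    data = fetch(url)
    _cache[url] = (time.time(), data)
    return data
```

Lowering the TTL as an event approaches gives the escalating polling schedule Glenn used for the backnetwork.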
Jenifer Hanen: Thanks, Jeremy. No offense to him, but I’ve heard Tantek speak on this at least three times over the last three and a half years, and you’ve just summed it up, especially the microformats, in a way that makes sense. Now can I request that the two of you speak together in the future, so that you give the broad, good background and then he gives the detail?
Jeremy Keith: Well, we actually did at South By Southwest last year. Tantek talked about microformats and gave the whole background of them, and then he had a few of us come up and do implementations so Norm was there and he showed off Yahoo! Europe using hReviews and I went up and showed all the parties in Austin, because that was what I had used microformats for.
I started using microformats everywhere, so a lot of those mashups you would have seen using Google Maps, the parties in Austin, the d.Construct location page, all of that’s marked up in microformats. I’d like to say that makes your website an API straight away. The schedule for d.Construct was marked up in hCalendar. You would have seen a link in the sidebar to a third-party service hosted on Technorati, which will turn the hCalendar into an actual iCal that you can subscribe to, and then you can put that on your mobile phone. If the web page gets updated, the calendar will get updated as well. You get to take this data with you. You don’t have to duplicate the data. It’s just so easy to do; I’ve started doing it automatically now. I want to try to turn as many of my websites as possible into APIs. The easiest way to do that is by providing microformats.
Paul and Simon did a great job of selling the benefits of providing an API, of having an API if you’re a company. But, as someone mentioned, you have to convince people to do this, to put in the money and effort, and it can be a lot of hassle. Microformats are a nice starting point. You don’t even have to tell people; just throw in a few class names. It’s as simple as that. Straight away, you’ve allowed access to your data.
That’s what Norm did at Yahoo! Europe; he just threw in a few class names on Yahoo! Local, I think it was, and suddenly there were thousands and thousands of hReviews, hCards and all sorts of stuff on the web. Anybody could go out there with a parser and use that data. It was essentially a brand new API the next morning. So, it’s a good way to start. APIs can be intimidating, and microformats are a nice way of getting in the door of the API mentality, this idea of opening up my data.
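As a rough sketch of what that harvesting looks like, a few lines of Python’s standard library are enough to pull an hCard out of a page once those class names are in place. The sample markup here is invented for illustration; the class names ("vcard", "fn", "url") are the real hCard ones.

```python
# A minimal sketch: ordinary markup becomes machine-readable once a few
# microformat class names are added. The sample HTML is made up.
from html.parser import HTMLParser

SAMPLE = '<div class="vcard"><a class="fn url" href="https://example.com/">Jane Doe</a></div>'

class HCardParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_fn = False
        self.cards = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        classes = attrs.get('class', '').split()
        if 'vcard' in classes:
            # each element classed "vcard" starts a new contact
            self.cards.append({})
        if 'fn' in classes and self.cards:
            self.in_fn = True
            if 'url' in classes and 'href' in attrs:
                self.cards[-1]['url'] = attrs['href']

    def handle_data(self, data):
        if self.in_fn and self.cards:
            # the text inside the "fn" element is the formatted name
            self.cards[-1]['fn'] = data
            self.in_fn = False

parser = HCardParser()
parser.feed(SAMPLE)
print(parser.cards)  # -> [{'url': 'https://example.com/', 'fn': 'Jane Doe'}]
```

That’s the whole trick: the publisher added two class attributes, and anyone with a parser got structured contact data for free.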
We’ve got one way down in the front. Paul, down here in the second row.
Audience member: Hi there. I’m the editor of a big cultural website called the 24 Hour Museum. We’ve done RSS for four years now, and it’s fantastic: it’s the bloodstream that makes everything work and join together. We’re really fascinated by APIs and web services, and we’ve got a fantastic live database which is added to and kept live all the time by museums, galleries and heritage sites all over Britain. What worries me is that if we build an API interface and it takes six months to put together, how long have we got of good, solid, reliable life working with that interface and those standards before things move on again? This is what may be holding back the big cultural websites and the government funders. This is what’s holding us back.
Jeremy Keith: Did everyone hear the question? How do you start building an API? It could take six months, and that’s what’s holding people back. It seems like such a complicated thing to do, and in six months’ time it might be out of date; then you have to go to the next version of the API.
I would say you already have an API because you’re providing RSS feeds. RSS is XML; it’s a RESTful interface onto your data. You probably have lists of things. If you can provide an RSS equivalent for everything on your site, that’s wonderful. A lot of sites do this already: Flickr, Upcoming, all of these sites.
Pretty much anything you can get in a list, you can get as an RSS feed: my latest pictures, latest pictures from my friends, my latest events, links with a certain tag. As well as being able to get that on the website or through the official API, which is more complex, there is always an RSS feed for this stuff because it’s generally pretty easy to put an RSS feed together. It’s always got the same structure, an item with a description, title and a URL, that’s the important bit.
Once you’ve got that, you already have an API, because people can just grab that URL, parse it (it’s XML), and do what they want with it. So providing RSS is providing an API. If you also use microformats on the site, and you already have listings, you can use hListing, hReview, all these little bits of data. If you don’t have the budget or the time to put into a full API, that’s where you should start: just doing the small stuff. It’s pretty simple. RSS is a great way to start because RSS is an API; it’s a RESTful interface onto data. So RSS is very cool stuff, and Atom as well; I’m not going to get into a flame war.
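To make the point concrete, here’s a minimal sketch of treating an RSS feed as an API. The feed itself is a made-up example, but the item structure it relies on (title, link, description) is the standard RSS shape, which is exactly why a few lines of stock XML parsing are all it takes.

```python
# RSS really is a ready-made API: every item has the same shape,
# so generic XML parsing recovers the data. The feed below is invented.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example feed</title>
  <item>
    <title>Latest picture</title>
    <link>https://example.com/photos/1</link>
    <description>A photo</description>
  </item>
</channel></rss>"""

# One dict per <item>, keyed by child element name
items = [
    {child.tag: child.text for child in item}
    for item in ET.fromstring(FEED).iter('item')
]
print(items[0]['link'])  # -> https://example.com/photos/1
```

No API keys, no SOAP envelopes; anyone who can fetch a URL and parse XML is already a consumer of your "API".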
Do we have time for another question? I think we do.
Suw Charman: Hi. Actually, it’s not a question; I hope you’ll forgive me. I was hoping to do a little Birds of a Feather session for the Open Rights Group. Basically, if everyone wants to go and grab lunch, then we will also meet up in the park. Then maybe we can heckle your microformats.
Jeremy Keith: I think maybe a Jets-versus-Sharks sort of situation would be good. We can face off with each other. That’s what we’ll do. So, Open Rights Group meeting in the park, and microformats meeting in the park as well. Right back there.
I’ve ordered some sandwiches; I hope they have shown up. They’re here? Great, sandwiches are here, but they’re micro sandwiches so they’re not going to go very far! You might want to grab some sushi down the road or some sandwiches from a nearby shop. I’ll head over to the park now and we’ll get things started. Thanks for the questions; great stuff!
Friday, June 2nd, 2006
Web 2.0. Love the term or hate it, you’ve certainly heard it. Even if you’re a hardened cynic and you pride yourself on not drinking Tim O’Reilly’s Kool-Aid, it’s hard to deny that something is going on: something new, something that is just the start of a brave new world 2.0.
The theme of this year’s Reboot is renaissance. It doesn’t take much of a stretch to compare that term with the ubiquitous “Web 2.0”.
The common perception of 15th century Northern Italy is to view it as the birthplace of a whole new movement in art and culture: a Culture 2.0, if you will. We tend to think of the Renaissance as an almost revolutionary movement, sweeping aside the old-school 1.0 dark ages.
But the Renaissance didn’t come out of nowhere. The word itself means rebirth, not birth. The movers and shakers of the Renaissance — the analogerati of Florence — weren’t trying to make a break with the past. They were trying to get back to their roots. At its heart, the Renaissance was a very conservative movement with an emphasis on reviving and preserving classical ideas. By classical, I mean Greek and Roman. There is a direct line of descent from the Acropolis in Athens to the beautiful buildings built in Copenhagen during the Danish Renaissance. The building blocks of the Renaissance were centuries-old ideas about mathematics, aesthetics, and science.
There is a lesson for us there. With all this talk of a Web 2.0, there’s a danger that we as web developers, whilst looking to the future, are forgetting our past. In our haste to forge a new kind of World Wide Web, we run the risk of destroying the fundamental building blocks that helped create the Web that we fell in love with in the first place.
I don’t intend to run through all the building blocks that form the foundation of the Web. Each one deserves its own praise. HTTP, for example, the protocol that enables the flow of pages on the Web, is worthy of its own love letter.
I’d like to focus on one very small, very simple, very beautiful building block: the hyperlink.
The hyperlink is an amazing solution to an old problem. That problem is classification.
The Garden of Forking Paths
Language is the most powerful tool ever used by man. Together with its offspring writing, language enables us to document things, ideas, and experiences. I can translate a physical object into a piece of information that can be later retrieved, not only by myself, but by anyone. But there are economies of scale with this kind of information storage and retrieval. The physical world is a very, very big place filled with a multitude of things bright and beautiful; creatures great and small. If we could use the gift of language to store and retrieve information on everything in the physical world, right down to the microscopic level, the result would be transcendental.
To see a world in a grain of sand
And a heaven in a wild flower
The first person to seriously tackle the task of cataloguing the world was born after the Renaissance. Bishop John Wilkins lived in England in the 17th century. He was no stranger to attempting the seemingly impossible. He proposed interplanetary travel three centuries before the invention of powered flight. He is best remembered for his 1668 work, An Essay towards a Real Character and a Philosophical Language.
The gist of Wilkins’s essay is explained by Jorge Luis Borges in El idioma analítico de John Wilkins (The Analytic Language of John Wilkins).
He divided the universe into forty categories or classes, these being further subdivided into differences, which were in turn subdivided into species. He assigned to each class a monosyllable of two letters; to each difference, a consonant; to each species, a vowel. For example: de, which means an element; deb, the first of the elements, fire; deba, a part of the element fire, a flame.
You can find more delvings into Borges’s essay on Matt Webb’s weblog, the fittingly named interconnected.org.
The problem with Wilkins’s approach will be obvious to anyone who has ever designed a relational database. Wilkins was attempting to create a one-to-one relationship between words and things. Apart from the sheer size of the task he was attempting, Wilkins’s rigidity meant that his task was doomed to fail.
Still, Wilkins’s endeavour was a noble one at heart. One of his contemporaries recognised the value and scope of what Wilkins was attempting.
Gottfried Wilhelm von Leibniz possessed one of the finest minds of his, or any other, generation. It’s a shame that his talent has been overshadowed by the spat between Newton and himself caused by their simultaneous independent invention of calculus.
Leibniz wanted to create an encyclopaedia of human knowledge that was free from the restrictions of strict hierarchies or categories. He recognised that concepts and notions could be approached from different viewpoints. His approach was more network-like, with its many-to-many relationships.
Where Wilkins associated concepts with sounds, Leibniz attempted to associate concepts with symbols. But he didn’t stop there. Instead of just creating a static catalogue of symbols, Leibniz wanted to perform calculations on these symbols. Because the symbols correlate to real-world concepts, this would make anything calculable. Leibniz believed that through a sort of algebra of logic, a theoretical machine could compute and answer any question. He called this machine the Calculus ratiocinator. The idea is a forerunner of Turing’s universal machine.
The general idea of a computing machine is nothing but a mechanisation of Leibniz’s calculus ratiocinator. - Norbert Wiener, Cybernetics, 1948
Let me tell you about another theoretical device. It’s called the memex (short for “memory extender”). This device was proposed by Vannevar Bush in 1945 in an article in The Atlantic Monthly called As We May Think. Bush described the memex as being electronically linked to a library of microfilm. The device, contained within a desk, would be capable of following cross-references between books and films. This almost sounds like hypertext.
But there may be a form of proto-hypertext that precedes the memex.
Shite and onions
In recent years the works of James Joyce have been revisited and re-examined through the prism of hypertext. Ulysses and Finnegans Wake make sense when viewed not linearly, but as a network of interconnected ideas. Marshall McLuhan was heavily inspired by Joyce’s treatment of communication technology. The medium was very much the message.
For most of us, Finnegans Wake remains an impenetrable book, at least in the narrative sense. It might make more sense to us if we suffered from a medical condition called apophenia: the perception of connections and meaningfulness in unrelated things.
This isn’t necessarily an affliction. In his book Pattern Recognition, William Gibson describes an apopheniac cool-hunter hired by marketers to detect the presence of Gladwellian tipping points in a product’s future.
Apophenia is a boon for conspiracy theorists. If you’re fond of a good conspiracy theory, I recommend staying away from the linear and predictable Da Vinci Code. For a real hot-tub of conspiracy theory pleasure, nothing beats Foucault’s Pendulum by Umberto Eco.
…luck rewarded us, because, wanting connections, we found connections — always, everywhere, and between everything. The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…
For a conspiracy theorist, there can be no better tool than a piece of technology that allows you to arbitrarily connect information. That tool now exists. It’s called the World Wide Web and it was invented by Sir Tim Berners-Lee.
Enquire Within Upon Everything
There was no magical “Eureka!” moment in the invention of the Web. It was practical problem solving, not divine revelation, that resulted in the building blocks of Universal Resource Identifiers (URIs), the Hypertext Transfer Protocol (HTTP), and the Hypertext Markup Language (HTML). Berners-Lee’s proposal built on top of the work already done by Ted Nelson, who coined the term hypertext in 1965, and Douglas Engelbart, who built the first working hypertext system.
US Patent Number 4,873,662
If there was anything revolutionary about the World Wide Web, it was the fact that it was not patented, instead being declared free for all to use. That spirit of scientific sharing clearly didn’t rub off on British Telecom who attempted to enforce a patent which they claimed gave them intellectual property rights over the concept of hyperlinks. The claim was, fortunately, laughed out of court.
Model View Controller
The World Wide Web is the ultimate MVC framework. URIs are the models controlled by HTTP and viewed through HTML. While the view may seem like the least significant component, it is the simplicity of HTML that is responsible for the explosive growth of the Web.
There was nothing new about markup languages. Standard Generalised Markup Language had been around for years. Before that, red pens allowed editors to literally mark up text to indicate meaning.
Like SGML, HTML used tags — delineated with angle brackets — to nest parts of a document in descriptive containers called elements. The P element can be used to describe a paragraph, the H1 element describes a level one heading, and so on.
Alpha and Omega
The shortest element is the most powerful. A stands for anchor. Nestled within the anchor element is the href attribute. This attribute, short for hypertext reference, is the conduit through which the dreams of Leibniz, Joyce, and a thousand conspiracy theorists can finally be realised.
The vision I have for the Web is about anything being potentially connected with anything.
Anybody could create anchors containing hypertext references. Just about everybody did. The Web grew exponentially in size and popularity. With every new web page and every hyperlink, the expanding Web became a more valuable and powerful aggregate resource.
This power was harnessed by Sergey Brin and Lawrence Page. The concept behind their PageRank algorithm is simple: links are a vote of confidence. If a lot of links point to the same page, that page is highly regarded. By combining this idea with traditional page analysis, they created the best search engine on the Web: Google.
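The “links as votes” idea can be sketched in a few lines. This toy power-iteration version uses an invented four-page link graph and the commonly cited damping factor of 0.85, so take it as an illustration of the concept rather than Google’s actual implementation:

```python
# A toy PageRank: repeatedly pass each page's score along its outbound
# links, damped by 0.85. The four-page link graph is invented.
links = {
    'a': ['b', 'c'],
    'b': ['c'],
    'c': ['a'],
    'd': ['c'],
}
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):  # power iteration until the scores settle
    new = {p: (1 - 0.85) / len(pages) for p in pages}
    for page, outbound in links.items():
        # each page splits its vote evenly among the pages it links to
        for target in outbound:
            new[target] += 0.85 * rank[page] / len(outbound)
    rank = new

# 'c' collects votes from three pages, so it ends up ranked highest
print(max(rank, key=rank.get))  # -> c
```

Page ‘c’ wins because three of the four pages vote for it; that, in miniature, is why a much-linked-to page floats to the top of the results.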
In order to measure the PageRank of everything on the Web, the googlebot spider was unleashed. In some ways, the googlebot is like any other user agent: it visits web pages and follows links. It’s also possible to see the googlebot as a kind of quantum device.
When you or I visit a web page that has, say, ten links, there are two theories about what happens. According to superposition, the next page we visit exists only as a probability. Not until we make a decision and click on a link does the page resolve into one of the ten possibilities.
The alternative view is the many worlds interpretation. According to this theory, visiting a page with ten links would cause the universe itself to branch into ten different universes. You or I will remain in the universe that matches the link we clicked. But the googlebot is different: it follows all ten links at once, spidering alternate worlds.
I have first hand experience of Google’s stockpile of parallel universes. To celebrate Talk Like a Pirate Day, I created a simple server-side script. You can pass in the URL of a web page and the script will display the contents interspersed with choice pirate phrases such as “arr!”, “shiver me timbers!”, and “blow me down!”. The script also rewrites any hrefs in the page so that the pages they point to are also run through the pirate-speak transmogrifier.
It was amusing. It even appeared on Metafilter. The problems started later on. I began to get irate emails, even phone calls, from website owners demanding that I remove their files from my server. I was even threatened with the Digital Millennium Copyright Act. I was fielding angry emails from people all over the world in charge of completely disparate websites.
The googlebot had landed on my Talk Like a Pirate page (perhaps it followed a link from Metafilter). Then it began to spider. It never stopped. Somewhere within the Googleplex there is a complete one-to-one scale model of the World Wide Web, written entirely in pirate-speak.
Now when site owners do a search for their websites to check their ranking, the pirate facsimile often appears before the original. I can’t help it if my Googlejuice is better than theirs.
I began to feel remorse when I heard from the proprietor of a spinal surgery clinic in Florida who told me that potential customers were being scared away by messages detailing “professional treatment, me hearties!”
I have since added a robots.txt file but it can be a long time between googledances. Parallel universes don’t just disappear overnight. I guess the googlebot isn’t a quantum device after all because, it seems, it can’t be everywhere at once. That’s where it falls down. How is it supposed to deal with websites that are updated frequently?
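For the record, a robots.txt that turns crawlers away is only a couple of lines. This is a hypothetical version (the path is invented for illustration), not the exact file:

```
# Tell all well-behaved crawlers, googlebot included, to stay out
User-agent: *
Disallow: /pirate/
```

Of course, a crawler only discovers this politeness the next time it visits, which is why the parallel universes take so long to disappear.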
Blog, short for weblog…
Trying to define what a blog is can be a slippery task. Most definitions include the words “online journal”. I’ve been told that my online journal isn’t a blog because I don’t have comments enabled. I must have missed the memo.
What really makes a blog a blog isn’t the addition of comments or the fact that it’s an online journal. The defining characteristic of a blog is the presence of permalinks. Permalink, a portmanteau word from permanent and link, should be a tautology. All links should be permanent.
Permalinks, and by extension, blogs, encourage linking. Instead of simply saying “here’s my opinion”, blogs allow us to say “here’s a permanent linkable address for my opinion.”
The earliest blogs were link logs: places you could visit to find links that somebody thought were worth visiting. Even now, I find that the best blog posts are often ones that point out the connections between seemingly separate links. Bloggers are natural apopheniacs; conspiracy theorists who can back up their claims not just with references to their sources but with hypertext references… hrefs.
Even though all blogs have permalinks, there’s something inherently transient in the nature of blogging. It’s a tired cliché but the aggregate web of blogs really is like a conversation. The googlebot can’t hope to follow all the links spawned by all these voices speaking at once. Technorati does okay though.
Technorati is also the breeding ground for some infectious little ideas called microformats. These microformats embrace and extend the Hypertext Markup Language. Making use of the little-known rel attribute, the anchor element can be made even more powerful. In XFN, the XHTML Friends Network, adding rel values such as “friend”, “colleague”, and “met” gives extra semantic weight to a link (as yet there is no Enemies Network, but Brian Suda and I are working on a draft specification).
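Here’s a sketch of how little a parser needs to do to harvest those relationships: just read the rel attribute on ordinary links. The link and rel values below are invented for illustration; the XFN vocabulary (friend, met, colleague) is real.

```python
# XFN piggybacks on the rel attribute of ordinary anchors, so harvesting
# relationships is just a matter of reading attributes. Sample link is made up.
from html.parser import HTMLParser

SAMPLE = '<a href="https://example.com/jane" rel="friend met colleague">Jane</a>'

class XFNParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.relationships = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'a' and 'rel' in attrs and 'href' in attrs:
            # rel holds space-separated relationship terms
            self.relationships[attrs['href']] = attrs['rel'].split()

parser = XFNParser()
parser.feed(SAMPLE)
print(parser.relationships)
# -> {'https://example.com/jane': ['friend', 'met', 'colleague']}
```

Aggregate that across a few thousand blogrolls and you have a social network extracted from nothing more than links.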
The bonus semantics offered by microformats can be harvested and collated to form a clearer picture of the connections that were previously less defined.
Microformats are the nanotechnology for building a semantic web.
That’s lowercase semantic web. The uppercase Semantic Web still lies in our future. Another theoretical future technology is XHTML 2, wherein any element whatsoever can have a hypertext reference.
Perhaps we aren’t worthy of such a bounty of hrefs. Right now hrefs exist only in the anchor element and yet still we manage to abuse them.
<a href="#" onclick="...">

Using a pointless empty internal page reference like this is an abuse of the href. If you can’t provide a valid resource for the href value, don’t use the anchor element. Anchors are for links. Don’t treat them as empty husks upon which you hang some cool Ajaxian behaviour. Respect the link.
If we value and cherish the links of today, who knows what the future may bring?
Maybe Bruce Sterling is right. Maybe we’ll have an internet of things. Spimes, blogjects, thinglinks… whatever the individual resources are called, they’ll have to be linkable: hyperlinked addressable objects existing in our regular, non-hyper space.
It sounds like an exciting future. We live in an equally exciting present.
We have all come together here in Copenhagen because of how much we love the World Wide Web. I bet every one of you has a story to tell about the first time you “got” the Web. Remember that thrill? Remember the realisation that you were interacting with something that was potentially neverending; a borderless labyrinth of information, all interconnected through the beautiful simplicity of the hyperlink. We may have grown accustomed to this miracle but that doesn’t make it any less wondrous.
We are storytellers. No longer huddled around separate campfires, we now sit around a virtual hearth where we are warmed by the interweaving tales told by our brothers and sisters. Everyone is connected to everyone else by just six degrees of separation. Thanks to the hyperlink, we can find those connections and make them tangible.
The dream of hypertext has become a reality.