Trial, Triumph, and the Art of the Possible: The Remarkable Story Behind Beethoven’s “Ode to Joy”
An ode to an ode. Both of them beautiful.
But a machine for writing isn’t the same as a machine that writes for you. A machine for viewing photos isn’t the same thing as a machine that travels in your stead. A machine for sketching isn’t the same thing as a machine that designs. I love doing these things and doing them more efficiently. But I have no desire to have them done for me. It’s a key distinction: Do not automate the work you are engaged in, only the materials.
We should celebrate our hobbies for the joy-giving activities they are, and recognize that they don’t need to become anything bigger than that. And of course that’s not to say those hobbies can’t turn into something bigger — it’s incredible when your passions and your occupation overlap — but it should be because you want them to, not because you feel pressured to. Not every activity you do needs to become a big official thing.
Have fun with this little machine, tweaking the parameters for generating a Joy Division/Jocelyn Bell Burnell data visualisation.
The interface is quite delightful!
It me:
Writing comes naturally to me when I’m expressing myself on my own site, with no outside assignment and no deadline except my own sense of urgency about an idea. It’s easy when I’m crafting a brief text message or tweet. Or a letter to a friend.
But give me a writing assignment and a deadline, and I’m stuck. Paralysis, avoidance, a dissatisfaction with myself and the assignment—all the usual hobgoblins spring immediately to life.
As the commercial viability of the web grew, we saw more and more users become consumers rather than creators. Many consumers see websites as black boxes full of magic that they could never understand, so they never think to try creating something themselves.
This is a shame. We lost a little piece of the magic of the web when this culture came about.
A call to action to create a fan site about something you love. It would be an unmonetisable enthusiasm. But it’s still worth doing:
- The act of creation itself is fun!
- Sharing something you love with the world is worthwhile.
- You’ll learn something.
So here’s the challenge:
- Create a Fan Site.
- Help someone create a Fan Site.
- Create a webring.
From Frederik Pohl’s 1966 novel The Age of the Pussyfoot:
The remote-access computer transponder called the “joymaker” is your most valuable single possession in your new life. If you can imagine a combination of telephone, credit card, alarm clock, pocket bar, reference library, and full-time secretary, you will have sketched some of the functions provided by your joymaker.
Essentially, it is a transponder connecting you with the central computing facilities of the city in which you reside on a shared-time, self-programming basis.
This looks like fun: it’s like a clever slot machine for pairing typefaces.
I thought the “machine learning” angle sounded like marketing bullshit, but it’s genuinely fascinating.
I’ve made no secret of my admiration of Jocelyn Bell Burnell, and how Peter Saville’s iconic cover design for Joy Division’s Unknown Pleasures always reminds me of her.
There are many, many memetic variations of that design.
I assumed that somebody somewhere at some time must have made a suitable tribute to the discoverer of those pulses, but I’ve never come across any Jocelyn-themed variation of the Joy Division album art.
The test order I did just showed up, and it’s looking pretty nice (although be warned that the sizes run small—I ordered a large, and I probably should’ve gone for extra large). If your music/radio-astronomy Venn diagram overlaps like mine, then you too might enjoy being the proud bearer of this wearable tribute to Dame Jocelyn Bell Burnell.
This is so cool! The logs of the Indie Web Camp IRC channel visualised as a series of sparklines in the style of Joy Division/Jocelyn Bell Burnell.
I’m in Seattle. An Event Apart just wrapped up here and it was excellent as always. The venue was great and the audience even greater so I was able to thoroughly enjoy myself when it was time for me to give my talk.
I’m going to hang out here for another few days before it’s time for the long flight back to the UK. The flight over was a four-film affair—that’s how I measure the duration of airplane journeys. I watched:
I was very glad that I watched Joy after three back-to-back Bechdel failures. Spectre in particular seems to have been written by a teenage boy, and I couldn’t get past the way The Big Short used women as narrative props.
I did enjoy Steve Jobs. No surprise there—I enjoy most of Danny Boyle’s films. But there was a moment that took me out of the narrative flow…
The middle portion of the film centres around the launch of the NeXT cube. In one scene, Michael Fassbender’s Jobs refers to another character as “Rain Man”. I immediately started to wonder if that was an anachronistic comment. “When was Rain Man released?” I thought to myself.
It turns out that Rain Man and the NeXT introduction both date from 1988, but according to IMDb, Rain Man was released in December …and the NeXT introduction was in October.
The jig is up, Sorkin!
Peter Saville talks about the enduring appeal of his cover for Unknown Pleasures.
I like to think of all the variations and mashups as not just tributes to Joy Division, but tributes to Jocelyn Bell Burnell too.
Testing James Joyce: this is like the Seven Bridges of Königsberg puzzle but with Guinness.
Hooky never looked so good.
Peter Saville is releasing some of his fonts for free. I'm grabbing the beautiful serif typeface used on the front of Joy Division's "Closer"; it's gorgeous.
The song Lay Low by My Morning Jacket is the eighth track on the album Z. When the song starts, it seems like your typical My Morning Jacket song, ‘though perhaps a bit more upbeat than most. For the first few minutes, Jim James sings away in his usual style.
At precisely three minutes and three seconds, the vocals cease and the purely instrumental portion of the song begins. As one guitar continues to play the melody line, a second guitar begins its solo.
It starts like something from Wayne’s World: a cheesy little figure tapped out quickly on the fretboard. But then it begins to soar. Far from being cheesy, it quickly becomes clear that what I’m hearing is the sound of joy articulated through the manipulation of steel strings stretched over a piece of wood, amplified by electricity.
As the lead guitar settles into a repeated motif, the guitar that was previously maintaining the melody line switches over. At exactly three minutes and thirty seconds, it starts repeating a mantra of notes that are infectiously simple.
For a short while, the two guitars play their separate parts until, at three minutes and thirty-three seconds, they meet. The mantra, the riff — call it what you will — is now being played in unison, raising my spirits and pushing the song forward.
The guitars remain in unison until just after four minutes into the song. Now they begin to really let loose, each soaring in its own direction as the rest of the band increase the intensity of the backing.
Two seconds before the five-minute mark, the guitar parts are once again reunited, but this time in harmony rather than unison. At five minutes and nineteen seconds, a piano — that was always there but I just hadn’t noticed until now — begins to pick out a delicate tinkling melody in a high register. It sounds impossibly fragile surrounded by a whirlwind of guitars, drums, and bass, but it cuts right through. And it is beautiful.
At five minutes and twenty-six seconds, as the piano continues to play, one of the guitars drops down low and starts growling out its solo. From there, everything tumbles inevitably to the end of the song.
The band stops playing at five minutes and fifty seconds, but we’re given another twenty seconds to hear the notes fade to silence. The instrumental break has lasted three minutes. It is the most uplifting and joyous three minutes that has ever been captured in a recording studio.
Now, divine air! now is his soul ravish’d! Is it not strange that sheep’s guts should hale souls out of men’s bodies?
A presentation I gave at dConstruct 2006 in Brighton.
OK. Yeah, I’m not going to be talking about big business benefits or, you know, business models or anything like that. This is going to be a much more personal sort of look at APIs. I am not from Yahoo!, I’m not from Amazon, I’m not from a big company. I’m just a lone developer and I want to talk about my experiences. Specifically, I want to talk about the fun, the happiness that you can get from APIs, because you can get a lot of joy out of them. Mostly, though, I just want to talk about me. So that’s what I will be doing for the next 45 minutes.
So joy, where do we get joy from? Ignoring the obvious answers to that question, let’s focus specifically on where joy and technology intersect. I’m going to cast my mind back to the first time I remember getting joy out of technology. Did anyone else have a ZX81? Yeah, OK, cool. 1K of memory was what you had to work with. You had to be very creative, but it was a lot of fun. Everyone has done it: 10 PRINT, 20 GOTO 10. OK, we’ve all done it. It’s just fun. It’s fun playing around. Later on, we all got our Spectrums and we started playing adventure games and stuff like that (Thorin sings of Gold).
Many years later I left computers aside. I sort of went off wandering, hitchhiking and busing across Europe for a while. Then I came across the Web, and I didn’t get it at first, not until I came across this site, Fray.com, which was years ahead of its time. I was surfing around, looking at all these websites, thinking “I don’t get it. I don’t get why the Web is big.” Then I came across Fray.com and I got it, because I had an emotional reaction. What it was, was stories. People were telling stories and the design was always driven by the story. Story drives design, which is something I’ve held onto to this day. Fray made me want to learn HTML, and that was where I found joy: with HTML. This idea of marking stuff up with tags, I loved it. I really got into it and decided I wanted to play around with this.
So at the time I was living in Germany and playing in a band, and we decided we should have one of these new-fangled website things. So I said, “I’ll try it. I’ll do it. I want to get into this.” So I did. This is my very first website. I am baring my soul in showing this. It’s not too bad. It’s not a Geocities page at least, right? But it was 1998 when I built this, so it is going back a ways. It was really good fun and I picked up skills: some Photoshop skills, some HTML skills. I found that a lot of the skills I picked up were always on personal projects like this. I made my own website, adactio.com. This is what it used to look like. It was full of DHTML stuff whizzing around. Remember the kind of stuff we got up to in the 90s? This was the epitome of gratuitous effects. It was terrible really, but I learned JavaScript for what it’s worth. There wasn’t much fun to be had from DHTML at the time, but I was learning. I was learning new stuff by doing these personal projects.
And then I created this site, a community site built around Irish music, and it turned into a big site with all these tunes and albums and discussions. The skill I needed to learn here was databases: I was learning PHP and MySQL, all this stuff. So I was having fun and picking up these new skills, but there is a particular part of this site I want to draw your attention to, down at the end of the navigation where it says shop. On this site you can click through to the shop, there’s a search thing at the top, and you can search Amazon from thesession.org. So I’ve typed in the name of a band. I get the results. I click through and I get to display all that information in my site, marked up the way I wanted, looking the way I wanted, and that’s all possible because of Amazon’s API, exactly what Jeff was talking about earlier. This was something I just wanted to mess around with. I knew that it would add value to my site. It was probably going to be some fun, and it was relatively painless because of the documentation that Amazon provides. It gives you pretty much everything you need to know. There is documentation but, more importantly, there are examples, because that’s how I learn stuff. I looked at examples and I picked apart the code. I mean, view source, OK, that’s how I learned HTML. That’s how I learned JavaScript and CSS. Here’s what you need to do to put together a query to get some information back as an XML file, and that’s pretty cool. What’s interesting, though, is down at the bottom you’ll see that there are general request parameters, and one of those is XSL. It’s one of the coolest things that Amazon’s web services allow you to do: you can specify an XSL file, then you can mash up the XML and XSL files and output it the way you want.
Let me just show you that. Warning: code ahead. But it’s sort of pseudo-code and it’s not going to get very complex; it’s more just the idea of it. How do you use these web services — and really nitty-gritty — how do you even start to use them? You need some kind of server-side language to do this. I was using PHP, so that’s what I’m using here. So begin with the URL — the URL to the particular method on the Amazon web services. Start with that URL and you start adding all of these parameters to it: my Amazon associates ID, so I will make some money off of these sales; my API key — most APIs make you sign up first, they give you a unique key and you have to use that in every request you make, so that’s mine: don’t use it. The important one in red there: I’m pointing to an XSL file. I’m saying don’t send me back raw XML; before you send the result back to me, run it through that XSL file. So I put all of those parameters onto the URL and finally I just spit out what comes back. I’m using very simple PHP functions to do that.
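Roughly, the PHP might have looked something like this. This is a reconstructed sketch, not the code from the slides: the endpoint and parameter names follow Amazon’s old E-Commerce Service as best as can be recalled, and the key, associate tag, and XSL address are all placeholders.

    <?php
    // A reconstructed sketch of the request described above, not the actual
    // code from the talk. Endpoint and parameter names follow Amazon's old
    // E-Commerce Service; the key, associate tag, and XSL URL are placeholders.
    $params = array(
        'Service'        => 'AWSECommerceService',
        'Operation'      => 'ItemSearch',
        'SearchIndex'    => 'Music',
        'Artist'         => 'Planxty',
        'AssociateTag'   => 'my-associates-id', // so the sales get credited
        'SubscriptionId' => 'MY-API-KEY',       // sign up to get your own key
        'Style'          => 'http://example.com/amazon.xsl' // apply this XSL before responding
    );
    $url = 'http://webservices.amazon.com/onca/xml?' . http_build_query($params);

    // Amazon runs the result through the XSL file server-side, so what comes
    // back is ready-made markup: just spit it out.
    echo file_get_contents($url);
    ?>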
Here’s what the Amazon web service would send back, roughly. This is a very simplified version, but it’s an XML file full of stuff, and some of the stuff in here is the information I want to get out: the name of the band, the name of the album. I want to get that out and then mark it up. So I run it through an XSLT file to do the transformation. XSL is an interesting one because, as you can see, it’s tag-based, like XML. It’s a declarative language, but it’s also got sort of programmatic aspects to it, because you can select things and you can do loops. It’s a strange mixture of programming and markup. A lot of people have a love/hate relationship with it. It’s actually pretty powerful and it’s a good skill to have. And the nice thing is that you can mix it up with regular markup. So I’m mixing it up with HTML, and like a template language it will insert the relevant data in there. The result is I get markup with the empty spots filled in with the data I wanted, and then I can display that on my site like that. And that was a lot of fun. That was actually joyful. I got a lot of fun out of doing that. And, you know, I make a bit of money when people click through, but that wasn’t really my reason for doing it.
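As a rough illustration of that transformation step, here is the same idea done locally, assuming PHP’s xsl extension is available. The element names (Items, Item, Artist, Album) are invented for the example, not Amazon’s real response format.

    <?php
    // A sketch of the XSLT idea with invented element names, run locally
    // using PHP's xsl extension rather than on Amazon's servers.
    $xml = new DOMDocument();
    $xml->loadXML('<Items><Item><Artist>Planxty</Artist><Album>Cold Blow and the Rainy Night</Album></Item></Items>');

    $xsl = new DOMDocument();
    $xsl->loadXML(<<<XSL
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/">
        <ul>
          <!-- declarative loops and selections, mixed in with plain markup -->
          <xsl:for-each select="Items/Item">
            <li><xsl:value-of select="Artist"/>: <xsl:value-of select="Album"/></li>
          </xsl:for-each>
        </ul>
      </xsl:template>
    </xsl:stylesheet>
    XSL
    );

    $processor = new XSLTProcessor();
    $processor->importStylesheet($xsl);
    echo $processor->transformToXML($xml); // markup with the empty spots filled in
    ?>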
This is all possible because of this idea called REST, which stands for Representational State Transfer. It’s more an idea of how to build these services. There are a number of different sorts of thinking behind what makes something RESTful, but basically what it comes down to is that you have these resources that are uniquely addressable, usually through a URL, and it’s stateless: there is no keeping track of who is logged in and who’s not. And if this kind of thing looks familiar — this idea of stateless, uniquely addressable resources — that’s because that’s the Web. That’s how the World Wide Web was built: using a RESTful methodology, really. It’s pretty easy, because once you get your head around that, you realize all you do is put together a URL and grab it, and there’s your information from a third-party service.
The other idea, the sort of opposite to REST, is something called SOAP, which is an acronym, S.O.A.P., and I think we all know what that stands for. I’m not going to go into much detail about SOAP because it is far too complex, unnecessarily complex really. I have played around with SOAP. I had to, because one of the other things I wanted to put on that site was Google search. I wanted people to be able to do everything from my site; I didn’t want them to have to go off to Google just to do a search. And you can use Google’s API to get searches on your own site, but it’s only SOAP: you have to use SOAP. I’m not even going into the technical details. Basically, you have to create an XML file, send that off, and you get an XML file back. Way too complex, not much joy to be had. It would be nice if Google would provide a RESTful interface. I did do it, and I got an interface to Google’s search facility on my site, which was nice, but not much fun.
Where I did get joy was when I started using Flickr. I love Flickr. I really like Flickr. I started using it when I came back from South by Southwest last year. Everybody was using Flickr and I decided I needed to check this out. I started taking pictures and putting them up there. I was reluctant at first, because I already had image galleries on my own site and I didn’t really want a third party hosting my images, but they did such a good job I decided to go with them. Then I started exploring other services to host my stuff: Del.icio.us for hosting bookmarks, and then Upcoming for keeping track of events and stuff like that. So a lot of my data started going outwards, with other services providing the infrastructure. I have my own website and all around that, all over the web, there were all these bits of me. Amazon would have my wish list, Flickr is where I’m keeping my pictures, Del.icio.us is where my bookmarks are, and Upcoming is where all of my events are. They’re scattered all over the web, and I wanted to be able to draw them into that central location, which was my own site, and that was possible because of APIs. Flickr provides loads and loads of methods so that you can do just about anything. Del.icio.us also provides an API, a little trickier it must be said because it requires a secure authentication thing, but you can still get all your bookmarks. With Upcoming you can also get events and stuff; again, not quite as friendly as Flickr’s, but pretty powerful.
And here’s how they all pretty much work. Again, you begin with the URL. Now, none of these provide XSLT transformations, which is a bit of a shame, and it would be nice if they did. So I have to parse them some other way. I begin with a URL, and in this case I decided to use PHP5’s DOM functions, because I was familiar with the DOM already from client-side DOM scripting. So I load in the document and it turns into a DOM structure, and then I can just go through it using the usual DOM methods that I am used to from JavaScript, you know, getElementsByTagName — that’s one of the nice things about the DOM, you get to recycle your knowledge — and then I can spit it out onto the screen however I want.
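A reconstructed sketch of that approach follows. The Flickr method name and parameters are for illustration only; the real ones are in Flickr’s API documentation, and you need your own key.

    <?php
    // A sketch of the DOM approach described above. The method name and
    // parameters are illustrative; check Flickr's API documentation for the
    // real ones, and use your own API key.
    $url = 'https://api.flickr.com/services/rest/'
         . '?method=flickr.people.getPublicPhotos'
         . '&api_key=MY-API-KEY'
         . '&user_id=SOME-USER-ID';

    $dom = new DOMDocument();
    $dom->load($url); // the XML response becomes a DOM structure

    // the same method you know from client-side scripting
    foreach ($dom->getElementsByTagName('photo') as $photo) {
        echo '<p>' . htmlspecialchars($photo->getAttribute('title')) . '</p>';
    }
    ?>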
My original idea was that I would bring this into my blog. I have a sidebar, and I thought I’d just pull in, you know, what lots of other people are doing: a couple of Flickr pics, the next few events I’ll be going to on Upcoming, and maybe a couple of Del.icio.us links. But it grew and grew into this sort of monster, where I ended up creating a whole subdomain, Adactio Elsewhere, to hold all of this stuff.
So I didn’t just put in my Flickr pics. I put in my Flickr pics, the last few Flickr pics from my contacts, and also the list of all my contacts. And you don’t just have to look at my pictures: you can click on anybody’s name and then you will see their pictures and their contacts’ pictures, which is one of the funnest things about Flickr, following these trails. Here are my pictures; this person’s a friend; here’s this person’s pictures; here’s the pictures of their friend. You end up going through this whole network of pictures. Great fun.
Amazon: as well as the search I was doing on the other site (I already had the code for that, so I just threw it in there), I also have my wish list here, so I have easy access to it. Del.icio.us: pulling in the latest links, and also being able to search my links straight from this one place. And Upcoming: just grabbing the newest events. All on one page, all in an AJAXy sort of interface. That was something else I was learning about at the time. So, pretty good fun.
Let’s see how they stacked up. Let’s rate the APIs of these various services.
I am going to rate them by the amount of power you get from them, by documentation, and by just plain joy: that indefinable something. So, first of all, the power. Flickr provides APIs for just about everything. I think the only thing you can’t do with the Flickr API is get the comments: you can get the number of comments on a photograph, but you can’t get the actual comments. Pretty much everything else you can do yourself. You could create a one-to-one copy of Flickr almost entirely using their APIs, and this is probably because they use the APIs themselves. They eat their own dog food.
Now, Amazon is very powerful as well. Again, you can create an entire Amazon of your own, pretty much, and put a nicer interface on it, put nicer markup in there. [Laughter] At some point you do have to go out to the checkout process. You can’t quite go all the way through, but pretty close.
Del.icio.us and Upcoming, yeah, a little tricky, I’ve got to say. They make you jump through a lot of hoops. With Upcoming, you can’t just grab all the information about an event: you have to grab the event id, then you have to grab the location id, and then use different methods for each one of those to get the details. With Del.icio.us, the thing that is kind of tricky is that you have a lot of access to your own bookmarks, but not so much access to other people’s. So there’s kind of limited room for mashups involving other people’s bookmarks there.
Documentation, let’s see how they did. Pretty good. Both Flickr and Amazon scored well. They have good online documentation on their sites, but almost more importantly, they have good communities. There are mailing lists and of course there’s a Flickr group. A good place to go and discuss things is the Flickr mailing list; the Flickr developers themselves hang out there, so you can get answers pretty quickly from people who really know their stuff. For Amazon, there’s a lot of documentation on the site. It’s improving all the time; it’s something they are putting a lot of effort into. Del.icio.us and Upcoming, less so, but you have to remember where they came from. They were small startups and they were only bought up by Yahoo! fairly recently. Each of those services was basically a couple of guys in a bedroom, so their documentation is understandably a little thinner on the ground.
So, let’s see the final scores, please. How did the APIs do? The winner there is Flickr for joy, and it is partly down to the subject matter: there is just so much fun in messing around with photographs. Amazon came second. You are dealing with products and shopping and stuff like that, and there is a lot of fun to be had there, but there is just something sexier about photographs, I think, so Flickr has a bit of an unfair advantage. The others did pretty well too. So there’s the score card. Those are just four services; there are hundreds if not thousands more APIs out there. This site keeps track of all the different services being provided out there, from the very big to the very small.
There are a lot of APIs out there, and what this cornucopia of APIs creates is a parallel Web. Instead of being a Web of documents, it’s like a Web of data. What kind of data do you get there? Identity, for one: my Flickr photographs, my Del.icio.us bookmarks, all of these things that are mine. The ultimate identity API would be one where you could safely store user names, passwords, and credit cards with a trusted third party, but that’s still future talk; right now, it’s my things. Events, obviously: there’s Upcoming, Eventful, a whole bunch of services based around events. Relationships: that’s where all the social networking stuff comes into it. There are a lot of social networking sites out there, all built pretty much the same way, and most of them have APIs. And here’s one that I haven’t even touched on: location. Location is a really big one. I said Flickr was fun because of the subject matter; location is really fun because of the subject matter too.
Dealing with maps is an awful lot of fun. I remember the joy I had when Google Maps came out. Does anyone remember what it used to be like when you had to try and browse a map and all you had was MapQuest? These days, if I come across a MapQuest image on a website, I try to drag it. I forget that it’s just an image. I have become so used to Google Maps and all the great maps out there now that I expect to be able to pan around a map. I had so much fun with Google Maps when it came out, not doing anything useful, just browsing the world, looking at landmarks. It was very enjoyable.
And it wasn’t long before people started using Google Maps for their own purposes. This is something that gets brought up again and again: chicagocrime.org, where Adrian Holovaty mashed up crime statistics for Chicago with Google Maps so you can see which neighborhoods are the most dangerous. Here are the first-degree murders in Chicago, and you can figure out which neighborhoods you want to stay away from. Everyone brings this up as a good example of a mashup. The reason I’m bringing it up is that this was built before there was an API for Google Maps. The interesting thing about Google Maps is that it’s not about sending XML from a server; it’s all JavaScript, and with JavaScript, you can view source. Now, they did obfuscate everything and the line breaks were stripped out, but if you have the time, you can go in, view source, figure out how Google is doing it, and create your own mashup, which is exactly what happened here. This is probably what got the Google guys thinking: you know, people can do some pretty cool stuff if you release this API publicly. Which they did. So that’s a good example.
I have seen others that I really like. Eric Meyer did this one, the High-Yield Detonation Effects Simulator [Laughter], after he had an argument one evening at dinner about whether New Jersey would get blown off the map if New York got hit by a bomb. So you can put in a location and the amount of kilotons. [Laughter] This is a 100 kiloton bomb dropping on Brighton, actually dropping right on the Corn Exchange. It’s OK. The university survives. That’s a nice one.
My absolute favorite mashup with Google Maps is this thing called Overplot. Has anyone heard of Overheard in New York? It’s this blog where people submit little snippets of conversation they overheard on the street, along with where they heard it: corner of 5th and Broadway, or wherever. Well, this guy took the entire archive of that site, just going through the RSS feed, and mashed it up with the maps to create this map of New York with all the conversations plotted onto it. You can zoom in and see all the conversations on each street corner; you can start eavesdropping, and you get the context. That’s the really great thing about maps. It’s one thing to read that a conversation [Laughter] occurred on a street corner; it’s another thing when you can actually see the surrounding area. You will love it. It’s a complete time sink. If you go to this site, be prepared to spend hours [Laughter], I mean hours, looking around. It’s great. Context. Context is something that maps do really well.
Gawker did something similar. They put together the Gawker Stalker: they used to have people call in celebrity sightings. “Hey, George Clooney is having lunch at this restaurant downtown.” Well, they took all these sightings and mashed them up with location. George Clooney hates this site. He really doesn’t like it. So you can see celebrities being stalked all over the place, and I decided I wanted to try messing around with this. I wanted to check out the APIs.
So I found an opportunity to mess with Google Maps this year at South by Southwest. There were a lot of parties going on. I had gathered together all the parties, written them all up in an HTML page, marked them all up, and I wanted to mix it up with Google Maps so that I could see how far apart these parties were, and then I could decide which ones I could go to and which ones I could avoid. So you’d click on a particular place and say, that’s where it is, right there, right where the big beer logo is, and you click on other places and decide, well, that party is too far. You get a lot more context with maps. It’s really good fun.
This is an idea I recycled for d.Construct, same thing. I had all this information about places we can go grab some lunch, pubs that have free wi-fi and instead of just providing that in text form, mash it up with a map and then you can see how far away — there, that is how far away the best Japanese restaurant in Brighton is from the Corn Exchange — then you can make a decision based on that.
So like I said, it’s not XML this time. It is JavaScript, and you do have to know some JavaScript. It was kind of fun. And one of the interesting things, this was something Paul mentioned: Google were the first out the door with a mapping API. Yahoo! followed, and they very wisely pretty much copied the Google Maps API. In fact, just to play around, I decided I wanted to re-implement this using Yahoo! Maps, and most of the time all I had to do was change the letter G to the letter Y and everything just worked. [Laughter] I ended up sticking with Google, because this is how the Yahoo! equivalent [Laughter] appears [Laughter]. Yeah, not so much context. [applause] I’m not getting quite the information I need with all of that. But I’m not blaming Yahoo!, because the mapping situation, as Paul and Simon pointed out, is very complicated; the data providers are very annoying, which is why it would be great to get those guys out of the picture. We should be providing the data, which is something that these guys are doing: OpenStreetMap.org. They are mapping the world from the ground up. Instead of the data coming top-down from the mapping providers, these guys are going out there and mapping the streets. They are having a workshop tomorrow here in Brighton and we will try to map as much of Brighton as we can. It’s going to be a lot of fun and I recommend everyone come along. They have an API, which is great. There is an associated project called Free the Post Code, because the Royal Mail has a monopoly on post codes in this country and they charge for this data. But we all know our own post codes. We can each just say: here is my post code, here’s my location, latitude and longitude. If everybody did that, then we could have a free API for post codes, which would be great. So, all this data is getting freed. There’s this web of data, and people want that web of data to be free.
Parallel to this web of data, there is something else going on: the live Web. Data tends to be fairly static: Amazon products, Flickr pictures, they’re there, you can go to them anytime. But there is a lot happening on the Web now that’s like drinking from a fire hose. There is just so much going on that you need some way of keeping track of it, and RSS is the best way of getting access to this stuff, because it is so fast.
The other way of making sense of it all is tagging. There are lots of sites keeping track of all these conversations, but the site that’s really done the best job of combining these two things, and of keeping track of the live Web, is Technorati. One of the reasons I find it particularly interesting is the fact that it provides an API. Again, RESTful: you just point your code at the URL, send it the parameters, and you get back XML, which you then have to parse.
So I decided to use the power of the live web on my own site. I started tagging my own posts a while back, and then I added in calls to the APIs: to Del.icio.us to get my own links, and to Technorati to see what other people are saying about the same tags, and then I display all that on the same page. So, first of all: who is linking to this site? That’s really useful. It’s the kind of thing you’d check your stats for, but here I get it inline. Who is talking about what I’ve just blogged, right now, this minute? I can get that information from Technorati. And who is using the same tags? In a very meta move, this is who’s tagging with “tagging”. So I can keep track of all of that in one place. And it was good fun. The API is pretty straightforward for Technorati. Sadly, it must be said, it’s pretty flaky. Technorati could do with some new servers, I think, because it tends to drop out a fair bit. That’s just my own personal experience (everything I am saying is my own personal experience), and that was mine with the Technorati API: potentially fun, a little bit flaky, wouldn’t want to rely on it too much.
There is a problem with all of these APIs, really, and it is the fact that you do need to know your stuff. It is kind of rocket science. You can’t just get started straight away. Most of all, you need to know about XML. Apart from Google Maps and Yahoo! Maps, which are based around JavaScript, it all comes down to XML most of the time, and you have to know your way around XML. There are ways of parsing XML, like XSLT; that’s a good, useful skill to have, like I said, particularly for the Amazon services. If you are going to use the maps, you need to know your JavaScript. I mean, they make it as painless as possible for you, but you still need to know your stuff so you can understand what’s going on. And most of the time you need some server-side language to do the heavy lifting for you.
What that means is that mashing around with APIs and doing all this fun stuff is limited to kind of the alpha geeks, the people who know this stuff. Like I said, it’s good fun to play around with, but it’s a bit of a shame that everyone can’t join in the fun. Now, some people are trying to change that. There are some services out there trying to bring APIs to the masses. There is a service called Ning, the idea being that you take an existing mashup, you clone it, you mess around with it, you do your own thing. I’m not sure how well this works, really. I don’t think it is very clear in describing how it works and who it is aimed at, but the idea is good: to free APIs up for everyone, not just the geek developers.
Another interesting one is called Dapper where you point it at some pages. Let’s say there’s a site that doesn’t have an API, and you want to mess with that site’s data. Well you start surfing that site through that Dapper browser and you say, okay, this is a headline and that’s a description, okay, and on that page that’s a headline, that’s a description. And you show it a few pages and it sort of gets the idea. It says, Oh, okay, I see how this site is built, and then it can construct an API for you to use. Really, it’s like a very clever form of screen scraping. But, that’s what people are going to do anyway. If you don’t provide an API, if they really want the data, they’re going to get the data anyway by screen scraping. So that’s an interesting way of trying to open up every website to have APIs, but I don’t know how well that’s going to work.
So this web of data, as I said: we’ve got identity, events, relationships, and location. And right now, it’s JavaScript, especially for location, XML for everything else, and some server-side language. But there’s this wonderful movement whereby all of these bits of data can be handled very simply by anyone. For instance, for identity we have hCard. Just a couple of class names in the markup, and you’re essentially creating an API for your identity. For events, we have hCalendar, which maps events to some very simple class names. For relationships, XFN: a fairly simple way of doing it, and you’d be surprised how far it scales. And for location, we have geo coordinates. If more people used all of these little formats, it’d be great. These are microformats: very basic, very simple, and almost a way of turning the entire web into one giant API. You’ve seen it already on the backnetwork, where we’re using the XFN relationships to join people up, and mashing that up with the live web through things like RSS and tagging, the very things that I mentioned, matched up with relationships. And then you get some very interesting views: a relationship cloud of who knows who. So they work extremely well. They’re meant to be very simple, but they scale up remarkably well.
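To make that concrete, here is a minimal, hypothetical hCard with XFN and geo mixed in. The name, URL, and coordinates are placeholders, and the markup is echoed from PHP simply to stay consistent with the earlier code sketches; the class names and rel values are the whole trick.

    <?php
    // A minimal, hypothetical example of the microformats idea: ordinary
    // markup plus a few agreed-upon class names. The name, URL, and
    // coordinates are placeholders; "friend met" is XFN, and "geo" carries
    // the latitude/longitude.
    echo <<<HTML
    <div class="vcard">
      <a class="url fn" href="http://example.com/" rel="friend met">Jane Doe</a>
      <span class="geo">
        <span class="latitude">50.8229</span>,
        <span class="longitude">-0.1363</span>
      </span>
    </div>
    HTML;
    ?>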
And that’s the thing about microformats. I’ve gone on this journey from learning plain old HTML back in the day, when I built that first website for my band, and then learning all those other technologies: JavaScript, PHP, MySQL, and then messing around with the APIs. And I did get a lot of fun out of messing around with the APIs. But right now, I have to say, where I’m getting the most joy is from microformats, because they have rekindled the joy of HTML, that original joy I got when I first went online. That is why I’ve organized a fairly impromptu microformats picnic, because I want to talk about microformats a lot more. And I’m going to be done here in a few minutes. So, out the back there’s a park. The weather looks pretty nice today. I’ve ordered some sandwiches, but I think you’re going to have to bring some food with you. Let’s all meet up in the park and we can talk about microformats, because I’m very excited about them.
And now, I would like to hear your thoughts about other things that I’ve talked about here: APIs, mashups, anything like that. If you want any of the URLs that I’ve talked about today, I’ve put them all up online at this URL here: adactio.com/extras/joyofapi. You’ve been very gracious listening to me babble on about myself today, so thank you very much.
[applause]
Okay, hands up for questions. Oh, we’ve got bingo! Oh great, okay. Should we give out the prize now or the prize later?
Audience: Later.
Jeremy Keith: Okay, hands up for questions. Surely somebody has a question. Oh, there on the side, there’s one. Oh sorry, there’s one over here. Hang on a second.
David Barrett: Sorry, Jeremy. You say that it’s a shame that the alpha geeks are the only people that can use these APIs. But can you think of a use case where someone who wasn’t a geek would even want to use these?
Jeremy Keith: Yeah. You’ve got a blog. You don’t need to be a geek to have a blog. You don’t need to be a geek to have a LiveJournal account, or a Typepad account, or your own website, or a website about your favorite pets. That was the great thing about the web; that’s what made the web explode when it was first introduced: anyone could do it. Now, of course, what that meant was that it was a pretty sloppy web. Browsers were very forgiving in the markup that they allowed people to use and tried to do the best job they could anyway. But because anybody could create a website once they knew a bit of HTML, the web exploded. And there’s no reason why those same people should be excluded from getting the benefits of things like maps and events and all this other data. Because you don’t need to be a geek to have a website, you shouldn’t have to be a geek to use an API. That is the spirit of the original web, really.
We had one over on the side here, I think.
Audience member: Yeah, this isn’t so much a question as just an interesting thing that’s come out of both your talk and Paul and Simon’s, which is the question earlier about what happens when this all goes away. You used your example of trying to float between Google and Yahoo! Maps. There’s another service that people might be interested in out there, which is nothing to do with me. It’s called Mapstraction. An API abstracts out what an application does into a series of methods, and the trick is then to abstract on top of that. So what Mapstraction does: it’s one library, it’s JavaScript, you put it in a page, and you say, I want a map of this place, with a pin, with a pop-up, and then one global variable sets whether it comes from Google, Yahoo!, Microsoft, or OpenStreetMap. The point being that when Google charge for their maps, you just change one variable and you move over to a competitor. And the trick is, you can do that for almost anything else. You can make layers of abstraction.
Jeremy Keith: Yes, it’ll be good to see more of these libraries of libraries, these meta-APIs that let you switch between data providers at will. But as Paul pointed out, that’s the interesting thing about a lot of these APIs: the first one to market gets to dictate the scene, and other APIs, if they want to compete, had better follow the same sort of structure or nobody’s going to use them.
One of the other things I want to mention about what happens if these services go away, what happens to my data and stuff: that’s actually one of the benefits of APIs. For instance, with Flickr: why should I trust Flickr with my photographs? Well, because of the API, you know you’ll be able to get your photographs out any time you want. And I’m not just talking about the alpha geeks who can code up something to get their data out. There are third-party providers who will burn all your Flickr pics to DVD. You authorize them, you give them your Flickr user name and password, and they will do the burning for you using the API.
So APIs actually encourage trust. They say don’t worry, your data is safe with us, look, we’ve got an API, any time you want you’ll be able to take your data with you.
Another question? I think the first hand might have been down here, over here.
Paul Boag: Hi. You already mentioned that, on your own site, you’ve experienced some flakiness where things have gone down. Surely the more of these APIs you include, the more calls you’re making to different services and different servers, and the more unstable you’re making your own site, really. I mean, are there ways around that, to kind of mitigate that risk?
Jeremy Keith: Well, something like Mapstraction, like I was saying: if you had something that let you easily flick between providers, to say, oh, Google Maps is being really flaky today, I’m switching over to Yahoo! Maps. The more providers there are, the more often you have that option. But it’s also about the way you build things. You don’t want to make these things mission-critical, really, I suppose. It’s like Unix pipes: you have lots of these different sorts of things going on at once.
But this idea of relying on other services isn’t anything new. Unless you’re hosting your site yourself, you’re probably relying on a third-party hosting provider. Your stats package is probably sitting off somewhere, you’re relying on some other third party for that. So this idea of relying on third parties for, sometimes, very mission-critical stuff, isn’t really new. Most of us rely on third parties for email, which is about as critical as it gets. This idea of trust and trusting data providers isn’t that new, really, it’s just taking it to the next level.
Patrick Lauke: Hi, Jeremy. Just on the flakiness of services: one thing that I’ve been doing on my own site is some caching, so that you don’t always try to hit the latest live data. If you can afford to have ten- or fifteen-minute-old data, that kind of helps overcome problems if the server just happens to be down or is overly busy.
Jeremy Keith: Yes, that’s the savvy thing to do. I’m not tech-savvy enough to do that. Glenn, when he built the backnetwork, he is tech-savvy, so he’s not calling Flickr every time; he set a polling time. A few weeks ago, he was just polling Flickr once a day. As it comes up to d.Construct he’s polling more and more often, getting closer and closer to a live feed on the day of d.Construct itself. But yes, caching is generally a good idea. If I was smart I would do that more, but I’m not really clever enough.
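The approach Patrick describes might look something like this in PHP. This is a sketch only: it assumes $apiUrl holds one of the REST URLs from earlier, and the cache path and fifteen-minute window are arbitrary choices.

    <?php
    // A sketch of the caching idea discussed above. Assumes $apiUrl holds a
    // REST URL like the ones earlier, and that the cache path is writable.
    $cache  = '/tmp/api-cache.xml';
    $maxAge = 15 * 60; // fifteen minutes, in seconds

    if (file_exists($cache) && (time() - filemtime($cache)) < $maxAge) {
        $xml = file_get_contents($cache);    // fresh enough: serve the local copy
    } else {
        $xml = file_get_contents($apiUrl);   // stale or missing: hit the live API
        if ($xml !== false) {
            file_put_contents($cache, $xml); // remember it for next time
        }
    }
    ?>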
Jenifer Hanen: Thanks, Jeremy. I’ve heard, no offense to him, Tantek speak on this at least three times over the last three and a half years, and you just summed it up, especially the microformats, in a way that makes sense. Now, can I request that the two of you speak together in the future, so that you give the broad, good background and then he can give the detail?
Jeremy Keith: Well, we actually did at South By Southwest last year. Tantek talked about microformats and gave the whole background of them, and then he had a few of us come up and do implementations so Norm was there and he showed off Yahoo! Europe using hReviews and I went up and showed all the parties in Austin, because that was what I had used microformats for.
I started using microformats everywhere, so a lot of those mashups you would have seen using Google Maps, the parties in Austin, the d.Construct location page: all of that’s marked up in microformats. I’d like to say that makes your website an API straight away. The schedule for d.Construct was marked up in hCalendar. You would have seen a link in the sidebar to a third-party service hosted on Technorati, which will turn the hCalendar into an actual iCal that you can subscribe to, and then you can put that on your mobile phone. If the web page gets updated, the calendar gets updated as well. You get to take this data with you, and you don’t have to duplicate the data. It’s just so easy to do; I’ve just started doing it automatically now. I want to try to turn as many of my websites as possible into APIs, and the easiest way to do that is by providing microformats.
Paul and Simon did a great job of selling the benefits of providing an API and of having an API if you are a company. But you have to convince people, as someone mentioned, to do this, to put in the money and effort, and it can be a lot of hassle. Microformats are a nice starting point. You just convince people to throw in a few class names; you don’t even have to tell anyone. It’s as simple as that. Straight away, you’ve allowed access to your data.
That’s what Norm did at Yahoo! Europe. He just threw in a few class names on Yahoo! Local, I think it was, and suddenly there were thousands and thousands of hReviews, hCards, and all sorts of stuff on the web. Anybody could go out there with a parser and use that data. It was essentially a brand new API the next morning. So, it’s a good way to start. APIs can be intimidating, and microformats are a nice sort of way of getting in the door of the API mentality, this idea of opening up your data.
We’ve got one way down in the front. Paul, down here in the second row.
Audience member: Hi there. I’m the editor of a big cultural website called the 24 Hour Museum. We’ve done RSS for four years now. It’s a fantastic part of what we do — it’s the bloodstream that makes everything work and join together. We’re really fascinated by APIs and web services, and we’ve got a fantastic live database, which is added to and kept live all the time by museums, galleries and heritage sites all over Britain. What worries me is, if we are looking to build an API interface and it takes six months to put together, how long have we got of good, solid, reliable life working with that API interface and those standards before things move on again? This is what may be holding back the big cultural websites, government funders and this sort of stuff. This is what’s holding us back.
Jeremy Keith: Did everyone hear the question? How do you start building an API? It could take six months, which is what’s holding people back; it seems like such a complicated thing to do, and in six months’ time it might be out of date, and then you have to go to the next version of the API.
I would say you already have an API because you’re providing RSS feeds. RSS is XML; it’s a RESTful interface onto your data. You probably have lists of things. If you can provide an RSS equivalent for everything on your site, that’s wonderful. A lot of sites do this already: Flickr, Upcoming, all of these sites.
Pretty much anything you can get in a list, you can get as an RSS feed: my latest pictures, latest pictures from my friends, my latest events, links with a certain tag. As well as being able to get that on the website or through the official API, which is more complex, there is always an RSS feed for this stuff because it’s generally pretty easy to put an RSS feed together. It’s always got the same structure, an item with a description, title and a URL, that’s the important bit.
Once you’ve got that, you already have an API, because people can just grab that URL, parse it (it’s XML), and do what they want with it. So, providing RSS is providing an API. If you also use microformats on the site and you already have listings (you can use hListings, hReviews, all these little bits of data), then if you don’t have the budget or the time to put into a full API, that’s where you should start, just doing the small stuff. It’s pretty simple. RSS is a great way to start, because RSS is an API: it’s a RESTful interface onto data. So RSS, very cool stuff. Atom as well; I’m not going to get into a flame war.
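Consuming a feed as an API really is that small. A sketch, with a placeholder feed address, and SimpleXML standing in for whatever parser you prefer:

    <?php
    // A sketch of treating an RSS feed as an API: grab the URL, parse the
    // XML, do what you want with each item. The feed address is a placeholder.
    $feed = simplexml_load_file('http://example.com/latest.rss');

    foreach ($feed->channel->item as $item) {
        // every item has the same shape: title, link, description
        echo '<a href="' . htmlspecialchars($item->link) . '">'
           . htmlspecialchars($item->title) . '</a>: '
           . htmlspecialchars($item->description) . '<br/>';
    }
    ?>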
Do we have time for another question? I think we do.
Suw Charman: Hi. Actually, it’s not a question; I hope you’ll forgive me. I was hoping to do a little Birds of a Feather session for the Open Rights Group. Basically, if everyone wants to go grab lunch, then we will also meet up in the park. Then maybe we can heckle your microformats.
Jeremy Keith: I think maybe a Jets versus Sharks sort of situation would be good. We can face off with each other. That’s what we’ll do. So, Open Rights Group meeting in the park, and microformats meeting in the park as well. Right back there.
I’ve ordered some sandwiches; I hope they have shown up. They’re here? Great, sandwiches are here, but they’re micro sandwiches so they’re not going to go very far! You might want to grab some sushi down the road or some sandwiches from a nearby shop. I’ll head over to the park now and we’ll get things started. Thanks for the questions; great stuff!
[Applause]
A talk I gave at Reboot 8 in Copenhagen.
Web 2.0. Love the term or hate it, you’ve certainly heard it. Even if you’re a hardened cynic and you pride yourself on not drinking Tim O’Reilly’s koolaid, it’s hard to deny that something is going on: something new, something that is just the start of a brave new world 2.0.
The theme of this year’s Reboot is renaissance. It doesn’t take much of a stretch to compare that term with the ubiquitous “Web 2.0”.
The common perception of 15th-century Northern Italy is as the birthplace of a whole new movement in art and culture: a Culture 2.0, if you will. We tend to think of the Renaissance as an almost revolutionary movement, sweeping aside the old-school 1.0 dark ages.
But the Renaissance didn’t come out of nowhere. The word itself means rebirth, not birth. The movers and shakers of the Renaissance — the analogerati of Florence — weren’t trying to make a break with the past. They were trying to get back to their roots. At its heart, the Renaissance was a very conservative movement with an emphasis on reviving and preserving classical ideas. By classical, I mean Greek and Roman. There is a direct line of descent from the Acropolis in Athens to the beautiful buildings built in Copenhagen during the Danish Renaissance. The building blocks of the Renaissance were centuries-old ideas about mathematics, aesthetics, and science.
There is a lesson for us there. With all this talk of a Web 2.0, there’s a danger that we as web developers, whilst looking to the future, are forgetting our past. In our haste to forge a new kind of World Wide Web, we run the risk of destroying the fundamental building blocks that helped create the Web that we fell in love with in the first place.
I don’t intend to run through all the building blocks that form the foundation of the Web. Each one deserves its own praise. HTTP, for example, the protocol that enables the flow of pages on the Web, is worthy of its own love letter.
I’d like to focus on one very small, very simple, very beautiful building block: the hyperlink.
The hyperlink is an amazing solution to an old problem. That problem is classification.
Language is the most powerful tool ever used by man. Together with its offspring writing, language enables us to document things, ideas, and experiences. I can translate a physical object into a piece of information that can be later retrieved, not only by myself, but by anyone. But there are economies of scale with this kind of information storage and retrieval. The physical world is a very, very big place filled with a multitude of things bright and beautiful; creatures great and small. If we could use the gift of language to store and retrieve information on everything in the physical world, right down to the microscopic level, the result would be transcendental.
To see a world in a grain of sand
And heaven in a wild flower
The first person to seriously tackle the task of cataloguing the world was born after the Renaissance. Bishop John Wilkins lived in England in the 17th century. He was no stranger to attempting the seemingly impossible. He proposed interplanetary travel three centuries before the invention of powered flight. He is best remembered for his 1668 work, An Essay towards a Real Character and a Philosophical Language.
The gist of Wilkins’s essay is explained by Jorge Luis Borges in El idioma analítico de John Wilkins (The Analytic Language of John Wilkins).
He divided the universe in forty categories or classes, these being further subdivided into differences, which was then subdivided into species. He assigned to each class a monosyllable of two letters; to each difference, a consonant; to each species, a vowel. For example: de, which means an element; deb, the first of the elements, fire; deba, a part of the element fire, a flame.
You can find more delvings into Borges’s essay on Matt Webb’s weblog, the fittingly named interconnected.org.
The problem with Wilkins’s approach will be obvious to anyone who has ever designed a relational database. Wilkins was attempting to create a one-to-one relationship between words and things. Apart from the sheer size of the task he was attempting, Wilkins’s rigidity meant that his task was doomed to fail.
Still, Wilkins’s endeavour was a noble one at heart. One of his contemporaries recognised the value and scope of what Wilkins was attempting.
Gottfried Wilhelm von Leibniz possessed one of the finest minds of his, or any other, generation. It’s a shame that his talent has been overshadowed by the spat between Newton and himself caused by their simultaneous independent invention of calculus.
Leibniz wanted to create an encyclopaedia of human knowledge that was free from the restrictions of strict hierarchies or categories. He recognised that concepts and notions could be approached from different viewpoints. His approach was more network-like, with its many-to-many relationships.
Where Wilkins associated concepts with sounds, Leibniz attempted to associate concepts with symbols. But he didn’t stop there. Instead of just creating a static catalogue of symbols, Leibniz wanted to perform calculations on these symbols. Because the symbols correlate to real-world concepts, this would make anything calculable. Leibniz believed that through a sort of algebra of logic, a theoretical machine could compute and answer any question. He called this machine the Calculus ratiocinator. The idea is a forerunner of Turing’s universal machine.
The general idea of a computing machine is nothing but a mechanisation of Leibniz’s calculus ratiocinator. - Norbert Wiener, Cybernetics, 1948
Let me tell you about another theoretical device. It’s called the memex (short for “memory extender”). This device was proposed by Vannevar Bush in 1945, in an article in The Atlantic Monthly called As We May Think. Bush described the memex as being electronically linked to a library of microfilm. The device, contained within a desk, would be capable of following cross-references between books and films. This almost sounds like hypertext.
But there may be a form of proto-hypertext that precedes the memex.
In recent years, the works of James Joyce have been revisited and re-examined through the prism of hypertext. Ulysses and Finnegans Wake make sense when viewed not linearly, but as networks of interconnected ideas. Marshall McLuhan, in his writing on communication technology, was heavily inspired by Joyce. The medium was very much the message.
For most of us, Finnegans Wake remains an impenetrable book, at least in the narrative sense. It might make more sense to us if we experienced a condition called apophenia: the perception of connections and meaningfulness in unrelated things.
This isn’t necessarily an affliction. In his book Pattern Recognition, William Gibson describes an apopheniac cool-hunter hired by marketers to detect the presence of Gladwellian tipping points in a product’s future.
Apophenia is a boon for conspiracy theorists. If you’re fond of a good conspiracy theory, I recommend staying away from the linear and predictable Da Vinci Code. For a real hot-tub of conspiracy theory pleasure, nothing beats Foucault’s Pendulum by Umberto Eco.
…luck rewarded us, because, wanting connections, we found connections — always, everywhere, and between everything. The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…
For a conspiracy theorist, there can be no better tool than a piece of technology that allows you to arbitrarily connect information. That tool now exists. It’s called the World Wide Web and it was invented by Sir Tim Berners-Lee.
There was no magical “Eureka!” moment in the invention of the Web. It was practical problem solving, not divine revelation, that resulted in the building blocks of Uniform Resource Identifiers (URIs), the Hypertext Transfer Protocol (HTTP), and the Hypertext Markup Language (HTML). Berners-Lee’s proposal built on work already done by Ted Nelson, who coined the term hypertext in 1965, and Douglas Engelbart, whose oN-Line System was one of the first working hypertext systems.
If there was anything revolutionary about the World Wide Web, it was the fact that it was not patented; instead it was declared free for all to use. That spirit of scientific sharing clearly didn’t rub off on British Telecom, which attempted to enforce a patent that, it claimed, gave it intellectual property rights over the concept of the hyperlink. The claim was, fortunately, laughed out of court.
The World Wide Web is the ultimate MVC framework. URIs are the models controlled by HTTP and viewed through HTML. While the view may seem like the least significant component, it is the simplicity of HTML that is responsible for the explosive growth of the Web.
There was nothing new about markup languages. Standard Generalised Markup Language (SGML) had been around for years. Before that, red pens allowed editors to literally mark up text to indicate meaning.
Like SGML, HTML uses tags — delineated with angle brackets — to nest parts of a document in descriptive containers called elements. The P element describes a paragraph, the H1 element a level one heading, and so on.
The shortest element is the most powerful. A stands for anchor. Nestled within the anchor element is the href attribute. This attribute, short for hypertext reference, is the conduit through which the dreams of Leibniz, Joyce, and a thousand conspiracy theorists can finally be realised.
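As a small illustration, here is a fragment of HTML nesting a heading, a paragraph, and an anchor (the URL is a placeholder):

<h1>In praise of the hyperlink</h1>
<p>
  This paragraph contains
  <a href="http://example.com/essay">a hypertext reference</a>
  to another resource.
</p>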
The vision I have for the Web is about anything being potentially connected with anything. - Tim Berners-Lee
Anybody could create anchors containing hypertext references. Just about everybody did. The Web grew exponentially in size and popularity. With every new web page and every hyperlink, the expanding Web became a more valuable and powerful aggregate resource.
This power was harnessed by Sergey Brin and Lawrence Page. The concept behind their PageRank algorithm is simple: links are votes of confidence. If a lot of links point to the same page, that page is highly regarded. By combining this idea with traditional page analysis, they created the best search engine on the Web: Google.
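As a toy sketch of that vote-counting (illustrative JavaScript over a hypothetical three-page web, not Google’s actual implementation):

// A toy PageRank sketch: each page repeatedly passes a damped share of its
// rank along its outgoing links; well-linked pages accumulate higher rank.
const links = { a: ['b', 'c'], b: ['c'], c: ['a'] }; // hypothetical tiny web
const pages = Object.keys(links);
const d = 0.85; // damping factor, the value suggested in the original paper
let rank = Object.fromEntries(pages.map(p => [p, 1]));

for (let i = 0; i < 20; i++) { // a handful of iterations converges here
  const next = Object.fromEntries(pages.map(p => [p, 1 - d]));
  for (const p of pages) {
    for (const q of links[p]) {
      next[q] += d * rank[p] / links[p].length;
    }
  }
  rank = next;
}
console.log(rank); // page c, with the most incoming votes, ranks highest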
In order to measure the PageRank of everything on the Web, the googlebot spider was unleashed. In some ways, the googlebot is like any other user agent: it visits web pages and follows links. It’s also possible to see the googlebot as a kind of quantum device.
When you or I visit a web page that has, say, ten links, there are two theories about what happens next. According to the idea of superposition, the next page we visit exists only as a probability. Not until we make a decision and click on a link does the page resolve into one of the ten possibilities.
The alternative view is the many worlds interpretation. According to this theory, visiting a page with ten links would cause the universe itself to branch into ten different universes. You or I will remain in the universe that matches the link we clicked. But the googlebot is different: it follows all ten links at once, spidering alternate worlds.
I have first-hand experience of Google’s stockpile of parallel universes. To celebrate Talk Like a Pirate Day, I created a simple server-side script. You can pass in the URL of a web page and the script will display the contents interspersed with choice pirate phrases such as “arr!”, “shiver me timbers!”, and “blow me down!”. The script also rewrites any hrefs in the page so that the pages they point to are also run through the pirate-speak transmogrifier.
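A minimal sketch of such a transmogrifier in modern Node.js (illustrative, not the script I actually ran) might look like this:

// A pirate-speak proxy: fetch a page, sprinkle in pirate phrases,
// and rewrite its links to route back through the proxy.
const http = require('http');

const PHRASES = ['Arr!', 'Shiver me timbers!', 'Blow me down!'];
const pick = () => PHRASES[Math.floor(Math.random() * PHRASES.length)];

http.createServer(async (req, res) => {
  // The target page is passed in the query string: /?url=http://example.com/
  const target = new URL(req.url, 'http://localhost').searchParams.get('url');
  if (!target) {
    res.end('Pass a page as ?url=...');
    return;
  }
  let html = await (await fetch(target)).text();
  // Intersperse choice phrases after full stops (crude, but piratical)...
  html = html.replace(/\. /g, () => `. ${pick()} `);
  // ...and rewrite hrefs so that followed links are transmogrified too.
  html = html.replace(/href="(http[^"]*)"/g,
    (match, url) => `href="/?url=${encodeURIComponent(url)}"`);
  res.setHeader('Content-Type', 'text/html');
  res.end(html);
}).listen(8080);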
It was amusing. It even appeared on Metafilter. The problems started later on. I began to get irate emails, even phone calls, from website owners demanding that I remove their files from my server. I was even threatened with the Digital Millennium Copyright Act. I was fielding angry emails from people all over the world in charge of completely disparate websites.
The googlebot had landed on my Talk Like a Pirate page (perhaps it followed a link from Metafilter). Then it began to spider. It never stopped. Somewhere within the Googleplex there is a complete one-to-one scale model of the World Wide Web written entirely in pirate-speak.
Now when site owners do a search for their websites to check their ranking, the pirate facsimile often appears before the original. I can’t help it if my Googlejuice is better than theirs.
I began to feel remorse when I heard from the proprietor of a spinal surgery clinic in Florida who told me that potential customers were being scared away by messages detailing “professional treatment, me hearties!”
I have since added a robots.txt file, but it can be a long time between googledances. Parallel universes don’t just disappear overnight. I guess the googlebot isn’t a quantum device after all because, it seems, it can’t be everywhere at once. That’s where it falls down: how is it supposed to deal with websites that are updated frequently?
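For reference, a couple of lines of robots.txt is all it takes to ask well-behaved spiders to keep clear (the path here is hypothetical):

User-agent: *
Disallow: /pirate/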
Trying to define what a blog is can be a slippery task. Most definitions include the words “online journal”. I’ve been told that my online journal isn’t a blog because I don’t have comments enabled. I must have missed the memo.
What really makes a blog a blog isn’t the addition of comments or the fact that it’s an online journal. The defining characteristic of a blog is the presence of permalinks. Permalink, a portmanteau of permanent and link, should be a tautology. All links should be permanent.
Permalinks, and by extension, blogs, encourage linking. Instead of simply saying “here’s my opinion”, blogs allow us to say “here’s a permanent linkable address for my opinion.”
The earliest blogs were link logs: places you could visit to find links that somebody thought were worth visiting. Even now, I find that the best blog posts are often the ones that point out connections between seemingly separate links. Bloggers are natural apopheniacs: conspiracy theorists who can back up their claims not just with references to their sources but with hypertext references… hrefs.
Even though all blogs have permalinks, there’s something inherently transient in the nature of blogging. It’s a tired cliché but the aggregate web of blogs really is like a conversation. The googlebot can’t hope to follow all the links spawned by all these voices speaking at once. Technorati does okay though.
Technorati is also the breeding ground for some infectious little ideas called microformats. Microformats embrace and extend the Hypertext Markup Language. By making use of the little-known rel attribute, the anchor element can be made even more powerful. In XFN, the XHTML Friends Network, rel values such as friend, colleague, and met add extra semantic weight to a link (as yet there is no Enemies Network, but Brian Suda and I are working on a draft specification).
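In practice, an XFN link is just an ordinary anchor with extra rel values attached (the URL and name are placeholders; friend, met, and colleague are genuine XFN values):

<a href="http://example.com/brian/" rel="friend met colleague">Brian</a>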
The bonus semantics offered by microformats can be harvested and collated to form a clearer picture of the connections that were previously less defined.
Microformats are the nanotechnology for building a semantic web.
That’s lowercase semantic web. The uppercase Semantic Web still lies in our future. Another theoretical future technology is XHTML 2, wherein any element whatsoever can have a hypertext reference.
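Under that draft, a hypertext reference could hang off anything; hypothetical markup might look like this:

<p href="http://example.com/source">
  In XHTML 2, this entire paragraph would be a link.
</p>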
Perhaps we aren’t worthy of such a bounty of hrefs. Right now hrefs exist only in the anchor element and yet still we manage to abuse them.
<a href="javascript:...">
I like JavaScript. It is, as Douglas Crockford put it, the world’s most misunderstood programming language. The problem lies not with the JavaScript language but with its integration into hypertext documents.
The javascript pseudo-protocol is an abomination. It is not a valid hypertext reference.
<a href="#" onclick="...">
Using a pointless empty internal page reference is almost as bad. If you can’t provide a valid resource for the href value, don’t use the anchor element. Anchors are for links. Don’t treat them as empty husks upon which you hang some cool Ajaxian behaviour. Respect the link.
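If you must attach scripted behaviour to a link, give the href a real destination and layer the behaviour on top. A minimal sketch of that approach (the URL and id are placeholders):

<a href="/gallery/photo.html" id="photo-link">View the photo</a>

<script>
  // Enhance the link only when JavaScript is available; without it,
  // the href still leads to a perfectly good resource.
  var link = document.getElementById('photo-link');
  link.onclick = function () {
    window.open(this.href, 'photo'); // scripted behaviour layered on top
    return false; // cancel the default navigation only when script runs
  };
</script>

Without JavaScript, the anchor still does what anchors do best: it links.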
If we value and cherish the links of today, who knows what the future may bring?
Maybe Bruce Sterling is right. Maybe we’ll have an internet of things. Spimes, blogjects, thinglinks… whatever the individual resources are called, they’ll have to be linkable: hyperlinked, addressable objects existing in our regular, non-hyper space.
It sounds like an exciting future. We live in an equally exciting present.
We have all come together here in Copenhagen because of how much we love the World Wide Web. I bet every one of you has a story to tell about the first time you “got” the Web. Remember that thrill? Remember the realisation that you were interacting with something potentially never-ending: a borderless labyrinth of information, all interconnected through the beautiful simplicity of the hyperlink. We may have grown accustomed to this miracle, but that doesn’t make it any less wondrous.
We are storytellers. No longer huddled around separate campfires, we now sit around a virtual hearth, warmed by the interweaving tales told by our brothers and sisters. Everyone is connected to everyone else by just six degrees of separation. Thanks to the hyperlink, we can find those connections and make them tangible.
The dream of hypertext has become a reality.