Tags: APIs

Just what is it that you want to do?

The supersmart Scott Jenson just gave a talk at The Web Is in Cardiff, which was, by all accounts, excellent. I wish I could have seen it, but I’m currently chilling out in Florida and I haven’t mastered the art of bilocation.

Last week, Scott wrote a blog post called My Issue with Progressive Enhancement (he wrote it on Google+, which is why you might not have seen it).

In it, he takes to task the idea that—through progressive enhancement—you should be able to offer all functionality to all browsers, thereby forgoing the use of newer technologies that aren’t universally supported.

If that were what progressive enhancement meant, I’d be with him all the way. But progressive enhancement is not about offering all functionality; progressive enhancement is about making sure that your core functionality is available to everyone. Everything after that is, well, an enhancement (the clue is in the name).

The trick to doing this well is figuring out what is core functionality, and what is an enhancement. There are no hard and fast rules.

Sometimes it’s really obvious. Web fonts? They’re an enhancement. Rounded corners? An enhancement. Gradients? An enhancement. Actually, come to think of it, all of your CSS is an enhancement. Your content, on the other hand, is not. That should be available to everyone. And in the case of task-based web thangs, that means the fundamental tasks should be available to everyone …but you can still layer more tasks on top.

If you’re building an e-commerce site, then being able to add items to a shopping cart and being able to check out are your core tasks. Once you’ve got that working with good ol’ HTML form elements, then you can go crazy with your enhancements: animating, transitioning, swiping, dragging, dropping …the sky’s the limit.
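
That baseline can be as plain as an HTML form (a sketch; the field names and URL here are invented):

<form method="post" action="/cart">
  <input type="hidden" name="product" value="widget-123">
  <button type="submit">Add to cart</button>
</form>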

This is exactly what Orde Saunders describes:

I’m not suggesting that you try and replicate all your JavaScript functionality when it’s disabled, above all that’s just not practical. What you should be aiming for is being able to complete the basics - for example adding a product to a shopping cart and then checking out. This is necessarily going to be clunky as judged by current standards and I suggest you don’t spend much time on optimising this process.

Scott asked about building a camera app with progressive enhancement.

Here again, the real question to ask is “what is the core functionality?” Building a camera app is a means to an end, not the end itself. You need to ask what the end goal is. Perhaps it’s “enable people to share photos with their friends.” Going back to good ol’ HTML, you can accomplish that task with:

<input type="file" accept="image/*">

Now that you’ve got that out of the way, you can spend the majority of your time making the best damn camera app you can, using all the latest browser technologies. (Perhaps WebRTC? Maybe use a canvas element to display the captured image data and apply CSS filters on top?)
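
Something like this sketch, say, where the camera API is layered on top of the file input in browsers that support it (the element IDs are invented, and older browsers need prefixed or callback-style equivalents):

<input type="file" accept="image/*" id="photo-upload">
<video id="viewfinder" hidden></video>

if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia({ video: true })
    .then(function (stream) {
      // swap the basic file input for a live viewfinder
      document.getElementById('photo-upload').hidden = true;
      var video = document.getElementById('viewfinder');
      video.srcObject = stream;
      video.hidden = false;
      video.play();
    })
    .catch(function () {
      // no camera or no permission: the file input still works
    });
}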

Scott says:

My point is that not everything devolves to content. Sometimes the functionality is the point.

I agree wholeheartedly. In fact, I would say that even in the case of “content” sites, functionality is still the point—the functionality would be reading/hearing/accessing content. But I think that Scott is misunderstanding progressive enhancement if he thinks it means providing all the functionality that one can possibly provide.

Mat recently pointed out that there are plenty of enhancements on the Boston Globe site that require JavaScript, but the core functionality is available to everyone.

Scott again:

What I’m chaffing at is the belief that when a page is offering specific functionality, Let’s say a camera app or a chat app, what does it mean to progressively enhance it?

Again, a realtime chat app is a means to an end. What is it enabling? The ability for people to talk to each other over the web? Okay, we can do that using good ol’ HTML—text and form elements—with full page refreshes. That won’t be realtime. That’s okay. The realtime part is an enhancement. Use Web Sockets and WebRTC (in the browsers that support them) to provide the realtime experience. But everyone gets the core functionality.
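
As a sketch of that layering (the endpoint and element names are invented), the form keeps working with full page refreshes, and WebSockets upgrade it where they’re available:

var form = document.getElementById('chat-form');
if (form && 'WebSocket' in window) {
  var socket = new WebSocket('wss://example.com/chat'); // hypothetical endpoint
  socket.onmessage = function (event) {
    // show incoming messages without a page refresh
    var message = document.createElement('li');
    message.textContent = event.data;
    document.getElementById('messages').appendChild(message);
  };
  form.addEventListener('submit', function (event) {
    event.preventDefault(); // suppress the full page refresh
    socket.send(form.elements.message.value);
    form.reset();
  });
}
// without JavaScript (or WebSockets), the form still submits normally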

Like I said, the trick is figuring out what’s core functionality and what’s an enhancement.

Ethan provides another example. Let’s say you’re building a browser-based rich text editor that uses JavaScript to do all sorts of formatting on the fly. The core functionality is not the formatting on the fly; the core functionality is being able to edit text.

If progressive enhancement truly meant making all functionality available to everyone, then it would be unworkable. I think that’s a common misconception around progressive enhancement; there’s this idea that using progressive enhancement means that you’re going to spend all your time making stuff work in older browsers. In fact, it’s the exact opposite. As long as you spend a little bit of time at the start making sure that the core functionality works with good ol’ fashioned HTML, then you can spend most of your time trying out the latest and greatest browser technologies.

As Orde put it:

What you are going to be spending the majority of your time and effort on is the enhanced JavaScript version as that is how the majority of your customers will be experiencing your site.

The other Scott—Scott Jehl—wrote a while back:

For us, building with Progressive Enhancement moves almost all of our development time and costs to newer browsers, not older ones.

Progressive Enhancement frees us to focus on the costs of building features for modern browsers, without worrying much about leaving anyone out. With a strongly qualified codebase, older browser support comes nearly for free.

Approaching browser support this way requires a different way of thinking. For everything you’re building, you need to ask “is this core functionality, or is it an enhancement?” and build accordingly. It takes a bit of getting used to, but it gets easier the more you do it (until, after a while, it becomes second nature).

But if you’re thinking about progressive enhancement as “devolving” down—as Scott Jenson describes in his post—then I think you’re on the wrong track. Instead it’s about taking care of the core functionality quickly and then spending your time “enhancing” up.

Scott asks:

Shouldn’t we be allowed to experiment? Isn’t it reasonable to build things that push the envelope?

Absolutely! And the best and safest way to do that is to make sure that you’re providing your core functionality for everyone. Once you do that, you can go nuts with the latest and greatest experimental envelope-pushing technologies, secure in the knowledge that you don’t even need to worry about the fact that they don’t work in older browsers. Geolocation! Offline storage! Device APIs! Anything you can think of, you can use as a powerful enhancement on top of your core tasks.

Once you realise this, it’s immensely liberating to use progressive enhancement. You can have the best of both worlds: universal access to core functionality, combined with all the latest cutting-edge technology.

Get excited and make things with science

There are many reasons to go to South by Southwest Interactive: meeting up with friends old and new being the primary one. Then there’s the motivational factor. I always end up feeling very inspired by what I see.

This year, that feeling of inspiration was front and centre. First off, I tried to impart some of it on the How to Rawk SXSW panel, which was a lot of fun. Mind you, I did throw some shit at the fan by demonstrating how wasteful the overstuffed schwag bags are. I hope I didn’t get MJ into trouble.

My other public appearance was on The Heather Gold Show which was bags of fun. With a theme of Get Excited and Make Things, the topic of inspiration was bandied about a lot. It was a blast. Heather is a superb host and the other guests were truly inspirational. I discovered a kindred spirit in fellow excitable geek, Gina Trapani.

The actual panels and presentations at SXSW are the usual mixture of hit and miss, although the Cooking For Geeks presentation was really terrific. Any presenter who hacks the audience’s taste buds during a presentation is alright with me.

But by far the most inspirational thing I’ve seen was a panel hosted by Tantek on Open Science. The subject matter was utterly compelling and the panelists were ludicrously articulate and knowledgeable.

The URLs were flying thick and fast: the Signtific thought experiment game, the collaborative Galaxy Zoo—now joined by Moon Zoo—and the excellent Spacehack directory.

I was struck by the sheer volume of scientific data and APIs out there now. And yet, we aren’t really making use of it. Why aren’t we making mashups using Google Mars? Why haven’t I built a Farmville-style game with Google Moon?

Halfway through the panel, I turned to Riccardo and whispered, “We should organise a Science Hack Day.”

I’m serious. It would probably be somewhere in London. I have no idea where or when. I have no idea how to get a venue or sponsors. But maybe you do.

What do you think? Everyone I’ve mentioned the idea to so far seems pretty excited about it. I’ll try to set up a wiki for brainstorming venues, sponsors, APIs, datasets and all that stuff. In the meantime, feel free to leave a comment here.

I got excited. Now I want to make things …with science! Are you with me?

Loosely joined

The mighty Zeldman has written a thought-provoking piece called The Vanishing Personal Site which chronicles the changing nature of personal publishing. Where once we had a central URL that defined our online presence, people are increasingly publishing in fragments distributed across services like Twitter, Pownce, Flickr and Magnolia. It was this fragmentation that spurred my first dabblings with APIs to produce Adactio Elsewhere which I did three years ago to the day.

Jeff takes a different approach, incorporating all of those other publishing points directly back into his site rather than into a separate aggregation area. This approach seems to be gaining ground.

One of the comments to Jeffrey’s post points to the newly launched website of the architect Denna Jones built in part by Jon Tan who describes the thinking behind it. The site is driven entirely by third-party services like Tumblr, Del.icio.us and Flickr. Jon, by contrast, has his third-party publishing aggregated on a page called Asides, similar to Adactio Elsewhere.

I think most people, even if they are micro-publishing in many places, still have one URL that they consider their online representation. It might be a blog, it might be a Flickr profile, or for many people, it might be a Facebook account.

It will be interesting to watch these trends develop. Something else I’m going to watch is Jon Tan’s website. It’s dripping with gorgeous typography wrapped in an elastic layout. How is it that I haven’t come across this site before? Why wasn’t I informed?

Mi.gration

I’ve used del.icio.us for quite a while now. I’m storing 1159 bookmarks, each one of them tagged. It works just fine but it also feels a little, I don’t know …stale. There is supposedly a redesign in the works but I’m not sure that I want to wait around any longer to find out if they’re finally going to put some microformats in the markup.

Instead, I’m moving over to Magnolia. I’ve had a Magnolia account for years but I’ve never really used it. I didn’t see the point while I had a del.icio.us account. But whereas del.icio.us appears to have stagnated, Magnolia seems to be constantly innovating. Also, it uses microformats. There’s also the fact that I know Larry and I’ve briefly met Todd (lovely gents, both) but I don’t know Joshua Schachter. That shouldn’t matter but it kind of does.

Moving from del.icio.us to Magnolia is very straightforward. But that alone wasn’t going to be enough for me. I’m also accessing my del.icio.us bookmarks through the API. It turns out that Magnolia provides an ingenious way to ease my pain. As well as providing its own API, Magnolia also provides a mirror of the del.icio.us API. All I had to do was change some URL endpoints and I had Adactio Elsewhere switched over in no time. Other services take note: providing mirrored versions of your competitors’ APIs eases the pain of migration.
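
Because the mirror matches the del.icio.us API method for method, the switch really is just a change of base URL. A minimal sketch, assuming all your API calls are built from one root constant (the Magnolia path below is a placeholder rather than the documented one):

var API_ROOT = 'https://ma.gnolia.com/api/mirrord/v1/'; // placeholder; was 'https://api.del.icio.us/v1/'
function apiUrl(method) {
  return API_ROOT + method;
}
apiUrl('posts/recent'); // the method names stay exactly the same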

I’ve updated my FeedBurner RSS feed to point it at my Magnolia links instead of my del.icio.us links. If you were subscribed to my del.icio.us feed separately, you’ll probably want to update your feedreader to point to my Magnolia links instead.

It remains to be seen whether I’ll stay at Magnolia. Even though it is functionally and cosmetically superior to del.icio.us, that might not be enough. After all, Jaiku is superior to Twitter in almost every way—design, markup, reliability—but Twitter still wins. That’s mostly because that’s where all my friends are. Right now my bookmarking friends are split fairly evenly between del.icio.us and Magnolia. Then again, I’ve never really made much use of the “social” part of “social bookmarking”.

So who knows? Maybe I’ll end up moving back to del.icio.us at some stage. It’s reassuring to know that moving my data around between these services is pretty straightforward: I can export from Magnolia and import into del.icio.us any time I want.

Help me at Hackday

Hackday is almost upon us. Tomorrow, I—along with hundreds of other geeks—will be converging on Alexandra Palace in North London for two days of dev fun.

I’ve got an idea for what I want to do but I think I’ll need lots of help. At XTech, Reboot, @media and other recent geek gatherings I’ve been asking who’s coming and who fancies helping me out. I’ve managed to elicit some interest from some very smart people so I’m hoping that we can hack something fun together.

Here’s the elevator pitch for my idea: online publishing is hacking and slaying.

Inspired by Justin Hall’s idea of Passively Multiplayer Online Games and Gavin Bell’s musings on provenance, I want to treat online publishing as an ongoing way of building up a character. In Dungeons and Dragons or World of Warcraft, you acquire attributes like stamina, strength, dexterity and skill over time. Online, you publish Flickr pictures, del.icio.us links, Twitter updates and blog posts over time. All of this published material contributes to your online character and I think you should be rewarded for this behaviour.

It’s tangentially related to the idea of a lifestream which uses RSS to create a snapshot of your activity. By using APIs, I’m hoping to be able to build up a much more accurate, long-term portrait.

I’m going to need a lot of clever hackers to help me come up with the algorithms to figure out what makes one person a more powerful Flickrer or Twitterer than another. Once the characteristics have all been figured out, we can then think about pitching people against each other. Maybe this will involve a twenty-sided die, maybe it will be more like Top Trumps, or maybe it could even happen inside Second Life or some other environment that has persistent presence (the stateless nature of the Web makes it difficult to have battles on a Web site). I have a feeling that good designers and information architects would be able to help me figure out some other fun ways of representing and using the accumulated data. Perhaps we can use geo data to initiate battles between warriors in the same geographical area.

Sound like fun? Fancy joining in? Seek me out on the day or get in touch through my backnetwork profile.

Of course, if you want to do something really cool at hackday, you’ll probably be dabbling with arduino kits, blubber bots and other automata. When I was in San Francisco a few weeks ago, nosing around the Flickr offices, Cal asked me what I was planning for Hackday. “Well,” I said, “it involves using APIs to…” “Pah!” he interrupted, “APIs are passé. Hardware is where it’s at.”

Machine Tags of Loving Grace

One of the highlights of Refresh Edinburgh for me was listening to Dan Champion give a presentation on his new site, Revish. He talked through the motivation, planning and production of the site. This was an absolute joy to listen to and it was filled with very valuable practical advice.

Revish is a book review site with a heavy dollop of social interaction. Even in its not-quite-finished state, it’s pushing all the right buttons with me:

  • The markup is clean, semantic and valid.
  • The layout is uncluttered and flexible.
  • The URL structure is logical.
  • The data is available through microformats, RSS and an API.

There’s some really smart stuff going on with the sign-up process. If your chosen username matches a Flickr username, it automatically grabs the buddy icon. At the sign-up stage you also have the option of globally disabling any Ajax on the site—an accessibility option that I advocate in my book. Truth be told, there isn’t yet any Ajax on the site but the availability of this option shows a lot of forethought.

Also at the sign-up stage, there’s a quick’n’dirty auto-discovery of contacts wherever there’s overlap between Revish usernames and your Flickr contacts. This is very cool—one small step toward portable social networks.

One of the features dovetails nicely with Richard’s recent discussion about machine tags and ISBNs. If you tag a picture of a book on Flickr with book:isbn=[ISBN], that picture will then show up on the corresponding Revish page. You can see it in action on the page for Bulletproof Ajax.

Oh, and don’t worry about whether a book has any reviews on Revish yet: the site uses Amazon’s API to pull in the basic book info. As long as a book has an ISBN, it has a page on Revish. So the Revish page for a book can effectively become a mashup of Amazon details and Flickr pictures (just take a look at the page for John’s new microformats book).

I like this format for machine tagging information related to books. As pointed out in a comment on Richard’s post, this opens up the way for plenty of other tagging like book:title="[book title]" and book:author="[author name]".
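
Since every machine tag has the same three-part shape, parsing one is trivial. A quick sketch (the optional quotes cover values like book:title="A Book Title"):

function parseMachineTag(tag) {
  // namespace:predicate=value, with optional quotes around the value
  var match = /^(\w+):(\w+)=["']?(.*?)["']?$/.exec(tag);
  return match && {
    namespace: match[1],
    predicate: match[2],
    value: match[3]
  };
}
parseMachineTag('book:isbn=0713998393');
// => { namespace: 'book', predicate: 'isbn', value: '0713998393' }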

I’ve started to implement this machine tag format here. If you look at my last post—which has a whole list of books—you’ll see that I’ve tagged the post with a bunch of machine tags in the book:isbn format. By making a quick call to Amazon, I can pull in some information on each book. For now I’m just displaying a small cover image with a link through to the Amazon page.

That last entry is a bit of an extreme example; I’m assuming that most of the time I’ll be adding at most one book machine tag to a post, probably to accompany a review.

Machine tags (or triple tags) are still a relatively young idea. Most of the structures so far have been emergent, like Upcoming and Last.fm’s event tags and my own blog post machine tags. There’s now a site dedicated to standardising on some namespaces—MachineTags.org has a blog, a wiki and a mailing list. Right now, the wiki has pages for existing conventions like geo tagging and drafts for events and book tagging. This will be an interesting space to watch.

Ghost in the Machine Tags

Richard has some very nifty ideas up his sleeve for the next iteration of his site. Some of these are design-related and some are technical. He just gave a peek into the technical side of things by explaining how he’s using tags to tie content together. Not just any old tags, mind: machine tags.

You may remember that Flickr rolled out machine tags a while back. That’s their name for what are basically triple tags: tags that take the form of namespace:predicate=value. There’s some tight integration between Upcoming and Flickr using the machine tag upcoming:event=[ID]. You can see a looser coupling (one way rather than bi-directional) in the recently-updated events section of Last.fm which uses lastfm:event=[ID]. As an example, take a look at the page for a Low Lows concert I went to and took pictures of.

Richard is making use of machine tagging to associate his Flickr pictures with his blog posts. He’s also planning to use Amazon’s API to associate ISBN numbers with blog posts, raising the question of which namespace to use:

We therefore need a triple-tag version of the ISBN tag, and here’s my suggestion: iso:isbn=0713998393. ISBN is a standard recognised by the International Organisation for Standardization (ISO) so I thought it made a certain sense for ISO to be the namespace. Other standardised entities could be tagged in a similar way, such as iso:issn=15340295.

Seems like a sound idea to me. I might experiment with machine tagging reviews here in that way and then pulling in complementary information from Amazon.

But that’s for another day. For now, I’ve gone ahead and integrated Flickr machine tagging here… but this works from the opposite direction. Instead of tagging my blog posts with flickr:photo=[ID], I’m pulling in any photos on Flickr tagged with adactio:post=[ID].

Now, I’ve already been integrating Flickr pictures with my blog posts using regular “human” tags, but this is a bit different. For a start, to see the associations using the regular tags, you need to click a link (then the Hijax-y goodness takes over and shows any of my tagged photos without a page refresh). Also, this searches specifically for any of my photos that share a tag with my blog post. If I were to run a search on everyone’s photos, the amount of false positives would get really high. That’s not a bug; it’s a feature of the gloriously emergent nature of human tagging.

For the machine tagging, I can be a bit more confident. If a picture is tagged with adactio:post=1245, I can be pretty confident that it should be associated with http://adactio.com/journal/1245. If any matches are found, thumbnails of the photos are shown right after the blog post: no click required.
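
Under the hood, a lookup like that maps onto Flickr’s flickr.photos.search method, which accepts a machine_tags parameter. Here’s a sketch of the call, using fetch as a modern stand-in and a placeholder API key:

var url = 'https://api.flickr.com/services/rest/' +
  '?method=flickr.photos.search' +
  '&machine_tags=' + encodeURIComponent('adactio:post=1245') +
  '&api_key=YOUR_API_KEY' +
  '&format=json&nojsoncallback=1';
fetch(url)
  .then(function (response) { return response.json(); })
  .then(function (data) {
    data.photos.photo.forEach(function (photo) {
      // assemble a square thumbnail URL from the pieces Flickr returns
      console.log('https://farm' + photo.farm + '.staticflickr.com/' +
        photo.server + '/' + photo.id + '_' + photo.secret + '_s.jpg');
    });
  });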

I’m not restricting the search to just my photos, either. Any photos tagged with adactio:post=[ID] will show up on http://adactio.com/journal/[ID]. In a way, I’m enabling comments on all my posts. But instead of text comments, anyone now has the ability to add photos that they think are related to a blog post of mine. Remember, it doesn’t even need to be your Flickr picture that you’re machine tagging: you can also machine tag photos from your contacts or anyone else who is allowing their pictures to be tagged.

I realise that I’m opening myself up for a whole new kind of spam. But any kind of spam that requires namespaced tagging on a third-party site is pretty dedicated. If someone actually goes to that much effort to put a thumbnail of an inappropriate image at the end of one of my blog posts, I probably wrote something particularly inflammatory in that post—which would make the associated thumbnail a valid comment, I guess.

I’ve been machine tagging some of my posts on Flickr.

Once again, like Upcoming and Last.fm, these are event-based. But the machine tagging would work equally well for location-based posts. So when I go up to Scotland next week and blog about it, I (or you or anybody) can then go to Flickr, find some nice pictures of Edinburgh and using the adactio namespace, associate the pictures with the blog post.

It’s a strange mixture of RESTful URLs here and taggable objects there.

If nothing else, this will be an interesting experiment. Machine tags don’t have the low barrier to entry of regular tagging but they aren’t as complex as something like RDF. It might be that they hit the sweet spot between accuracy and ease of use.

Oh, and if you find any Flickr pictures related to this blog post, tag them with adactio:post=1274.

Taking back the Web

I’m at an event called Take Back The Web. It’s a cosy little unconference aimed at non-profits and activist groups.

There’s been plenty of education and discussion going on all day, mostly around things like blogs, wikis, RSS and podcasting. I followed up the RSS talk with a little spiel about APIs and how they can be used to pull in data from other places on the web.

I’m used to attending geekier events where everyone is fairly tech-savvy, but the crowd here is mostly made up of people on the ground who want to be able to use technology but who aren’t necessarily from a technological background. It really brought home to me just how far we have to go in making this stuff less geeky and scary-sounding.

Just about everyone gets blogs, and it’s pretty easy to get started with them. Wikis are a little bit trickier, but still attainable. RSS becomes harder again: it’s still too hard to subscribe, and even the term “subscribe” is itself misleading, implying payment. As for APIs, that’s still all pretty much rocket science so I just gave a basic overview of the benefits without really discussing the nitty-gritty of programming.

Notice how the terms change in complexity along that scale: from the word blog to the term API. We’re using way too many acronyms and too much technobabble for this stuff. Of course, we can’t change the names without upsetting the geeky programmers.

I got a lot of food for thought from the day so far, even though I already know about the technologies. It’s been fascinating to see how people are using the web now and also how much more they could be doing.

The guys from mySociety/They Work For You are talking through their services now and I’ve just found out about this nifty API. I’ll have a play around with that. I’ll quiz Matthew about it later; he’s staying over with me. More grist for the bedroll.

Pictorial Ajaxitagging

I talked a while back about how I was attempting to add some extra context to my posts by pulling in corresponding tag results from Del.icio.us and Technorati, and then displaying them together through the magic of Ajax.

It struck me that there was another tag space that I had completely forgotten about: Flickr. Now at the end of any post that’s been tagged, you’ll find links entreating you to pull in any of my Flickr pics that have been likewise tagged.

This is all possible thanks to a single method of Flickr’s API. I’m reusing the same method to search for other pictures too…

I had a little epiphany in the pub the other night, chatting after the WSG meetup. I was talking about geotagging and I mentioned that it probably won’t be too long before just about every file is geotagged in the same way that just about every file already has a time stamp. Then I realised, “hey, all my blog posts have time stamps and so do all my Flickr pics!”

So I added an extra link. You can search for any pictures of mine that were taken on the same day as a journal entry. I like the extra context that provides.
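
The same search method can be scoped by the date a photo was taken, which is all this needs. A sketch, with a placeholder user ID and key:

var day = '2006-10-17'; // the date of the journal entry
var url = 'https://api.flickr.com/services/rest/' +
  '?method=flickr.photos.search' +
  '&user_id=YOUR_USER_ID' +
  '&min_taken_date=' + encodeURIComponent(day + ' 00:00:00') +
  '&max_taken_date=' + encodeURIComponent(day + ' 23:59:59') +
  '&api_key=YOUR_API_KEY' +
  '&format=json&nojsoncallback=1';
// fetch that URL and render thumbnails from the photos it returns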

While I was testing this new functionality, I couldn’t figure out why some pictures weren’t being pulled in. Looking at the post from the Opera event written on Tuesday, I expected to be able to view the pictures I took on the same night. They weren’t showing up and I couldn’t understand why not. I assumed I was doing something wrong in the code. As it turned out, the problem was with my camera. I never reset the date and time when I came back from Australia, so all the pictures I’ve taken in the last couple of weeks have been off by a few hours.

Keep your camera’s clock updated, kids. It’s valuable metadata.

Hmmm… I guess I should take a picture today to illustrate the new functionality. In the meantime, check out this older post from BarCamp to see the Ajaxitagging in action.

Melbourne calling

My time in Melbourne is almost at an end. Thanks to everyone who sent tips on places to go and things to see here. Most of my activities, as evidenced by my Flickr pics, have revolved around food. I must get around to writing it all up on Principia Gastronomica.

I took some time out from my culinary explorations to give a talk at the local Web Standards Group meetup. It was fun. I recycled my talk from d.Construct, The Joy of API. People seemed to enjoy it and there were a lot of great questions asked afterwards.

The audio from the talk at d.Construct is now available through the podcast. I’ve had the audio transcribed — using Casting Words — and I’ve posted the results here.

Simon and Paul

Simon and Paul have finished giving their presentation and very good it was too. They covered a lot of ground in a short time but they did it in a clear, easy to follow way.

As is now mandatory, the presentation was illustrated with Flickr pics including one of mine, which I wasn’t expecting.

The guys did a good job of showing how useful APIs are from inside a huge company and, from the evangelism they were doing, I expect to see Hack Days starting at other companies soon.

The talk flowed nicely into my presentation where I talked about APIs from the viewpoint of someone on the outside looking in. That’s no accident, of course: we planned the schedule that way. I think it worked out well.

API changes

If you’re using either the Flickr or Del.icio.us APIs, be aware that some changes have been made to both recently.

Cal Henderson announced on the Flickr API mailing list that…

…the API endpoints have been changed from http://www.flickr.com/services/ to http://api.flickr.com/services/

The documentation will be updated by and by. If you’re making use of the Flickr API, now would be a good time to go in and rewrite those URLs. I’ve updated Adactio Elsewhere to use the new URLs. There are no plans to get rid of the old endpoints but all developers are encouraged to make the change.

Back in May, the Del.icio.us team announced that all API requests would need to go over SSL:

If the old URL was http://del.icio.us/api/posts/get, the new URL will be https://api.del.icio.us/v1/posts/get

I missed the memo so, like Dom, I was caught out by the change. On Adactio Elsewhere, I switched over to using PHP’s curl functions to retrieve the XML files and that seems to do the trick nicely.

If you’re tinkering with either API, take note of these changes.

Ajaxitagging

Ever since I switched over to a new CMS back in February, I’ve been tagging all my journal entries. Until now, I haven’t been doing anything with those tags apart from exposing them in category elements in my RSS feed. Now that I’ve got a good head of steam going with my tags, I’ve decided to play around with them a bit.

Each journal entry page now shows the tags at the end of the post. These are linked (using rel-tag of course) to an aggregate tag page that shows any other posts with the same tag. Pretty standard stuff.

But then I thought it would be fun to tie the post in with other things I’ve tagged, not on this site but on Del.icio.us. Under the heading “Related”, you’ll find links to the same tags for my del.icio.us links.

Rather than sending you off to Del.icio.us, I’m using the Del.icio.us API to bring the results back to this site. Using a bit of Ajax, these results are displayed without a page refresh. I’m using Hijax so if JavaScript is disabled, the links will still work.
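
The Hijax pattern in miniature (a sketch using today’s DOM methods; the selector and the fragment parameter are invented):

// each "Related" link points at a real page, so it works without
// JavaScript; with it, the same resource is fetched and shown in place
document.querySelectorAll('a.related-tag').forEach(function (link) {
  link.addEventListener('click', function (event) {
    event.preventDefault();
    fetch(link.href + '?fragment=1') // hypothetical partial-response flag
      .then(function (response) { return response.text(); })
      .then(function (html) {
        document.getElementById('related-results').innerHTML = html;
      });
  });
});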

I’ve got a nice little progress bar going while the request is being sent, and a bit of a colour fade happening when the response comes back. The results themselves could probably do with some more styling. Right now I’m just displaying them in a regular unordered list of xFolk entries but I think they might look nice if they were more comment-like in appearance.

After the Del.icio.us links, I’ve got the same tags pointing off to Technorati. Again, instead of sending you away, I’m pulling in the results with the Technorati API. In some ways, these results are more interesting than the del.icio.us links because, instead of just showing things that I have tagged, this shows results from everywhere. The results are constantly changing. Right now I’m using the search query, but I must look into the experimental tag query.

I’m also using the Technorati API to find any blogs that are linking to the current post. This works like Trackback. If you want to respond to a post I’ve written, just blog about it. As long as you include a link back to the post, your entry will now show up in the results. It won’t be instantaneous, but if your blogging software is set up to ping Technorati when you post, it should show up pretty fast. I’d be interested in finding out just how long it takes for the API to reflect recent pings. If you blog about this post (with a link), try coming back to it and using the Technorati link to see how long your post takes to show up.
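
That inbound-link lookup is Technorati’s Cosmos query. A sketch of the request, treating the endpoint details as an assumption and the key as a placeholder:

// ask Technorati which blogs link to a given post (returns XML)
var url = 'http://api.technorati.com/cosmos' +
  '?key=YOUR_API_KEY' +
  '&url=' + encodeURIComponent('http://adactio.com/journal/1245');
// fetch the XML server-side and list each returned item as a response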

The Technorati API isn’t the most full-featured and sometimes it just seems to not respond. The Del.icio.us API allows me to do quite a bit with my own links, but doesn’t offer any access to other people’s. Still, by combining the two with the tags for any particular journal entry, an interesting picture emerges.

I have some other ideas for making individual journal entry pages more interesting. None of them involve the addition of buttons that invite the reader to add the page to Digg, Newsvine, Del.icio.us, Reddit, Furl, Magnolia, Blinklist, or any others I may be forgetting.

Mashing up with microformats

Back in March, during South by Southwest, Tantek asked me if I’d like to sit in on his microformats panel alongside Chris Messina and Norm! The audio recording of the panel is now available through the conference podcast.

I’ve taken the liberty of having the recording transcribed (using castingWords.com) and I’ve posted a tidied up version of the transcript to the articles section: Microformats: Evolving the Web. You can listen along through the articles RSS feed which doubles up as a podcast.

I’ve also posted the transcript on the microformats wiki so that others can edit it if they catch any glaring mistakes in the transcription.

During the panel I talked about Adactio Austin, a fairly trivial use of microformats but one that I’ve been building upon. I’d like to provide some cut’n’paste JavaScript that would allow people to get some added value from using microformats. Supposing you have a bunch of locations marked up in hCard with geotags, you could drop in a script and have a map appear showing those locations.
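
A sketch of the first half of that script: gather the geotagged hCards from the page, ready to hand to whichever mapping API gets used (modern DOM calls stand in for the DOM-walking of the day, and the abbr-pattern variant of the geo microformat is ignored for brevity):

var locations = [];
document.querySelectorAll('.vcard .geo').forEach(function (geo) {
  var lat = geo.querySelector('.latitude');
  var lon = geo.querySelector('.longitude');
  if (lat && lon) {
    locations.push({
      latitude: parseFloat(lat.textContent),
      longitude: parseFloat(lon.textContent)
    });
  }
});
// each entry in locations can now become a marker on a map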

Perhaps the geotagging won’t even be necessary. Google added a geocoder to their mapping API two weeks ago. The UK, alas, is not yet supported (probably because the Post Office won’t let go of its monopoly that easily… Postman Pat, your money-grabbing days are numbered).

Unfortunately, Google Maps isn’t very suited to the cut’n’paste idea: you have to register a different API key for each domain where you want to use the mapping API.

The Yahoo maps API is less draconian about registration but its lack of detailed UK maps makes it a non-starter for me.

Maybe I should step away from maps and concentrate on events instead. It probably wouldn’t be too hard to write a script to create a calendar based on any hCalendar data found in a document. Perhaps I’ll investigate the calendar widget from Yahoo.

Ultimately I’d like to create something like Chris’s Mapendar idea. If only there were enough hours in the day.