Tags: networks



Empire State

I’m in New York. Again. This time it’s for Google’s AMP Conf, where I’ll be giving ‘em a piece of my mind on a panel.

The conference starts tomorrow so I’ve had a day or two to acclimatise and explore. Seeing as Google are footing the bill for travel and accommodation, I’m staying at a rather nice hotel close to the conference venue in Tribeca. There’s live jazz in the lounge most evenings, a cinema downstairs, and should I request it, I can even have a goldfish in my room.

Today I realised that my hotel sits at the apex of a triangle of interesting buildings: carrier hotels.

32 Avenue Of The Americas. Telephone wires and radio unite to make neighbors of nations.

Looming above my hotel is 32 Avenue of the Americas. On the outside the building looks like your classic Gozer the Gozerian style of New York building. Inside, the lobby features a mosaic on the ceiling, and another on the wall extolling the connective power of radio and telephone.

The same architects also designed 60 Hudson Street, which has a similar Art Deco feel to it. Inside, there’s a cavernous hallway running through the ground floor but I can’t show you a picture of it. A security guard told me I couldn’t take any photos inside …which is a little strange seeing as it’s splashed across the website of the building.

60 Hudson. HEADQUARTERS The Western Union Telegraph Co. and telegraph capitol of the world 1930-1973.

I walked around the outside of 60 Hudson, taking more pictures. Another security guard asked me what I was doing. I told her I was interested in the history of the building, which is true; it was the headquarters of Western Union. For much of the twentieth century, it was a world hub of telegraphic communication, in much the same way that a beach hut in Porthcurno was the nexus of the nineteenth century.

For a 21st century hub, there’s the third and final corner of the triangle at 33 Thomas Street. It’s a breathtaking building. It looks like a spaceship from a Chris Foss painting. It was probably designed more like a spacecraft than a traditional building—its primary purpose was to withstand an atomic blast. Gone are niceties like windows. Instead there’s an impenetrable monolith that looks like something straight out of a dystopian sci-fi film.

33 Thomas Street. 33 Thomas Street, New York.

Brutalist on the outside, its interior is host to even more brutal acts of invasive surveillance. The Snowden papers revealed this AT&T building to be a centrepiece of the Titanpointe programme:

They called it Project X. It was an unusually audacious, highly sensitive assignment: to build a massive skyscraper, capable of withstanding an atomic blast, in the middle of New York City. It would have no windows, 29 floors with three basement levels, and enough food to last 1,500 people two weeks in the event of a catastrophe.

But the building’s primary purpose would not be to protect humans from toxic radiation amid nuclear war. Rather, the fortified skyscraper would safeguard powerful computers, cables, and switchboards. It would house one of the most important telecommunications hubs in the United States…

Looking at the building, it requires very little imagination to picture it as the lair of villainous activity. Laura Poitras’s short film Project X basically consists of a voiceover of someone reading an NSA manual, some ominous background music, and shots of 33 Thomas Street looming in its oh-so-loomy way.

A top-secret handbook takes viewers on an undercover journey to Titanpointe, the site of a hidden partnership. Narrated by Rami Malek and Michelle Williams, and based on classified NSA documents, Project X reveals the inner workings of a windowless skyscraper in downtown Manhattan.


On Jessica’s recommendation, I read a piece on the Guardian website called The eeriness of the English countryside:

Writers and artists have long been fascinated by the idea of an English eerie – ‘the skull beneath the skin of the countryside’. But for a new generation this has nothing to do with hokey supernaturalism – it’s a cultural and political response to contemporary crises and fears

I liked it a lot. One of the reasons I liked it was not just for the text of the writing, but the hypertext of the writing. Throughout the piece there are links off to other articles, books, and blogs. For me, this enriches the piece and it set me off down some rabbit holes of hyperlinks with fascinating follow-ups waiting at the other end.

Back in 2010, Scott Rosenberg wrote a series of three articles over the course of two months called In Defense of Hyperlinks:

  1. Nick Carr, hypertext and delinkification,
  2. Money changes everything, and
  3. In links we trust.

They’re all well worth reading. The whole thing was kicked off with a well-rounded debunking of Nicholas Carr’s claim that hyperlinks harm text. Instead, Rosenberg finds that hyperlinks within a text embiggen the writing …providing they’re done well:

I see links as primarily additive and creative. Even if it took me a little longer to read the text-with-links, even if I had to work a bit harder to get through it, I’d come out the other side with more meat and more juice.

Links, you see, do so much more than just whisk us from one Web page to another. They are not just textual tunnel-hops or narrative chutes-and-ladders. Links, properly used, don’t just pile one “And now this!” upon another. They tell us, “This relates to this, which relates to that.”

The difference between a piece of writing being part of the web and a piece of writing being merely on the web is something I talked about a few years back in a presentation called Paranormal Interactivity, at around the 15-minute mark:

Imagine if you were to take away all the regular text and only left the hyperlinks on Wikipedia, you could still get the gist, right? Every single link there is like a wormhole to another part of this “choose your own adventure” game that we’re playing every day on the web. I love that. I love the way that Wikipedia uses links.

That ability of the humble hyperlink to join concepts together lies at the heart of Tim Berners-Lee’s World Wide Web …and Ted Nelson’s Project Xanadu, and Douglas Engelbart’s Dynamic Knowledge Environments, and Vannevar Bush’s idea of the Memex. All of those previous visions of a hyperlinked world were—in many ways—superior to the web. But the web shipped. It shipped with brittle, one-way linking, but it shipped. And now today anyone can create a connection between two ideas by linking to resources that represent those ideas. All you need is an HTML document that contains some A elements with href attributes, and a URL to act as that document’s address.

Like the one you’re accessing now.
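That really is all there is to the machinery. As a sketch (the document and URLs here are invented), the connections a page makes are just the href attributes of its A elements, which a few lines of parsing can pull out:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every A element: the document's outbound connections."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

doc = ('<p>This idea <a href="https://example.com/one">relates to this</a>, '
       'which <a href="https://example.com/two">relates to that</a>.</p>')

parser = LinkExtractor()
parser.feed(doc)
print(parser.links)  # ['https://example.com/one', 'https://example.com/two']
```

Strip away everything except those href values and you still have the shape of the argument: this relates to this, which relates to that.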

Not only can I link to that article on the Guardian’s website, I can also pair it up with other related links, like Warren Ellis’s talk from dConstruct 2014:

Inventing the next twenty years, strategic foresight, fictional futurism and English rural magic: Warren Ellis attempts to convince you that they are all pretty much the same thing, and why it was very important that some people used to stalk around village hedgerows at night wearing iron goggles.

There is definitely the same feeling of “the eeriness of the English countryside” in Warren’s talk. If you haven’t listened to it yet, set aside some time. It is enticing and disquieting in equal measure …like many of the works linked to from the piece on the Guardian.

There’s another link I’d like to make, and it happens to be to another dConstruct speaker.

From that Guardian piece:

Yet state surveillance is no longer testified to in the landscape by giant edifices. Instead it is mostly carried out by software programs running on computers housed in ordinary-looking government buildings, its sources and effects – like all eerie phenomena – glimpsed but never confronted.

James Bridle has been confronting just that. His recent series The Nor took him on a tour of a parallel, obfuscated English countryside. He returned with three pieces of hypertext:

  1. All Cameras Are Police Cameras,
  2. Living in the Electromagnetic Spectrum, and
  3. Low Latency.

I love being able to do this. I love being able to add strands to this world-wide web of ours. Not only can I say “this idea reminds me of another idea”, but I can point to both ideas. It’s up to you whether you follow those links.


In 2005 I went to South by Southwest for the first time. It was quite an experience. Not only did I get to meet lots of people with whom I had previously only interacted online, but I also got to meet lots and lots of new people. Many of my strongest friendships today started in Austin that year.

Back before it got completely unmanageable, Southby was a great opportunity to mix up planned gatherings with serendipitous encounters. Lunchtime, for example, was often a chaotic event filled with happenstance: you could try to organise a small group to go to a specific place, but it would inevitably spiral into a much larger group going to wherever could seat that many people.

One lunchtime I found myself sitting next to a very nice gentleman and we got on to the subject of network theory. Back then I was very obsessed with small-world networks, the strength of weak ties, and all that stuff. I’m still obsessed with all that stuff today, but I managed to exorcise a lot of my thoughts when I gave my 2008 dConstruct talk, The System Of The World. After giving that magnum opus, I felt like I had got a lot of network-related stuff off my chest (and off my brain).

Anyway, back in 2005 I was still voraciously reading books on the subject and I remember recommending a book to that nice man at that lunchtime gathering. I can’t even remember which book it was now—maybe Nexus by Mark Buchanan or Critical Mass by Philip Ball. In any case, I remember this guy making a note of the book for future reference.

It was only later that I realised that that “guy” was David Isenberg. Yes, that David Isenberg, author of the seminal Rise of the Stupid Network, one of the most important papers ever published about telecommunications networks in the twentieth century (you can watch—and huffduff—a talk he gave called Who will run the Internet? at the Oxford Internet Institute a few years back).

I was reminded of that lunchtime encounter from seven years ago when I was putting together a readlist of visionary articles today. The list contains:

  1. As We May Think by Vannevar Bush
  2. Information Management: A Proposal by Tim Berners-Lee (vague but exciting!)
  3. Rise of the Stupid Network by David Isenberg
  4. There’s Plenty of Room at the Bottom by Richard Feynman
  5. The Coming Technological Singularity: How to Survive in the Post-Human Era by Vernor Vinge

There are others that should be included on that list but these are the ones I could find in plain text or HTML rather than PDF.

Feel free to download the epub file of those five articles together and catch up on some technology history on your Kindle, iPad, iPhone or other device of your choosing.

Of Time and the Network and the Long Bet

When I went to Webstock, I prepared a new presentation called Of Time And The Network:

Our perception and measurement of time has changed as our civilisation has evolved. That change has been driven by networks, from trade routes to the internet.

I was pretty happy with how it turned out. It was a 40 minute talk that was pretty evenly split between the past and the future. The first 20 minutes spanned from 5,000 years ago to the present day. The second 20 minutes looked towards the future, first in years, then decades, and eventually in millennia. I was channeling my inner James Burke for the first half and my inner Jason Scott for the second half, when I went off on a digital preservation rant.

You can watch the video and I had the talk transcribed so you can read the whole thing.

It’s also on Huffduffer, if you’d rather listen to it.

Adactio: Articles—Of Time And The Network on Huffduffer

Webstock: Jeremy Keith

During the talk, I pointed to my prediction on the Long Bets site:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

I made the prediction on February 22nd last year (a terrible day for New Zealand). The prediction will reach fruition on 02022-02-22 …I quite like the alliteration of that date.

Here’s how I justified the prediction:

“Cool URIs don’t change” wrote Tim Berners-Lee in 01999, but link rot is the entropy of the web. The probability of a web document surviving in its original location decreases greatly over time. I suspect that even a relatively short time period (eleven years) is too long for a resource to survive.

Well, during his excellent Webstock talk Matt announced that he would accept the challenge. He writes:

Though much of the web is ephemeral in nature, now that we have surpassed the 20 year mark since the web was created and gone through several booms and busts, technology and strategies have matured to the point where keeping a site going with a stable URI system is within reach of anyone with moderate technological knowledge.

The prediction has now officially been added to the list of bets.

We’re playing for $1000. If I win, that money goes to the Bletchley Park Trust. If Matt wins, it goes to The Internet Archive.

The sysadmin for the Long Bets site is watching this bet with great interest. I am, of course, treating this bet in much the same way that Paul Gilster is treating this optimistic prediction about interstellar travel: I would love to be proved wrong.

The detailed terms of the bet have been set as follows:

On February 22nd, 2022 from 00:01 UTC until 23:59 UTC, either:
entering the characters http://www.longbets.org/601 into the address bar of a web browser or command line tool (like curl), or
using a web browser to follow a hyperlink that points to http://www.longbets.org/601,
must return an HTML document that still contains the following text:
“The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.”
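The terms boil down to a mechanical substring check on whatever document the URL returns (the bet itself will be judged by humans, of course). A sketch of that check:

```python
EXPECTED = ("The original URL for this prediction (www.longbets.org/601) "
            "will no longer be available in eleven years.")

def matt_wins(html: str) -> bool:
    """Matt wins if the returned HTML document still contains the prediction text."""
    return EXPECTED in html

# On the day, fetch http://www.longbets.org/601 (browser, curl, urllib, whatever)
# and feed the response body in:
print(matt_wins("<html><body>" + EXPECTED + "</body></html>"))  # True
print(matt_wins("<html><body>404 Not Found</body></html>"))     # False
```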

The suspense is killing me!

The Audio of the System of the World

Four months after the curtain went down on dConstruct 2008, the final episode of the podcast of the conference has just been published. It’s the audio recording of my talk The System Of The World.

I’m very happy indeed with how the talk turned out: dense and pretentious …but in a good way, I hope. It’s certainly my favourite from the presentations I have hitherto delivered.

Feel free to:

The whole thing is licensed under a Creative Commons attribution licence. You are free—nay, encouraged—to share, copy, distribute, remix and mash up any of those files as long as you include a little attribution lovin’.

If you’ve got a Huffduffer account, feel free to huffduff it.

Back to school

When I went to the Reboot conference in Copenhagen earlier this year, I met plenty of people who were interesting, cool and just plain nice. In fact, I met half of those lovely people before I even arrived in Denmark—it was at Stansted airport, waiting for a delayed flight, that I first met Riccardo Cambiassi, Lee Bryant and David Smith.

David is a teacher at St Paul’s school in London. Lately he’s been organising an ongoing series of guest speakers to come in and talk to the students. Ted Nelson came in and gave a talk a little while back—yes, that Ted Nelson. As you can imagine then, I was simultaneously honoured and intimidated when David asked me to come along to the school to give a talk on Designing the Social Web.

Yesterday was the big day. I walked across Hammersmith bridge and stepped inside a school for the first time in almost twenty years. Despite my nervousness, I felt the talk went well. I put together some slides but they were mostly just notes for myself. I had a whole grab-bag of things I wanted to discuss and while I might have done it in a very unstructured way, I think I managed to cover most of them.

Obviously this was a very different audience than I’m used to speaking to but I really enjoyed that. It was illuminating to go straight to the source and find out how teenagers are using social networking sites. Once the talk and questions were done, we adjourned to lunch—a good old fashioned school dinner—where the discussion continued. I really enjoyed talking with such sharp, savvy young gentlemen.

It isn’t surprising that they’re all so Web-savvy; the Web has always been there for them. Thinking back on my own life, it almost seems in retrospect as if I was just waiting for the Web to come along. Maybe I was born too soon or maybe I’m just young at heart, but I found that I was able to relate very closely with these people who are half my age.

I took the opportunity to test a theory of Jeff Veen’s on the difference in generational attitudes towards open data. Given the following two statements:

  1. my data is private except what I explicitly choose to make public or
  2. my data is public except what I explicitly choose to keep private,

…the overwhelming consensus amongst the students was with the second viewpoint, which happens to be the viewpoint I share but I suspect many people my age don’t.

There were plenty of other stimulating talking points—the Facebook/Beacon debacle was a big topic. It was a great way to spend an afternoon. My thanks to David for inviting me along to the school and my thanks to the young men of St Paul’s for their graciousness in listening to me natter on about small world networks, the strength of weak ties, portable social networks and, inevitably, microformats.

Seeing as I was in London anyway, I took the tube across town to see my collaborators at New Bamboo. That meant that by the time I was leaving London, it was rush hour. Oh joy. Despite the knackering experience of the commute, I managed to stay on my feet long enough to enjoy a great gig in Brighton that evening. It was a long but very fulfilling day.


The nerdier nether-regions of blogland have been burning through the night with the news of the OpenSocial initiative spearheaded by Google and supported by what Chris so aptly calls the coalition of the willing.

Like Simon, I’ve been trying to get my head around exactly what OpenSocial is all about ever since reading Brady Forrest’s announcement. Here’s what I think is going on:

Facebook has an API that allows third parties to put applications on Facebook profile pages (substitute the word “widget” for “application” for a more accurate picture). Developers have embraced Facebook applications because, well, Facebook is so damn big. But developing an app/widget for Facebook is time-consuming enough that rewriting the same app for a dozen other social networking sites is an unappealing prospect. That’s where OpenSocial comes in. It’s a set of conventions. If you develop to these conventions, your app can live on any of the social networking sites that support OpenSocial: LinkedIn, MySpace, Plaxo and many more.

Some of the best explanations of OpenSocial are somewhat biased, coming as they do from the people who are supporting this initiative, but they are still well worth reading:

There’s no doubt that this set of conventions built upon open standards—HTML and JavaScript—is very good for developers. They no longer have to choose what “platforms” they want to support when they’re building widgets.

That’s all well and good but frankly, I’m not very interested in making widgets, apps or whatever you want to call them. I’m interested in portable social networks.

At first glance, it looks like OpenSocial might provide a way of exporting social network relationships. From the documentation:

The People and Friends data API allows client applications to view and update People Profiles and Friend relationships using AtomPub GData APIs with a Google data schema. Your client application can request a list of a user’s Friends and query the content in an existing Profile.

But it looks like these API calls are intended for applications sitting on the host platform rather than separate sites hoping to extract contact information. As David Emery points out, this is a missed opportunity:

The problem is, however, that OpenSocial is coming at completely the wrong end of the closed-social-network problem. By far and away the biggest problem in social networking is fatigue, that to join yet another site you have to sign-up again, fill in all your likes and dislikes again and—most importantly—find all your friends again. OpenSocial doesn’t solve this, but if it had it could be truly revolutionary; if Google had gone after opening up the social graph (a term I’m not a fan of, but it seems to have stuck) then Facebook would have become much more of an irrelevance—people could go to whatever site they wanted to use, and still preserve all the interactions with their friends (the bit that really matters).

While OpenSocial is, like OAuth, a technology for developers rather than end users, it does foster a healthy atmosphere of openness that encourages social network portability. Tantek has put together a handy little table to explain how all these technologies fit together:

portability          technology           primary beneficiary
social application   OAuth, OpenSocial    developers
social profile       hCard                users
friends list         XFN                  users
login                OpenID               users

I was initially excited that OpenSocial might be a magic bullet for portable social networks but after some research, it doesn’t look like that’s the case—it’s all about portable social widgets.

But like I said, I’m not entirely sure that I’ve really got a handle on OpenSocial so I’ll be digging deeper. I was hoping to see Patrick Chanezon talk about it at the Web 2.0 Expo in Berlin next week but, wouldn’t you know it, I’m scheduled to give a talk at exactly the same time. I hope there’ll be plenty of livebloggers taking copious notes.


In case you hadn’t noticed, I’ve got a real thing about portable social networks. And I’m not the only one. At a recent meetup in San Francisco a bunch of the Web’s finest minds got together to tackle this issue. You can track the progress (and contribute) on the microformats wiki.

Ever since then, Brian Oberkirch has been doing a sterling job documenting the issues involved:

Head on over there, read what Brian has to say and join in the conversation in the comments.

Lest you think that this is some niche itch that needs to be scratched, I can tell you from personal experience that everybody I’ve spoken to thinks that is a real issue that needs tackling. Heck, even Wired News is getting upset in the article Slap in the Facebook: It’s Time for Social Networks to Open Up:

We would like to place an open call to the web-programming community to solve this problem. We need a new framework based on open standards. Think of it as a structure that links individual sites and makes explicit social relationships, a way of defining micro social networks within the larger network of the web.

Weirdly, the same article then dismisses XFN, saying “Trouble is, the data format doesn’t yet offer any tools for managing friends.” That’s kind of like dismissing HTML because it doesn’t offer you a way of managing your bookmarks. XFN is a format—a really simple format. Building a tool to manage relationships would be relatively easy. But you have to have the format before you can have the tool.
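How easy? XFN is nothing more than rel attributes on ordinary links, so a tool only has to read them off. A sketch of that parsing step (the markup, names and URLs here are invented):

```python
from html.parser import HTMLParser

class XFNParser(HTMLParser):
    """Map each linked URL to the XFN relationship values on its A element."""
    def __init__(self):
        super().__init__()
        self.relationships = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs and attrs.get("rel"):
            self.relationships[attrs["href"]] = attrs["rel"].split()

page = ('<ul>'
        '<li><a href="http://example.com/alice" rel="friend met">Alice</a></li>'
        '<li><a href="http://example.com/bob" rel="contact">Bob</a></li>'
        '</ul>')

parser = XFNParser()
parser.feed(page)
print(parser.relationships)
# {'http://example.com/alice': ['friend', 'met'], 'http://example.com/bob': ['contact']}
```

Everything a “friend manager” would need—who, where, and what kind of relationship—is already sitting there in the markup.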

Life streams and Jaiku

After I wrote about mashing up RSS feeds to create a sort of life stream, some people have taken this idea and run with it. Probably my favourite implementation is Deliciously Meta from Steve Ivy, which looks very classy. For Wordpress users, Chris J. Davis has created a plug-in. Check out his own life stream to see it in action.

Just the other day, I came across a site which allows you to create a life stream by entering a series of URLs. The site is Jaiku.

Jaiku is a Finnish competitor to Twitter—with the added benefit of a life stream thrown in. You send the site little updates of what you’re doing (via the Web or mobile) and you track what your friends are up to.

In many ways, Jaiku is superior to Twitter. It certainly looks a lot better. It feels snappier. The markup is clean. There’s also a dedicated mobile client for Nokia smart phones. All in all, it’s a slick, fun site.

And yet… simply by virtue of the fact that I discovered it after Twitter, I’m unlikely to use Jaiku as much. It all comes back to the issue of creating yet another network of friends on yet another social networking site: I don’t feel very motivated to do it and I suspect that none of my contacts on Twitter relish the prospect either.

Khoi posited the idea that the exclusivity of social networks may be a feature, not a bug. That may be true to a certain extent. On Last.fm, my criteria for adding a contact is not just my relationship with that person, but also whether or not they have crappy taste in music. On Twitter, I only add people I’ve met in real life. Perhaps I’ll end up using Jaiku for a limited subset of people I know: maybe I’ll use it just for tracking my Central European Tribe comrades.

But what I really want is to be able to take all my friends from Twitter and quickly and easily port them over to Jaiku. Alas, in the absence of hCard and XFN on Twitter, this seems unlikely. A movement in the other direction seems more likely given that Jaiku is using hCard.

Meanwhile, I could kill two birds with one stone and add my RSS feed from Twitter to my life stream on Jaiku. That way, every time I post to Twitter, it would show up on Jaiku. I wonder if that would constitute “gaming” the system?

If I wanted to game the system in a harmless but fun way, I could have some fun with the query string on Jaiku and post the results to Flickr. D’oh! They fixed it: that was fast!

More thoughts on portable social networks

I’m not the only one thinking about portable social networks:

There are some good comments on these posts, though I keep noticing the trend for things to get too complex too quickly. Tom Carden mentions FOAF but I have a number of issues with that:

  1. Publishing XML is hard, certainly harder than publishing HTML.
  2. Out of sight is out of mind. I’ve actually got a FOAF file here at adactio but I haven’t updated it in years. Invisible metadata rots.

A lot of people are talking about the need for some kind of centralised service (à la Gravatar) for storing a social network. But surely the last thing we need is yet another walled garden or roach motel?

I’d much prefer a distributed solution and, frankly, I wish Gravatar had gone down that route given its often sluggish ways. I realise that a centralised service is needed for people who don’t have their own URL but it should, in my opinion, be second choice rather than default.

In any case, I think we may be barking up the wrong tree with all this talk of needing something new. Personally, I don’t think the solution need be complicated at all. It’s within reach right now and it doesn’t require the creation of any new service.

Suppose, just suppose, that…

… were marked up with XFN (update: or more importantly, hCard—see below). Now all I have to do is provide one of those URLs to the next social networking site I join.

Far fetched? Two of the sites I listed are already walking the walk. All that’s needed is for the sign-up form on the next fadsite I join to at least include the option of importing a buddy list by pointing to a URL.

Sure, it won’t work perfectly. People might have different names from site to site. But that’s okay. It’ll work well enough. It will probably get 80% of my contacts imported. And that’s a lot better than the current count of zero.

We don’t need yet another centralised service. The information is already out there, it just needs to be explicitly marked up.

Once you populate a network on one site, that information should be easily portable to another site. That’s doable. It isn’t even that hard: all it requires is the addition of a few rel attributes and possibly some hCard encoding.

Let’s not go chasing a complicated solution when a simpler one will do.

So here’s my plea—nay, my demand—to the next Web X.X social networking doohickey that wants me to join up:

  1. Give me a simple input field for entering a URL that lists my contacts.
  2. Parse that URL for people and relationships.
  3. Voila! I’ve added a bunch of friends. I may repeat from step one with a different URL.
  4. Mark up my contacts on your doohickey in an easily exportable way.
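Step two is the only step that needs any real code, and not much of it. A sketch that pulls people out of a fetched contacts page by looking for hCard’s fn and nickname class names (the markup and names here are invented):

```python
from html.parser import HTMLParser

class HCardNames(HTMLParser):
    """Collect the text of any element classed 'fn' or 'nickname' (hCard names)."""
    def __init__(self):
        super().__init__()
        self.names = []
        self._inside = 0  # nesting depth within an fn/nickname element

    def handle_starttag(self, tag, attrs):
        if self._inside:
            self._inside += 1
            return
        classes = (dict(attrs).get("class") or "").split()
        if "fn" in classes or "nickname" in classes:
            self._inside = 1

    def handle_endtag(self, tag):
        if self._inside:
            self._inside -= 1

    def handle_data(self, data):
        if self._inside and data.strip():
            self.names.append(data.strip())

page = ('<ul>'
        '<li class="vcard"><a class="fn url" href="/people/alice">Alice</a></li>'
        '<li class="vcard"><a class="nickname url" href="/people/bob">bob</a></li>'
        '</ul>')

parser = HCardNames()
parser.feed(page)
print(parser.names)  # ['Alice', 'bob']
```

Point this at the URL I hand over, and every name that falls out is a contact to offer for import.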

Who wants to get the ball rolling? Why can’t this become as ubiquitous as gradients, closed betas, giant text and wet-floor reflections?

For all the talk of social media and the strength of weak ties, there isn’t much action being taken to really try to “harness collective intelligence®”. Within the confines of their own walls, these Web X.X sites might be all about social this and social that, but I want to see more sites practice what they preach on a wider scale… the scale of the World Wide (semantic) Web.


Following on from some comments and Twitter chat, I wanted to clarify a few points:

  1. Yes, social networks differ depending on context. That’s why I want the ability to point at more than one URL. If I join up to a new music site, I might want to point to my Last.fm contacts, but not my Flickr contacts. If I join a new site about food or drink, I’d probably want to point to my Cork’d drinking buddies, but not my LinkedIn network. Or I might want to point to any combination thereof: Flickr + Twitter - Last.fm, for example.

  2. The issue of whether the people you’re adding even want to be your friend is a red herring. That’s an issue regardless of portability. I can quite easily add people as my friends on Flickr who don’t want to reciprocate. The same goes for Twitter. Portability will allow me to add friends en masse but it won’t ever automatically add me as a friend to the people I’m importing: that’s still up to them.

  3. No, this won’t move 100% of contacts from network to network. But it will move a lot. My user name is adactio on Flickr, Last.fm, Twitter, Upcoming, Technorati and Cork’d. I suspect a lot of people use the same user name across sites. For sites that use real names, there’s an even greater chance of portability.

  4. None of this portability is irreversible, it’s just a shortcut. If I get false positives—people imported that I don’t want as contacts—I can just remove that relationship. Likewise if I fail to import some people automatically, I’ve still got the old-fashioned way of doing it by hand (which we all have to do now anyway).

  5. Forget about XFN for a minute. The important thing is that I’m pointing to a page and saying, “any people listed on this page are contacts I want to import.” Now, there is no <person> element in HTML so how does it know which strings are people? Well, we need some way of saying “this is a real name”, or “this is a nickname”. We have that already: class="fn" and class="nickname". These are properties of hCard. So I guess it’s hCard usage that really matters. That said, XFN can add an extra level of granularity: contact vs. friend, at least. But I stand corrected: the really important formatting issue here is marking up “who are the people on this page?” rather than “what are the relationships on this page?” The URL itself contains the information that everyone listed is a contact.

Just take a look at these URLs:

  • http://corkd.com/people/adactio/buddies/
  • https://www.flickr.com/people/adactio/contacts/
  • http://last.fm/user/adactio/friends/

A semantic consensus is already emerging across sites in URL structure:

http://site name/[people|user]/username/[buddies|contacts|friends]/
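That consensus is regular enough to capture in a single pattern. A sketch (not endorsed by any of those sites, obviously) that recognises a contacts URL and extracts the username:

```python
import re

# The emerging convention: scheme://host/(people|user)/username/(buddies|contacts|friends)/
CONTACTS_URL = re.compile(
    r"^https?://(?P<host>[^/]+)"
    r"/(?:people|user)/(?P<username>[^/]+)"
    r"/(?:buddies|contacts|friends)/?$"
)

urls = [
    "http://corkd.com/people/adactio/buddies/",
    "https://www.flickr.com/people/adactio/contacts/",
    "http://last.fm/user/adactio/friends/",
]

matches = [CONTACTS_URL.match(url) for url in urls]
print([m.group("username") for m in matches])  # ['adactio', 'adactio', 'adactio']
```

A sign-up form could use something like this to sanity-check the URL I paste in before fetching it and parsing out the people.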

All that’s needed is to explicitly mark up any people on those pages. That’s easily done with hCard. All these sites have to do is edit a template. For extra relationship richness, XFN can help.