In order to write a history, you need evidence of what happened. When we talk about preserving the stuff we make on the web, it isn’t because we think a Facebook status update, or those GeoCities sites have such significance now. It’s because we can’t know.
When you think about the quantity of documentation from our daily lives that is captured in digital form, like our interactions by email, people’s tweets, and all of the world wide web, it’s clear that we stand to lose an awful lot of our history.
Vint Cerf warns of the dangers of rapidly obsolescing file formats:
We are nonchalantly throwing all of our data into what could become an information black hole without realising it. We digitise things because we think we will preserve them, but what we don’t understand is that unless we take other steps, those digital versions may not be any better, and may even be worse, than the artefacts that we digitised.
It was a little weird that the Guardian headline refers to Vint Cerf as “Google boss”. On the BBC he’s labelled as “Google’s Vint Cerf”. Considering he’s one of the creators of the internet itself, it’s a bit like referring to Neil Armstrong as a NASA employee.
CSS gets a tough rap. I remember talking to Douglas Crockford about CSS. I’ll paraphrase his stance as “Kill it with fire!” To be fair, he was mostly talking about the lack of a decent layout system in CSS—something that’s only really getting remedied now.
Most of the flak directed at CSS comes from smart programmers, decrying its lack of power. As a declarative language, it lacks the most basic features of even the simplest procedural language. How are serious programmers supposed to write their serious programs with such a primitive feature set?
But I think this mindset misses out a crucial facet of understanding CSS: it’s not about us. By us, I mean professional web developers. And when I say it’s not about us, I mean it’s not only about us.
The web is for everyone. That doesn’t just mean that it’s for everyone to use—the web is for everyone to create. That means that the core building blocks of the web need to be learnable by everyone, not just programmers.
I think that CSS hits a nice sweet spot, balancing learnability and power. I love the fact that every bit of CSS ever written comes down to the same basic pattern:
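That pattern, sketched abstractly and then as a made-up concrete rule:

```css
/* The pattern behind every bit of CSS ever written: */
selector {
    property: value;
}

/* A concrete (made-up) example of the same shape: */
blockquote {
    color: slategray;
    margin-left: 2em;
}
```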
How amazing is it that one simple pattern can scale to encompass a whole wide world of visual design variety?
Think about the revolution that CSS has gone through in recent years: OOCSS, SMACSS, BEM …these are fundamentally new ways of approaching front-end development, and yet none of these approaches required any changes to be made to the CSS specification. The power and flexibility was already available within its simple selector-property-value pattern.
Mind you, that modularity was compromised when we got things like named animations: a pattern that breaks out of the encapsulation model of CSS. Variables in CSS also break out of the modularity pattern.
Personally, I don’t think there’s any reason to have variables in the CSS language; it’s enough to have them in pre-processing tools. Variables add enormous value for developers, and no value at all for end users. As long as developers can use variables—and they can, with Sass and LESS—I don’t think we need to further complicate CSS.
The proposed scheme provides a simple mapping between HTML elements and presentation hints.
Every line of CSS you write is a suggestion. You are not dictating how the HTML should be rendered; you are suggesting how the HTML should be rendered. I find that to be a very liberating and empowering idea.
My only regret is that—twenty years on from the birth of CSS—web browsers are killing the very idea of user stylesheets. Along with “view source”, this feature really drove home the idea that professional web developers are not the only ones who have a say in what gets rendered in web browsers …and that the web truly is for everyone.
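For anyone who never tried the feature: a user stylesheet was plain CSS written by the reader rather than the author of a site. A made-up example:

```css
/* A reader's own rules, applied on top of every site's styles. */
body {
    font-size: 150%;
    line-height: 1.5;
}
```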
I got chatting to Aral about a markup pattern that’s become fairly prevalent since the rise of Github: linking to the source code for a website or project. You know, like when you see “fork me on Github” links.
We were talking about how it would be nice to have some machine-readable way of explicitly marking up those kind of links, whether they’re in the head of the document, or visible in the body. Sounds like a job for the rel attribute, I thought.
The rel attribute describes the relationship of the current document to the linked document. You can use it on the link element (in the head of your document) and the a element (in the body). The example that everyone is familiar with is rel="stylesheet" when linking off to a CSS file—the linked document has the relationship of being a stylesheet for the current document.
The rel attribute could theoretically take a space-separated list of any values, just like the class attribute. In practice, there’s much more value in having everyone agree on which rel values should be used.
The benefit of having one centralised registry for this is that you can see if someone else has had the same idea as you. Then you can come to an agreement on which value to use, so that everyone’s using the same vocabulary instead of just making stuff up.
It doesn’t look like there’s an existing value for the use case of linking to a document’s (or a project’s) source code, so I’ve proposed rel="source".
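That could look like this (the repository URL here is made up):

```html
<!-- In the head of the document: -->
<link rel="source" href="https://github.com/example/project">

<!-- Or visible in the body: -->
<a rel="source" href="https://github.com/example/project">Fork me on GitHub</a>
```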
Basically, it’s an equivalent to pingback. Let’s say I write something here on adactio.com. Suppose that prompts you to write something in response on your own site. A web mention is a way for you to let me know that your response exists.
If you look in the head of any of my journal posts, you’ll see this link element:

<link rel="webmention" href="http://adactio.com/webmention.php">
That’s my web mention endpoint: http://adactio.com/webmention.php …it’s kind of like a webhook: a URL that’s intended to be hit by machines rather than people. So when you publish your response to my post, you ping that URL with a POST request that sends two parameters:
target: the URL of my post and
source: the URL of your response.
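The ping itself is just a single POST request; here’s a sketch using PHP’s cURL functions (the response URL is made up):

```php
<?php
// The two URLs involved: your response and the post it responds to.
$endpoint = 'http://adactio.com/webmention.php';
$data = array(
    'source' => 'http://example.com/my-response', // made up
    'target' => 'http://adactio.com/journal/1234' // made up
);

// Send the POST request.
$ch = curl_init($endpoint);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($data));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
```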
Ideally your own CMS or blogging system would take care of doing the pinging, but until that’s more widely implemented, I’m providing this form at the end of each of my posts:
Either way, once you ping my web mention endpoint—discoverable through that link rel="webmention"—with those two parameters, I just need to confirm that your post does indeed contain a link to my post—by making a cURL request and parsing your source—and then I return a server response of 202 (Accepted).
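Here’s roughly what that logic looks like; a sketch rather than the actual contents of webmention.php:

```php
<?php
// Receive the ping.
$source = isset($_POST['source']) ? $_POST['source'] : '';
$target = isset($_POST['target']) ? $_POST['target'] : '';

// Fetch the source document with cURL.
$ch = curl_init($source);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$html = curl_exec($ch);
curl_close($ch);

// Only accept the mention if the source really links to the target.
if ($html !== false && strpos($html, $target) !== false) {
    header('HTTP/1.1 202 Accepted');
    // ...store source and target for later parsing...
} else {
    header('HTTP/1.1 400 Bad Request');
}
```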
That’s as far as I got at Indie Web Camp but it was enough for me to start collecting responses to posts.
The next step is to do something with the responses. After all, I’ve already got the source of each response from those cURL requests.
Barnaby has written a nice straightforward microformats parser in PHP. I’m using that to check the cURLed source for any responses that have been marked up using h-entry. That’s one of the microformats 2 vocabularies—a much simpler way of writing structured content with microformats.
So there you have it. Comments are now open on every journal post on adactio.com …the only catch is that you have to write the comment on your own site. And if you want the content of your post to appear here (instead of just a link) then update your blog post template to include a handful of h-entry classes.
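A minimal h-entry might look like this (the URL and surrounding markup here are illustrative, not a prescribed template):

```html
<article class="h-entry">
  <h1 class="p-name">My response</h1>
  <time class="dt-published" datetime="2013-06-26">June 26th, 2013</time>
  <div class="e-content">
    <p>Well said! This is a reply to
    <a href="http://adactio.com/journal/1234">Jeremy’s post</a>.</p>
  </div>
</article>
```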
Feel free to use this post as a test. Mark up your blog with h-entry, write a post that links to this URL, and enter the URL of your post in the form below.
A recent simplequiz over on HTML5 Doctor threw up some interesting semantic issues. Although the figure element wasn’t the main focus of the article, a lot of the comments were concerned with how it could and should be used.
The element can thus be used to annotate illustrations, diagrams, photos, code listings, etc, that are referred to from the main content of the document, but that could, without affecting the flow of the document, be moved away from that primary content, e.g. to the side of the page, to dedicated pages, or to an appendix.
Steve and Bruce have been campaigning on the HTML mailing list to get the wording updated and clarified.
Meanwhile, in an unrelated semantic question, there was another HTML5 Doctor article a while back about quoting and citing with blockquote and its ilk.
<figure>
<blockquote>It is the unofficial force—the Baker Street irregulars.</blockquote>
<figcaption>Sherlock Holmes, <cite>Sign of Four</cite></figcaption>
</figure>
Unsurprisingly, I still take issue with the decision in HTML5 not to allow the cite element to apply to people. As I’ve said before, we don’t have to accept this restriction:
Join me in a campaign of civil disobedience against the unnecessarily restrictive, backwards-incompatible change to the cite element.
In which case, we get this nice little pattern combining figure, blockquote, cite, and the hCard microformat, like this:
<figure>
<blockquote>It is the unofficial force—the Baker Street irregulars.</blockquote>
<figcaption class="vcard"><cite class="fn">Sherlock Holmes</cite>, <cite>Sign of Four</cite></figcaption>
</figure>
Or like this:
<figure>
<blockquote>Join me in a campaign of civil disobedience against the unnecessarily restrictive, backwards-incompatible change to the cite element.</blockquote>
<figcaption class="vcard"><cite class="fn">Jeremy Keith</cite>, <a href="http://24ways.org/2009/incite-a-riot/"><cite>Incite A Riot</cite></a></figcaption>
</figure>
There is much hand-wringing in the media about the impending death of journalism, usually blamed on the rise of the web or more specifically bloggers. I’m sympathetic to their plight, but sometimes journalists are their own worst enemy, especially when they publish badly-researched articles that fuel moral panic with little regard for facts (if you’ve ever been in a newspaper article yourself, you’ll know that you’re lucky if they manage to spell your name right).
Exhibit A: an article published in The Guardian called How I became a Foursquare cyberstalker. Actually, the article isn’t nearly as bad as the comments, which take ignorance and narrow-mindedness to a new level.
Fortunately Ben is on hand to set the record straight. He wrote Concerning Foursquare and communicating privacy. Far from being a lesser form of writing, this blog post is more accurate than the article it is referencing, helping to balance the situation with a different perspective …and a nice big dollop of facts and research. Ben is actually quite kind to The Guardian article but, in my opinion, his own piece is more interesting and thoughtful.
Exhibit B: an article by Jeffrey Rosen in The New York Times called The Web Means the End of Forgetting. That’s a bold title. It’s also completely unsupported by the contents of the article. The article contains anecdotes about people getting into trouble about something they put on the web, and—even though the consequences for that action played out in the present—he talks about the permanent memory bank of the Web and writes:
The fact that the Internet never seems to forget is threatening, at an almost existential level, our ability to control our identities.
Bollocks. Or, to use the terminology of Wikipedia, citation needed.
Rosen presents his premise — that information once posted to the Web is permanent and indelible — as a given. But it’s highly debatable. In the near future, we are, I’d argue, far more likely to find ourselves trying to cope with the opposite problem: the Web “forgets” far too easily.
Exactly! I get irate whenever I hear the truism that the web never forgets presented without any supporting data. It’s right up there with Eskimos have fifty words for snow and people in the Middle Ages thought that the world was flat. These falsehoods are irritating at best. At worst, as is the case with the myth of the never-forgetting web, the lie is downright dangerous. As Rosenberg puts it:
I’m a lot less worried about the Web that never forgets than I am about the Web that can’t remember.
That’s a real problem. And yet there’s no moral panic about the very real threat that, once digitised, our culture could be in more danger of being destroyed. I guess that story doesn’t sell papers.
This problem has a number of thorns. At the most basic level, there’s the issue of link rot. I love the fact that the web makes it so easy for people to publish anything they want. I love that anybody else can easily link to what has been published. I hope that the people doing the publishing consider the commitment they are making by putting a linkable resource on the web.
Domain names aren’t bought, they are rented. Nobody owns domain names, except ICANN.
I’m not saying that we should ditch domain names. But there’s something fundamentally flawed about a system that thinks about domain names in time periods as short as a year or two.
Then there’s the fact that so much of our data is entrusted to third-party sites. There’s no guarantee that those third-party sites give a rat’s ass about the long-term future of our data. Quite the opposite. The callous destruction of Geocities by Yahoo is a testament to how little our hopes and dreams mean to a company concerned with the bottom line.
We can host our own data but that isn’t quite as easy as it should be. And even with the best of intentions, it’s possible to have the canonical copies wiped from the web by accident. I’m very happy to see services like Vaultpress come on the scene:
Your WordPress site or blog is your connection to the world. But hosting issues, server errors, and hackers can wipe out in seconds what took years to build. VaultPress is here to protect what’s most important to you.
We need one or more institutions that can manage electronic trusts over very long periods of time.
The institutions need to be long-lived and have the technical know-how to manage static archives. The organizations should need the service themselves, so they would be likely to advance the art over time. And the cost should be minimized, so that the most people could do it.
It’s what my technology friends call a non-trivial task, for all kinds of technical, social and legal reasons. But it’s about as important for our future as anything I can imagine. We are creating vast amounts of information, and a lot of it is not just worth preserving but downright essential to save.
There’s an even longer-term problem with digital preservation. The very formats that we use to store our most treasured memories can become obsolete over time. This goes to the very heart of why standards such as HTML—the format I’m betting on—are so important.
Their plan involves the storage, not just of data, but of data formats such as JPEG and PDF: the equivalent of a Rosetta stone for our current age. A box containing format-decoding documentation has been buried in a bunker under the Swiss Alps. That’s a good start.
As proved by the destruction of the Alexandria Library and of the literature of Mayans and Minoans, “knowledge is hard won but easily lost.”
I’m worried that we’re spending less and less time thinking about the long-term future of our data, our culture, and ultimately, our civilisation. Currently we are preoccupied with the real-time web: Twitter, Foursquare, Facebook …all services concerned with what’s happening right here, right now. The Long Now Foundation and Tau Zero Foundation offer a much-needed sense of perspective.
As with that other great challenge of our time—the alteration of our biosphere through climate change—the first step to confronting the destruction of our collective digital knowledge must be to think in terms greater than the local and the present.
The latest Clearleft offering is Workshops for the Web. It made sense to move our workshop offerings out of the Clearleft site—where they were kind of distracting from the main message of the company—and give them their own home, just like our other events, dConstruct and UX London.
As well as the range of workshops that can be booked privately at any time, there’s a schedule of upcoming public workshops for 2010:
The next workshop, CSS3 Wizardry with Rich and Nat, promises to be packed full of cutting-edge front-end techniques. Book a place if you want to have CSS3 kung-fu injected into your brainstem.
I’m pretty pleased with how the site turned out. When I began designing it, I thought I would give it a sort of Russian constructivist feeling: the title Workshops for the Web made me think of an international workers movement. I started researching political propaganda posters, beginning with the book Revolutionary Tides.
This was when Jon was working as an intern at Clearleft. I enlisted his help in brainstorming some ideas and he came up with some great stuff—like using Soviet space-race imagery—and we played around with proof-of-concept ideas for creating diagonal backgrounds using CSS3 transforms.
But it never really came together for me. Much as I loved the Russian constructivist propaganda angle, I ditched it and started from scratch.
I scribbled down a page description diagram describing what the site needed to communicate in order of importance:
The name of the site.
A positioning statement.
The next workshop.
Other upcoming workshops.
A list of all workshops available.
A way of getting in touch.
The hierarchy for an individual workshop page looked pretty similar:
The title of the workshop.
The date of the workshop.
The location of the workshop.
The price of the workshop.
Details of the workshop.
It was clear that the page needed to quickly answer some basic questions: what? where? how much?
I started marking up the answers to those questions from top to bottom. That’s when it started to come together. Working with markup and CSS in the browser felt more productive than any of the sketching I had done in Photoshop. I started really sweating the typography …to the extent that I decided that even the logotype should be created with “live” text rather than an image.
From the start, I knew that I wanted the site to be a self-describing example of the technologies taught in the workshops. The site is built in HTML5, making good use of the new structural elements and the powerful outline algorithm. Marking up an events site with the hCalendar microformat was a no-brainer. There are hCards a-plenty too.
CSS3 nth-child selectors came in very handy and media queries are, quite simply, the bee’s knees when it comes to building a flexible site: just a few declarations allowed me to make sure the liquid layout could be optimised for different ranges of viewport size.
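For instance (a sketch: the actual class names and breakpoints on the site will differ):

```css
/* Drop the trailing margin on every third workshop listing. */
.workshop:nth-child(3n) {
    margin-right: 0;
}

/* Optimise the liquid layout for wider viewports. */
@media screen and (min-width: 50em) {
    .workshop {
        float: left;
        width: 30%;
    }
}
```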
Given the audience of the site, I could be fairly certain that Internet Explorer 6 wouldn’t be much of a hindrance. As it turns out, everything looks more or less okay even in that crappy browser. It looks different, of course, but then do websites need to look exactly the same in every browser?
Right before launch, Paul took a shot at tweaking the visual design, adding a bit more contrast and separation on the homepage with some horizontal banding. That’s a visual element that I had been subconsciously avoiding, probably because it’s already used on some of our other sites, but once it was added, it helped to emphasise the next upcoming workshop—the main purpose of the homepage.
Just because the site is live now doesn’t mean that I’ll stop working on it. I’d like to keep tweaking and evolving it. Maybe I’ll finally figure out a way of incorporating some elements of those great propaganda posters.
Google announced that it was following in the footsteps of Yahoo’s SearchMonkey in indexing microformats and RDFa to display in search results. For now, it’s a subset of microformats—hCard and hReview—on a subset of websites, including the newly microformatted Yelp. The list of approved sites will increase over time so if you’re already publishing structured contact and review information, let Google know about it.
The what now? I hear you ask. Well, if you’ve been feeling hampered by the combination of the datetime and abbr design patterns, the value class pattern offers a few alternatives.
To my mind, that’s one of the greatest strengths of the value class pattern: it doesn’t offer one alternative, it allows authors to choose how they want to mark up their content. I think that’s one of the reasons why datetime values have proven to be such a sticking point until now. Concerns about semantics and accessibility really come down to the fact that, as an author, you had very little choice in how you could mark up a datetime value.
You could either present the datetime between the opening and closing tags of whatever element you were using:
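That is, with the machine-readable string sitting right there in the visible text:

```html
<span class="dtstart">2009-06-05T20:00:00</span>
```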
…or you could put the value in the title attribute of the abbr element:
<abbr class="dtstart" title="2009-06-05T20:00:00">
Friday, June 5th at 8pm
</abbr>
Those were your only options.
But now, with the value class pattern, all of the following are possible:
<span class="dtstart">
<abbr class="value" title="2009-06-05">Friday, June 5th</abbr>
at
<abbr class="value" title="20:00">8pm</abbr>
</span>

…or:

<span class="dtstart">
<span class="value-title" title="2009-06-05T20:00:00">Friday, June 5th at 8pm</span>
</span>

…or:

<span class="dtstart">
<span class="value-title" title="2009-06-05">Friday, June 5th</span>
at
<span class="value-title" title="20:00">8pm</span>
</span>

…or, using an empty element:

<span class="dtstart">
<span class="value-title" title="2009-06-05T20:00:00"> </span>
Friday, June 5th at 8pm
</span>
Personally, I’ll probably use the first example. I like the idea of splitting up the date and time portions of a datetime value. I think there’s a big difference between putting a date string into the title attribute of an abbr element and putting a datetime string in there. In the past, when I argued that having an ISO date value in an abbreviation was semantic, accessible and internationalised, Mike Davies rightly accused me of using a strawman—the issue wasn’t about dates, it was about datetimes. That’s why I created the date design pattern page on the microformats wiki; to disambiguate it as a subset of the larger datetime design pattern.
Now, others might think that even using dates in combination with the abbr design pattern is semantically dodgy. That’s fine. They now have some other options they can use, thanks to the value-title subset of the value class pattern. Me? I don’t see myself using that. I’m especially not keen on the option to use an empty element. But I’m perfectly happy for other authors to go ahead and use that option. When it comes to writing, there are often no right or wrong answers, just personal preferences. That’s true whether it’s English, HTML, or any other language. As long as you use correct syntax and grammar, the details are up to you. You can choose semicolons or em-dashes when you’re writing English. You can choose abbr or value-title when you’re writing microformats.
The wiki page for the value class pattern doesn’t just list the options available to authors. It also explains them. That’s just as important. Head over there and read the document. I think you’ll agree that it’s an excellent example of clear, methodical writing.
The microformats wiki needs more pages like that. One of the biggest challenges facing microformats isn’t any particular technical problem; it’s trying to explain to willing HTML authors how to get up and running with microformats. Given Google’s recent announcement, there’ll probably be even more eager authors showing up, looking to sprinkle some extra semantics into their markup. We’ll be hanging out in the IRC channel, ready to answer any questions people might have, but I wish the wiki were laid out in a more self-explanatory way.
In the face of that challenge, the page for the value class pattern leads by example. Ben and Tantek have done a fantastic job. And it wouldn’t have been possible without the help and support of Bruce, James and Derek, those magnanimous giants of the accessibility community who offered help, support and data.
I’m in Seattle. Dopplr tells me that Bobbie is showing up in Seattle on the last day of my visit. I send Bobbie a direct message on Twitter. He tells me the name of the hotel he’ll be staying at.
I use Google Maps to find the exact address. All addresses on Google Maps are marked up with hCard. I press the microformats bookmarklet in my bookmarks bar to download the converted vcard into my address book. Thanks to MobileMe, my updated address book is soon in the cloud . My iPod Touch gets the updated information within moments.
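Classic hCard markup for an address looks something like this (a made-up example, not Google’s actual markup):

```html
<div class="vcard">
  <span class="fn org">Hypothetical Hotel</span>
  <div class="adr">
    <span class="street-address">123 Example Street</span>,
    <span class="locality">Seattle</span>,
    <abbr class="region" title="Washington">WA</abbr>
  </div>
</div>
```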
I go to the address. I meet Bobbie. We have coffee. We have a chat.
The World Wide Web is a beautiful piece of social software.
The XSL transformation is done by Amazon, not me; that wouldn’t be the case if I used XML-RPC.
Anyway, having successfully created a Huffduffer-Amazon bridge using machine tags, I thought I’d do a little more hacking. Instead of restricting the mashup love to Amazon, I figured that Last.fm would be the perfect place to pull in information for anything tagged with the music namespace.
Last.fm has quite a full-featured API and yes, it can output JSON. To start with, I’m using the artist.getInfo method for anything tagged with music:artist=..., music:singer=... or music:band=.... Here are some examples:
I’m pulling a summary of the artist’s bio, a list of similar artists and a picture of the artist in question. For maximum effect, view in Safari, the browser with the finest implementation of CSS3’s box-shadow property.
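The call itself is simple enough; a rough sketch with placeholder values:

```php
<?php
// Placeholders: substitute a real artist name and API key.
$artist  = 'Some Artist';
$api_key = 'YOUR_API_KEY';

// artist.getinfo, asking for JSON output.
$url = 'http://ws.audioscrobbler.com/2.0/'
     . '?method=artist.getinfo'
     . '&artist=' . urlencode($artist)
     . '&api_key=' . $api_key
     . '&format=json';
$data = json_decode(file_get_contents($url), true);

// The bits being displayed: bio summary, similar artists, image.
$summary = $data['artist']['bio']['summary'];
$similar = $data['artist']['similar']['artist'];
$image   = $data['artist']['image'];
```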
Nice as Last.fm’s API is, it’s not without its quirks. Like most APIs, the methods are divided into those that require authentication (anything of a sensitive nature) and those that don’t (publicly available information). The method user.getInfo requires authentication. Yet, every piece of information returned by that method is available on the public profile.
So when I wanted to find a Last.fm user’s profile picture—having figured out through Google’s Social Graph API when someone on Huffduffer has a Last.fm account—it made far more sense for me to use hKit to parse the microformatted public URL than to use the API method.
The Last.fm hack day took place in London yesterday. Much nerdy fun was had by all and some very cool hacks were produced.
Nigel made a neat USB-powered, Arduino-driven ambient signifier à la Availabot that lights up when one of your friends is listening to music. Matt made Songcolours, which takes your recently listened-to music, passes the songs through LyricWiki, extracts words that are colours, passes them through the Google chart API and generates a sentence of cut-up lyrics (Hannah’s was the best: love drunk home fuck good night). The winning hack, Staff Wars, is a Last.fm-powered quiz that allows people to battle for control of the office stereo—something that could prove very useful at Clearleft.
I knew I’d never be able to compete with the l33t hax0rs in attendance, so I cobbled together a very quick little hack to enhance Huffduffer. I hacked it together fairly quickly which gave me some time to hang out with Hannah in the tragically hip environs of Shoreditch. My hack has one interesting distinguishing feature: it doesn’t make use of the API. Instead, it uses two simpler technologies: microformats and RSS.
Microformats. User profiles on Last.fm are marked up with hCard. If a URL is provided, the user profile also makes use of the most powerful value of XFN: rel="me". If that URL also links back to the Last.fm profile with rel="me"—even if in a roundabout way—that reciprocal link will be picked up by Google’s Social Graph API. I’m already making use of that API on Huffduffer to display links to other profiles under the heading Elsewhere. So if someone provides a URL when they sign up to Huffduffer and they’re linking to their social network profiles, I can find out if they use Last.fm and what their username is. The URL structure of user profiles is consistent: http://www.last.fm/user/USERNAME.
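The reciprocal links look something like this (placeholder URLs):

```html
<!-- On the Last.fm profile page: -->
<a href="http://example.com/" rel="me">example.com</a>

<!-- And somewhere on example.com, linking back: -->
<a href="http://www.last.fm/user/USERNAME" rel="me">Me on Last.fm</a>
```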
RSS. Last.fm provides users with a list of recommended free MP3s. This list is also provided as RSS. More specifically, the RSS feed is a podcast. After all, a podcast is nothing more than an RSS feed that uses enclosures. The URL structure of these podcasts is consistent: http://ws.audioscrobbler.com/2.0/user/USERNAME/podcast.rss.
So if, thanks to the magic of XFN, I can figure out someone’s Last.fm username, it’s a simple matter to pull in their recommended music podcast. I’m pulling in the latest three recommended MP3s and displaying them on Huffduffer user profiles under the heading Last.fm recommends. You can see it in action on my Huffduffer profile or the profiles of any other good social citizens like Richard, Tom or Brian.
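Gluing those consistent URL structures together takes just a few lines; a sketch with a placeholder username:

```php
<?php
// Build the podcast URL from a discovered Last.fm username.
$username = 'USERNAME'; // placeholder
$feed = 'http://ws.audioscrobbler.com/2.0/user/' . urlencode($username) . '/podcast.rss';

// Grab the three most recent recommended MP3s from the feed's enclosures.
$rss = simplexml_load_file($feed);
$mp3s = array();
foreach ($rss->channel->item as $item) {
    if (isset($item->enclosure)) {
        $mp3s[] = (string) $item->enclosure['url'];
    }
    if (count($mp3s) === 3) {
        break;
    }
}
```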
This isn’t the first little Huffduffer hack I’ve built on top of the Social Graph API. If a Huffduffer user has a Flickr account, their Flickr profile picture is displayed on their Huffduffer profile. When I get some time, I need to expand this little hack to also check for Twitter profiles and grab the profile picture from there as a fallback.
None of these little enhancements are essential features but I like the idea of rewarding people on Huffduffer for their activity on other sites. Ideally I’d like to have Huffduffer’s recommendation engine being partially driven by relationships on third-party sites. So your user profile might suggest something like, You should listen to this because so-and-so huffduffed it; you know one another on Twitter, Flickr, Last.fm…
The microformats meetup in San Francisco after An Event Apart had quite a turnout. The gathering was spoiled only by Jenn getting her purse stolen. Two evenings earlier, Noel had been robbed at gunpoint. San Francisco wasn’t exactly showing its best side.
Still, the microformats meetup was a pleasant get-together. Matthew Levine pulled out his laptop and gave me a demo of the Lazy Web in action…
This is just a small subset of all the properties available in hCard so it isn’t suitable for detailed hCards. If you’re creating the markup for a contact page, for example, you’d be better off with the hCard-o-matic. But this little bookmarklet easily hits 80% of the use cases for adding hCards within body text (like in a blog post, for example).
This is a first release and there will inevitably be improvements. The ability to add XFN values would be a real boon. Still… that’s really impressive work for something that was knocked together so quickly.
If you want to use the bookmarklet (regardless of what blogging engine or CMS you use), drag this to your bookmarks bar:
Open Tech was fun. It was like a more structured version of BarCamp: the schedule was planned in advance and there was a nominal entrance fee of £5 but apart from that, it was pretty much OpenCamp. Most of the talks were twenty minutes long, grouped into hour-long thematically linked trilogies.
Things kicked off with a three way attack by Kim Plowright, Simon Wardley and Matt Webb. I particularly enjoyed Matt’s stroll down the memory lane of the birth of cybernetics. Alas, the fact that I stayed to enjoy this history lesson meant that I missed David Hayes’s introduction to Edenbee. But I did stick around for the next set of environment-related talks including a demo of the Wattson from DIY Kyoto and the always-excellent Gavin Starks of AMEE fame.
After a pub lunch spent being entertained by Ewan Spence’s thoroughly researched plan for a muppet remake of Star Wars, I made it back in time for a well-connected burst of talks from Simon, Gavin and Paul. Simon pimped OpenID. Gavin delivered a healthy dose of perspective from the h’internet. Paul ranted about the technologies depicted in his wonderful illustration entitled The Web is Agreement.
My talk at Open Tech was a reprise of my XTech presentation, Creating Portable Social Networks With Microformats although the title on the schedule was Publishing With Microformats. I figured that the Open Tech audience would be fairly advanced so I decided against my original plan of doing an introductory level talk. The social network portability angle also tied in with quite a few other talks on the day.
I shared my slot with Jeni Tennison who gave a hands-on look at RDFa at the London Gazette. The two talks complemented each other well… just like microformats and RDFa. As Jeni said, microformats are great for doing the easy stuff—the low-hanging fruit—and deliberately avoid more complex data structures: they hit 80% of the use cases with 20% of the effort. RDFa, on the other hand, can handle greater complexity but with a higher learning curve. RDFa covers the other 20% of use cases but with 80% effort. Jeni’s case study was the perfect example. Whereas I had been showing the simple patterns of user profiles and relationships on social networks (easily encoded with hCard and XFN), she was dealing with a very specific data set that required its own ontology.
I was chatting with Dan at the start of Open Tech about this relationship. We’re both pretty fed up with the technologies being set up as somehow being rivals. Personally, I’m very happy that RDFa covers the kind of data structures that microformats don’t touch. When someone comes to the microformats community with an idea for a complex data format, it’s handy to have another technology to point them to. If you’re dealing with simple, common structures that have aggregate benefit like contact details, events and reviews, microformats are the perfect fit. But if you’re dealing with more complex structures—and I’m thinking here about museum collections, libraries and laboratories—chances are that some flavour of RDF is going to be more suitable.
Jeni and I briefly discussed whether we should set up our talks as a kind of mock battle. But that kind of rivalry, even when it’s done in a jokey fashion, is unnecessary and frankly, more than a little bit dispiriting. It’s more constructive to talk about real-world use cases. On that basis, I think our Open Tech presentations hit the right note.
I put together an hCalendar schedule for Open Tech so if you’re going along, you might want to subscribe. I recommend subscribing over downloading as the schedule is likely to change. I’ll do my best to update the hCalendar document accordingly. Depending on the WiFi situation and how knackered I am after the early start from Brighton, I may try to do some liveblogging.
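For anyone unfamiliar with how an hCalendar schedule is put together, here’s a sketch of a single event using the standard vevent class names (the talk title, time and room are invented for illustration):

```html
<!-- One schedule slot marked up as an hCalendar event -->
<div class="vevent">
  <h3 class="summary">Example talk title</h3>
  <abbr class="dtstart" title="2008-07-05T10:00:00+01:00">10am</abbr>
  in <span class="location">The Main Hall</span>
</div>
```

Because the event data lives in the markup itself, a single HTML document can double as the human-readable schedule and the machine-readable subscription source.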
I enjoyed being back in Ireland. Jessica and I arrived into Dublin last Saturday but went straight from the airport to the train station so that we could spend the weekend in my hometown seeing family and friends. Said town was somewhat overwhelmed by the arrival of one of the largest cruise ships in the world.
We were back in Dublin in plenty of time for the start of this year’s XTech conference. A good time was had by the übergeeks gathered in the salubrious surroundings of a newly-opened hotel in the heart of Ireland’s capital. This was my third XTech and it had much the same feel as the previous two I’ve attended: very techy but nice and cosy. In some ways it resembles a BarCamp (but with a heftier price tag). The talks are held in fairly intimate rooms that lend themselves well to participation and discussion.
I didn’t try to attend every talk — an impossible task anyway given the triple-track nature of the schedule — but I did my damnedest to liveblog the talks I did attend:
There were a number of emergent themes around social networks and portability. There was plenty of SemWeb stuff which finally seems to be moving from the theoretical to the practical. And the importance of XMPP, first impressed upon me at the Social Graph Foo Camp, was once again made clear.
Amongst all these high-level technical talks, I gave a presentation that was ludicrously simple and simplistic: Creating Portable Social Networks with Microformats. To be honest, I could have delivered the talk in 60 seconds: Add rel="me" to these links, add rel="contact" to those links, and that’s it. If you’re interested, you can download a PDF of the presentation including notes.
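In case you’re wondering what that 60-second version looks like in markup, here’s a sketch using XFN rel values (the URLs and names are made up):

```html
<!-- rel="me" on links to your own profiles elsewhere -->
<a href="http://twitter.com/example" rel="me">me on Twitter</a>
<a href="http://flickr.com/photos/example" rel="me">me on Flickr</a>

<!-- rel="contact" (and friends) on links to people you know -->
<a href="http://example.com/jane" rel="contact">Jane</a>
<a href="http://example.com/john" rel="contact friend met">John</a>
```

That really is the whole trick: with those rel values in place, a parser can crawl outward from one profile page and reassemble your social network on another service.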
I made an attempt to record my talk using Audio Hijack. It seems to have worked okay so I’ll set about getting that audio file transcribed. The audio includes an unusual gap at around the four minute mark, just as I was hitting my stride. This was the point when Aral came into the room and very gravely told me that he needed me to come out into the corridor for an important message. I feared the worst. I was almost relieved when I was confronted by a group of geeks who proceeded to break into song. You can guess what the song was.
I’m in San Diego for Jared’s Web App Summit. It’s my first time here and I find myself quite won over by the city’s charm. It’s a shiny sparkly kind of place.
The conference kicked off with a day of workshops. I should have tried to gatecrash Luke’s or Indy’s sessions but with the weather being so nice, I bunked off with Derek, Keith and Cindy to venture across the water from Coronado to explore the city. With no plan in mind, we found our path took us to the USS Midway, now a floating museum. We spent the rest of the afternoon geeking out over planes and naval equipment.
I got my talk about Ajax design challenges out of the way yesterday. It seemed to go pretty well. It might have been a little bit too techy for some of the audience here but I’ve received some very nice comments from a lot of people. As usual, the presentation is licensed under a Creative Commons attribution license. Feel free to download the slides but the usual caveat applies: the slides don’t make all that much sense in isolation.
With that out of the way, I was able to relax and enjoy the rest of the day. The highlight for me was listening to Bill Scott talk about interaction anti-patterns. I found myself nodding vigorously in agreement with his research and recommendations. But I must join in the clamour of voices calling for Bill to put this stuff online somewhere. I would love to have a URL I could point to next time I’m arguing against adding borked behaviour to a web app.
The conference continues today. Jason Fried kicked off the day’s talks and Keith and Derek will be in the spotlight later on (it’s always convenient when Derek is on the same bill as me because I can fob off all the Ajax accessibility issues on him).
Before making the long journey back to the UK I’ve got a social event I’m looking forward to attending. There’s a microformats dinner tonight—Tantek is in town too for a CSS Working Group meetup. Come along to Gateway to India at 9520 Black Mountain Road if you’re in San Diego. We can combine a vegetarian Indian buffet with semantic geekery.
My trip to MIX08 was also my first visit to Las Vegas. I’m sure I’m not the first person to make this observation but may I just say: what an odd place!
I experienced first-hand what Dan was talking about in his presentation Learning Interaction Design from Las Vegas. In getting from A to B, for any value of A and any value of B, all routes lead through the casino floor; the smoky, smoky casino floor. If it wasn’t for the fact that I had to hunt down an Apple Store to try to deal with my broken MacBook—more on that later—I wouldn’t have stepped outside the hotel/conference venue for the duration of my stay. Also, from the perspective of only seeing The Strip, visiting Las Vegas was like Children of Men or a bizarro version of Logan’s Run.
But enough on the locale, what about the event? Well, it was certainly quite different to South by Southwest. Southby is full of geeks, MIX was full of nerds. Now I understand the difference.
I was there to hear about Internet Explorer 8. Sure enough, right after some introductory remarks from Ray Ozzie, the keynote presentation included a slot for Dean Hachamovitch to showcase new features and announce the first beta release. I then had to endure three hours of Silverlight demos but I was fortunate enough to be sitting next to PPK so I spent most of the time leaning over his laptop while he put the beta through its paces.
After the keynote, Chris Wilson gave a talk wherein he ran through all the new features. It goes without saying that the most important “feature” is that the version targeting default behaviour is now fixed: IE8 will behave as IE8 by default. I am, of course, ecstatic about this and I conveyed my happiness to Chris and anyone else who would listen.
IE8 is aiming for full CSS2.1 support. Don’t expect any CSS3 treats: Chris said that the philosophy behind choosing which standards to support was to go for the standards that are finished. That makes a lot of sense. But then this attitude is somewhat contradicted by the inclusion of some HTML5 features. Not that I’m complaining: URL hash updates (for bookmarking) and offline storage are very welcome additions for anyone doing any Ajax work.
Overall IE8 is still going to be a laggard compared to Firefox, Safari and Opera when it comes to standards but I’m very encouraged by the attitude that the team are taking. Web standards are the star by which they will steer their course. That’s good for everyone. And please remember, the version available now is very much a beta release so don’t get too discouraged by any initial breakage.
I’m less happy about the closed nature of the development process at Microsoft. Despite Molly’s superheroic efforts in encouraging more transparency, there were a number of announcements that I wish hadn’t been surprises. Anne van Kesteren outlines some issues, most of them related to Microsoft’s continued insistence on ignoring existing work in favour of reinventing the wheel. The new XDomainRequest Object is the most egregious example of ignoring existing community efforts. Anne also raises some issues with IE’s implementation of ARIA but for me personally, that’s outweighed by the sheer joy of seeing ARIA supported at all: a very, very welcome development that creates a solid baseline of support (you can start taking bets now on how long it will take to make it into a nightly build of WebKit, the last bulwark).
The new WebSlices technology is based heavily on hAtom. Fair play to Microsoft: not once do they refer to their “hSlice” set of class names as a microformat. It’s clear that they’ve been paying close attention to the microformats community, right down to the licensing: I never thought I’d hear a Microsoft keynote in which technology was released under a Creative Commons Public Domain license. Seeing as they are well aware of microformats, I asked Chris why they didn’t include native support for hCard and hCalendar. This would be a chance for Internet Explorer to actually leapfrog Firefox. Instead of copying (see the Firebug clone they’ve built for debugging), here was an opportunity to take advantage of the fact that Mozilla have dropped the ball: they promised native support for microformats in Firefox 3 but they are now reneging on that promise. Chris’s response was that the user experience would be too inconsistent. Using the tried and tested “my mom” test, Chris explained that his mom would wonder why only some events and contact details were exportable but not others. But surely that also applies to WebSlices? The number of WebSlices on the Web right now is close to zero. Microsoft are hoping to increase that number by building a WebSlice parser into their browser; if they had taken the same attitude with hCard and hCalendar, they themselves could have helped break the chicken’n’egg cycle by encouraging more microformat deployment through native browser support.
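To show just how close the borrowing is, here’s a sketch of a minimal WebSlice as I understand the format, reusing hAtom’s entry-title and entry-content class names (the id and content are invented for illustration):

```html
<!-- A minimal WebSlice: the hslice container wraps
     hAtom-style entry-title and entry-content elements -->
<div class="hslice" id="weather-slice">
  <h2 class="entry-title">Weather in Brighton</h2>
  <div class="entry-content">Sunny with a light breeze</div>
</div>
```

Swap those class names for vevent, summary and dtstart and you’d have an hCalendar event; the parsing effort involved is much the same, which is why the inconsistency argument rings a little hollow.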
Overall though, I’m very happy with the direction that Internet Explorer is taking even if, like John, I have some implementation quibbles.
Having experienced a big Microsoft event first-hand, I still don’t know whether to be optimistic or pessimistic about the company. I get the impression that there are really two Microsofts. There’s Ray Ozzie’s Microsoft. He’s a geek. He gets developers. He understands technology and users. Then there’s Steve Ballmer’s Microsoft. He’s an old-school businessman in the mold of Scrooge McDuck. If Ray Ozzie is calling the shots, then there is reason to be hopeful for the future. If the buck stops with Steve Ballmer however, Microsoft is f**ked.
A pleasant Saturday afternoon of tea and burlesque was followed by a pleasant Saturday evening of cocktails, conversation and Guitar Hero at Andy’s flatwarming party. The party went on rather late which meant that I didn’t get a very early start on Sunday. I did, however, manage to convince Ben, Patrick and Frances to stay down in Brighton instead of running off early for the last train back to London so now I’ve got some new rel="friend met colleague" values added to my blogroll.
But the highlight of the event was the latest creation of Jon Linklater-Johnson: Semantopoly. Imagine a game of Monopoly where all the pieces are Happy Webbies, all the properties are websites or technologies and the currency is friends rather than money. Twitterable remarks were flowing a-plenty:
Andy Clarke uses Cindy Li’s CSS and loses 450 friends. He has to take Safari AND Facebook offline to do it. Facebook is offline oh noes!
Ah, what larks! Nicely done, Jon. And nicely done, Tom for organising a most enjoyable Semantic Camp… even if I did miss most of it. I blame Andy’s l33t Margarita skillz.
Of course, you might decide to write your own CSS. In that case, please consider also sharing it under a similar licence. It would be nice to gather together a whole range of possible style sheets for hCalendar schedules and list them on the microformats wiki.