Tags: links


Research on evaluating technology

I’ve spent the past few months preparing a new talk for An Event Apart San Francisco (and hopefully some more AEAs after that). As always happens, I spent the whole time vacillating between thinking “this is good!” and thinking “this is awful!” I’m still bouncing between those poles. I won’t really know whether the talk is up to snuff until I actually give it to a live audience.

Over the past few years, my presentations have built upon one another. Two years ago, my talk was called Enhance! and it set the groundwork for using a layered approach to web design and development. My 2016 talk, Resilience, follows on with a process and examples for that approach (I also set myself the challenge of delivering a talk about progressive enhancement without ever using the phrase “progressive enhancement”).

My new talk goes a bit meta, but in my mind, it’s very much building on the previous talks. The talk is all about evaluating technology. I haven’t settled on a final title, but I was thinking about something obtuse, like …Evaluating Technology.

Here’s my hastily scribbled description:

We work with technology every day. And every day it seems like there’s more and more technology to understand: graphic design tools, build tools, frameworks and libraries, not to mention new HTML, CSS and JavaScript features landing in browsers. How should we best choose which technologies to invest our time in? When we decide to weigh up the technology choices that confront us, what are the best criteria for doing that? This talk will help you evaluate tools and technologies in a way that best benefits the people who use the websites that we are designing and developing. Let’s take a look at some of the hottest new web technologies like service workers and web components. Together we will dig beneath the hype to find out whether they will really change life on the web for the better.

As ever, I’ll begin and end with a long-zoom pretentious arc of history, but I’ll dive into practical stuff in the middle. That’s become a bit of a cliché for my presentations, but the formula works as a sort of microcosm of a good conference—a mixture of the inspirational and the practical, trying to keep a good balance of both.

For this new talk, the practical focus will be on some web technologies that are riding high on the hype cycle right now: service workers, web components, progressive web apps. I’ll use them as a lens for applying broader questions about how we make decisions about the technologies we embrace, and the technologies we reject.

Technology. Now there’s a big subject. It’s literally the entirety of human history. I had to be careful not to go down too many rabbit holes. I’m still not sure if I’ve succeeded, but I’ve already had to ruthlessly cull some darlings.

One of the nice things that the An Event Apart crew started doing was to provide link lists for each talk to attendees. That gives me an opportunity to touch briefly on a topic in the talk itself, but allow any interested attendees to dive deeper at their leisure.

For this talk on evaluating technology, I’ve put together this list of hyperlinks for further reading, watching, listening, and researching…

People

Papers

Presentations

Books

Someone will read this

After Responsive Field Day I had the chance to spend some extra time in Portland. I was staying with one Andy, with occasional welcome opportunities to hang out with the other Andy.

Over an artisanal, hand-crafted, free-range lunch one day, I took a moment to thank Andy B. I thanked him for a link. Links are very much his stock-in-trade, but there was one in particular that he had shared which stuck in my soul.

It started when he offered a bribe for a good link:

Paul Thompson won the bounty:

The link was to a page on Tilde Town, one of the many old-school web rings set up in the spirit of Paul Ford’s Tilde Club. The owner of this page had taken it upon himself to perform a really interesting—and surprisingly moving—experiment:

  1. Find blog posts where people have written “no one will ever read this”, and
  2. Read them aloud.

I’ve written before about how powerful the sound of a human voice can be. There was something about hearing these posts—which were written with a resigned acceptance of indifference—being given the time and respect to be read aloud. I listened to every single one, sometimes bemused, sometimes horrified, always fascinated.

You should listen to all of them too. They deserve it.

One in particular haunted me. It was written in 2008. After listening to it, I had to know more. I felt creepy and voyeuristic, but I transcribed a sentence from the audio file and pasted it into Google.

I found her blog on the old my-diary.org domain. She only wrote nine entries in total. Her last one was in November 2009.

That was six years ago. I wonder how things turned out for her. I wonder if life got better for her when she left her teenage years behind. I wonder if she ever found peace.

I hope she’s okay.

Links from a talk

I’m coming to a rest after a busy period of travelling and speaking. In the last five or six weeks I’ve been to Copenhagen, Freiburg, Prague, Portland, Seattle, and Austin.

The trip to Austin was lovely. It was so nice to be there when it wasn’t South by Southwest (the infrastructure of the whole town creaks under the sheer weight of the event). I wasn’t just there to eat tacos and drink beer in the sunshine. I was there to talk at An Event Apart.

Like I said months before the event:

Everyone in the line up is one of my heroes.

It was, as always, a great event. A personal highlight for me was getting to meet Lara Hogan for the first time. She was kind enough to sign my copy of her fantastic book. She gave an equally fantastic talk at the conference, featuring some of the most deftly-handled Q&A I’ve ever seen.

I spoke at the end of the conference (no pressure!), giving a brand new talk called Resilience—I gave a shortened version at Coldfront and Smashing Conference but this was my first chance to go all out with an hour-long talk. It was my chance to go full James Burke.

I assembled some related links for the attendees. Here they are…

Books

References

Resources

Related posts on adactio.com

Here’s a readlist of those links.

Further reading

Here’s a readlist of those links.

See also: other links tagged with “progressive enhancement” on adactio.com

Relinkification

On Jessica’s recommendation, I read a piece on the Guardian website called The eeriness of the English countryside:

Writers and artists have long been fascinated by the idea of an English eerie – ‘the skull beneath the skin of the countryside’. But for a new generation this has nothing to do with hokey supernaturalism – it’s a cultural and political response to contemporary crises and fears

I liked it a lot. One of the reasons I liked it was not just for the text of the writing, but the hypertext of the writing. Throughout the piece there are links off to other articles, books, and blogs. For me, this enriches the piece and it set me off down some rabbit holes of hyperlinks with fascinating follow-ups waiting at the other end.

Back in 2010, Scott Rosenberg wrote a series of three articles over the course of two months called In Defense of Hyperlinks:

  1. Nick Carr, hypertext and delinkification,
  2. Money changes everything, and
  3. In links we trust.

They’re all well worth reading. The whole thing was kicked off with a well-rounded debunking of Nicholas Carr’s claim that hyperlinks harm text. Instead, Rosenberg finds that hyperlinks within a text embiggen the writing …providing they’re done well:

I see links as primarily additive and creative. Even if it took me a little longer to read the text-with-links, even if I had to work a bit harder to get through it, I’d come out the other side with more meat and more juice.

Links, you see, do so much more than just whisk us from one Web page to another. They are not just textual tunnel-hops or narrative chutes-and-ladders. Links, properly used, don’t just pile one “And now this!” upon another. They tell us, “This relates to this, which relates to that.”

The difference between a piece of writing being part of the web and a piece of writing being merely on the web is something I talked about a few years back in a presentation called Paranormal Interactivity at ‘round about the 15 minute mark:

Imagine if you were to take away all the regular text and only left the hyperlinks on Wikipedia, you could still get the gist, right? Every single link there is like a wormhole to another part of this “choose your own adventure” game that we’re playing every day on the web. I love that. I love the way that Wikipedia uses links.

That ability of the humble hyperlink to join concepts together lies at the heart of Tim Berners-Lee’s World Wide Web …and Ted Nelson’s Project Xanadu, and Douglas Engelbart’s Dynamic Knowledge Environments, and Vannevar Bush’s idea of the Memex. All of those previous visions of a hyperlinked world were—in many ways—superior to the web. But the web shipped. It shipped with brittle, one-way linking, but it shipped. And now today anyone can create a connection between two ideas by linking to resources that represent those ideas. All you need is an HTML document that contains some A elements with href attributes, and a URL to act as that document’s address.

Like the one you’re accessing now.

Not only can I link to that article on the Guardian’s website, I can also pair it up with other related links, like Warren Ellis’s talk from dConstruct 2014:

Inventing the next twenty years, strategic foresight, fictional futurism and English rural magic: Warren Ellis attempts to convince you that they are all pretty much the same thing, and why it was very important that some people used to stalk around village hedgerows at night wearing iron goggles.

There is definitely the same feeling of “the eeriness of the English countryside” in Warren’s talk. If you haven’t listened to it yet, set aside some time. It is enticing and disquieting in equal measure …like many of the works linked to from the piece on the Guardian.

There’s another link I’d like to make, and it happens to be to another dConstruct speaker.

From that Guardian piece:

Yet state surveillance is no longer testified to in the landscape by giant edifices. Instead it is mostly carried out by software programs running on computers housed in ordinary-looking government buildings, its sources and effects – like all eerie phenomena – glimpsed but never confronted.

James Bridle has been confronting just that. His recent series The Nor took him on a tour of a parallel, obfuscated English countryside. He returned with three pieces of hypertext:

  1. All Cameras Are Police Cameras,
  2. Living in the Electromagnetic Spectrum, and
  3. Low Latency.

I love being able to do this. I love being able to add strands to this world-wide web of ours. Not only can I say “this idea reminds me of another idea”, but I can point to both ideas. It’s up to you whether you follow those links.

URLy warning

I’m genuinely shocked that Jake thinks that Chrome hiding URLs is a good thing. On the one hand, he says:

The URL is the share button of the web, and it does that better than any other platform. Linkability and shareability is key to the web, we must never lose that…

I absolutely agree with him there. But I very much disagree when he says:

…and these changes do not lose that.

The method he describes for getting at a URL to share is this:

clicking the origin chip or hitting ⌘-L.

Your average user is no more likely to figure out how to do that than they are to figure out how to view source (something that Chrome buried as a “developer” feature some time ago).

Cennydd recently said of URLs:

I mostly agree with him. The protocol portion of the URL is pretty pointless, and the domain name and TLD are never what I would describe as “beautiful”. No, when I talk about beautiful URLs, I mean the path that comes after the protocol, domain name, and TLD gumpf …the very bit that Chrome is looking to hide.

URLs are universal. They work in Firefox, Chrome, Safari, Internet Explorer, cURL, wget, your iPhone, Android and even written down on sticky notes. They are the one universal syntax of the web. Don’t take that for granted.

URLs are for humans. Design them for humans.

Of course your average user probably won’t even know what a URL is, and nor should they. But they know what a link is. They know that, until now, they could copy the “link” from the top of their browser and paste it into an email, or a text message, or a word processing document.

If this Chrome experiment goes forward, we can kiss all that goodbye.

The security issue that Jake outlines is that browsers need to make the domain name portion of the URL clearly visible. I hope that the smart folks working on Chrome can figure out a way to do that without castrating the browser’s ability to easily share links.

It’s a classic case of:

  1. Something must be done!
  2. This (killing URLs) is something.
  3. Something has been done.

Technically, obfuscating the URL seems to solve the security issue. But technically, decapitation seems to solve a headache.

Fragmentions

Cennydd’s latest piece in A List Apart is the beautifully written Letter to a Junior Designer.

I really like the way that Cennydd emphasises the importance of being able to explain the reasoning behind your design decisions:

If you haven’t already, sometime in your career you’ll meet an awkward sonofabitch who wants to know why every pixel is where you put it. You should be able to articulate an answer for that person—yes, for every pixel.

That reminds me of something I read fourteen(!) years ago that’s always stayed with me. In an interview in Digital Web magazine, Joshua Davis was asked “What would you say is beauty in design?” His answer:

Being able to justify every pixel.

Here’s a link to the direct quote …except that link probably won’t work for you. Not unless you’ve installed this Chrome extension.

What the hell am I talking about? Well, this is something that Kevin Marks has been working on following on from the recent W3C annotation workshop.

It’s called fragmentions and it builds on the work done by Eric and Simon. They proposed using CSS selectors as fragment identifiers. Kevin’s idea is to use the words within the text as anchor points (like an automatic Command+F):

To tell these apart from an id link, I suggest using a double hash - ## for the fragment, and then words that identify the text. For example:

http://epeus.blogspot.com/2003_02_01_archive.html##annotate+the+web

That link will work in your browser because of this script, which Kevin has added to his site. I may well add that script to this site too.
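
Kevin’s script does the real work, but the core idea is simple enough to sketch. Here’s a rough illustration of my own (not Kevin’s actual code, and deliberately simplified): if the fragment starts with a double hash, treat the rest as a plus-separated phrase and scroll to the first piece of text that contains it.

  // Rough fragmention sketch (illustration only, not Kevin's script)
  document.addEventListener('DOMContentLoaded', function () {
    var hash = window.location.hash;
    if (hash.indexOf('##') !== 0) {
      return; // an ordinary id-based fragment: let the browser handle it
    }
    var phrase = decodeURIComponent(hash.slice(2)).replace(/\+/g, ' ');
    var walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
    while (walker.nextNode()) {
      if (walker.currentNode.textContent.indexOf(phrase) !== -1) {
        walker.currentNode.parentNode.scrollIntoView(); // jump to the first match
        break;
      }
    }
  });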

Fragmentions are a nice idea and—to bring it back to Cennydd’s point—nicely explained.

When is a link not a link?

Google has a web page for its Chrome browser. This page provides information about the browser, but its primary purpose—its call-to-action, if you will—is to encourage you to download the browser. Hence the nice big blue button-like link that says “Download Chrome.”

Tech bloggy publication thingy The Next Web posted some words pointing out that, for a while there, the link wasn’t working. At all. There was no way to download Chrome from the page created for the purpose of letting you download Chrome.

Download Chrome

The problem was that the link isn’t a real link. I mean, technically it’s an A element, and it does have an href attribute …but the value of that attribute isn’t a resource (like, say, an installer for a web browser, or terms of service for downloading a browser). Instead it uses the JavaScript pseudo-protocol—meaning: not actually a protocol—to point to void(0).

<a class="button eula-download-button" data-g-event="cta" data-g-label="download-chrome" href="javascript:void(0)">Download Chrome</a>

So when there was a problem with the JavaScript, the link stopped working:

Uncaught TypeError: Cannot read property 'Installer' of undefined

HTML has a very fault-tolerant way of handling errors: if it sees an element or attribute it doesn’t understand, it just ignores it—it doesn’t break the page, it just moves on to the next element. Likewise with CSS. Unknown selectors, properties, or values are simply ignored. Not so with JavaScript. A syntax error stops execution of the script. That’s actually quite handy when you’re trying to debug your code, but not so handy when it’s out on the web.

Given the brittleness of JavaScript’s error-handling, it seems unwise to entrust the core functionality of your page/app/site/whatever to the most fragile part of the front-end stack …especially when that same functionality is provided by a native HTML element.

I don’t want to pick on Google in particular here—there are far too many other sites exhibiting the same kind of over-engineering:

<a href="javascript:void(0)">

<a href="#" onclick="...">

<span class="button" onclick="...">

<div class="link" onclick="...">

By all means add all the JavaScript whizzbangery to your site that you want. But please make sure you’re adding it on a solid base of working markup. Progressive enhancement is your friend. Just like any good friend, it will help you out when unexpected bad things happen.
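
Here’s a hedged sketch of that call-to-action done robustly (the href and the fancyInstaller object are made up for illustration). The link points at a real resource, so it keeps working even if the script never arrives; the JavaScript only takes over when its enhancement is actually available.

  <!-- The href is a real destination, so the link works without any JavaScript -->
  <a id="download" class="button" href="/downloads/chrome-installer.exe">Download Chrome</a>

  <script>
    var link = document.getElementById('download');
    // Only hijack the click if the fancier in-page flow is actually available
    if (link && window.fancyInstaller) {
      link.addEventListener('click', function (event) {
        event.preventDefault();
        window.fancyInstaller.start(); // hypothetical enhanced install flow
      });
    }
  </script>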

My links, my links (my lovely lady links)

Thank you for reading my journal here at adactio.com. I appreciate your kind attention.

I feel I should point out that if you’re only reading my journal (or “blog” or “weblog” or whatever the kids call it) then you’re missing out on some good stuff over in the links section.

Just so you know, there are multiple RSS feeds you can subscribe to:

Now it might be that you’re already subscribed to an RSS feed of my links through Delicious. Whenever I post a link to my own site, it automatically gets posted to Delicious too.

Or at least it did.

Despite the assurances from the new overlords of Delicious, the API appears to be kaput. That means my links and my Delicious profile are now out of sync. The canonical source for my links is right here on my own site so if you’re currently subscribed to my Delicious RSS feed, I recommend that you update your RSS reader to point at the RSS feed for my links instead.

By the way, if you don’t want to subscribe to the firehose of all my links, you can subscribe to a specific tag instead. For example, here’s everything tagged with “futurefriendly”:

/links/tags/futurefriendly

And here’s the corresponding RSS feed:

/links/tags/futurefriendly/rss

So feel free to explore the links section and do some URL hacking.

All Our Yesterdays: the links

If you were at An Event Apart in Boston and you want to follow up on some of the things I mentioned in my talk, here are some links:

Here are some related posts of my own:

More recently, Nora Young interviewed Jason Scott on online video and digital heritage.

Full Interview: Jason Scott on online video and digital heritage | Spark | CBC Radio on Huffduffer

Jared Spool: The Secret Lives of Links

The final speaker of the first day of An Event Apart in Boston is Jared Spool. Now, when Jared gives a talk …well, you really have to be there. So I don’t know how well liveblogging is going to work but here goes anyway.

The talk is called The Secret Lives of Links. He starts by talking about one of the pre-eminent young scientists in the USA: Lisa Simpson. One day, she lost a tooth, put it in a bowl and when she later examined it under a microscope, she discovered a civilisation going about its business, all the citizens with their secret lives.

The web is like that.

Right before the threatened government shutdown, Jared was looking at news sites and how they were updating their links. Jared suggests that CNN redesign its site to simply have this list of links:

  1. The most important story.
  2. The second most important story.
  3. The third most important story.
  4. An unimportant, yet entertaining story.
  5. The Charlie Sheen story.

But of course it doesn’t work like that. The content of the links tells the importance. Links secretly live to drive the user to their content.

Compare the old CNN design to the current one. The visual design is different but the underlying essence is the same. The links work the same way.

All the news sites were reporting the imminent government shutdown with links that had different text but were all doing the same thing.

Jared has been working on the web since 1995. That whole time, he’s been watching users use websites. The pattern he has seen is that the content speaks to the user through the links. Everything hinges on the links. They provide the scent of information.

This goes back to a theory at Xerox PARC: if you modelled user behaviour when searching for information, it’s very much like a fox sniffing a trail. The users are informavores.

We can see this in educational websites. The designs may change but links are the constant.

http://xkcd.com/773/

We’ve all felt the pain of battling the site owner who wants to prioritise content that the users aren’t that interested in.

The Walgreens site is an interesting example. One fifth of the visitors follow the “photo” link. 16% go to search. The third most important link is about refilling prescriptions. The fourth is the pharmacy link. The fifth most-used link is for finding the physical stores. Those five links add up to 59% of the total traffic …but those links take up just 3.8% of the page.

This violates Fitts’s Law:

The speed with which a user can acquire a target is proportional to the size of the target and inversely proportional to the distance to the target.
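
For reference, the usual Shannon formulation of Fitts’s Law expresses that trade-off directly, where MT is the time to acquire the target, D the distance to it, W its width, and a and b are empirically fitted constants:

  MT = a + b \log_2\left(\frac{D}{W} + 1\right)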

Basically, the bigger and closer, the easier to hit. The Walgreens site violates that. Now, it would look ugly if the “photo” link was one fifth of the whole page, but the point remains: there’s a lot of stuff being foisted on the user by the business.

Another example of Fitts’s Law is those annoying giant interstitial ads that have tiny “close” links.

Deliver users to their desired objective. Give them links that communicate scent in a meaningful way. Make the real estate reflect the user’s desires.

Let’s go back to an educational website: Ohio State. People come to websites for all sorts of reasons. Most people don’t go to a website just to see how it looks (except for us). People go to the Ohio State website to get information about grades and schedules. The text of these links is made up of trigger words: they trigger an action from the user. When done correctly, trigger words lead the user to their desired goal.

It’s hard to know when your information scent is good, but it’s easy to know when your information scent is bad. User behaviour will let you know: using the back button, pogo-sticking, and using search.

Jared has seen the same patterns across hundreds of sites that he’s watched people using. Comparing all the clickstreams that succeeded with all the clickstreams that failed, there has been a consistent 58% failure rate for 15 years. That’s quite shocking.

One pattern that emerges in the failed clickstreams is the presence of the back button. If a user hits the back button, the failure rate of those clickstreams rises to above 80%. If a user hits the back button twice, the failure rate rises to 98%.

The back button is the button of doom.

The user clicks the back button when they run out of scent, just like a fox circling back. But foxes succeed ‘cause rabbits are stupid and they go back to where they live and eat, so the fox can go back there and wait. Users hit the back button hoping that the page will somehow have changed when they get back.

Pay attention to the back button. The user is telling you they’ve lost the scent.

Another behaviour is pogo-sticking, hopping back and forward from a “gallery” page with a list of links to the linked pages. Pogo-sticking results in a failure rate of 89%. There’s a myth with e-commerce sites that users want to pogo-stick between product pages to compare products, but it’s not true: the more a user pogo-sticks, the less likely they are to find what they want and make a purchase.

Users scan a page looking for trigger words. If they find a trigger word, they click on it but if they don’t find it, they go to search. That’s the way it works on 99% of sites, although Amazon is an exception. That’s because Amazon has done a great job of training users to know that absolutely nothing on the home page is of any use.

Some sites try to imitate Google and just have a search box. Don’t do that.

A more accurate name for the search box would be B.Y.O.L.: Bring Your Own Link. What do people type into this box: trigger words!

Pro tip: your search logs are completely filled with trigger words. Have you looked there lately? Your users are telling you what your trigger words should be. If you’re tracking where searches come from, you even know on what pages you should be putting those trigger words.

The key thing to understand is that people don’t want to search. There’s a myth that some people prefer to search. It’s the design of the site that forces them to search. The failure rate for search is 70%.

Jared imagines an experiment called the 7-11 milk experiment. Imagine that someone has run out of milk. We take them to the nearest 7-11. We give them the cash to buy milk. There should be a 100% milk-purchasing result.

That’s what Jared does with websites. He gives people the cash to buy a product, brings them to the website and asks them to purchase the product. Ideally you should see a 100% spending rate. But the best-performing site—The Gap—got a 66% spending rate. The worst site got 6%.

The top variable that contributed to this pattern was the ratio of pages viewed to purchases made. Purchases were made at Gap.com in 11.9 pages. On the worst performers, the ratio was 51 pages per purchase. And you can guess what patterns they saw in the worst performers: back button usage, pogo-sticking and search.

Give users information they want. Pages that we would describe as “cluttered” don’t appear that way to a user if the content is what the user wants. Clutter is a relative term based on how much you are interested in the content.

It’s hard to show you good examples of information scent because you’re not the user looking for something specific. Good design is invisible. You don’t notice air conditioning when it’s set just right, only when it’s too hot or too cold. We don’t notice good design.

Links secretly live to look good …while still looking like links. There was a time when the prevailing belief was that links are supposed to be blue and underlined. We couldn’t have made a worse choice. Who decided that? Not designers. Astrophysicists at CERN decided. As it turns out, blue is the hardest colour to perceive. Men start to lose the ability to perceive blue at 40. Women start to lose the ability at 55 …because they’re better. Underlines change the geometry of a word, slowing down reading speed.

Thankfully we’ve moved on and we can have “links of colour.” But sometimes we take it far, like the LA Times, where it’s hard to figure out what is and isn’t a link. Users have to wave their mouse around on the page hoping that the browser will give them the finger.

Have a consistent vocabulary. Try to make it clear which links lead to a different page and which links perform an action on the current page.

We confuse users with things that look like links, but aren’t.

Links secretly live to do what the user expects.

Place your links wisely. Don’t put links to related articles in the middle of an article that someone is reading.

Don’t use mystery meat navigation. Users don’t move their mouse until they know what they’re going to click on so don’t hide links behind a mouseover: by the time those links are revealed, it’s too late: users have already made a decision on what they’re going to click. Flyout menus are the worst.

Some of Jared’s favourite links are “Stuff our lawyers made us put here”, “Fewer choices” and “Everything else.”

In summary, this is what links secretly want to do:

  • Deliver users to their desired objective.
  • Emit the right scent.
  • Look good, while still looking like a link.
  • Do what the user expects.

Home-grown and Delicious

I’ve been using Delicious since 2005—back when it was del.icio.us. I have over 2,000 bookmarks stored there. I moved to Magnolia for a while but we all know how that ended.

Back then I wrote:

Really, I should be keeping my links here on adactio.com, maybe pinging Delicious or some other social bookmarking site as a back-up.

Recently Delicious updated its bookmarklet-conjured interface, not for the better. I thought that I could get used to the changes, but I found them getting more annoying over time. Once again, I began to toy with the idea of self-hosting my bookmarks. I even exported all my data into a big XML file.

The very next day, some of Yahoo’s shit hit the web’s fan. Delicious, it was revealed, was to be sunsetted. As someone who doesn’t randomly choose to use meteorological phenomena as verbs, I didn’t know what that meant, but it didn’t sound good.

As the twittersphere erupted in anger and indignation, I was able to share my recently-acquired knowledge:

curl https://{your username}:{your password}@api.del.icio.us/v1/posts/all to get an XML file of your Delicious bookmarks.

A lot of people immediately migrated to Pinboard, which looks like an excellent service (and happens to be the work of Maciej Ceglowski, one of the best bloggers ever to put pixels to screen).

After all that, it turns out that “sunsetting” doesn’t mean “shooting in the head”, it means something more like “flogging off”, as clarified on the Delicious blog. But the damage had been done and, anyway, I had already made up my mind to bring my bookmarks in-house, so I began a fun weekend of hacking.

Setting up a new section of the site for links and importing my Delicious bookmarks was pretty straightforward. Creating a bookmarklet was pretty easy too—I already had some experience of that with Huffduffer.

So now I’ll do my bookmarking right here on my own site. All’s well that ends well, right?

Well, not quite. Dom sounded a note of concern:

sigh. There goes the one thing I actually used delicious for, the social network. :(

Paul also pointed to the social aspect as the reason why he’s sticking with Delicious:

Personally, while I’ve always valued the site for its ability to store stuff, what’s always made Delicious most useful to me is its network pages in general, and mine in particular.

But it’s possible to have your Delicious cake and eat it at home. The Delicious API makes it quite easy to post links so I’ve added that into my own bookmarking code. Whenever I post a link here, it will also show up on my Delicious account. If you’re subscribed to my Delicious links, you should notice no change whatsoever.
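
Behind the scenes, that syndication boils down to one extra call to the same v1 API used for the export above: something along these lines (the parameter names come from the posts/add endpoint; the values here are placeholders, not working credentials):

  curl "https://{your username}:{your password}@api.del.icio.us/v1/posts/add?url={bookmarked url}&description={title}&tags={space-separated tags}"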

This is exactly what Steven Pemberton was talking about when I liveblogged his XTech talk two years ago. Another Stephen, the good Mr. Hay, summed up the absurdity of the usual situation:

For a while we’ve posted our data all over the internet on all types of services. These services provide APIs so we can access the data we put into them, so that we can do things with that data. Read that again.

Now I’m hosting the canonical copies of my bookmarks, much like Tantek hosts the canonical copies of his tweets and syndicates them out to Twitter. Delicious gets to have my links as well, and I get to use Delicious as a tool for interacting with my data …only now I’m not limited to just what Delicious can offer me.

Once I had my new links section up and running, I started playing around with the Embedly API (I recently added the excellent oEmbed format to Huffduffer and I was impressed with its power). Whenever I bookmark a page with oEmbed support, I can pull content directly into my site. Take a look at the links I’ve tagged with “sci-fi” to see some examples of embedded Vimeo and Flickr content.
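
The oEmbed flow itself is pleasingly simple. A minimal sketch, assuming a Vimeo URL has been bookmarked (the video address and the .link-embed element are invented for illustration; Embedly wraps many providers behind a single endpoint of its own): ask the provider’s oEmbed endpoint about the URL and drop the returned html snippet into the page.

  // Minimal oEmbed sketch (illustration only)
  var bookmarked = 'https://vimeo.com/1084537'; // hypothetical bookmarked video
  fetch('https://vimeo.com/api/oembed.json?url=' + encodeURIComponent(bookmarked))
    .then(function (response) { return response.json(); })
    .then(function (oembed) {
      // oembed.html holds ready-made markup for the embedded player
      document.querySelector('.link-embed').innerHTML = oembed.html;
    });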

I definitely prefer this self-hosting-with-syndication way of doing things. I can use a service like Delicious without worrying about it going tits-up and taking all my data with it. The real challenge is going to be figuring out a way of applying that model to Twitter and Flickr. I’m curious to see which milestone I’ll hit first: 10,000 tweets or 10,000 photos. Either way, that’s a lot of my content on somebody else’s servers.

Revving up

I was away in Berlin for a few days, delivering a workshop to the good people at Aperto. I had a good time, made even better by some excellent Spring weather and the opportunity to meet up with Anthony and Colin while I was there.

I came home to find that, in my absence, rev="canonical" usage has gone stratospheric. First off, there are the personal sites like CollyLogic and Bokardo. Then there are the bigger fish:

Excellent! I’d just like to add one piece of advice to anyone implementing or thinking of implementing rev="canonical": if you are visibly linking to the short url of the current page, please remember to use rev="canonical" on that A element as well as on any LINK element you’ve put in the HEAD of your document. Likewise, for the coders out there, if you are thinking of implementing a rev="canonical" parser—and let’s face it, that’s a nice piece of low-hanging fruit to hack together—please remember to also check for rev attributes on A elements as well as on LINK elements. If anything, I would prioritise human-visible claims of canonicity over invisible metacrap.

Actually, there’s a whole bunch of nice metacrapital things you can do with your visible hyperlinks. If you link to an RSS feed in the BODY of your document, use the same rel values that you would use if you linked to the feed from a LINK element in the HEAD. If you link to an MP3 file, use the type attribute to specify the right mime-type (audio/mpeg). The same goes for linking to Word documents, PDFs and any other documents that aren’t served up with a mime-type of text/html. So, for example, here on my site, when I link to the RSS feed from the sidebar, I’m using type and rel attributes: href="/journal/rss" rel="alternate" type="application/rss+xml". I’m also quite partial to the hreflang attribute but I don’t get the chance to use that very often—this post being an exception.

The rev="canonical" convention makes a nice addition to the stable of nice semantic richness that can be added to particular flavours of hyperlinks. But it isn’t without its critics. The main thrust of the argument against this usage is that the rev attribute currently doesn’t appear in the HTML5 spec. I’ve even seen people use the past tense to refer to an as-yet unfinished specification: the rev attribute was taken out of the HTML5 spec.

As is so often the case with HTML5, the entire justification for dropping rev seems to be based on a decision made by one person. To be fair, the decision was based on available data from 2005. In light of recent activity and the sheer number of documents that are now using rev="canonical"—Flickr alone accounts for millions—I would hope that the HTML5 community will have the good sense to re-evaluate that decision. The document outlining the design principles of HTML5 states:

When a practice is already widespread among authors, consider adopting it rather than forbidding it or inventing something new.

The unbelievable speed of adoption of rev="canonical" shows that it fulfils a real need. If the HTML5 community ignore this development, not only would they not be paving a cowpath, they would be refusing to even acknowledge that a well-trodden cowpath exists.

The argument against rev seems to be that it can be confusing and could result in people using it incorrectly. By that argument, new elements like header and footer should be kept out of any future specification for the same reason. I’ve already come across confusion on the part of authors who thought that these new elements could only be used once per document. Fortunately, the spec explains their meaning.

The whole point of having a spec is to explain the meaning of elements and attributes, be it for authors or user-agents. Without a spec to explain what they mean, elements like P and A don’t make any intuitive sense. It’s no different for attributes like href or rev. To say that rev isn’t a good attribute because it requires you to read the spec is like saying that in order to write English, you need to understand the language. It’s neither a good nor bad thing, it’s just a statement of the bleedin’ obvious.

Now go grab yourself the very handy bookmarklet that Simon has written for auto-discovering short urls.

Do the right semantic thing

Jason Kottke wrote about a new site on the block called Do The Right Thing:

The site works on a modified Digg model. If you see a story you like, you click a button to declare your interest in it. But then you also rate the social impact of the subject of the story, either positive or negative.

As soon as I read this, I immediately thought of vote-links. I wrote about vote-links before in an article for 24 Ways.

Do The Right Thing is already linking to other sites with “impact” ratings shown next to each link. Depending on how people have voted for the social impact of the linked resource, this rating is either positive or negative. With the addition of rev="vote-for" or rev="vote-against" the community judgement could be explicitly encoded in the links.
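
Something along these lines, using the values from the VoteLinks microformat (the URLs and stories here are made up):

  <a rev="vote-for" href="http://example.com/community-solar-story">Town co-op switches to solar</a>
  <a rev="vote-against" href="http://example.com/oil-spill-story">Another pipeline leak</a>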

I signed up for Do The Right Thing so that I could use the members-only feedback form to suggest this addition. Alas, the overly clever feedback form couldn’t be submitted in Camino, my current browser of choice.

Update: The feedback form has been fixed. Not only that but the guy doing the fixing turns out to be Jarkko Laine, who I once had dinner with in Copenhagen. Small world.

Spoken

The deed is done. I had the pre-lunchtime slot at Reboot to speak about a very simple subject: the hyperlink.

It was fun. People seemed to enjoy it and there were some great questions and comments afterwards: it was humbling and gratifying to have Håkon Wium Lie and Jean-Francois Groff respond to my words.

Unlike any previous presentations I’ve done, I had written out everything I wanted to say word for word. I began by describing this as a story, a manifesto, but mostly a love letter. For once, I was going to read a pre-prepared speech. I still had slides but they were very minimal.

I ended up using two laptops. One iBook, controlled from my phone using Salling Clicker, was displaying the slides done in Keynote. I used the other iBook as a teleprompter: I wanted large sized text continually scrolling as I spoke.

I looked into some autocue software for the Mac but rather than fork out the cash for one of them, I wrote my own little app using XHTML, CSS and JavaScript. I bashed out a quick’n’dirty first version pretty quickly. I spent most of the flight to Copenhagen refining the JavaScript to make it reasonably nice. I’ll post the code up somewhere, probably over on the DOM Scripting site in case anyone else needs a browser-based teleprompter.
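
The core of a browser-based teleprompter is tiny. This isn’t the script I wrote, just a bare-bones sketch of the idea: nudge the page downwards on every tick, with keys to pause and to adjust the pace.

  // Bare-bones teleprompter sketch (illustration only, not the actual script)
  var speed = 1;        // pixels per tick
  var paused = false;
  document.addEventListener('keydown', function (event) {
    if (event.key === ' ') { paused = !paused; event.preventDefault(); }      // space: pause/resume
    if (event.key === 'ArrowUp') { speed += 0.5; event.preventDefault(); }    // speed up
    if (event.key === 'ArrowDown') { speed = Math.max(0.5, speed - 0.5); event.preventDefault(); } // slow down
  });
  setInterval(function () {
    if (!paused) { window.scrollBy(0, speed); }
  }, 50);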

If you’d like to read a regular, non-scrolling version of my love letter, I’ve posted In Praise of the Hyperlink in the articles section.

Copenhagen

I’ve been seeing the inside of a lot of airports lately. Right after getting back from XTech in Amsterdam, I flew up to Manchester to deliver a one-day workshop on Ajax.

It was my first visit to the mighty Mancunian metropolis and a very pleasant visit it was, especially given the opportunity to go drinking with Patrick Lauke, James “Brothercake” Edwards, and Chris Mills in a bar that was decked out like a sci-fi version of the Hard Rock Café from a parallel grungy dimension.

Tomorrow I will once again be doing the airport shuffle. This time the airport is Stansted and the destination is Copenhagen, the setting for the eighth iteration of the Reboot conference. I’ve never been to Denmark, let alone Reboot, before. I’m really looking forward to it.

I will be speaking but for once it won’t be a code-filled techy presentation. Instead, I plan to deliver the most pretentious talk ever devised: In Praise of the Hyperlink.

I also managed to solve the mystery of the missing email and figured out that the person doing the pre-Reboot podcast was Nicole Simon. We had a little chat over Skype and you can listen to the conversation if you want to get a taste of what I’ll be talking about.

If you’re going to Reboot, I’ll see you there. If not, expect the usual cascade of Flickr pics and liveblogging.