Tags: rss



Links, tags, and feeds

A little while back, I switched from using Chrome as my day-to-day browser to using Firefox. I could feel myself getting a bit too comfortable with one particular browser, and that’s not good. I reckon it’s good to shake things up a little every now and then. Besides, there really isn’t that much difference once you’ve transferred over bookmarks and cookies.

Unfortunately I’m being bitten by this little bug in Firefox. It causes some of my bookmarklets to fail on certain sites with strict Content Security Policies (and CSPs shouldn’t affect bookmarklets). I might have to switch back to Chrome because of this.

I use bookmarklets throughout the day. There’s the Huffduffer bookmarklet, of course, for whenever I come across a podcast episode or other piece of audio that I want to listen to later. But there’s also my own home-rolled bookmarklet for posting links to my site. It doesn’t do anything clever—it grabs the title and URL of the currently open page and pre-populates a form in a new window, leaving me to add a short description and some tags.
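
A bookmarklet like that is essentially a one-liner. Something along these lines (the admin URL is invented for illustration):

javascript:window.open('https://adactio.com/admin/newlink?url='+encodeURIComponent(location.href)+'&title='+encodeURIComponent(document.title));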

If you’re reading this, then you’re familiar with the “journal” section of adactio.com, but the “links” section is where I post the most. Here, for example, are all the links I posted yesterday. It varies from day to day, but there’s generally a handful.

Should you wish to keep track of everything I’m linking to, there’s a twitterbot you can follow called @adactioLinks. It uses a simple IFTTT recipe to poll my RSS feed of links and send out a tweet whenever there’s a new entry.

Or you can drink straight from the source and subscribe to the RSS feed itself, if you’re still rocking it old-school. But if RSS is your bag, then you might appreciate a way to filter those links…

All my links are tagged. Heavily. This is because all my links are “notes to future self”, and all my future self has to do is ask “what would past me have tagged that link with?” when I’m trying to find something I previously linked to. I end up using my site’s URLs as an interface:

At the front-end gatherings at Clearleft, I usually wrap up with a quick tour of whatever I’ve added that week to:

Well, each one of those tags also has a corresponding RSS feed:

…and so on.

That means you can subscribe to just the links tagged with something you’re interested in. Here’s the full list of tags if you’re interested in seeing the inside of my head.

This also works for my journal entries. If you’re only interested in my blog posts about frontend development, you might want to subscribe to:

Here are all the tags from my journal.

You can even mix them up. For everything I’ve tagged with “typography”—whether it’s links, journal entries, or articles—the URL is:

The corresponding RSS feed is:

You get the idea. Basically, if something on my site is a list of items, chances are there’s a corresponding RSS feed. Sometimes there might even be a JSON feed. Hack some URLs to see.

Meanwhile, I’ll be linking, linking, linking…

Posting to my site

I was idly thinking about the different ways I can post to adactio.com. I decided to count the ways.

Admin interface

This is the classic CMS approach. In my case the CMS is a crufty hand-rolled affair using PHP and MySQL that I wrote years ago. I log in to an admin interface and fill in a form, putting the text of my posts into a textarea. In truth, I usually write in a desktop text editor first, and then paste that into the textarea. That’s what I’m doing now—copying and pasting Markdown from the Typed app.

Directly from my site

If I’m logged in, I get a stripped down posting interface in the notes section of my site.

Notes posting interface


Bookmarklet

This is how I post links. When I’m at a URL I want to bookmark, I hit the “Bookmark it” bookmarklet in my browser’s bookmarks bar. That pops open a version of the admin interface tailored specifically for links. I really, really like bookmarklets. The one big downside is that they don’t work on mobile.

Text message

This is something I knocked together at Indie Web Camp Brighton 2015 using the Twilio API. It’s handy for posting notes if I’m travelling somewhere and data is at a premium. But I don’t use it that often.
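
Conceptually it’s just a tiny webhook. A hypothetical sketch of the idea (the phone number and the post_note() helper are made up; Twilio does POST incoming messages with From and Body parameters):

<?php
// Only accept texts from my own phone (example number).
if ($_POST['From'] === '+447700900123') {
    post_note(trim($_POST['Body'])); // hypothetical helper: saves the note
}
// Respond with empty TwiML so Twilio doesn't send a reply text.
header('Content-Type: text/xml');
echo '<?xml version="1.0" encoding="UTF-8"?><Response></Response>';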


Instagram

Thanks to Aaron’s OwnYourGram service—and the fact that my site has a micropub endpoint—I can post images from Instagram to my site. This used to happen instantaneously but Instagram changed their API rules for the worse. Between that and their shitty “algorithmic” timeline, I find myself using the service less and less. At this point I’m only on there for the doggos.


Swarm

Like OwnYourGram, Aaron’s OwnYourSwarm allows me to post check-ins and photos from the Swarm app to my site. Again, micropub makes it all possible.

OwnYourGram and OwnYourSwarm are very similar and could probably be abstracted into a generic service for posting from third-party apps to micropub endpoints. I’d quite like to post my check-ins on Untappd to my site.
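
Under the hood, all of these services end up making the same kind of request: an authenticated POST to the micropub endpoint. Roughly like this (the token and the endpoint path are placeholders):

POST /micropub HTTP/1.1
Host: adactio.com
Authorization: Bearer ACCESS_TOKEN
Content-Type: application/x-www-form-urlencoded

h=entry&content=Hello+world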

Other people’s admin interfaces

Thanks to rel="me" and IndieAuth, I can log into other people’s posting interfaces using my own website as the log-in, and post to my micropub endpoint, like this. Quill is a good example of this. I don’t use it that much, but I really should—the editor interface is quite Medium-like in its design.

Anyway, those are the different ways I can update my website that I can think of right now.


In terms of output, I’ve got a few different ways of syndicating what I post here:

Just so you know, if you comment on one of my posts on Facebook, I probably won’t see it. But if you reply to a copy of one of my posts on Twitter or Instagram, it will show up over here on adactio.com thanks to the magic of Brid.gy and webmention.

AMPed up

Apple has Apple News. Facebook has Instant Articles. Now Google has AMP: Accelerated Mobile Pages.

The big players sure are going to a lot of effort to reinvent RSS.

That may sound like a flippant remark, but it’s not too far from the truth. In the case of Apple News, its current incarnation appears to be quite literally an RSS reader, at least until the unveiling of the forthcoming Apple News Format.

Google’s AMP project looks a little bit different to the offerings from Facebook and Apple. Rather than creating a proprietary format from scratch, it mandates a subset of HTML …with some proprietary elements thrown in (or, to use the more diplomatic parlance of the extensible web, custom elements).

The idea is that alongside the regular HTML version of your document, you provide a corresponding AMP HTML version. Because the AMP HTML version will be leaner and meaner, user agents can then grab the AMP HTML version and present that to the end user for a faster browsing experience.

So if an RSS feed is an alternate representation of a homepage or a listing of articles, then an AMP document is an alternate representation of a single article.

Now, my own personal take on providing alternate representations of documents is “Sure. Why not?” Here on adactio.com I provide RSS feeds. On The Session I provide RSS, JSON, and XML. And on Huffduffer I provide RSS, Atom, JSON, and XSPF, adding:

If you would like to see another format supported, share your idea.

Also, each individual item on Huffduffer has a corresponding oEmbed version (and, in theory, an RDF version)—an alternate representation of that item …in principle, not that different from AMP. The big difference with AMP is that it’s using HTML (of sorts) for its format.

All of this sounds pretty reasonable: provide an alternate representation of your canonical HTML pages so that user-agents (Twitter, Google, browsers) can render a faster-loading version …much like an RSS reader.

So should you start providing AMP versions of your pages? My initial reaction is “Sure. Why not?”

The AMP Project website comes with a list of frequently asked questions which, of course, nobody has asked. My own list of invented frequently asked questions might look a little different.

Will this kill advertising?

We live in hope.

Alas, AMP pages will still be able to carry advertising, but in a restricted form. No more scripts that track your movement across the web …unless the script is from an authorised provider, like say, Google.

But it looks like the worst performance offenders won’t be able to get their grubby little scripts into AMP pages. This is a good thing.

Won’t this kill journalism?

Of all the horrid myths currently in circulation, the two that piss me off the most are:

  1. Journalism requires advertising to survive.
  2. Advertising requires invasive JavaScript.

Put the two together and you get the gist of most of the chicken-littling articles currently in circulation: “Journalism requires invasive JavaScript to survive.”

I could argue against the first claim, but let’s leave that for another day. Let’s suppose for now that, sure, journalism requires advertising to survive. Fine.

It’s that second point that is fundamentally wrong. The idea that the current state of advertising is the only way of advertising is incredibly short-sighted and misguided. Invasive JavaScript is not a requirement for showing me an ad. Setting a cookie is not a requirement for showing me an ad. Knowing where I live, who my friends are, what my income level is, and where I’ve been on the web …none of these are requirements for showing me an ad.

It is entirely possible to advertise to me and treat me with respect at the same time. The Deck already does this.

And you know what? Ad networks had their chance. They had their chance to treat us with respect with the Do Not Track initiative. We asked them to respect our wishes. They told us to get screwed.

Now those same ad providers are crying because we’re installing ad blockers. They can get screwed.


It is entirely possible to advertise within AMP pages …just not using blocking JavaScript.

For a nicely nuanced take on what AMP could mean for journalism, see Joshua Benton’s article on Nieman Lab—Get AMP’d: Here’s what publishers need to know about Google’s new plan to speed up your website.

Why not just make faster web pages?

Excellent question!

For a site like adactio.com, the difference between the regular HTML version of an article and the corresponding AMP version of the same article is pretty small. It’s a shame that I can’t just say “Hey, the current version of the article is the AMP version”, but that would require that I only use a subset of HTML and that I add some required guff to my page (including an unnecessary JavaScript file).

But for most of the news sites out there, the difference between their regular HTML pages and the corresponding AMP versions will be pretty significant. That’s because the regular HTML versions are bloated with third-party scripts, oversized assets, and cruft around the actual content.

Now it is in theory possible for these news sites to get rid of all those things, and I sincerely hope that they will. But that’s a big political struggle. I am rooting for developers—like the good folks at VOX—who have to battle against bosses who honestly think that journalism requires invasive JavaScript. Best of luck.

Along comes Google saying “If you want to play in our sandbox, you’re going to have to abide by our rules.” Those rules include performance best practices (for the most part—I take issue with some of the requirements, and I’ll go into that in more detail in a moment).

Now when the boss says “Slap a three megabyte JavaScript library on it so we can show a carousel”, the developers can only respond with “Google says No.”

When the boss says “Slap a ton of third-party trackers on it so we can monetise those eyeballs”, the developers can only respond with “Google says No.”

Google have used their influence like this before and it has brought them accusations of monopolistic abuse. Some people got very upset when they began labelling (and later ranking) mobile-friendly pages. Personally, I’ve got no issue with that.

In this particular case, Google aren’t mandating what you can and can’t do on your regular HTML pages; only what you can and can’t do on the corresponding AMP page.

Which brings up another question…

Will the AMP web kill the open web?

If we all start creating AMP versions of our pages, and those pages are faster than our regular HTML versions, won’t everyone just see the AMP versions without ever seeing the “full” versions?

Tim articulates a legitimate concern:

This promise of improved distribution for pages using AMP HTML shifts the incentive. AMP isn’t encouraging better performance on the web; AMP is encouraging the use of their specific tool to build a version of a web page. It doesn’t feel like something helping the open web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the web.

That troubles me. Using a very specific tool to build a tailored version of my page in order to “reach everyone” doesn’t fit any definition of the “open web” that I’ve ever heard.

Fair point. But I also remember that a lot of people were upset by RSS. They didn’t like that users could go for months at a time without visiting the actual website, and yet they were reading every article. They were reading every article in non-browser user agents in a format that wasn’t HTML. On paper that sounds like the antithesis of the open web, but in practice there was always something very webby about RSS, and RSS feed readers—it put the power back in the hands of the end users.

Some people chose not to play ball. They only put snippets in their RSS feeds, not the full articles. Maybe some publishers will do the same with the AMP versions of their articles: “To read more, click here…”

But I remember what generally tended to happen to the publishers who refused to put the full content in their RSS feeds. We unsubscribed.

Still, I share the concern that any one company—whether it’s Facebook, Apple, or Google—should wield so much power over how we publish on the web. I don’t think you have to be a conspiracy theorist to view the AMP project as an attempt to replace the existing web with an alternate web, more tightly controlled by Google (albeit a faster, more performant, tightly-controlled web).

My hope is that the current will flow in both directions. As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.

I’ve been playing around with the AMP HTML spec. It has some issues. The good news is that it’s open source and the project owners seem receptive to feedback.


JavaScript

No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts. This is A-okay with me.

The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web. I’m happy to leave them at the door (although weirdly, web fonts—another big performance killer—are allowed in).

But then there’s a bit of an about-face. In order to have a valid AMP HTML page, you must include a piece of third-party JavaScript. In this case, the third party is Google and the JavaScript file is what handles the loading of assets.
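
Specifically, every valid AMP page has to load the AMP runtime from Google’s CDN:

<script async src="https://cdn.ampproject.org/v0.js"></script>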

This seems a bit strange to me; on the one hand claiming that third-party JavaScript is bad for performance and on the other, requiring some third-party JavaScript. As Justin says:

For me this is loading one thing too many… the AMP JS library. Surely the document itself is going to be faster than loading a library to try and make it load faster.

On the plus side, this third-party JavaScript is loaded asynchronously. It seems to mostly be there to handle the rendering of embedded content: images, videos, audio, etc.

Embedded content

If you want audio, video, or images on your page, you must use propriet… custom elements like amp-audio, amp-video, and amp-img. In the case of images, I can see how this is a way of getting around the browser’s lookahead pre-parser (although responsive images also solve this problem). In the case of audio and video, the standard audio and video elements already come with a way of specifying preloading behaviour using the preload attribute. Very odd.

Justin again:

I’m not sure if this is solving anything at the moment that we’re not already fixing with something like responsive images.

To use amp-img for images within the flow of a document, you’ll need to specify the dimensions of the image. This makes sense from a rendering point of view—knowing the width and height ahead of time avoids repaints and reflows. Alas, in many of the cases here on adactio.com, I don’t know the dimensions of the images I’m including. So any of my AMP HTML pages that include images will be invalid.
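
For reference, this is roughly what a valid image looks like when you do know the dimensions (the values here are invented):

<amp-img src="/images/photo.jpg" width="600" height="400"></amp-img>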

Overall, the way that AMP HTML handles embedded content looks like a whole lot of wheel reinvention. I like the idea of providing custom elements as an option for authors. I hate the idea of making them a requirement.


Metadata

If you want to provide metadata about your document, AMP HTML currently requires the use of Google’s Schema.org vocabulary. This has a big whiff of vendor lock-in to it. I’ve flagged this up as an issue and Aaron is pushing a change so hopefully this will be resolved soon.


Accessibility

In its initial release, the AMP HTML spec came with some nasty surprises for accessibility. The biggest is probably the requirement to include this in your viewport meta element:

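maximum-scale=1,user-scalable=no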

Yowzers! That’s some slap in the face to decent web developers everywhere. Fortunately this has been flagged up and I’m hoping it will be fixed soon.

If it doesn’t get fixed, it’s quite a non-starter. It beggars belief that Google would mandate to authors that they must make their pages inaccessible to pinch/zoom. I would hope that many developers would rebel against such a draconian injunction. If that happens, it’ll be interesting to see what becomes of those theoretically badly-formed AMP HTML documents. Technically, they will fail validation, but for very good reason. Will those accessible documents be rejected?

Please get involved on this issue if this is important to you (hint: this should be important to you).

There are a few smaller issues. Initially the :focus pseudo-class was disallowed in author CSS, but that’s being fixed.

Currently AMP HTML documents must have this line:

<style>body {opacity: 0}</style><noscript><style>body {opacity: 1}</style></noscript>


That’s a horrible conflation of JavaScript availability and CSS. It’s being fixed though, and soon all the opacity jiggery-pokery will only happen via JavaScript, which will be a big improvement: it should either all happen in CSS or all happen in JavaScript, but not the current mixture of the two.


The AMP HTML version of your page is not the canonical version. You can specify where the real HTML version of your document is by using rel="canonical". Great!

But how do you link from your canonical page out to the AMP HTML version? Currently you’re supposed to use rel="amphtml". No, they haven’t checked the registry. Again. I’ll go in and add it.

In the meantime, I’m also requesting that the amphtml value can be combined with the alternate value, seeing as rel values can be space separated:

rel="alternate amphtml" type="text/html"

See? Not that different to RSS:

rel="alternate" type="application/rss+xml"


When I publish something on adactio.com in HTML, it already gets syndicated to different places. This is the Indie Web idea of POSSE: Publish (on your) Own Site, Syndicate Elsewhere. As well as providing RSS feeds, I’ve also got Twitter bots that syndicate to Twitter. An If This, Then That script pushes posts to Facebook. And if I publish a photo, it goes to Flickr. Now that Medium is finally providing a publishing API, I’ll probably start syndicating articles there as well. The more, the merrier.

From that perspective, providing AMP HTML pages feels like just one more syndication option. If it were the only option, and I felt compelled to provide AMP versions of my content, I’d be very concerned. But for now, I’ll give it a whirl and see how it goes.

Here’s a bit of PHP I’m using to convert a regular piece of HTML into AMP HTML—it’s horrible code; it uses regular expressions on HTML which, as we all know, will summon the Elder Gods.
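
To give a flavour of the approach (a simplified sketch, not the actual script):

<?php
// Crudely rewrite img elements as amp-img. Fragile, as promised.
$amp = preg_replace('/<img(.+?)\/?>/', '<amp-img$1></amp-img>', $html);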

Battle for the planet of the APIs

Back in 2006, I gave a talk at dConstruct called The Joy Of API. It basically involved me geeking out for 45 minutes about how much fun you could have with APIs. This was the era of the mashup—taking data from different sources and scrunching them together to make something new and interesting. It was a good time to be a geek.

Anil Dash did an excellent job of describing that time period in his post The Web We Lost. It’s well worth a read—and his talk at the Berkman Institute is well worth a listen. He described what the situation was like with APIs:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

Times have changed. These days, instead of seeing themselves as part of a wider web, online services see themselves as standalone entities.

So what happened?

Facebook happened.

I don’t mean that Facebook is the root of all evil. If anything, Facebook—a service that started out being based on exclusivity—has become more open over time. That’s the cause of many of its scandals; the mismatch in mental models that Facebook users have built up about how their data will be used versus Facebook’s plans to make that data more available.

No, I’m talking about Facebook as a role model; the template upon which new startups shape themselves.

In the web’s early days, AOL offered an alternative. “You don’t need that wild, chaotic lawless web”, it proclaimed. “We’ve got everything you need right here within our walled garden.”

Of course it didn’t work out for AOL. That proposition just didn’t scale, just like Yahoo’s initial model of maintaining a directory of websites just didn’t scale. The web grew so fast (and was so damn interesting) that no single company could possibly hope to compete with it. So companies stopped trying to compete with it. Instead they, quite rightly, saw themselves as being part of the web. That meant that they didn’t try to do everything. Instead, you built a service that did one thing really well—sharing photos, managing links, blogging—and if you needed to provide your users with some extra functionality, you used the best service available for that, usually through someone else’s API …just as you provided your API to them.

Then Facebook began to grow and grow. I remember the first time someone was showing me Facebook—it was Tantek of all people—I remember asking “But what is it for?” After all, Flickr was for photos, Delicious was for links, Dopplr was for travel. Facebook was for …everything …and nothing.

I just didn’t get it. It seemed crazy that a social network could grow so big just by offering …well, a big social network.

But it did grow. And grow. And grow. And suddenly the AOL business model didn’t seem so crazy anymore. It seemed ahead of its time.

Once Facebook had proven that it was possible to be the one-stop-shop for your user’s every need, that became the model to emulate. Startups stopped seeing themselves as just one part of a bigger web. Now they wanted to be the only service that their users would ever need …just like Facebook.

Seen from that perspective, the open flow of information via APIs—allowing data to flow porously between services—no longer seemed like such a good idea.

Not only have APIs been shut down—see, for example, Google’s shutdown of their Social Graph API—but even the simplest forms of representing structured data have been slashed and burned.

Twitter and Flickr used to mark up their user profile pages with microformats. Your profile page would be marked up with hCard and if you had a link back to your own site, it would include a rel="me" attribute. Not any more.

Then there’s RSS.

During the Q&A of that 2006 dConstruct talk, somebody asked me about where they should start with providing an API; what’s the baseline? I pointed out that if they were already providing RSS feeds, they already had a kind of simple, read-only API.

Because there’s a standardised format—a list of items, each with a timestamp, a title, a description (maybe), and a link—once you can parse one RSS feed, you can parse them all. It’s kind of remarkable how many mashups can be created simply by using RSS. I remember at the first London Hackday, one of my favourite mashups simply took an RSS feed of the weather forecast for London and combined it with the RSS feed of upcoming ISS flypasts. The result: a Twitter bot that only tweeted when the International Space Station was overhead and the sky was clear. Brilliant!
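
That ease is the whole appeal: a handful of lines handles any feed. A minimal sketch in PHP (the feed URL is just an example):

<?php
$feed = simplexml_load_file('https://example.com/feed.rss');
foreach ($feed->channel->item as $item) {
    // Every item has the same basic shape: a timestamp, a title, a link.
    echo $item->pubDate, ': ', $item->title, "\n";
    echo '  ', $item->link, "\n";
}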

Back then, anywhere you found a web page that listed a series of items, you’d expect to find a corresponding RSS feed: blog posts, uploaded photos, status updates, anything really.

That has changed.

Twitter used to provide an RSS feed that corresponded to my HTML timeline. Then they changed the URL of the RSS feed to make it part of the API (and therefore subject to the terms of use of the API). Then they removed RSS feeds entirely.

On the Salter Cane site, I want to display our band’s latest tweets. I used to be able to do that by just grabbing the corresponding RSS feed. Now I’d have to use the API, which is a lot more complex, involving all sorts of authentication gubbins. Even then, according to the terms of use, I wouldn’t be able to display my tweets the way I want to. Yes, how I want to display my own data on my own site is now dictated by Twitter.

Thanks to Jo Brodie I found an alternative service called Twitter RSS that gives me the RSS feed I need, ‘though it’s probably only a matter of time before that gets shut down by Twitter.

Jo’s feelings about Twitter’s anti-RSS policy mirror my own:

I feel a pang of disappointment at the fact that it was really quite easy to use if you knew little about coding, and now it might be a bit harder to do what you easily did before.

That’s the thing. It’s not like RSS is a great format—it isn’t. But it’s just good enough and just versatile enough to enable non-programmers to make something cool. In that respect, it’s kind of like HTML.

The official line from Twitter is that RSS is “infrequently used today.” That’s the same justification that Google has given for shutting down Google Reader. It reminds me of the joke about the shopkeeper responding to a request for something with “Oh, we don’t stock that—there’s no call for it. It’s funny though, you’re the fifth person to ask today.”

RSS is used a lot …but much of the usage is invisible:

RSS is plumbing. It’s used all over the place but you don’t notice it.

That’s from Brent Simmons, who penned a love letter to RSS:

If you subscribe to any podcasts, you use RSS. Flipboard and Twitter are RSS readers, even if it’s not obvious and they do other things besides.

He points out the many strengths of RSS, including its decentralisation:

It’s anti-monopolist. By design it creates a level playing field.

How foolish of us, therefore, that we ended up using Google Reader exclusively to power all our RSS consumption. We took something that was inherently decentralised and we locked it up into one provider. And now that provider is going to screw us over.

I hope we won’t make that mistake again. Because, believe me, RSS is far from dead just because Google and Twitter are threatened by it.

In a post called The True Web, Robin Sloan reiterates the strength of RSS:

It will dip and diminish, but will RSS ever go away? Nah. One of RSS’s weaknesses in its early days—its chaotic decentralized weirdness—has become, in its dotage, a surprising strength. RSS doesn’t route through a single leviathan’s servers. It lacks a kill switch.

I can understand why that power could be seen as a threat if what you are trying to do is force your users to consume their own data only the way that you see fit (and all in the name of “user experience”, I’m sure).

Returning to Anil’s description of the web we lost:

We get a generation of entrepreneurs encouraged to make more narrow-minded, web-hostile products like these because it continues to make a small number of wealthy people even more wealthy, instead of letting lots of people build innovative new opportunities for themselves on top of the web itself.

I think that the presence or absence of an RSS feed (whether I actually use it or not) is a good litmus test for how a service treats my data.

It might be that RSS is the canary in the coal mine for my data on the web.

If those services don’t trust me enough to give me an RSS feed, why should I trust them with my data?

Returning control

In his tap essay Fish, Robin Sloan said:

On the internet today, reading something twice is an act of love.

I’ve found a few services recently that encourage me to return to things I’ve already read.

Findings is looking quite lovely since its recent redesign. They may have screwed up with their email notification anti-pattern but they were quick to own up to the problem. I’ve been taking the time to read back through quotations I’ve posted, which in turn leads me to revisit the original pieces that the quotations were taken from.

Take, for example, this quote from Dave Winer:

We need to break out of the model where all these systems are monolithic and standalone. There’s art in each individual system, but there’s a much greater art in the union of all the systems we create.

…which leads me back to the beautifully-worded piece he wrote on Medium.

At the other end of the scale, reading this quote led me to revisit Rob’s review of Not Of This Earth on NotComing.com:

Not of This Earth is an early example of a premise conceivably determined by the proverbial writer’s room dartboard. In this case, the first two darts landed on “space” and “vampire.” There was no need to throw a third.

Although I think perhaps my favourite movie-related quotation comes from Gavin Rothery’s review of Saturn 3:

You could look at this film superficially and see it as a robot gone mental chasing Farrah Fawcett around a moonbase trying to get it on with her and killing everybody that gets in its way. Or, you could see through that into brilliance of this film and see that is in fact a story about a robot gone mental chasing Farrah Fawcett around a moonbase trying to get it on with her and killing everybody that gets in its way.

The other service that is encouraging me to revisit articles that I’ve previously read is Readlists. I’ve been using it to gather together pieces of writing that I’ve previously linked to about the Internet of Things, the infrastructure of the internet, digital preservation, or simply sci-fi short stories.

Frank mentioned Readlists when he wrote about The Anthologists:

Anthologies have the potential to finally make good on the purpose of all our automated archiving and collecting: that we would actually go back to the library, look at the stuff again, and, holy moses, do something with it. A collection that isn’t revisited might as well be a garbage heap.

I really like the fact that while Readlists is very much a tool that relies on the network, the collected content no longer requires a network connection: you can send a group of articles to your Kindle, or download them as one epub file.

I love tools like this—user style sheets, greasemonkey scripts, Readability, Instapaper, bookmarklets of all kinds—that allow the end user to exercise control over the content they want to revisit. Or, as Frank puts it:

…users gain new ways to select, sequence, recontextualize, and publish the content they consume.

I think the first technology that really brought this notion to the fore was RSS. The idea that the reader could choose not only to read your content at a later time, but also to read it in a different place—their RSS reader rather than your website—seemed quite radical. It was a bitter pill for the old guard to swallow, but once publishing RSS feeds became the norm, even the stodgiest of old media producers learned to let go of the illusion of control.

That’s why I was very surprised when Aral pushed back against RSS. I understand his reasoning for not providing a full RSS feed:

every RSS reader I tested it in displayed the articles differently — completely destroying my line widths, pull quotes, image captions, footers, and the layout of the high‐DPI images I was using.

…but that kind of illusory control just seems antithetical to the way the web works.

The heart of the issue, I think, is when Aral talks about:

the author’s moral rights over the form and presentation of their work.

I understand his point, but I also value the reader’s ideas about the form and presentation of the work they are going to be reading. The attempt to constrain and restrict the reader’s recontextualising reminds me of emails I used to read on Steve Champeon’s Webdesign-L mailing list back in the 90s that would begin:

How can I force the user to …?


How do I stop the user from …?

The questions usually involved attempts to stop users “getting at” images or viewing the markup source. Again, I understand where those views come from, but they just don’t fit comfortably with the spirit of the web.

And, of course, the truth was always that once something was out there on the web, users could always find a way to read it, alter it, store it, or revisit it. For Aral’s site, for example, although he refuses to provide a full RSS feed, all I have to do is use Reeder with its built-in Readability functionality to get the full content.

Breaking Things

This is an important point: attempting to exert too much control will be interpreted as damage and routed around. That’s exactly why RSS exists. That’s why Readability and Instapaper exist. That’s why Findings and Readlists exist. Heck, it’s why Huffduffer exists.

To paraphrase Princess Leia, the more you tighten your grip, the more content will slip through your fingers. Rather than trying to battle against the tide, go with the flow and embrace the reality of what Cameron Koczon calls Orbital Content and what Sara Wachter-Boettcher calls Future-Ready Content.

Both of those articles were published on A List Apart. But feel free to put them into a Readlist, or quote your favourite bits on Findings. And then, later, maybe you’ll return to them. Maybe you’ll read them twice. Maybe you’ll love them.

My links, my links (my lovely lady links)

Thank you for reading my journal here at adactio.com. I appreciate your kind attention.

I feel I should point out that if you’re only reading my journal (or “blog” or “weblog” or whatever the kids call it) then you’re missing out on some good stuff over in the links section.

Just so you know, there are multiple RSS feeds you can subscribe to:

Now it might be that you’re already subscribed to an RSS feed of my links through Delicious. Whenever I post a link to my own site, it automatically gets posted to Delicious too.

Or at least it did.

Despite the assurances from the new overlords of Delicious, the API appears to be kaput. That means my links and my Delicious profile are now out of sync. The canonical source for my links is right here on my own site so if you’re currently subscribed to my Delicious RSS feed, I recommend that you update your RSS reader to point at the RSS feed for my links instead.

By the way, if you don’t want to subscribe to the firehose of all my links, you can subscribe to a specific tag instead. For example, here’s everything tagged with “futurefriendly”:


And here’s the corresponding RSS feed:


So feel free to explore the links section and do some URL hacking.


View Source

In the preface to my book DOM Scripting, the first of my acknowledgments is a thank you to View Source. Thanks to that one little piece of browser functionality, I was able to learn HTML, CSS and JavaScript.

In these days of RESTful APIs, there are even more sources to be viewed. Whilst deconstructing a message from the oracle of Fielding, Paul gives some straightforward advice on being true to the ideals of REST, including this:

Above all, don’t kill the bookmarking experience and testing with bog-standard, service-ignorant browsers.

Replace the word “testing” with “viewing source” and that single sentence encapsulates the baseline support I expect from a web browser.

In recent years, the bookmarking aspect has been suffering not through any fault of the browsers but because of overzealous use of Ajax and through the actions of developers using POST when they should be using GET.

Equally worrying, I’ve noticed that the second piece of functionality—viewing source—is also under threat in some circumstances. Here the problem lies with the web browser, specifically Safari. Entering the URL for an RSS file, or following a hypertext reference to an RSS file, will not display the contents of that resource. Instead, Safari attempts to be “smart” and reformats the resource into a nicely presented document.

Now, I understand the reasoning for this. Most people don’t want to be confronted with a page of XML elements. But the problem with Safari’s implementation is that it breaks its own View Source functionality. Viewing source on a reformatted RSS feed in Safari will display the HTML used to present the feed, not the feed itself. Firefox 3 offers a better compromise. Like Safari, it reformats RSS feeds into a readable presentation in the browser. But crucially, if you view source, you will see the original RSS …the source.

I’ll leave you with some writings on the importance of View Source through the ages:

App Engines of Creation

At last night’s £5 App gathering, after Glenn entertained us with the story of setting up Madgex, Paul and Simon unveiled a little thing they’ve been working on called WalRSS. In a nutshell, you point it at a URL and it makes a nice iPhone/iPod Touch version by styling the associated RSS feed.

Simon talked about all the headaches involved in such a seemingly simple concept. All the problems boiled down to the fact that the app needs to consume/parse/scrape third-party content. It turns out that consuming/parsing/scraping HTML and RSS is an order of magnitude hairier and scarier than pointing at a nice shiny RESTful API. In a textbook example of Postel’s Law, Simon needed to be ultra-paranoid about malicious users potentially taking down his server while being overly-generous in the kind of malformed, invalid RSS/Atom he accepted because, as it turns out, a helluva lot of feeds out there are bozo-compliant. With all sorts of clever server-side solutions at his disposal to handle polling, load balancing, caching and message queueing, he quickly came to realise that he was becoming more of a sysadmin than a web developer.

Ironically, just the day before the £5 App meetup, Google announced their App Engine. WalRSS is almost exactly the kind of app that the App Engine is designed for: it’s written in Django and it needs to do the kind of processing that Google’s infrastructure was made to handle.

I for one welcome our new App Engine overlords. I quite like to dabble in the occasional bit of backend coding but I have no desire to delve into the domain of systems administration.

There’s already an OpenID provider built on Google App Engine. This means that anybody with a Google account potentially has an OpenID URL. You’d have to log in through the app first but from then on, you could use http://openid-provider.appspot.com/[your username] as your login.

Right now I’m using http://adactio.com/ as my OpenID URL, delegating to http://adactio.myopenid.com/:

<link rel="openid.delegate" href="http://adactio.myopenid.com/" />

Should I ever tire of MyOpenID, I could potentially update my delegate link to use Google:

<link rel="openid.delegate" href="http://openid-provider.appspot.com/adactio" />

I’d still need to update my openid.server link though.
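
For context, delegation involves a pair of link elements. Mine currently look something like this (the MyOpenID server URL is from memory, so treat it as approximate):

<link rel="openid.server" href="http://www.myopenid.com/server" />
<link rel="openid.delegate" href="http://adactio.myopenid.com/" />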

Oh, you think this is geeky stuff? You should have heard Simon last night. Actually, you could have if you tuned into Ribot’s live broadcast on Qik.

Feed reading

There are some great pieces of software out there dedicated to reading RSS, each with their own take on the task. As a Mac user, I’m spoiled for choice with NetNewsWire, NewsFire and Shrook to choose from.

Then there are the myriad online feed readers like Bloglines, Newshutch and Google Reader. They’re all pretty slick as long as you’ve got a relatively well-specced machine with a JavaScript-capable browser.

But I’ve never found an RSS reader that I’ve been completely satisfied with. I find all too often that the experience reminds me of using an email client. Reading email can be an enjoyable activity but more often than not, it’s all about getting the unread count in your inbox down to zero, right?

I gave up on feed readers for quite a while and just started reading the few feeds I’ve gathered together at Adactio Elsewhere. But this doesn’t keep track of what I’ve already read.

I quite like the way the “favorites”(sic) feature on Technorati works. Here, the freshness of the post takes precedence over the author. Everyone’s posts are mixed up into one river of news. It feels more like reading through a single blog.

I didn’t think there was any new way to catch up on RSS feeds until James set me straight.

We have weekly Monday morning meetings at Clearleft at which everyone is encouraged to offer up a “lightbulb moment”—an insight or revelation that’s useful or just cool. This week’s meeting didn’t happen until Wednesday (we’re not the best clockwatchers). For his lightbulb moment, James pointed out a nice little feature now offered by Google Reader.

If you go to Settings and then look under the Goodies tab, you’ll see a bookmarklet marked “Next” that you can drag to your bookmarks bar. Clicking on this bookmarklet (or favelet, if you will) takes you to the next unread post in your river of news.

I really like this way of reading. Like the Technorati solution, the order is determined by date rather than author. But the authorship is very relevant in that you view the entry in its original context; on that person’s website rather than in a feed reader (something that I know a lot of my designer friends have strong feelings about).

This little bookmarklet manages to bring RSS reading back into the browser in a completely different way than simply emulating a desktop feed reader. Whenever I want to read something new, I click the “Next” link in my bookmarks bar. I don’t know exactly what I’m going to get but I know that it will be something that I haven’t read before, it will be written by someone I enjoy reading and it will be fresh.

This solution manages to straddle the fine line between the convenience of RSS (pull rather than push notification) and the tyranny of RSS (a daunting list of feeds to read through). And I don’t need to keep opening new tabs or windows—something that’s hard to avoid with regular feed readers, be they on the desktop or in the browser.

I’m thinking of creating an OPML file that consists of nothing but del.icio.us links (or quicklinks or blogmarks or whatever) from people whose taste I trust. Then clicking on that “Next” button would have a lovely touch of serendipity, constantly finding something new and fresh, landing me on a web page for no other reason than someone I know thought it was worth bookmarking.
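
An OPML file like that is simple enough: one outline element per feed (the URL here is illustrative):

<opml version="1.1">
  <head><title>Trusted bookmarkers</title></head>
  <body>
    <outline type="rss" text="A friend’s links" xmlUrl="http://del.icio.us/rss/example" />
  </body>
</opml>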

It kind of reminds me of what it was like to surf the Web back in the old pre-RSS days before information overload overwhelmed us all. If you’ve got a Google Reader account, give the bookmarklet a whirl and see what you think. It might change the way you think about reading RSS.

Watching the stream

Ever since I hacked up my little life stream experiment and wrote about it, it’s been very gratifying to see how people have taken the idea and run with it. Emily Chang has written about the resources she came across when she was putting her life stream together. Sam Sethi has been talking about life streams as a rich vein of attention data (which reminded me of John Allsopp’s thoughts on why blogging as we know it is over).

Of course this idea of mashing up time-stamped (micro)content—usually through RSS—isn’t anything new. Tom Armitage touched on this during his presentation at Reboot in Copenhagen last year:

Whenever I publish anything with a date attached, there’s a framework for ongoing narrative. The item published is our narrative, but the date gives it ongoingness. It takes time for the pattern to emerge; initially, throwing data at that black box, it seems random. For instance: I upload photos to Flickr at arbitrary intervals. I go silent on my blog without explanation. It may seem, in the short-term, like a blip, but in the long-term, it’s an important part of my story. My blog is full of delicious bookmarks right now because I’ve been busy at work, and writing this talk. That’ll be reflected in the longer game, when I write my post-Reboot blog entry, and suddenly the pattern becomes clear.

If you haven’t yet done so, I strongly urge you to read the rest of Telling Stories — What Homer, Dickens, and Comic Books can teach your (social) software. It’s quite brilliant and discusses many issues that are even more relevant today with the rise of OpenID and the clamour for portable social networks.

Jeff Croft has been pioneering the life stream idea for quite a while now, originally calling it a tumblelog. His implementation uses APIs rather than plain ol’ RSS. He’s right in thinking that APIs are a more robust solution for long-term archiving but I think of my life stream as being a fleeting snapshot of current activity.

As Jeff points out:

The result is that most people’s lifestream looks great for the first several days back, but then get all sparse at the bottom, where only one or two sources are still providing information.

CSS to the rescue. I’ve updated my life stream to give vibrant colours to newer entries and faded, eventually illegible colours to older, less relevant content. It’s kind of like Shaun’s recent experiments with age and colour.
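
The CSS for that is simple stuff: the older the entry, the paler the text. A sketch with invented class names:

.lifestream .fresh { color: #000; }
.lifestream .recent { color: #777; }
.lifestream .stale { color: #ccc; }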

I love APIs but when something as simple as RSS does the job, I’ll go for the simple solution every time (hence my love of microformats). In fact, I see RSS as being a kind of low-level short-term API or, as Rob Purdie put it, the vaseline of Web 2.0.

The ubiquity of RSS is what makes Yahoo Pipes possible. Now anybody can make a life stream by plugging in some RSS feeds into a pipe. Here’s one I made earlier. When I tried to do this a few days ago, I couldn’t get it to sort by date properly: it was sorting the pubDate field alphabetically—that seems to be fixed now.

Using Yahoo Pipes isn’t quite as straightforward as it could be. It still feels kind of techy and intimidating for non-geeks. This is the same problem that Ning used to have. Its services were ostensibly being provided so that non-techy people could start mashing stuff up but the presentation was impenetrably techy. That’s all changed now.

Ning has completely rebranded as a social network builder. Personally, I think this is a brilliant move. After just a few seconds on the front page, it’s absolutely clear what you can do. By providing example sites, they make the point even clearer. You can still make all the same stuff that you always could on Ning—videos, photos, blogs—but now it’s all wrapped up as part of a clearer goal: creating your own social network site.

When Yahoo Pipes launched, it looked like it might be competing directly with Ning. Now that’s not the case. The two services have diverged and are concentrating on different tasks for different audiences.

I’ll be keeping an eye on Ning to see how it deals with the issue of portable social networks. I’ll be watching Yahoo Pipes as a tool for creating life streams.

Streaming my life away

I’ve been playing around with Twitter, a neat little service from the people who brought you Odeo. You send it little text updates via SMS, the website, or Jabber. It’s intended as a piece of social software, but I think it has potential for more selfish uses.

Every time I ping Twitter, the message is time stamped. Every time I post a link to Del.icio.us, that’s time stamped. Every time I upload a picture to Flickr, a time stamp of when the picture was taken is also sent. Whenever I listen to a song on iTunes, the track information is sent to Last.fm with a time stamp. And of course whenever I blog, be it here, at the DOM Scripting blog or Principia Gastronomica, each entry has a permalink and a time stamp.

Just about every time somebody publishes something on the Web, it gets time stamped. Wouldn’t it be nice to pull in all these disparate bits of time stamped information and build up a timeline of online activity?

The technology is already in place. Most of the services I mention above have APIs. In this case, a full-blown API isn’t even necessary. Each service already offers an easily parsable XML file of activity ordered by time: RSS.

At the recent Take Back The Web event here in Brighton, Rob Purdie talked about RSS being the vaseline that’s greasing the wheels of Web 2.0. He makes a good point.

Over the course of any particular day, I could be updating five or six RSS feeds, depending on how much I’m blogging, how many links I’m posting, or how much music I’m listening to. I’d like to take those individual feeds and mush ‘em all up together.

There are a couple of services out there for mashing up RSS. FeedBurner is probably the most well known, but you are limited to a pre-set choice of RSS feeds that you can mix in. RSS Mix offers a more open-ended splicing service but it seems a bit confused when it comes to date ordering. There’s some other service I was playing around with last week but for the life of me, I can’t remember the name of it. All I remember is that it had an extremely annoying interface full of gratuitous Ajax.

I’ve mocked up my own little life stream, tracking my Twitter, Flickr, Del.icio.us, Last.fm, and blog posts. It’s a quick’n’dirty script that isn’t doing any caching. The important thing is that it’s keeping the context of the permalinks (song, link, photo, or blog post) and displaying them ordered by date and time. What I’d really like to do is display the same information in a more time-based interface: a calendar, or timeline.
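
The gist of such a script is straightforward: fetch each feed, pool the items with their timestamps, sort newest-first. A simplified sketch (the feed URLs are placeholders, and there’s no caching here either):

<?php
$feeds = array(
    'https://example.com/tweets.rss',
    'https://example.com/photos.rss',
    'https://example.com/links.rss',
);
$items = array();
foreach ($feeds as $url) {
    $rss = simplexml_load_file($url);
    foreach ($rss->channel->item as $item) {
        $items[] = array(
            'time'  => strtotime($item->pubDate),
            'title' => (string) $item->title,
            'link'  => (string) $item->link,
        );
    }
}
// Order by time, newest first, regardless of which service each item came from.
usort($items, function ($a, $b) { return $b['time'] - $a['time']; });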

Annoyingly, the Last.fm feed of recently listened to tracks disappears if you don’t listen to anything for a while. Grrr…

Update: Here’s the PHP source code.

Taking back the Web

I’m at an event called Take Back The Web. It’s a cosy little unconference aimed at non-profits and activist groups.

There’s been plenty of education and discussion going on all day, mostly around things like blogs, wikis, RSS and podcasting. I followed up the RSS talk with a little spiel about APIs and how they can be used to pull in data from other places on the web.

I’m used to attending geekier events where everyone is fairly tech-savvy, but the crowd here is mostly made up of people on the ground who want to be able to use technology but who aren’t necessarily from a technological background. It really brought home to me just how far we have to go in making this stuff less geeky and scary-sounding.

Just about everyone gets blogs, and it’s pretty easy to get started with them. Wikis are a little bit trickier, but still attainable. RSS becomes harder again: it’s still too hard to subscribe, and even the term “subscribe” is itself misleading, implying payment. As for APIs, that’s still all pretty much rocket science so I just gave a basic overview of the benefits without really discussing the nitty-gritty of programming.

Notice how the terms change in complexity along that scale: from the word blog to the term API. We’re using way too many acronyms and technobabble for this stuff. Of course, we can’t change the names without upsetting the geeky programmers.

I got a lot of food for thought from the day so far, even though I already know about the technologies. It’s been fascinating to see how people are using the web now and also how much more they could be doing.

The guys from mySociety/They Work For You are talking through their services now and I’ve just found out about this nifty API. I’ll have a play around with that. I’ll quiz Matthew about it later; he’s staying over with me. More grist for the bedroll.

A proper podcast for South by Southwest 2006

The South by Southwest website erroneously lists a series of downloadable audio files as “podcasts”. Hugh, don’t make me come over there and give you a patronising scolding like the one I gave to Ryan.

Confusingly, there is an RSS file available but it doesn’t use enclosures so podcast playing software like iTunes can’t find the audio files.
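
An enclosure is all it would take: one extra element per item telling podcast software where the audio lives (the values here are illustrative):

<item>
  <title>Panel discussion</title>
  <enclosure url="http://example.com/audio/panel.mp3" length="12345678" type="audio/mpeg" />
</item>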

Jason Landry to the rescue. He’s hand-rolled an RSS file with enclosures pointing to the audio files. Point your podcast subscribing software of choice at the SXSW 2006 Interactive Panels Podcast.