Tags: book



The Rational Optimist

As part of my ongoing obsession with figuring out how we evaluate technology, I finally got around to reading Matt Ridley’s The Rational Optimist. It was an exasperating read.

On the one hand, it’s a history of the progress of human civilisation. Like Steven Pinker’s The Better Angels Of Our Nature, it piles on the data demonstrating the upward trend in peace, wealth, and health. I know that’s counterintuitive, and it seems to fly in the face of what we read in the news every day. Mind you, The New York Times took some time out recently to acknowledge the trend.

Ridley’s thesis—and it’s a compelling one—is that cooperation and trade are the drivers of progress. As I read through his historical accounts of the benefits of open borders and the cautionary tales of small-minded insular empires that collapsed, I remember thinking, “Boy, he must be pretty upset about Brexit—his own country choosing to turn its back on trade agreements with its neighbours so that it could become a small, petty island chasing the phantom of self-sufficiency”. (Self-sufficiency, or subsistence living, as Ridley rightly argues throughout the book, correlates directly with poverty.)

But throughout these accounts, there are constant needling asides pointing to the perceived enemies of trade and progress: bureaucrats and governments, with their pesky taxes and rule of law. As the accounts enter the twentieth century, the gloves come off completely, revealing a pair of dyed-in-the-wool libertarian fists that Ridley uses to pummel any nuance or balance. “Ah,” I thought, “if he cares more about the perceived evils of regulation than the proven benefits of trade, maybe he might actually think Brexit is a good idea after all.”

It was an interesting moment. Given the conflicting arguments in his book, I could imagine him equally well being an impassioned remainer as a vocal leaver. I decided to collapse this probability wave with a quick Google search, and sure enough …he’s strongly in favour of Brexit.

In theory, an author’s political views shouldn’t make any difference to a book about technology and progress. In practice, they barge into the narrative like boorish gatecrashers threatening to derail it entirely. The irony is that while Ridley is trying to make the case for rational optimism, his own personal political feelings are interspersed like a dusting of irrationality, undoing his own well-researched case.

It’s not just the argument that suffers. Those are the moments when the writing starts to get frothy, if not downright unhinged. There were a number of confusing and ugly sentences that pulled me out of the narrative and made me wonder where the editor was that day.

The last time I remember reading passages of such poor writing in a non-fiction book was Nassim Nicholas Taleb’s The Black Swan. In the foreword, Taleb provides a textbook example of the Dunning-Kruger effect by proudly boasting that he does not need an editor.

But there was another reason why I thought of The Black Swan while reading The Rational Optimist.

While Ridley’s anti-government feelings might have damaged his claim to rationality, surely his optimism is unassailable? Take, for example, his conclusions on climate change. He doesn’t (quite) deny that climate change is real, but argues persuasively that it won’t be so bad. After all, just look at the history of false pessimism that litters the twentieth century: acid rain, overpopulation, the Y2K bug. Those turned out okay, therefore climate change will be the same.

It’s here that Ridley succumbs to the trap that Taleb wrote about in his book: using past events to make predictions about inherently unpredictable future events. Taleb was talking about economics—warning of the pitfalls of treating economic data as though it followed a bell curve, when in fact it’s a power-law distribution.

Fine. That’s simply a logical fallacy, easily overlooked. But where Ridley really lets himself down is in the subsequent defence of fossil fuels. Or rather, in his attack on other sources of energy.

When recounting the mistakes of the naysayers of old, he points out that their fundamental mistake is to assume stasis. Hence their dire predictions of war, poverty, and famine. Ehrlich’s overpopulation scare, for example, didn’t account for the world-changing work of Borlaug’s green revolution (and Ridley rightly singles out Norman Borlaug for praise—possibly the single most important human being in history).

Yet when it comes to alternative sources of energy, they are treated as though they are set in stone, incapable of change. Wind and solar power are dismissed as too costly and inefficient. The Rational Optimist was written in 2008. Eight years ago, solar energy must have indeed looked like a costly investment. But things have changed in the meantime.

As Matt Ridley himself writes:

It is a common trick to forecast the future on the assumption of no technological change, and find it dire. This is not wrong. The future would indeed be dire if invention and discovery ceased.

And yet he fails to apply this thinking when comparing energy sources. If anything, his defence of fossil fuels feels grounded in a sense of resigned acceptance; a sense of …pessimism.

Matt Ridley rejects any hope of innovation from new ideas in the arena of energy production. I hope that he might take his own words to heart:

By far the most dangerous, and indeed unsustainable thing the human race could do to itself would be to turn off the innovation tap. Not inventing, and not adopting new ideas, can itself be both dangerous and immoral.

Taking an online book offline

Application Cache is—as Jake so infamously described—not a good API. It was specced and shipped before developers had a chance to figure out what they really needed, and so AppCache turned out to be frustrating at best and downright dangerous in some situations. Its over-zealous caching combined with its byzantine cache invalidation ensured it was never going to become a mainstream technology.

There are very few use-cases for AppCache, but I think I hit upon one of them. Six years ago, A Book Apart published HTML5 For Web Designers. A year and a half later, I put the book online. The contents are never going to change. There’s a second edition of the book out now, but if you want to read all the extra bits that Rachel added, you’re going to have to buy the book. The website for the original book is static and unchanging. That’s what made it such a good candidate for using AppCache. I could just set it and forget it.

Except that’s no longer true. AppCache is being deprecated and browsers are starting to withdraw support. Chrome is already making sure that AppCache—like geolocation—no longer works on sites that aren’t served over HTTPS. That’s for the best. In retrospect, those APIs should never have been allowed over unsecured HTTP.

I mentioned that I spent the weekend switching all my book websites over to HTTPS, so AppCache should continue to work …for now. It’s only a matter of time before AppCache is removed completely from many of the browsers that currently support it.

Seeing as I’ve got the HTML5 For Web Designers site running on HTTPS now, I might as well go all out and make it a progressive web app. By far the biggest barrier to making a progressive web app is that first step of setting up HTTPS. It’s gotten cheaper—thanks to Let’s Encrypt Certbot—but it still involves mucking around in the command line with root access; I never wanted to become a sysadmin. But once that’s finally all set up, the other technological building blocks—a Service Worker and a manifest file—are relatively easy.

In this case, the Service Worker is using a straightforward bit of logic:

  • On installation, cache absolutely everything: HTML, CSS, images.
  • When anything is requested, grab it from the cache.
  • If it isn’t in the cache, try the network.
  • If the network doesn’t work, show an offline page (or image).
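
Sketched out as code, that logic might look something like this—a minimal sketch, assuming a placeholder cache name and file list rather than the site’s actual worker:

addEventListener('install', function (event) {
  // On installation, cache absolutely everything.
  event.waitUntil(
    caches.open('html5fwd-v1').then(function (cache) {
      return cache.addAll(['/', '/styles.css', '/images/cover.png', '/offline.html']);
    })
  );
});

addEventListener('fetch', function (event) {
  // Grab it from the cache; fall back to the network; fall back to the offline page.
  event.respondWith(
    caches.match(event.request).then(function (response) {
      return response || fetch(event.request);
    }).catch(function () {
      return caches.match('/offline.html');
    })
  );
});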

Basically I’m reproducing AppCache’s overzealous approach. It works for this site because the content is never going to change. I hope that this time, I really can just set it and forget it. I want the site to be an historical artefact, available at the same URL for at least my lifetime. I don’t want to have to maintain it or revisit it every few years to swap out one API for another.

Which brings me back to the way AppCache is being deprecated…

The Firefox team are very eager to ditch AppCache as soon as possible. On the one hand, that’s commendable. They’re rightly proud of shipping Service Workers and they want to encourage people to use the better technology instead. But it sure stings for the suckers (like me) who actually went and built stuff using AppCache.

In a weird way, I think this rush to deprecate AppCache might actually hurt the adoption of Service Workers. Let me explain…

At last year’s Edge Conference, Nolan Lawson gave a great presentation on storing data in the browser. He enumerated the many ways—past and present—that we could store data locally: WebSQL, Local Storage, IndexedDB …the list goes on. He also posed the question: why aren’t more people using insert-name-of-latest-API-here? To me it seemed obvious why more people weren’t diving into using the latest and greatest option for local data storage. It was because they had been burned before. The developers who rushed into trying previous solutions ended up being mocked for their choice. “Still using that ol’ thing? Pffftt!”

You can see that same attitude on display from Mozilla as they push towards removing AppCache—like in a comment that refers to developers using AppCache in production as “the angry hordes”. It reminds me of a point Tom made, one that Soledad echoes in that same Mozilla thread:

As a member of the devrel team: I think that this should be better addressed in a blog post that someone from the team responsible for switching AppCache off should write, so everyone can understand the reasons and ask questions to those people.

I’d rather warn people beforehand, pointing them to that post and help them with migration paths than apply emergency mitigation strategies when a lot of people find their stuff stopped working in the newer Firefox…

Bravo! That same approach should have also been taken by the Chrome team when it came to their thread about punishing display:browser in manifest files. There was absolutely no communication with developers about this major decision. I only found out about it because Paul happened to mention it to me.

I was genuinely shocked by this:

Withholding the “add to home screen” prompt like that has a whiff of blackmail about it.

I can confirm that smell. When I was making the manifest file for HTML5 For Web Designers, I really wanted to put display: browser because I want people to be able to copy and paste URLs (for the book, for individual chapters, and for sections within chapters). But knowing that if I did that, Android users would never see the “add to home screen” prompt made me question that decision. I felt strong-armed into declaring display: standalone. And no, I’m not mollified by hand-waving reassurances that the Chrome team will figure out some solution for this. Figure out the solution first, then punish the saps like me who want to use display: browser to allow people to share URLs.
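
For context, the whole dilemma comes down to a single display member in the manifest file. Here’s a hedged sketch—the values are illustrative, not the site’s actual manifest:

{
  "name": "HTML5 For Web Designers",
  "short_name": "HTML5",
  "start_url": "/",
  "display": "standalone",
  "icons": [{
    "src": "/icon.png",
    "sizes": "192x192",
    "type": "image/png"
  }]
}

Swap "standalone" for "browser" and, as things currently stand, Chrome on Android withholds the “add to home screen” prompt.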

Anyway, the website for HTML5 For Web Designers is now using AppCache and Service Workers. The AppCache part will probably be needed for quite a while yet to provide offline support on iOS. Apple are really dragging their heels on Service Worker support, with at least one WebKit engineer actively looking for reasons not to implement it.

There’s a lot of talk about making apps work offline, but I think it’s just as important that we consider making information work offline. Books are a great example of this. To use the tired transport tropes, the website for a book is something you might genuinely want to access when you’re on a plane, or in the underground, or out at sea.

I really, really like progressive web apps. But I also think it’s important that we don’t fall into the trap of just trying to imitate native apps on the web. I love the idea of taking the best of the web—like information being permanently available at a URL—and marrying that up with the best of native—like offline access. I also like the idea of taking the best of books—a tome of thought—and marrying it up with the best of the web—hypertext.

I’d love to see more experimentation around online/offline hypertext/books. For now, you can visit HTML5 For Web Designers, add it to your home screen, and revisit it whenever and wherever you like.

Machine supplying

I wrote a little something recently about some inspiring projects that people are working on. Like Matt’s Machine Supply project. There’s a physical side to that project—a tweeting book-vending machine in London—but there’s also the newsletter, 3 Books Weekly.

I was honoured to be asked by Matt to contribute three book recommendations. That newsletter went out last week. Here’s what I said…

The Victorian Internet by Tom Standage

A book about the history of telegraphy might not sound like the most riveting read, but The Victorian Internet is both fascinating and entertaining. Techno-utopianism, moral panic, entirely new ways of working, and a world that has been utterly transformed: the parallels between the telegraph and the internet are laid bare. In fact, this book made me realise that while the internet has been a great accelerator, the telegraph was one of the few instances where a technology could truly be described as “disruptive.”

Ancillary Justice: 1 (Imperial Radch) by Ann Leckie

After I finished reading the final Iain M. Banks novel I was craving more galaxy-spanning space opera. The premise of Ancillary Justice with its description of “ship minds” led me to believe that this could be picking up the baton from the Culture series. It isn’t. This is an entirely different civilisation, one where song-collecting and tea ceremonies have as much value as weapons and spacecraft. Ancillary Justice probes at the deepest questions of identity, both cultural and personal. As well as being beautifully written, it’s also a rollicking good revenge thriller.

The City & The City by China Miéville

China Miéville’s books are hit-and-miss for me, but this one is a direct hit. The central premise of this noir-ish tale defies easy description, so I won’t even try. In fact, one of the great pleasures of this book is to feel the way your mind is subtly contorted by the author to accept a conceit that should be completely unacceptable. Usually when a book is described as “mind-altering” it’s a way of saying it has drug-like properties, but The City & The City is mind-altering in an entirely different and wholly unique way. If Borges and Calvino teamed up to find The Maltese Falcon, the result would be something like this.

When I sent off my recommendations, I told Matt:

Oh man, it was so hard to narrow this down! So many books I wanted to mention: Station 11, The Peripheral, The Gone-Away World, Glasshouse, Foucault’s Pendulum, Oryx and Crake, The Wind-up Girl …this was so much tougher than I thought it was going to be.

And Matt said:

Tell you what — if you’d be up for writing recommendations for another 3 books, from those ones you mentioned, I’d love to feature those in the machine!


Station Eleven by Emily St. John Mandel

Station Eleven made me think about the purpose of art and culture. If art, as Brian Eno describes it, is “everything that you don’t have to do”, what happens to art when the civilisational chips are down? There are plenty of post-pandemic stories of societal collapse. But there’s something about this one that sets it apart. It doesn’t assume that humanity will inevitably revert to an existence that is nasty, brutish and short. It’s also a beautifully-written book. The opening chapter completely sucker-punched me.

Glasshouse by Charles Stross

On the face of it, this appears to be another post-Singularity romp in a post-scarcity society. It is, but it’s also a damning critique of gamification. Imagine the Stanford prison experiment if it were run by godlike experimenters. Stross’s Accelerando remains the definitive description of an unfolding Singularity, but Glasshouse is the one that has stayed with me.

The Gone-Away World by Nick Harkaway

This isn’t an easy book to describe, but it’s a very easy book to enjoy. A delightful tale of a terrifying apocalypse, The Gone-Away World has plenty of laughs to balance out the existential dread. Try not to fall in love with the charming childhood world of the narrator—you know it can’t last. But we’ll always have mimes and ninjas.

I must admit, it’s a really lovely feeling to get notified on Twitter when someone buys one of the recommended books.

Making things happen

I have lovely friends who are making lovely things. Surprisingly, lots of these lovely things aren’t digital (or at least aren’t only digital).

My friends Brian and Joschi want to put on an ambitious event called Material:

A small conference based in Reykjavik, Iceland, looking into the concept of the Web as a Material — 22nd July 2016, https://material.is

They’re funding it through Kickstarter. If you have any interest in this at all, I suggest you back it. Best bet is to pledge the amount that guarantees you a ticket to the conference. Go!

My friend Matt has a newsletter called 3 Books Weekly to match his Machine Supply website. Each edition features three book recommendations chosen by a different person each time.

Here’s the twist: there’s going to be a Machine Supply pop-up bookshop AKA a vending machine in Shoreditch. That’ll be rolling out very soon and I can’t wait to see it.

My friend Josh made a crazy website to tie in with an art project called Cosmic Surgery. My friend Emily made a limited edition run of 10 books for the project. Now there’s a Kickstarter project to fund another run of books which will feature a story by Piers Bizony.

An Icelandic conference, a vending machine for handpicked books, and a pop-up photo book …I have lovely friends who are making lovely things.


Someone at Clearleft asked me a question recently about making bookmarklets. I have a bit of experience in that department. As well as making a bookmarklet for adding links to my own site, there’s the Huffduffer bookmarklet that’s been chugging away since 2008.

I told them that there are basically two approaches:

  1. Have the bookmarklet pop open a new browser window at your service, passing in the URL of the current page. Then do all the heavy lifting on your server.
  2. Have the bookmarklet inject JavaScript to analyse and edit the DOM of the document in the current browser window. All the heavy lifting is done directly in client-side JavaScript.

I favour the first approach. Partly that’s because it makes it easier to update the functionality. As you improve your server-side script, the bookmarklet functionality gets better automatically. But also, if your server-side script doesn’t do its magic, you can always fall back to letting the end user fill in the details.
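
That first approach needs barely any code. Here’s a sketch of such a bookmarklet, with a made-up URL standing in for your own service’s endpoint:

javascript:window.open('https://example.com/add?page='+encodeURIComponent(document.location.href))

Save that as the address of a bookmark and every click opens your service with the current page’s URL passed along.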

Here’s an example…

When you click the Huffduffer bookmarklet, it pops open this URL:


…with that page parameter filled in with whatever page you currently have open. Let’s say I’ve got this page currently open in my browser:


If I press the Huffduffer bookmarklet, that will spawn a new window with this URL:


And that’s all it does. Now it’s up to that page on Huffduffer to figure out what to do with the URL it has been given. In this case, it makes a cURL request to figure out what to use as a title, what to use as a description, what audio file to use, etc. If it can’t figure that out, I can always fill in those fields myself by hand.

I could’ve chosen to get at that information by injecting JavaScript directly into the page open in the browser. But that’s somewhat invasive.

Brian Donohue wrote on Ev’s blog a while back about one of the problems with that approach. Sites that—quite correctly—have a strict Content Security Policy will object to having arbitrary JavaScript injected into their documents.
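
For example, a site that sends a header like this—an illustrative policy, not any particular site’s—will refuse to run any script that doesn’t come from its own origin, including whatever a bookmarklet tries to inject:

Content-Security-Policy: script-src 'self'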

But remember this only applies to some bookmarklets. If a bookmarklet just spawns a new window—like Huffduffer’s—then there’s no problem. That approach to bookmarklets was dismissed with this justification:

The crux of the issue for bookmarklets is that web authors can control the origin of the JavaScript, network calls, and CSS, all of which are necessary for any non-trivial bookmarklet.

Citation needed. I submit that Huffduffer and Instapaper provide very similar services: “listen later” and “read later”. Both use cases could be described as “non-trivial”. But only one of the bookmarklets works on sites with strict CSPs.

Time and time again, I see over-engineered technical solutions that are built with the justification that “this problem is very complex therefore the solution needs to be complex” (yes, I am talking about web thangs that rely on complex JavaScript). In my experience, it’s exactly the opposite way around. The more complex the problem, the more important it is to solve it in the simplest way possible. It’s the only way of making sure the solution is resilient to unexpected scenarios.

The situation with bookmarklets is a perfect example. It’s not just an issue with strict Content Security Policies either. I’ve seen JavaScript-injecting bookmarklets fail because someone has set their browser cookie preferences to only accept cookies from the originating server.

Bookmarklets are not dead. They may, however, be pining for the fjords. Nobody has figured out a way to get bookmarklets to work on mobile. Now that might well be a death sentence.

New edition

Six years ago I wrote a book and the brand new plucky upstart A Book Apart published it.

Six years! That’s like a geological age in internet years.

People liked the book. That’s very gratifying. I’m quite proud of it, and it always gives me a warm glow when someone tells me they enjoyed reading it.

Jeffrey asked me a while back about updating the book for a second edition—after all, six years is a crazy long time for a web book to be around. I said no, because I just wouldn’t have the time, but mostly because—as the old proverb goes—you can’t step in the same river twice. Proud as I am of HTML5 For Web Designers, I consider it part of my past.

“What about having someone else update it?” Well, that made me nervous. I feel quite protective of my six year old.

“What about Rachel Andrew?” Ah, well, that’s a different story! Absolutely—if there’s one person I trust to bring the book up to date, it’s Rachel.

She’s done a fine, fine job. The second edition of HTML5 For Web Designers is now available.

I know what you’re going to ask: how much difference is there between the two editions? Well, in the introduction to the new edition, I’m very pleased to say that Rachel has written:

I’ve been struck by how much has remained unchanged in that time.

There’s a new section on responsive images. That’s probably the biggest change. The section on video has been expanded to include captioning. There are some updates and tweaks to the semantics of some of the structural elements. So it’s not a completely different book; it’s very much an update rather than a rewrite.

If you don’t have a copy of HTML5 For Web Designers and you’ve been thinking that maybe it’s too out-of-date to bother with, rest assured that it is now bang up to date thanks to Rachel.

Jeffrey has written a lovely new foreword for the second edition:

HTML5 for Web Designers is a book about HTML like Elements of Style is a book about commas. It’s a book founded on solid design principles, and forged at the cutting edge of twenty-first century multi-device design and development.

Metadata markup

When something on your website is shared on Twitter or Facebook, you probably want a nice preview to appear with it, right?

For Twitter, you can use Twitter cards—a collection of meta elements you place in the head of your document.

For Facebook, you can use the grandiosely-titled Open Graph protocol—a collection of meta elements you place in the head of your document.

What’s that you say? They sound awfully similar? Why, no! I mean, just look at the difference. Here’s how you’d mark up a blog post for Twitter:

<meta name="twitter:url" content="https://adactio.com/journal/9881">
<meta name="twitter:title" content="Metadata markup">
<meta name="twitter:description" content="So many standards to choose from.">
<meta name="twitter:image" content="https://adactio.com/icon.png">

Whereas here’s how you’d mark up the same blog post for Facebook:

<meta property="og:url" content="https://adactio.com/journal/9881">
<meta property="og:title" content="Metadata markup">
<meta property="og:description" content="So many standards to choose from.">
<meta property="og:image" content="https://adactio.com/icon.png">

See? Completely different.

Okay, I’ll attempt to dial down my sarcasm, but I find this wastage annoying. It adds unnecessary complexity, which in turn, I suspect, puts a lot of people off even trying to implement this stuff. In short: 927.

We’ve seen this kind of waste before. I remember when Netscape and Microsoft were battling it out in the browser wars: Internet Explorer added a proprietary acronym element, while Netscape added the abbr element. They both basically did the same thing. For years, Internet Explorer refused to implement the abbr element out of sheer spite.

A more recent example of the negative effects of competing standards was on display at this year’s Edge conference in London. In a session on front-end data, Nolan Lawson decried the fact that developers weren’t making more use of the client-side storage options available in browsers today. After all, there are so many to choose from: LocalStorage, WebSQL, IndexedDB…

(Hint: if developers aren’t showing much enthusiasm for the latest and greatest API which is sooooo much better than the previous APIs they were also encouraged to use at the time, perhaps their reticence is understandable.)

Anyway, back to metacrap.

Matt has written a guide to what you need to do in order to get a preview of your posts to appear in Slack. Fortunately the answer is not yet another collection of meta elements to place in the head of your document. Instead, Slack piggybacks on the existing combatants: oEmbed, Twitter Cards, and Open Graph.

So to placate both Twitter and Facebook (with Slack thrown in for good measure), your metadata markup is supposed to look something like this:

<meta name="twitter:card" content="summary">
<meta name="twitter:site" content="@adactio">
<meta name="twitter:url" content="https://adactio.com/journal/9881">
<meta name="twitter:title" content="Metadata markup">
<meta name="twitter:description" content="So many standards to choose from.">
<meta name="twitter:image" content="https://adactio.com/icon.png">
<meta property="og:url" content="https://adactio.com/journal/9881">
<meta property="og:title" content="Metadata markup">
<meta property="og:description" content="So many standards to choose from.">
<meta property="og:image" content="https://adactio.com/icon.png">

There are two things on display here: redundancy, and also, redundancy.

Now the eagle-eyed amongst you will have spotted a crucial difference between the Twitter metacrap and the Facebook metacrap. The Twitter metacrap uses the name attribute on the meta element, whereas the Facebook metacrap uses the property attribute. Technically, there is no property attribute in HTML—it’s an RDFa thing. But the fact that they’re using two different attributes means that we can squish the meta elements together like this:

<meta name="twitter:card" content="summary">
<meta name="twitter:site" content="@adactio">
<meta name="twitter:url" property="og:url" content="https://adactio.com/journal/9881">
<meta name="twitter:title" property="og:title" content="Metadata markup">
<meta name="twitter:description" property="og:description" content="So many standards to choose from.">
<meta name="twitter:image" property="og:image" content="https://adactio.com/icon.png">

There. I saved you at least a little bit of typing.

The metacrap situation is even more ridiculous for “add to homescreen”/”pin to start”/whatever else browser makers can’t agree on…


<meta name="msapplication-starturl" content="https://adactio.com" />
<meta name="msapplication-window" content="width=800;height=600">
<meta name="msapplication-tooltip" content="Kill me now...">


<link rel="apple-touch-icon" href="https://adactio.com/icon.png">

(Repeat four or five times with different variations of icon sizes, and be sure to create icons with new sizes after every. single. Apple. keynote.)

Fortunately Google, Opera, and Mozilla appear to be converging on using an external manifest file:

<link rel="manifest" href="https://adactio.com/manifest.json">

Perhaps our long national nightmare of balkanised metacrap is finally coming to an end, and clearer heads will prevail.

Recently speculative

I was a guest on the Boagworld podcast—neither Andy nor Richard were available so Paul and Marcus were stuck with me. We talked boring business stuff, but only after an extended—and much more interesting—preamble wherein we chatted about sci-fi books.

When prompted for which books I would recommend, I was able to instantly recall some recent reads, but inevitably I forgot to mention some others. I’m not sure if I even mentioned William Gibson’s The Peripheral, an unsurprisingly excellent book.

I’m pretty sure I mentioned The Girl In The Road. It has a magical realism quality to it that reminded me a bit of Lauren’s Zoo City. Its African/Indian setting makes for a refreshing change. Having said that, I still haven’t read Ian McDonald’s Indian-set River Of Gods or Cyberabad Days, both of which are sitting on my bookshelf alongside McDonald’s Out On Blue Six, which I have read and can heartily recommend—its imagining of a society where the algorithm decides the fate of all feels very ahead of its time.

One book I recommended without hesitation was Station Eleven. Maybe it was because I read it right after reading a book I found to be so-so—Paul McAuley’s Something Coming Through—but the writing in Station Eleven sucker-punched me right from the first chapter. Have a listen to the Boagworld podcast episode for some more ramblings on why I liked it.

Somehow I managed not to mention Ann Leckie’s Ancillary Justice and Ancillary Sword. That’s unforgivable. They are easily amongst the best works of sci-fi I’ve read in a really long time. It feels quite exciting to be anticipating the third part in what will clearly be a long-time classic series, right up there with the all-time greats.

I first came across Ancillary Justice through some comparisons that were being made to Iain M. Banks’s Culture novels. I was reading his final work, The Hydrogen Sonata, trying to take it slow, knowing that there would be no further books from that universe. But I ended up tearing through it because it was damned enjoyable (not necessarily brilliantly-written, mind; like most of Banks’s books, it’s a terrific and thought-provoking romp but missing the hand of a sterner editor). Anyway, I heard there were some similarities to the Ship Minds to be found in Leckie’s debut novel so I gave it a whirl. As it turns out, there are very few similarities and that’s all for the best. The universe that Leckie is describing has a very different but equally compelling richness.

I read Jeff Vandermeer’s Southern Reach trilogy—Annihilation, Authority, and Acceptance—and while I can’t say I enjoyed them as such, I can recommend them …though they are insidiously disturbing, dripping with atmosphere. I’m very intrigued by the news that Alex Garland is working on a screenplay.

So if you’re looking for some good recent speculative fiction, try:

  • The Peripheral by William Gibson
  • The Girl In The Road by Monica Byrne
  • Station Eleven by Emily St. John Mandel
  • Ancillary Justice and Ancillary Sword by Ann Leckie
  • The Southern Reach trilogy (Annihilation, Authority, and Acceptance) by Jeff Vandermeer

Alongside the newer stuff, I’ve been catching up with some golden oldies in the form of tattered second-hand novels like Joe Haldeman’s The Forever War, Stanisław Lem’s The Futurological Congress, and Brian Aldiss’s Hothouse. I’m currently working my way through Neal Stephenson’s Seveneves and loving every minute of it.


When I give talks or workshops, I sometimes get a bit ranty. One of the richest seams of rantiness comes from me complaining about how we web designers and developers are responsible for making the web a hostile place. “Stop getting the web wrong!” I might shout, like an old man yelling at a cloud. I point to services like Instapaper and Readability and describe their existence as a damning indictment of our work.

Don’t get me wrong—I really like Instapaper, Readability, RSS readers, or any other tools that allow people to read what they want when they want it. But think about their fundamental selling point: get to the content you want without having to wade through the cruft. That cruft was put there by us.

So-called modern web design and development is damage that people have to route around.

(Ooh, I can feel myself coming over all ranty and angry again! Calm down, Jeremy, calm down!)

And. Breathe.

Now there’s a new tool to add to the list: Facebook Instant. Again, I think it’s actually pretty great that this service exists. But once again, it should make us ashamed of the work we’re collectively producing.

In this case, the service is—somewhat ironically—explicitly touting the performance benefits of not going to a website to read an article. Quite right.

PPK points to tools as the source of the problem and Marco Arment agrees:

The entire culture dominant among web developers today is bizarrely framework-heavy, with seemingly no thought given to minimizing dependencies and page weight.

But I think it’s a bit more subtle than that. As John Gruber says:

Business development deals have created problems that no web developer can solve. There’s no way to make a web page with a full-screen content-obscuring ad anything other than a shitty experience.

Now you might be saying to yourself “Well, I’ve never made a bloated web page!” or “I’ve never slapped loads of intrusive crap over the content!” I’d certainly like to think that I can look at my track record and hold my head up reasonably high. But that doesn’t matter. If the overall perception is that going to a URL to read an article is a pain in the ass, it hurts all of us.

Take this article from M.G. Siegler:

Not only is the web not fast enough for apps, it’s not fast enough for text either. …on mobile, the web browser just isn’t cutting it. … Native apps provide a better user experience on mobile than a web browser.

On the face of it, this is kind of a bizarre claim. After all, there’s nothing inherent in web browsers that makes them slow at rendering text—quite the opposite! And native apps still use HTTP (and often HTML) to fetch content; the network doesn’t suddenly get magically faster just because the piece of software requesting a resource doesn’t happen to be a web browser.

But this conflation of slow websites and slow web browsers is perfectly understandable. If it looks like a slow duck, and it quacks like a slow duck, then why not conclude that ducks are slow? Even if we know that there’s nothing inherently slow about making web pages.

My hope is that Facebook Instant will shake things up a bit. M.G. Siegler again:

At the very least, Facebook has put everyone else on notice. Your content better load fast or you’re screwed. Publication websites have become an absolutely bloated mess. They range from beautiful (The Verge) to atrocious (Bloomberg) to unusable (Forbes). The common denominator: they’re all way too slow.

There needs to be a cultural change in how we approach building for the web. Yes, some of the tools we choose are part of the problem, but the bigger problem is that performance still isn’t being recognised as the most important factor in how people feel about websites (and by extension, the web). This isn’t just a developer issue. It’s a design issue. It’s a UX issue. It’s a business issue. Performance is everybody’s collective responsibility.

I’d better stop now before I start getting all ranty again.

I’ll leave you with some other writings on this topic…

Tim Kadlec talks about choosing performance:

It’s not because of any sort of technical limitations. No, if a website is slow it’s because performance was not prioritized. It’s because when push came to shove, time and resources were spent on other features of a site and not on making sure that site loads quickly.

Jim Ray points out that “we learned the wrong lesson from the rise of mobile and the app ecosystem”:

We’ve spent far too long trying to compete with native experiences by making our websites look and behave like apps. This includes not just thousands of lines of JavaScript to mimic native app swipes and scrolling but even the lower overhead aesthetics of fixed position headers and persistent navigation.


Finally, Baldur Bjarnason has written a terrific piece:

The web doesn’t suck. Your websites suck.

All of your websites suck.

You destroy basic usability by hijacking the scrollbar. You take native functionality (scrolling, selection, links, loading) that is fast and efficient and you rewrite it with ‘cutting edge’ javascript toolkits and frameworks so that it is slow and buggy and broken. You balloon your websites with megabytes of cruft. You ignore best practices. You take something that works and is complementary to your business and turn it into a liability.

The lousy performance of your websites becomes a defensive moat around Facebook.

Go read the whole thing—it’s terrific:

This is a long-standing debate. Except it’s only long-standing among web developers. Columnists, managers, pundits, and journalists seem to have no interest in understanding the technical foundation of their livelihoods. Instead they are content with assuming that Facebook can somehow magically render HTML over HTTP faster than anybody else and there is nothing anybody can do to make their crap scroll-jacking websites faster. They buy into the myth that the web is incapable of delivering on its core capabilities: delivering hypertext and images quickly to a diverse and connected readership.

100 words 044

It was Clearleft’s turn to host Codebar again this evening. As always, it was great. I did my best to introduce some people to HTML and CSS, which was challenging, rewarding, and fun.

In the run-up to the event, I did a little spring cleaning of Clearleft’s bookshelves. I took some books on HTML, CSS, and JavaScript that weren’t being used any more and offered them to Codebar students for the taking.

I was also able to offer some more contemporary books thanks to the generosity of A Book Apart who kindly donated some of their fine volumes to Codebar.

100 words 043

It wasn’t that long ago I mentioned a new book about learning HTML and CSS from scratch, published online for free. Well, now there’s another one. This time the subject is typography on the web.

It’s called Professional Web Typography by Donny Truong. It’s a fairly quick read but it squeezes in plenty of handy practical advice with chapters on delivering web fonts, selecting body text, setting type in the browser, choosing headings, picking type for UI, seeing typographic details, and practicing typography.

Needless to say, the book is itself very nicely typeset. And you can pay what you want.

100 words 030

Andy Parker kindly deposited a couple of books on my desk recently. One was The Martian. I had already read that one, thanks to Tim Kadlec’s recommendation. The other was the much-hyped Ready Player One.

I read it while I was travelling to and from Bulgaria. It was the ideal travel companion—an airport novel for geeks. It’s not exactly the finest prose ever written, but it’s thoroughly enjoyable popcorn entertainment. It reads like fan fiction and I mean that in a good way. It’s like Scott Pilgrim crossed with Snow Crash. It certainly passed the time on some airplane rides.

100 words 025

I often get asked what resources I’d recommend for someone totally new to making websites. There are surprisingly few tutorials out there aimed at the complete beginner. There’s Jon Duckett’s excellent—and beautiful—book. There’s the Codebar curriculum (which I keep meaning to edit and update; it’s all on Github).

Now there’s a new resource by Damian Wielgosik called How to Code in HTML5 and CSS3. Personally, I would drop the “5” and the “3”, but that’s a minor quibble; this is a great book. It manages to introduce concepts in a logical, understandable way.

And it’s free.

100 words 005

I enjoy a good time travel yarn. Two of the most enjoyable temporal tales of recent years have been Rian Johnson’s film Looper and William Gibson’s book The Peripheral.

Mind you, the internal time travel rules of Looper are all over the place, whereas The Peripheral is wonderfully consistent.

Both share an interesting commonality in their settings. They are set in the future and …the future: two different time periods but neither of them are the present. Both works also share the premise that the more technologically advanced future would inevitably exploit the time period further down the light cone.

Huffduffing video

You know what would be awesome? If you could huffduff the audio from videos on YouTube, Vimeo, and other video hosting sites.

To give you an example, A List Apart recently started running online events and once the events are over, they pop ‘em onto YouTube. Now, I’m not saying I don’t want to look at those lovely faces for an hour, but if truth be told, it’s the audio that I’m really interested in.

In the past, my only recourse would’ve been to pester the good people at A List Apart to export audio as well as video, in much the same way as I’ve pestered conference organisers in the past:

I wish conference organisers would export the audio of any talks that they’re publishing as video. Creating the sound file at that point is a simple one-click step. But once the videos are up online—be it on YouTube or Vimeo—it’s a lot, lot harder to get just the audio.

Not everyone wants to watch video. In fact, I bet there are plenty of people who listen to conference talks by opening the video in a separate tab so they can listen to it while they do something else.

Some people have come up with clever workarounds to get the audio track from videos into Huffduffer but they’re fairly convoluted.

Until now!

The brilliant Ryan Barrett has just launched huffduff-video:

He has created a bookmarklet you can use whenever you’re on a YouTube or Vimeo page that you want to huffduff. It works a treat—I’ve already used it to huffduff that A List Apart event and a talk by Matt Jones.

It takes a little while to do the initial conversion but you can just leave the pop-up window open while it works its magic. I’ve incorporated it into the Huffduffer bookmarklet now too. So if you’re on a YouTube or Vimeo page, you can hit the usual bookmarklet and it’ll route you through Ryan’s clever creation.

That makes two fantastic pieces of software from Ryan that have improved my online life immeasurably: first Bridgy and now huffduff-video. So nifty!


Ajax was a really big deal six, seven, eight years ago. My second book was all about Ajax. I spoke about Ajax at conferences and gave workshops all about using Ajax and progressive enhancement.

During those workshops, I would often point out that Ajax had the potential to be abused terribly. Until the advent of Ajax, it was very clear to a user when data was being submitted to a server: you’d have to click a link or submit a form. As soon as you introduce asynchronous communication, it’s possible for the server to get information from the client even without a full-page refresh.

Imagine, for example, that you’re typing a message into a textarea. You might begin by typing, “Why, you stuck up, half-witted, scruffy-looking nerf…” before calming down and thinking better of it. Before Ajax, there was no way that what you had typed could ever reach the server. But now, it’s entirely possible to send data via Ajax with every key press.
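
It’s worth spelling out just how little code that would take. Something along these lines would do it—note that the /track endpoint here is entirely made up:

document.querySelector('textarea').addEventListener('keyup', function () {
    // Send the current contents of the field to the server on every keystroke.
    var request = new XMLHttpRequest();
    request.open('POST', '/track', true);
    request.send(this.value);
});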

It was just a thought experiment. I wasn’t actually that worried that anyone would ever do something quite so creepy.

Then I came across this article by Jennifer Golbeck in Slate all about Facebook tracking what’s entered—but then erased—within its status update form:

Unfortunately, the code that powers Facebook still knows what you typed—even if you decide not to publish it. It turns out that the things you explicitly choose not to share aren’t entirely private.

Initially I thought there must have been some mistake. I erroneously called out Jen Golbeck when I found the PDF of a paper called The Post that Wasn’t: Exploring Self-Censorship on Facebook. The methodology behind the sample group used for that paper was much more old-fashioned than using Ajax:

First, participants took part in a weeklong diary study during which they used SMS messaging to report all instances of unshared content on Facebook (i.e., content intentionally self-censored). Participants also filled out nightly surveys to further describe unshared content and any shared content they decided to post on Facebook. Next, qualified participants took part in in-lab interviews.

But the Slate article was referencing a different paper that does indeed use Ajax to track instances of deleted text:

This research was conducted at Facebook by Facebook researchers. We collected self-censorship data from a random sample of approximately 5 million English-speaking Facebook users who lived in the U.S. or U.K. over the course of 17 days (July 6-22, 2012).

So what I initially thought was a case of alarmism—conflating something as simple as a client-side character count with actual server-side monitoring—turned out to be a pretty accurate reading of the situation. I originally intended to write a scoffing post about Slate’s linkbaiting alarmism (and call it “The shocking truth behind the latest Facebook revelation”), but it turns out that my scoffing was misplaced.

That said, the article has been updated to reflect that the Ajax requests are only sending information about deleted characters—not the actual content. Still, as we learned very clearly from the NSA revelations, there’s not much practical difference between logging data and logging metadata.

The nerds among us may start firing up our developer tools to keep track of unexpected Ajax requests to the server. But what about everyone else?

This isn’t the first time that the power of JavaScript has been abused. Every browser now ships with an option to block pop-up windows. That’s because the ability to spawn new windows was so horribly misused. Maybe we’re going to see similar preference options to avoid firing Ajax requests on keypress.

It would be depressingly reductionist to conclude that any technology that can be abused will be abused. But as long as there are web developers out there who are willing to spawn pop-up windows or force persistent cookies or use Ajax to track deleted content, the depressingly reductionist conclusion looks like self-fulfilling prophecy.

Battle for the planet of the APIs

Back in 2006, I gave a talk at dConstruct called The Joy Of API. It basically involved me geeking out for 45 minutes about how much fun you could have with APIs. This was the era of the mashup—taking data from different sources and scrunching them together to make something new and interesting. It was a good time to be a geek.

Anil Dash did an excellent job of describing that time period in his post The Web We Lost. It’s well worth a read—and his talk at The Berkman Institute is well worth a listen. He described what the situation was like with APIs:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

Times have changed. These days, instead of seeing themselves as part of a wider web, online services see themselves as standalone entities.

So what happened?

Facebook happened.

I don’t mean that Facebook is the root of all evil. If anything, Facebook—a service that started out being based on exclusivity—has become more open over time. That’s the cause of many of its scandals: the mismatch in mental models that Facebook users have built up about how their data will be used versus Facebook’s plans to make that data more available.

No, I’m talking about Facebook as a role model; the template upon which new startups shape themselves.

In the web’s early days, AOL offered an alternative. “You don’t need that wild, chaotic lawless web”, it proclaimed. “We’ve got everything you need right here within our walled garden.”

Of course it didn’t work out for AOL. That proposition just didn’t scale, just like Yahoo’s initial model of maintaining a directory of websites just didn’t scale. The web grew so fast (and was so damn interesting) that no single company could possibly hope to compete with it. So companies stopped trying to compete with it. Instead they, quite rightly, saw themselves as being part of the web. That meant that they didn’t try to do everything. Instead, you built a service that did one thing really well—sharing photos, managing links, blogging—and if you needed to provide your users with some extra functionality, you used the best service available for that, usually through someone else’s API …just as you provided your API to them.

Then Facebook began to grow and grow. I remember the first time someone was showing me Facebook—it was Tantek of all people—I remember asking “But what is it for?” After all, Flickr was for photos, Delicious was for links, Dopplr was for travel. Facebook was for …everything …and nothing.

I just didn’t get it. It seemed crazy that a social network could grow so big just by offering …well, a big social network.

But it did grow. And grow. And grow. And suddenly the AOL business model didn’t seem so crazy anymore. It seemed ahead of its time.

Once Facebook had proven that it was possible to be the one-stop-shop for your user’s every need, that became the model to emulate. Startups stopped seeing themselves as just one part of a bigger web. Now they wanted to be the only service that their users would ever need …just like Facebook.

Seen from that perspective, the open flow of information via APIs—allowing data to flow porously between services—no longer seemed like such a good idea.

Not only have APIs been shut down—see, for example, Google’s shutdown of their Social Graph API—but even the simplest forms of representing structured data have been slashed and burned.

Twitter and Flickr used to mark up their user profile pages with microformats. Your profile page would be marked up with hCard and if you had a link back to your own site, it included a rel="me" attribute. Not any more.

Then there’s RSS.

During the Q&A of that 2006 dConstruct talk, somebody asked me about where they should start with providing an API; what’s the baseline? I pointed out that if they were already providing RSS feeds, they already had a kind of simple, read-only API.

Because there’s a standardised format—a list of items, each with a timestamp, a title, a description (maybe), and a link—once you can parse one RSS feed, you can parse them all. It’s kind of remarkable how many mashups can be created simply by using RSS. I remember at the first London Hackday, one of my favourite mashups simply took an RSS feed of the weather forecast for London and combined it with the RSS feed of upcoming ISS flypasts. The result: a Twitter bot that only tweeted when the International Space Station was overhead and the sky was clear. Brilliant!
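
And the parsing really is that simple. Here’s a rough sketch in browser JavaScript, assuming the feed’s XML has already been fetched into a string called rssText:

var doc = new DOMParser().parseFromString(rssText, 'application/xml');
var items = doc.querySelectorAll('item');
Array.prototype.forEach.call(items, function (item) {
    // Titles and links are near-universal; other fields (description, pubDate) vary by feed.
    console.log(
        item.querySelector('title').textContent,
        item.querySelector('link').textContent
    );
});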

Back then, anywhere you found a web page that listed a series of items, you’d expect to find a corresponding RSS feed: blog posts, uploaded photos, status updates, anything really.

That has changed.

Twitter used to provide an RSS feed that corresponded to my HTML timeline. Then they changed the URL of the RSS feed to make it part of the API (and therefore subject to the terms of use of the API). Then they removed RSS feeds entirely.

On the Salter Cane site, I want to display our band’s latest tweets. I used to be able to do that by just grabbing the corresponding RSS feed. Now I’d have to use the API, which is a lot more complex, involving all sorts of authentication gubbins. Even then, according to the terms of use, I wouldn’t be able to display my tweets the way I want to. Yes, how I want to display my own data on my own site is now dictated by Twitter.

Thanks to Jo Brodie I found an alternative service called Twitter RSS that gives me the RSS feed I need, ‘though it’s probably only a matter of time before that gets shut down by Twitter.

Jo’s feelings about Twitter’s anti-RSS policy mirror my own:

I feel a pang of disappointment at the fact that it was really quite easy to use if you knew little about coding, and now it might be a bit harder to do what you easily did before.

That’s the thing. It’s not like RSS is a great format—it isn’t. But it’s just good enough and just versatile enough to enable non-programmers to make something cool. In that respect, it’s kind of like HTML.

The official line from Twitter is that RSS is “infrequently used today.” That’s the same justification that Google has given for shutting down Google Reader. It reminds me of the joke about the shopkeeper responding to a request for something with “Oh, we don’t stock that—there’s no call for it. It’s funny though, you’re the fifth person to ask today.”

RSS is used a lot …but much of the usage is invisible:

RSS is plumbing. It’s used all over the place but you don’t notice it.

That’s from Brent Simmons, who penned a love letter to RSS:

If you subscribe to any podcasts, you use RSS. Flipboard and Twitter are RSS readers, even if it’s not obvious and they do other things besides.

He points out the many strengths of RSS, including its decentralisation:

It’s anti-monopolist. By design it creates a level playing field.

How foolish of us, therefore, that we ended up using Google Reader exclusively to power all our RSS consumption. We took something that was inherently decentralised and we locked it up into one provider. And now that provider is going to screw us over.

I hope we won’t make that mistake again. Because, believe me, RSS is far from dead just because Google and Twitter are threatened by it.

In a post called The True Web, Robin Sloan reiterates the strength of RSS:

It will dip and diminish, but will RSS ever go away? Nah. One of RSS’s weaknesses in its early days—its chaotic decentralized weirdness—has become, in its dotage, a surprising strength. RSS doesn’t route through a single leviathan’s servers. It lacks a kill switch.

I can understand why that power could be seen as a threat if what you are trying to do is force your users to consume their own data only the way that you see fit (and all in the name of “user experience”, I’m sure).

Returning to Anil’s description of the web we lost:

We get a generation of entrepreneurs encouraged to make more narrow-minded, web-hostile products like these because it continues to make a small number of wealthy people even more wealthy, instead of letting lots of people build innovative new opportunities for themselves on top of the web itself.

I think that the presence or absence of an RSS feed (whether I actually use it or not) is a good litmus test for how a service treats my data.

It might be that RSS is the canary in the coal mine for my data on the web.

If those services don’t trust me enough to give me an RSS feed, why should I trust them with my data?

One moment

I use my walk to and from work every day as an opportunity to catch up on my Huffduffer podcast. Today I started listening to a talk I’ve really been looking forward to. It’s a Long Now seminar called Universal Access To All Knowledge by one of my heroes: Brewster Kahle, founder of The Internet Archive.

Brewster Kahle: Universal Access to All Knowledge — The Long Now on Huffduffer

As expected, it’s an excellent talk. I caught the start of it on my walk in to work this morning and I picked up where I left off on my walk home this evening. In fact, I deliberately didn’t get the bus home—despite the cold weather—so that I’d get plenty of listening done.

Round about the 23 minute mark he starts talking about Open Library, the fantastic project that George worked on to provide a web page for every book. He describes how it works as a lending library where an electronic version of a book can be checked out by one person at a time:

You can click on: hey! there’s this HTML5 For Web Designers. We bought this book—we bought this book from a publisher such that we could lend it. So you can say “Oh, I want to borrow this book” and it says “Oh, it’s checked out.” Darn! And you can add it to your list and remind yourself to go and get it some other time.

Holy crap! Did Brewster Kahle just use my book to demonstrate Open Library‽

It literally stopped me in my tracks. I stopped walking and stared at my phone, gobsmacked.

It was a very surreal moment. It was also a very happy moment.

Now I’m documenting that moment—and I don’t just mean on a third-party service like Twitter or Facebook. I want to be able to revisit that moment in the future so I’m documenting it at my own URL …though I’m very happy that the Internet Archive will also have a copy.

HTML5 For Web Designers

I’ve just finished speaking at An Event Apart in Washington DC (well, technically it’s in Alexandria, Virginia but let’s not quibble over details).

I was talking about design principles, referencing a lot of the stuff that I’ve gathered together at principles.adactio.com. I lingered over the HTML design principles and illustrated them with examples from HTML5.

It’s been a year and a half now since HTML5 For Web Designers was released and I figured it was about time that it should be published in its natural format: HTML.

Ladies and gentlemen, I give you: HTML5forWebDesigners.com.

Needless to say, it’s all written in HTML5 making good use of some of the new semantic elements like section, nav and figure. It’s also using some offline storage in the shape of appcache. So if you visit the site with a browser that supports appcache, you’ll be able to browse it any time after that even if you don’t have an internet connection (and if you’re trying it on an iOS device, feel free to add it to your home screen so it’s always within easy reach).
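
For the curious, the appcache wiring amounts to a manifest attribute on the html element pointing at a plain-text manifest file—roughly like this (the file names are illustrative, not the site’s actual setup):

<html manifest="offline.appcache">

CACHE MANIFEST
# v1: change this comment to force the cache to refresh
CACHE:
index.html
styles.css
NETWORK:
*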

You can read it on a desktop browser. You can read it in a mobile browser. You can read it in Lynx if you want. You can print it out. You can read it on the Kindle browser. You can read it on a tablet.

And if you like what you read and you decide you want to have a physical souvenir, you can buy the book and read it on paper.

HTML5 For Web Designers


I like my Kindle. I mean, I hate the DRM and the ludicrously overpriced, badly-typeset books but I really like having a browser with a free internet connection just about anywhere in the world.

The Kindle is a particularly handy device when travelling. I can load it up with science fiction and popular science books without weighing down my carry-on luggage.

But when travelling by plane, there are two points in the journey when the Kindle must be stowed. Even though it’s using e-ink, it is technically an electronic device so it must be switched off for take-off and landing. So I still find myself packing some good old-fashioned paper in my bag.

I noticed that almost all of the printed items I’ve been travelling with aren’t available from bricks’n’mortar shops. These books are generated by the internet.

Books generated by the internet

Adaptive Web Design

Aaron’s book is a great read: nice and short but with plenty of meaty hands-on practical stuff. If you haven’t bought it yet, go ahead and read the first chapter to get a taste for the quality of the writing.

Everything published by A Book Apart

I’ll admit that I’m biased because I wrote the first book and penned the foreword for the most recent one, but c’mon: these little beauties are perfect for travelling with.

Back in March when I was bouncing around within the States, Mandy gave me a copy of Erin’s brand new Elements Of Content Strategy at the start of my trip in Austin. By the time I got to the Pacific Northwest later that month, I had finished the book …just from reading it during aircraft ascents and descents.

Six-Penny Anthems II

Kevin’s somewhat-twisted sense of humour appeals to me. A lot. Six-Penny Anthems II is a great hodge-podge of his cartoons.

I distinctly remember reading this during the landing at the end of a transatlantic flight and giggling uncontrollably to myself. I may have worried my fellow passengers.


SVK

Actually, I’m not sure if this excellent collaboration between Warren Ellis, Matt Brooker and the BERG gang is suitable for take-off and landing. That’s because the accompanying ultra-violet light is technically an electronic device. But you should definitely get your hands on it.

The Manual

If you fancy some thoughtful reading material delivered in a beautiful vessel, be sure to get your hands on the first issue of Andy’s creation. Each essay is written by a web professional but you’ll find no talk of software or hardware.

I’m flying across the Atlantic to New York tomorrow for Brooklyn Beta, which I’m looking forward to immensely. I’ll have my Kindle with me for the flight. I’ll also be bringing one of those artefacts of the network with me.