Tags: future

The Rational Optimist

As part of my ongoing obsession with figuring out how we evaluate technology, I finally got around to reading Matt Ridley’s The Rational Optimist. It was an exasperating read.

On the one hand, it’s a history of the progress of human civilisation. Like Steven Pinker’s The Better Angels Of Our Nature, it piles on the data demonstrating the upward trend in peace, wealth, and health. I know that’s counterintuitive, and it seems to fly in the face of what we read in the news every day. Mind you, The New York Times took some time out recently to acknowledge the trend.

Ridley’s thesis—and it’s a compelling one—is that cooperation and trade are the drivers of progress. As I read through his historical accounts of the benefits of open borders and the cautionary tales of small-minded insular empires that collapsed, I remember thinking, “Boy, he must be pretty upset about Brexit—his own country choosing to turn its back on trade agreements with its neighbours so that it could become a small, petty island chasing the phantom of self-sufficiency”. (Self-sufficiency, or subsistence living, as Ridley rightly argues throughout the book, correlates directly with poverty.)

But throughout these accounts, there are constant needling asides pointing to the perceived enemies of trade and progress: bureaucrats and governments, with their pesky taxes and rule of law. As the accounts enter the twentieth century, the gloves come off completely, revealing a pair of dyed-in-the-wool libertarian fists that Ridley uses to pummel any nuance or balance. “Ah,” I thought, “if he cares more about the perceived evils of regulation than the proven benefits of trade, maybe he might actually think Brexit is a good idea after all.”

It was an interesting moment. Given the conflicting arguments in his book, I could imagine him equally well being an impassioned remainer as a vocal leaver. I decided to collapse this probability wave with a quick Google search, and sure enough …he’s strongly in favour of Brexit.

In theory, an author’s political views shouldn’t make any difference to a book about technology and progress. In practice, they barge into the narrative like boorish gatecrashers threatening to derail it entirely. The irony is that while Ridley is trying to make the case for rational optimism, his own personal political feelings are interspersed like a dusting of irrationality, undoing his own well-researched case.

It’s not just the argument that suffers. Those are the moments when the writing starts to get frothy, if not downright unhinged. There were a number of confusing and ugly sentences that pulled me out of the narrative and made me wonder where the editor was that day.

The last time I remember reading passages of such poor writing in a non-fiction book was Nassim Nicholas Taleb’s The Black Swan. In the foreword, Taleb provides a textbook example of the Dunning-Kruger effect by proudly boasting that he does not need an editor.

But there was another reason why I thought of The Black Swan while reading The Rational Optimist.

While Ridley’s anti-government feelings might have damaged his claim to rationality, surely his optimism is unassailable? Take, for example, his conclusions on climate change. He doesn’t (quite) deny that climate change is real, but argues persuasively that it won’t be so bad. After all, just look at the history of false pessimism that litters the twentieth century: acid rain, overpopulation, the Y2K bug. Those turned out okay, therefore climate change will be the same.

It’s here that Ridley succumbs to the trap that Taleb wrote about in his book: using past events to make predictions about inherently unpredictable future events. Taleb was talking about economics—warning of the pitfalls of treating economic data as though it followed a bell curve, when in fact it’s a power-law distribution.

Fine. That’s simply a logical fallacy, easily overlooked. But where Ridley really lets himself down is in the subsequent defence of fossil fuels. Or rather, in his attack on other sources of energy.

When recounting the mistakes of the naysayers of old, he points out that their fundamental mistake is to assume stasis. Hence their dire predictions of war, poverty, and famine. Ehrlich’s overpopulation scare, for example, didn’t account for the world-changing work of Borlaug’s green revolution (and Ridley rightly singles out Norman Borlaug for praise—possibly the single most important human being in history).

Yet when it comes to alternative sources of energy, they are treated as though they are set in stone, incapable of change. Wind and solar power are dismissed as too costly and inefficient. The Rational Optimist was written in 2008. Eight years ago, solar energy must have indeed looked like a costly investment. But things have changed in the meantime.

As Matt Ridley himself writes:

It is a common trick to forecast the future on the assumption of no technological change, and find it dire. This is not wrong. The future would indeed be dire if invention and discovery ceased.

And yet he fails to apply this thinking when comparing energy sources. If anything, his defence of fossil fuels feels grounded in a sense of resigned acceptance; a sense of …pessimism.

Matt Ridley rejects any hope of innovation from new ideas in the arena of energy production. I hope that he might take his own words to heart:

By far the most dangerous, and indeed unsustainable thing the human race could do to itself would be to turn off the innovation tap. Not inventing, and not adopting new ideas, can itself be both dangerous and immoral.

A wager on the web

Jason has written a great post about progressive web apps. It’s also a post about whether fears of the death of the web are justified.

Lately, I vacillate on whether the web is endangered or poised for a massive growth due to the web’s new capabilities. Frankly, I think there are indicators both ways.

So he applies Pascal’s wager. The hypothesis is that the web is under threat and progressive web apps are a solution to fighting that threat.

  • If the hypothesis is incorrect and we don’t build progressive web apps, things continue as they are on the web (which is not great for users—they have to continue to put up with fragile, frustratingly slow sites).
  • If the hypothesis is incorrect and we do build progressive web apps, users get better websites.
  • If the hypothesis is correct and we do build progressive web apps, users get better websites and we save the web.
  • If the hypothesis is correct and we don’t build progressive web apps, the web ends up pining for the fjords.

Whether you see the web as threatened or see Chicken Little in people’s fears and whether you like progressive web apps or feel it is a stupid Google marketing thing, we can all agree that putting energy into improving the experience for the people using our sites is always a good thing.

Jason is absolutely correct. There are literally no downsides to us creating progressive web apps. Everybody wins.

But that isn’t the question that people have been tackling lately. None of these (excellent) blog posts disagree with the conclusion that building progressive web apps as originally defined would be a great move forward for the web.

The real question that comes out of those posts is whether it’s good or bad for the future of progressive web apps—and by extension, the web—to build stop-gap solutions that use some progressive web app technologies (Service Workers, for example) while failing to be progressive in other ways (only working on mobile devices, for example).

In this case, there are two competing hypotheses:

  1. In the short term, it’s okay to build so-called progressive web apps that have a fragile technology stack or only work on specific devices, because over time they’ll get improved and we’ll end up with proper progressive web apps in the long term.
  2. In the short term, we should build proper progressive web apps, and it’s a really bad idea to build so-called progressive web apps that have a fragile technology stack or only work on specific devices, because that encourages more people to build sub-par websites and progressive web apps become synonymous with door-slamming single-page apps in the long term.

The second hypothesis sounds pessimistic, and the first sounds optimistic. But the people arguing for the first hypothesis aren’t coming from a position of optimism. Take Christian’s post, for example, which I fundamentally disagree with:

End users deserve to have an amazing, form-factor specific experience. Let’s build those.

I think end users deserve to have an amazing experience regardless of the form-factor of their devices. Christian’s viewpoint—like Alex’s tweetstorm—is rooted in the hypothesis that the web is under threat and in danger. The conclusion that comes out of that—building mobile-only JavaScript-reliant progressive web apps is okay—is a conclusion reached through fear.

Never make any decision based on fear.

dConstruct 2015 podcast: Nick Foster

dConstruct 2015 is just ten days away. Time to draw the pre-conference podcast to a close and prepare for the main event. And yes, all the talks will be recorded and released in podcast form—just as with the previous ten dConstructs.

The honour of the final teaser falls to Nick Foster. We had a lovely chat about product design, design fiction, Google, Nokia, Silicon Valley and Derbyshire.

I hope you’ve enjoyed listening to these eight episodes. I certainly had a blast recording them. They’ve really whetted my appetite for dConstruct 2015—I think it’s going to be a magnificent day.

With the days until the main event about to tick over into single digits, this is your last chance to grab a ticket if you haven’t already got one. And remember, as a loyal podcast listener, you can use the discount code ‘ansible’ to get 10% off.

See you in the future …next Friday!

dConstruct 2015 podcast: Brian David Johnson

The newest dConstruct podcast episode features the indefatigable and effervescent Brian David Johnson. Together we pick apart the futures we are collectively making, probe the algorithmic structures of science fiction narratives, and pay homage to Asimovian robotic legal codes.

Brian’s enthusiasm is infectious. I have a strong hunch that his dConstruct talk will be both thought-provoking and inspiring.

dConstruct 2015 is getting close now. Our future approaches. Interviewing the speakers ahead of time has only increased my excitement and anticipation. I think this is going to be a truly unmissable event. So, uh, don’t miss it.

Grab your ticket today and use the code ‘ansible’ to take advantage of the 10% discount for podcast listeners.

dConstruct 2015 podcast: John Willshire

The latest dConstruct 2015 podcast episode is ready for your aural pleasure. This one’s a bit different. John Willshire came down to Brighton so that we could have our podcast chat face-to-face instead of over Skype.

It was fascinating to see the preparation that John is putting into his talk. He had labelled cards strewn across the table, each one containing a strand that he wants to try to weave into his talk. They also made for great conversation starters. That’s how we ended up talking about Interstellar and Man Of Steel, and the differing parenting styles contained therein. I don’t think I’ll ever be able to rid myself of the mental image of a giant holographic head of Michael Caine dispensing words of wisdom in the Fortress Of Solitude. “Rage, rage against the dying of the light, Kal-el!”

The sound quality of this episode is more “atmospheric”, given the recording conditions (you can hear Clearlefties and seagulls in the background), but a splendid time was had by both John and myself. I hope that you enjoy listening to it.

I have a feeling that after listening to this, you’re definitely going to want to see John’s dConstruct talk, so grab yourself a ticket, using the discount code ‘ansible’ to get 10% off.

dConstruct 2015 podcast: Josh Clark

On Monday, I launched a new little experiment—a podcast series of interviews with the lovely people who will be speaking at this year’s dConstruct. I’m very much looking forward to the event (it presses all my future-geekery buttons) and talking to the speakers ahead of time is just getting me even more excited.

I’m releasing the second episode of the podcast today. It’s a chat with the thoroughly charming Josh Clark. We discuss technology, magic, Harry Potter, and the internet of things.

If you want to have this and future episodes delivered straight to your earholes, subscribe to the podcast feed.

And don’t forget: as a loyal podcast listener, you get 10% off the ticket price of dConstruct. Use the discount code “ansible”. You’re welcome.

Podcasting the future

I’m very proud of the three dConstructs I put together: 2012, 2013, and 2014, but I don’t have the fortitude to do it indefinitely so I’m stepping back from the organisational duties this year. So dConstruct 2015 is in Andy’s hands.

Of course he’s only gone and organised exactly the kind of conference that I’d feed my own grandmother to the ravenous bugblatter beast of Traal to attend. I mean, the theme is Designing The Future, for crying out loud!

To say I’m looking forward to hearing what all those great speakers have to say is something of an understatement. In fact, I couldn’t wait until September. I’ve started pestering them already.

On the off-chance that other people might be interested in hearing me prod, cajole, and generally geek out about technology, sci-fi, and futurism, I’m taking the liberty of recording our conversations.

That’s right: there’s a podcast.

The episodes will be about half an hour or so in length, sometimes longer, sometimes shorter. There’s no set format or agenda. It’s all very free-form, which is a polite way of saying that I’m completely winging it.

The first episode features the magnificent Matt Novak, curator of the Paleofuture blog. We talk about past visions of the future, the boom and bust cycles of utopias and dystopias, the Jetsons, 2001: A Space Odyssey, and the Apollo programme.

If you like what you hear, you can subscribe to the podcast feed.

Needless to say, you should come to this year’s dConstruct on September 11th here in Brighton. As compensation for listening to my experiments in podcasting, I’m going to sweeten the deal. Use the discount code “ansible” to get 10% off the ticket price. Aw, yeah!

100 words 073

The future Earth we see in Interstellar is a post-apocalyptic society. The population of the planet has been reduced to just a fraction of its current level. There have been wars and food shortages. And now the planet is dying and the human race is on its way out.

But instead of showing a dog-eat-dog battle for survival in the wasteland, we see people just getting on. It goes against the conventional wisdom that presupposes that if our Hobbesian Leviathan of civilisation were to be destroyed, our lives would inevitably revert to being nasty, brutish and short.

Hope

Cennydd points to an article by Ev Williams about the pendulum swing between open and closed technology stacks, and how that pendulum doesn’t always swing back towards openness. Cennydd writes:

We often hear the idea that “open platforms always win in the end”. I’d like that: the implicit values of the web speak to my own. But I don’t see clear evidence of this inevitable supremacy, only beliefs and proclamations.

It’s true. I catch myself saying things like “I believe the open web will win out.” Statements like that worry my inner empiricist. Faith-based outlooks scare me, and rightly so. I like being able to back up my claims with data.

Only time will tell what data emerges about the eventual fate of the web, open or closed. But we can look to previous technologies and draw comparisons. That’s exactly what Tim Wu did in his book The Master Switch and Jonathan Zittrain did in The Future Of The Internet—And How To Stop It. Both make for uncomfortable reading because they challenge my belief. Wu points to radio and television as examples of systems that began as egalitarian decentralised tools that became locked down over time in ever-constricting cycles. Cennydd adds:

I’d argue this becomes something of a one-way valve: once systems become closed, profit potential tends to grow, and profit is a heavy entropy to reverse.

Of course there is always the possibility that this time is different. It may well be that fundamental architectural decisions in the design of the internet and the workings of the web mean that this particular technology has an inherent bias towards openness. There is some data to support this (and it’s an appealing thought), but again, only time will tell. For now it’s just one more supposition.

The real question—when confronted with uncomfortable ideas that challenge what you’d like to believe is true—is what do you do about it? Do you look for evidence to support your beliefs or do you discard your beliefs entirely? That second option looks like the most logical course of action, and it’s certainly one that I would endorse if there were proven facts to be acknowledged (like gravity, evolution, or vaccination). But I worry about mistaking an argument that is still being discussed for an argument that has already been decided.

When I wrote about the dangers of apparently self-evident truisms, I said:

These statements aren’t true. But they are repeated so often, as if they were truisms, that we run the risk of believing them and thus, fulfilling their promise.

That’s my fear. Only time will tell whether the closed or open forces will win the battle for the soul of the internet. But if we believe that centralised, proprietary, capitalistic forces are inherently unstoppable, then our belief will help make them so.

I hope that openness will prevail. Hope sounds like such a wishy-washy word, like “faith” or “belief”, but it carries with it a seed of resistance. Hope, faith, and belief all carry connotations of optimism, but where faith and belief sound passive, even downright complacent, hope carries the promise of action.

Margaret Atwood was asked about the futility of having hope in the face of climate change. She responded:

If we abandon hope, we’re cooked. If we rely on nothing but hope, we’re cooked. So I would say judicious hope is necessary.

Judicious hope. I like that. It feels like a good phrase to balance empiricism with optimism; data with faith.

The alternative is to give up. And if we give up too soon, we bring into being the very endgame we feared.

Cennydd finishes:

Ultimately, I vote for whichever technology most enriches humanity. If that’s the web, great. A closed OS? Sure, so long as it’s a fair value exchange, genuinely beneficial to company and user alike.

This is where we differ. Today’s fair value exchange is tomorrow’s monopoly, just as today’s revolutionary is tomorrow’s tyrant. I will fight against that future.

To side with whatever’s best for the end user sounds like an eminently sensible metric to judge a technology. But I’ve written before about where that mindset can lead us. I can easily imagine Asimov’s three laws of robotics rewritten to reflect the ethos of user-centred design, especially that first and most important principle:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

…rephrased as:

A product or interface may not injure a user or, through inaction, allow a user to come to harm.

Whether the technology driving the system behind that interface is open or closed doesn’t come into it. What matters is the interaction.

But in his later years Asimov revealed the zeroth law, overriding even the first:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

It may sound grandiose to apply this thinking to the trivial interfaces we’re building with today’s technologies, but I think it’s important to keep drilling down and asking uncomfortable questions (even if they challenge our beliefs).

That’s why I think openness matters. It isn’t enough to use whatever technology works right now to deliver the best user experience. If that short-term gain comes with a long-term price tag for our society, it’s not worth it.

I would much rather have an imperfect open system than a perfect proprietary one.

I have hope in an open web …judicious hope.

Forgetting again

In an article entitled The future of loneliness, Olivia Laing writes about the promises and disappointments provided by the internet as a means of sharing and communicating. This isn’t particularly new ground and she readily acknowledges the work of Sherry Turkle in this area. The article is the vanguard of a forthcoming book called The Lonely City. I’m hopeful that the book won’t be just another baseless luddite reactionary moral panic as exemplified by the likes of Andrew Keen and Susan Greenfield.

But there’s one section of the article where Laing stops providing any data (or even anecdotal evidence) and presents a supposition as though it were unquestionably fact:

With this has come the slowly dawning realisation that our digital traces will long outlive us.

Citation needed.

I recently wrote a short list of three things that are not true, but are constantly presented as if they were beyond question:

  1. Personal publishing is dead.
  2. JavaScript is ubiquitous.
  3. Privacy is dead.

But I didn’t include the most pernicious and widespread lie of all:

The internet never forgets.

This truism is so pervasive that it can be presented as a fait accompli, without any data to back it up. If you were to seek out the data to back up the claim, you would find that the opposite is true—the internet is in a constant state of forgetting.

Laing writes:

Faced with the knowledge that nothing we say, no matter how trivial or silly, will ever be completely erased, we find it hard to take the risks that togetherness entails.

Really? Suppose I said my trivial and silly thing on Friendfeed. Everything that was ever posted to Friendfeed disappeared three days ago:

You will be able to view your posts, messages, and photos until April 9th. On April 9th, we’ll be shutting down FriendFeed and it will no longer be available.

What if I shared on Posterous? Or Vox (back when that domain name was a social network hosting 6 million URLs)? What about Pownce? Geocities?

These aren’t the exceptions—this is routine. And yet somehow, despite all the evidence to the contrary, we still keep a completely straight face and say “Be careful what you post online; it’ll be there forever!”

The problem here is a mismatch of expectations. We expect everything that we post online, no matter how trivial or silly, to remain forever. When instead it is callously destroyed, our expectation—which was fed by the “knowledge” that the internet never forgets—is turned upside down. That’s where the anger comes from; the mismatch between expected behaviour and the reality of this digital dark age.

Being frightened of an internet that never forgets is like being frightened of zombies or vampires. These things do indeed sound frightening, and there’s something within us that readily responds to them, but they bear no resemblance to reality.

If you want to imagine a truly frightening scenario, imagine an entire world in which people entrust their thoughts, their work, and pictures of their family to online services in the mistaken belief that the internet never forgets. Imagine the devastation when all of those trivial, silly, precious moments are wiped out. For some reason we have a hard time imagining that dystopia even though it has already played out time and time again.

I am far more frightened by an internet that never remembers than I am by an internet that never forgets.

And worst of all, by propagating the myth that the internet never forgets, we are encouraging people to focus in exactly the wrong area. Nobody worries about preserving what they put online. Why should they? They’re constantly being told that it will be there forever. The result is that their history is taken from them:

If we lose the past, we will live in an Orwellian world of the perpetual present, where anybody that controls what’s currently being put out there will be able to say what is true and what is not. This is a dreadful world. We don’t want to live in this world.

Brewster Kahle

100 words 005

I enjoy a good time travel yarn. Two of the most enjoyable temporal tales of recent years have been Rian Johnson’s film Looper and William Gibson’s book The Peripheral.

Mind you, the internal time travel rules of Looper are all over the place, whereas The Peripheral is wonderfully consistent.

Both share an interesting commonality in their settings. They are set in the future and …the future: two different time periods but neither of them are the present. Both works also share the premise that the more technologically advanced future would inevitably exploit the time period further down the light cone.

Ordinary plenty

Aaron asked a while back “What do we own?”

I love the idea of owning your content and then syndicating it out to social networks, photo sites, and the like. It makes complete sense… Web-based services have a habit of disappearing, so we shouldn’t rely on them. The only Web that is permanent is the one we control.

But he quite rightly points out that we never truly own our own domains: we rent them. And when it comes to our servers, most of us are renting those too.

It looks like print is a safer bet for long-term storage. Although when someone pointed out that print isn’t any guarantee of perpetuity either, Aaron responded:

Sure, print pieces can be destroyed, but important works can be preserved in places like the Beinecke

Ah, but there’s the crux—that adjective, “important”. Print’s asset—the fact that it is made of atoms, not bits—is also its weak point: there are only so many atoms to go around. And so we pick and choose what we save. Inevitably, we choose to save the works that we deem to be important.

The problem is that we can’t know today what the future value of a work will be. A future president of the United States is probably updating their Facebook page right now. The first person to set foot on Mars might be posting a picture to her Instagram feed at this very moment.

One of the reasons that I love the Internet Archive is that they don’t try to prioritise what to save—they save it all. That’s in stark contrast to many national archival schemes that only attempt to save websites from their own specific country. And because the Internet Archive isn’t a profit-driven enterprise, it doesn’t face the business realities that caused Google to back-pedal from its original mission. Or, as Andy Baio put it, never trust a corporation to do a library’s job.

But even the Internet Archive, wonderful as it is, suffers from the same issue that Aaron brought up with the domain name system—it’s centralised. As long as there is just one Internet Archive organisation, all of our preservation eggs are in one magnificent basket:

Should we be concerned that the technical expertise and infrastructure for doing this work is becoming consolidated in a single organization?

Which brings us back to Aaron’s original question. Perhaps it’s less about “What do we own?” and more about “What are we responsible for?” If we each take responsibility for our own words, our own photos, our own hopes, our own dreams, we might not be able to guarantee that they’ll survive forever, but we can still try everything in our power to keep them online. Maybe by acknowledging that responsibility to preserve our own works, instead of looking for some third party to do it for us, we’re taking the most important first step.

My words might not be as important as the great works of print that have survived thus far, but because they are digital, and because they are online, they can and should be preserved …along with all the millions of other words by millions of other historical nobodies like me out there on the web.

There was a beautiful moment in Cory Doctorow’s closing keynote at last year’s dConstruct. It was an aside to his main argument but it struck like a hammer. Listen in at the 20 minute mark:

They’re the raw stuff of communication. Same for tweets, and Facebook posts, and the whole bit. And this is where some cynic usually says, “Pah! This is about preserving all that rubbish on Facebook? All that garbage on Twitter? All those pictures of cats?” This is the emblem of people who want to dismiss all the stuff that happens on the internet.

And I’m supposed to turn around and say “No, no, there’s noble things on the internet too. There’s people talking about surviving abuse, and people reporting police violence, and so on.” And all that stuff is important but I’m going to speak for the banal and the trivial here for a moment.

Because when my wife comes down in the morning—and I get up first; I get up at 5am; I’m an early riser—when my wife comes down in the morning and I ask her how she slept, it’s not because I want to know how she slept. I sleep next to my wife. I know how my wife slept. The reason I ask how my wife slept is because it is a social signal that says:

I see you. I care about you. I love you. I’m here.

And when someone says something big and meaningful like “I’ve got cancer” or “I won” or “I lost my job”, the reason those momentous moments have meaning is because they’ve been built up out of this humus of a million seemingly-insignificant transactions. And if someone else’s insignificant transactions seem banal to you, it’s because you’re not the audience for that transaction.

The medieval scribes of Ireland, out on the furthermost edges of Europe, worked to preserve the “important” works. But occasionally they would also note down their own marginalia like:

Pleasant is the glint of the sun today upon these margins, because it flickers so.

Short observations of life in fewer than 140 characters. Like this lovely example written in ogham, a morse-like system of encoding the western alphabet in lines and scratches. It reads simply “latheirt”, which translates to something along the lines of “massive hangover.”

I’m glad that those “unimportant” words have also been preserved.

Centuries later, the Irish poet Patrick Kavanagh would write about the desire to “wallow in the habitual, the banal”:

Wherever life pours ordinary plenty.

Isn’t that a beautiful description of the web?

Interstelling

Jessica and I entered the basement of The Dukes at Komedia last weekend to listen to Sarah and her band Spacedog provide live musical accompaniment to short sci-fi films from the end of the nineteenth and start of the twentieth centuries.

It was part of the Cine City festival, which is still going on here in Brighton—Spacedog will also be accompanying a performance of John Wyndham’s The Midwich Cuckoos, and there’s going to be a screening of François Truffaut’s brilliant film version of Ray Bradbury’s Fahrenheit 451 in the atmospheric surroundings of Brighton’s former reference library. I might try to get along to that, although there’s a good chance that I might cry at my favourite scene. Gets me every time.

Those 100-year old sci-fi shorts featured familiar themes—time travel, monsters, expeditions to space. I was reminded of a recent gathering in San Francisco with some of my nerdiest of nerdy friends, where we discussed which decade might qualify as the golden age of science fiction cinema. The 1980s certainly punched above their weight—1982 and 1985 were particularly good years—but I also said that I think we’re having a bit of a sci-fi cinematic golden age right now. This year alone we’ve had Edge Of Tomorrow, Guardians Of The Galaxy, and Interstellar.

Ah, Interstellar!

If you haven’t seen it yet, now would be a good time to stop reading. Imagine that I’ve written the word “spoilers” in all-caps, followed by many many line breaks before continuing.

Ten days before we watched Spacedog accompanying silent black and white movies in a tiny basement theatre, Jessica and I watched Interstellar on the largest screen we could get to. We were in Seattle, which meant we had the pleasure of experiencing the film projected in 70mm IMAX at the Pacific Science Center, right by the space needle.

I really, really liked it. Or, at least, I’ve now decided that I really, really liked it. I wasn’t sure when I first left the cinema. There were many things that bothered me, and those things battled against the many, many things that I really enjoyed. But having thought about it more—and, boy, does this film encourage thought and discussion—I’ve been able to resolve quite a few of the issues I was having with the film.

I hate to admit that most of my initial questions were on the science side of things. I wish I could’ve switched off that part of my brain.

There’s an apocryphal story about an actor asking “Where’s the light coming from?”, and being told “Same place as the music.” I distinctly remember thinking that very same question during Interstellar. The first planetfall of the film lands the actors and the audience on a world in orbit around a black hole. So where’s the light coming from?

The answer turns out to be that the light is coming from the accretion disk of that black hole.

But wouldn’t the radiation from the black hole instantly fry any puny humans that approach it? Wouldn’t the planet be ripped apart by the gravitational tides?

Not if it’s a rapidly-spinning supermassive black hole with a “gentle” singularity.

These are nit-picky questions that I wish I wasn’t thinking of. But I like the fact that there are answers to those questions. It’s just that I need to seek out those answers outside the context of the movie—I should probably read Kip Thorne’s book. The movie gives hints at resolving those questions—there’s just one mention of the gentle singularity—but it’s got other priorities: narrative, plot, emotion.

Still, I wish that Interstellar had managed to answer my questions while the film was still happening. This is something that Inception managed brilliantly: for all its twistiness, you always know exactly what’s going on, which is no mean feat. I’m hoping and expecting that Interstellar will reward repeated viewings. I’m certainly really looking forward to seeing it again.

In the meantime, I’ll content myself with re-watching Inception, which makes a fascinating companion piece to Interstellar. Both films deal with time and gravity as malleable, almost malevolent forces. But whereas Cobb travels as far inward as it is possible for a human to go, Coop travels as far outward as it is possible for our species to go.

Interstellar is kind of a mess. There’s plenty of sub-par dialogue and strange narrative choices. But I can readily forgive all that because of the sheer ambition and imagination on display. I’m not just talking about the imagination and ambition of the film-makers—I’m talking about the ambition and imagination of the human race.

That’s at the heart of the film, and it’s a message I can readily get behind.

Before we even get into space, we’re shown a future that, by any reasonable definition, would be considered a dystopia. The human race has been reduced to a small fraction of its former population, technological knowledge has been lost, and the planet is dying. And yet, where this would normally be the perfect storm required to show roving bands of road warriors pillaging their way across the dusty landscape, here we get an agrarian society with no hint of violence. The nightmare scenario is not that the human race is wiped out through savagery, but that the human race dies out through a lack of ambition and imagination.

Religion isn’t mentioned once in this future, but Interstellar does feature a deus ex machina in the shape of a wormhole that saves the day for the human race. I really like the fact that this deus ex machina isn’t something that’s revealed at the end of the movie—it’s revealed very early on. The whole plot turns out to be a glorious mash-up of two paradoxes: the bootstrap paradox and the twin paradox.

The end result feels like a mixture of two different works by Arthur C. Clarke: The Songs Of Distant Earth and 2001: A Space Odyssey.

2001 is the more obvious work to compare it to, and the film readily invites that comparison. Many reviewers have been quick to point out that Interstellar doesn’t reach the same heights as Kubrick’s 2001. That’s a fair point. But then again, I’m not sure that any film can ever reach the bar set by 2001. I honestly think it’s as close to perfect as any film has ever come.

But I think it’s worth pointing out that when 2001 was released, it was not greeted with universal critical acclaim. Quite the opposite. Many reviewers found it tedious, cold, and baffling. It divided opinion greatly …much like Interstellar is doing now.

In some ways, Interstellar offers a direct challenge to 2001—what if mankind’s uplifting is not caused by benevolent alien beings, but by the distant descendants of the human race?

This is revealed as a plot twist, but it was pretty clearly signposted from early in the film. So, not much of a plot twist then, right?

Well, maybe not. What if Coop’s hypothesis—that the wormhole is the creation of future humans—isn’t entirely correct? He isn’t the only one who crosses the event horizon. He is accompanied by the robot TARS. In the end, the human race is saved by the combination of Coop the human’s connection to his daughter, and the analysis carried out by TARS. Perhaps what we’re witnessing there is a glimpse of the true future for our species: human-machine collaboration. After all, if humanity is going to transcend into a fifth-dimensional species at some future point, it’s unlikely to happen through biology alone. But if you combine the best of the biological—a parent’s love for their child—with the best of technology, then perhaps our post-human future becomes not only plausible, but inevitable.

Deus ex machina.

Thinking about the future of the species in this co-operative way helps alleviate the uncomfortable feeling I had that Interstellar was promoting a kind of Manifest Destiny for the human race …although I’m not sure that I’m any more comfortable with that being replaced by a benevolent technological determinism.

Polyfills and products

I was chatting about polyfills recently with Bruce and Remy—who coined the term:

A polyfill, or polyfiller, is a piece of code (or plugin) that provides the technology that you, the developer, expect the browser to provide natively. Flattening the API landscape if you will.

I mentioned that I think that one of the earliest examples of what we would today call a polyfill was the IE7 script by Dean Edwards.

Dean wrote this (amazing) piece of JavaScript back when Internet Explorer 6 was king of the hill and Microsoft had stopped development of their browser entirely. It was a pretty shitty time in browserland back then. While other browsers were steaming ahead with standards support, Dean’s script pulled IE6 up by its bootstraps and made it understand CSS2.1 features. Crucially, you didn’t have to write your CSS any differently for the IE7 script to work—the classic hallmark of a polyfill.

Scott has a great post over on the Filament Group blog asking To Picturefill, or not to Picturefill?. Therein, he raises the larger issue of when to use polyfills of any kind. After all, every polyfill you use is a little bit of a tax that the end user must pay with a download.

Polyfills typically come at a cost to users as well, since they require users to download and execute JavaScript in order to work. Sometimes, frequently even, that cost outweighs the benefits that the polyfill would bring. For that reason, the question of whether or not to use any polyfill should be taken seriously.

Scott takes a very thoughtful approach to using any polyfill, and I try to do the same. I feel that it’s important to have an exit strategy for every polyfill you decide to use. After all, the whole point of a polyfill is that it’s a stop-gap measure until a particular feature is more widely supported.
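To make that concrete, here’s a minimal sketch of the pattern I have in mind—entirely my own illustration, not something prescribed by Scott’s post. The feature test and the file path are assumptions for the sake of the example: feature-detect first, and only download the polyfill when the browser lacks native support for the picture element.

    // Illustrative sketch only: conditionally loading a polyfill such as Picturefill.
    // The feature test and the file path are assumptions, not a prescribed API.
    if (!window.HTMLPictureElement) {
      // No native <picture> support, so fetch the polyfill script.
      var script = document.createElement('script');
      script.src = '/js/picturefill.js';
      script.async = true;
      document.head.appendChild(script);
    }
    // Browsers with native support never pay the download cost, and once
    // support is universal this whole block becomes dead code—the exit strategy.

The nice thing about that pattern is that the exit strategy is baked in: as support improves, fewer and fewer visitors ever request the extra script, and eventually the whole block can simply be deleted.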

And that’s where I run into one of the issues of working at an agency. At Clearleft, our time working with a client usually lasts a few months. At the end of that time, we’ll have delivered whatever the client needs: sometimes that’s design work; sometimes it’s design and a front-end pattern library.

Every now and then we get to revisit a project—like with Code for America—but that’s the exception rather than the rule. We’ve had to get very, very good at handover precisely because we won’t be the ones maintaining the code that we deliver (though we always try to budget in time to revisit the developers who are working with the code to answer any questions they might have).

That makes it very tricky to include a polyfill in our deliverables. We’d need to figure out a way of also including a timeline for revisiting that polyfill and evaluating when it’s time to drop it. That’s not an impossible task, but it’s much, much easier if you’re a developer working on a product (as opposed to a developer working at an agency). If you’re going to be the same person working on the code in the future—as well as working on it right now—it gets a lot easier to plan for evaluating polyfill usage further down the line. Set a recurring item in your calendar and you should be all set.

It’s a similar situation with vendor prefixes. Vendor prefixes were never intended to be a long-lasting part of any style sheet. Like polyfills, they’re supposed to be used with an exit strategy in mind: when the time is right, remove the prefixed styles, leaving only the unprefixed standardised CSS. Again, that’s a lot easier to do if you’re working on a product and you know that you’ll be the one revisiting the CSS later on. That’s harder to do at an agency where you’re handing over CSS to someone else.

I’m quite reluctant to use any vendor prefixes at all—which is as it should be; vendor prefixes should not be used lightly. Sometimes they’re unavoidable, but that shouldn’t stop us thinking about how to remove them at a later date.

I’m mostly just thinking out loud here. I guess my point is that certain front-end development techniques and technologies feel like they’re better suited to product work rather than agency work. Although I’m sure there are plenty of counter-examples out there too of tools that really fit the agency model and are less useful for working on the same product over a long period.

But even though the agency world and the product world are very different in lots of ways, both of them require us to think about the future. How long will the code you’re writing today last? And do you have a plan for when it needs updating or replacing?

dConstruct 2014

dConstruct is all done for another year. Every year I feel sort of dazed in the few days after the conference—I spend so much time and energy preparing for this event looming in my future, that it always feels surreal when it’s suddenly in the past.

But this year I feel particularly dazed. A little numb. Slightly shellshocked even.

This year’s dConstruct was …heavy. Sure, there were some laughs (belly laughs, even) but overall it was a more serious event than previous years. The word that I heard the most from people afterwards was “important”. It was an important event.

Here’s the thing: if I’m going to organise a conference in 2014 and give it the theme of “Living With The Network”, and then invite the most thoughtful, informed, eloquent speakers I can think of …well, I knew it wasn’t going to be rainbows and unicorns.

If you were there, you know what I mean. If you weren’t there, it probably sounds like it wasn’t much fun. To be honest, “fun” wasn’t the highest thing on the agenda this year. But that feels right. And even though it wasn’t a laugh-fest, it was immensely enjoyable …if, like me, you enjoy having your brain slapped around.

I’m going to need some time to process and unpack everything that was squeezed into the day. Fortunately—thanks to Drew’s typical Herculean efforts—I can do that by listening to the audio, which is already available!

Slap the RSS feed in your generic MP3 listening device of choice and soak up the tsunami of thoughts, ideas, and provocations that the speakers delivered.

Oh boy, did the speakers ever deliver!

[Photos from the day: Warren Ellis, Georgina Voss, Clare Reddington, Aaron Straup Cope, Brian Suda, Mandy Brown, Anab Jain, Tom Scott, and Cory Doctorow at dConstruct.]

Listen, it’s very nice that people come along to dConstruct each year and settle into the Brighton Dome to listen to these talks, but the harsh truth is that I didn’t choose the speakers for anyone else but myself. I know that’s very selfish, but it’s true. By lucky coincidence, the speakers I want to see turn out to deliver the best damn talks on the planet.

That said, as impressed as I was by the speakers, I was equally impressed by the audience. They were not spoon-fed. They had to contribute their time, attention, and grey matter to “get” those talks. And they did. For that, I am immensely grateful. Thank you.

I’m not going to go through all the talks one by one. I couldn’t do them justice. What was wonderful was to see the emerging themes, ideas, and references that crossed over from speaker to speaker: thoughts on history, responsibility, power, control, and the future.

And yes, there was definitely a grim undercurrent to some of those ideas about the future. But there was also hope. More than one speaker pointed out that the future is ours to write. And the emphasis on history highlighted that our present moment in time—and our future trajectory—is all part of an ongoing amazing collective narrative.

But it’s precisely because the future is ours to write that this year’s dConstruct hammered home our collective responsibility. This year’s dConstruct was a grown-up, necessarily serious event that shined a light on our current point in history …and maybe, just maybe, provided some potential paths for the future.

Seams

You can listen to an audio version of Seams.

“The function of science fiction,” said Ray Bradbury, “is not only to predict the future, but to prevent it.”

Dystopias are the default setting for science fiction. It’s rare to find utopian sci-fi, and when you do—as in the post-singularity Culture novels of Iain M. Banks—there’s always more than a germ of dystopia; the ustopias that Margaret Atwood speaks of.

You’ve got your political dystopias—1984 and all its imitators. Then there’s alien invasion dystopias, machine-intelligence dystopias, and a whole slew of post-apocalyptic dystopias: nuclear war, pandemic disease, environmental collapse, genetic engineering …take your pick. From the cosy catastrophes of John Wyndham to Cormac McCarthy’s The Road, this is the stock-in-trade of speculative fiction.

Of all these undesirable futures, one that troubles me more than any other is the Wall·E dystopia. I’m not talking about the environmental wasteland depicted on Earth. I mean the ustopia depicted aboard the generation starship The Axiom. Here, humanity’s every need is catered to without requiring any thought. And so humanity atrophies, becoming physically obese and intellectually lazy.

It’s not a new idea. H. G. Wells had already shown us a distant future like this in his classic novel The Time Machine. In the far future of that book’s timeline, humanity splits into two. The savagery of the cannibalistic Morlocks is contrasted with the docile passive stupidity of the Eloi, but as Jaron Lanier points out, both endpoints are equally horrific.

In Wall·E, the Eloi have advanced technology. Their technology has been designed according to a design principle enshrined in the title of a Dead Kennedys album: Give Me Convenience Or Give Me Death.

That’s the reason why the Wall·E dystopia disturbs me so much. It’s all too believable. For many years now, the rallying cry of digital designers has been epitomised by the title of Steve Krug’s terrific book, Don’t Make Me Think. But what happens when that rallying cry is taken too far? What happens when it stops being “don’t make me think while I’m trying to complete a task” to simply “don’t make me think” full stop?

Convenience. Ease of use. Seamlessness.

On the face of it, these all seem like desirable traits in digital and physical products alike. But they come at a price. When we design, we try to do the work so that the user doesn’t have to. We do the thinking so the user doesn’t have to. Don’t make the user think. But taken too far, that mindset becomes dangerous.

Marshall McLuhan said that every extension is also an amputation. As we augment the abilities of people to accomplish their tasks, we should be careful not to needlessly curtail what they can do:

Here we are, a society hell bent on extending our reach through phones, through computers, through “seamless integration” and yet all along the way we’re unwittingly losing perhaps as much as we gain. The mediums we create are built to carry out specific tasks efficiently, but by doing so they have a tendency to restrict our options for accomplishing that task by other means. We begin to learn the “One” way to do it, when in fact there are infinite ways. The medium begins to restrict our thinking, our imagination, our potential.

The idea of “seamlessness” as a desirable trait in what we design is one that bothers me. Technology has seams. By hiding those seams, we may think we are helping the end user, but we are also making a conscious choice to deceive them (or at least restrict what they can do).

I see this a lot in the world of web development. We’re constantly faced with challenges like dealing with users on slow networks or small screens. So we try to come up with solutions (bandwidth media queries, responsive images) that have at their heart an assumption that we know better than the end user what they should get.

I’m not saying that everything should be an option in a menu for the user to figure out—picking smart defaults is very much part of our job. But I do think there’s real value in giving the user the final choice.

I remember Jake giving a good example of this. If he’s travelling and he’s on a 3G network on his phone, or using shitty hotel WiFi on his laptop, and someone sends him a link to a video of some cats, he doesn’t mind if he gets the low-quality version as long as he gets to see the feline shenanigans in short order. But if he’s in the same situation and someone sends him a link to the just-released trailer for the new Star Trek movie, he’s willing to wait for hours so that he can watch in high-definition.

That’s a choice. All too often, these kind of choices are pre-made by designers and developers instead of being offered to the end user. We probably mean well, but there’s a real danger in assuming that just because someone is using a particular device that we can infer what their context is:

Mind reading is no way to base fundamental content decisions.

My point is that while we don’t want to overwhelm the user with choice overload, we also need to be careful not to unintentionally remove valuable choices that can empower people. In our quest to make experiences seamless, we run the risk of also making those experiences rigid and inflexible.
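As a purely illustrative sketch (mine, not Jake’s), imagine defaulting to the lightweight version but leaving the final call with the person using the site, rather than guessing from their connection. All of the element IDs and file paths here are invented for the example.

    // Illustrative sketch: a smart default plus an explicit user choice.
    // Every ID and path here is made up for the purposes of the example.
    var video = document.getElementById('trailer');
    var hdLink = document.getElementById('watch-in-hd');

    // Smart default: everyone gets the small file to start with.
    video.src = '/video/trailer-360p.mp4';

    // The final choice stays with the user, whatever their connection.
    hdLink.addEventListener('click', function (event) {
      event.preventDefault();
      video.src = '/video/trailer-1080p.mp4';
      video.play();
    });

The default does the thinking for people, but the choice is still theirs to override.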

The drive for a “seamless experience” has been used to justify some harsh amputations. When Twitter declared war on the very developers it used to champion, and changed its API and terms of service so that tweets had to be displayed the same way everywhere, it was done in the name of “a consistent user experience.” Twitter knows best.

The web is made up of parts and there are seams between those parts: HTML, HTTP, and URLs. The software that can expose or hide those seams is the web browser. Web browsers are made by human beings and it’s the mindset and assumptions of those human beings that determine whether web browsers are enabling or disabling users to make use of those seams.

“View source” is a seam that exposes the HTML lying beneath every web page. That kind of X-ray vision can be quite powerful. Clearly it’s not an important feature for most users, but it is directly responsible for showing people how web pages are made …and intimating that anyone can do it. In the introduction to my first book I thanked “view source” along with my other teachers like Jeff Veen, Steve Champeon, and Jeffrey Zeldman.

These days, browsers don’t like to expose “view source” as easily as they once did. It’s hidden amongst the developer tools. There’s an assumption there that it’s not intended for regular users. The browser makers know best.

There are seams between the technologies that make up a web page: HTML, CSS, and JavaScript. The ability to enable or disable those layers can be empowering. It has become harder and harder to disable JavaScript in the browser. Another little amputation. The browser makers know best.

The CSS that styles web pages can be over-ridden by the end user. This is not a bug. It is a very powerful feature. That feature is being removed:

I understand that vendors can do whatever they want to control how you experience the web, because it is their software, their product, but removing user stylesheets feels sooo un-web to me, which is irony. A browser’s largest responsibility is to give people access to the web. It’s like the web is this open hand, but software is this closed fist.

Then there’s the URL. The ultimate seam.

Historically, browsers have exposed this seam, but now—just as with “view source” and user stylesheets—the visibility of the URL is being relegated to being a power-user tool.

The ultimate amputation.

The irony here is that the justification for this change is not the usual mantra of providing “a more seamless user experience.” Instead, the justification is supposedly security.

This strikes me as really strange. Security is the one area where seamlessness is definitely not a desirable characteristic. A secure system requires people to be mindful and aware of their situation. This is certainly true on the web, as Tom points out:

Hiding information away makes me less able to make decisions: it makes me a less informed user.

The whole reason that phishing is a problem is because users don’t pay any bloody attention to what they see in their location bar. Putting less information in the location bar makes the location bar less useful and thus there’s less point paying any attention to it.

Tom has hit on the fundamental mismatch here. Chrome is a piece of software that wants to provide a good user experience—“don’t make me think!”—while at the same trying to make users mindful of their surroundings:

Security requires educated, pro-active, informed thinking users.

Usability is about making the whole process of using the web seamless and thoughtless: a child should be able to do it.

So from the security standpoint, obfuscating the URL is exactly the wrong thing to do.

In order to actually stay safe online, you need to see the “seams” of the web, you need to pay attention, use your brain.

Chrome knows best.

Making it harder to “view source” might seem like an inconsequential decision. Removing the ability to apply user stylesheets might seem like an inconsequential decision. Heck, even hiding the URL might seem like an inconsequential decision. But each one of those decisions has repercussions. And each one of those decisions reflects an underlying viewpoint.

Make no mistake, all software is political. We talk about opinionated software but really, all software is opinionated, whether we like it or not. Seemingly inconsequential interface decisions are actually reflections of assumptions, biases and beliefs.

As Nat points out, like all political decisions, this is about power:

There’s been much debate about whether the URLs are ‘ugly’ or ‘beautiful’ and whether people really understand them. This debate misses the point.

The URLs are the cornerstone of the interconnected, decentralised web. Removing the URLs from the browser is an attempt to expand and consolidate centralised power.

If that’s the case, then it really doesn’t matter what we think about Chrome removing visible URLs. What appears to be a design decision about the user interface is in fact a manifestation of a much deeper vision. It’s a vision of a future where people can have everything their heart desires without having to expend needless thought. It’s a bright future filled with seamless experiences.

Welcome aboard The Axiom.

Buy n Large knows best.

The mind-blowing awesomeness of dConstruct 2012

Where do I start?

I could start by saying that dConstruct 2012 was one of the best days of my life. But let me back up a bit…

Here’s what I did last week:

  • Sunday, September 2nd: The amazing PixelPyros at Jubilee Square with Seb, followed by The Geekest Link pub quiz at The Caroline of Brunswick.
  • Monday, September 3rd to Wednesday, September 5th: non-stop Reasons To Be Creative.
  • Thursday, September 6th: Improving Reality with the brilliant Warren Ellis followed by Brighton SF, which exceeded my wildest expectations.
  • Friday, September 7th: dConstruct. Indescribably brilliant.
  • Saturday, September 8th: Mini Maker Faire, a fantastic collection of hackers and hardware in one place.
  • Sunday, September 9th: IndieWebCamp UK round at The Skiff with some of the smartest people I know.

That was just one week in the Brighton Digital Festival! And the weather was perfect the whole time—glorious sunshine.

I was really nervous on the day of Brighton SF. Like I said, I had no idea what I was doing. But I began to calm down right before the event.

I was sitting outside with Christopher Priest (I told him how much I liked Inverted World) and Joanne McNeil when the Brighton SF authors showed up, met one another, and started chatting. That’s when I knew everything was going to be fine.

Jeff Noon. Lauren Beukes. Brian Aldiss. Three giants of science fiction. Three warm, friendly, and charming people.

The event was so good. Each of the authors was magnificently charismatic and captivating, the readings were absolutely enthralling, and I ended up thoroughly enjoying myself.

Thank you for sending in questions for the authors. On the night, things were going so smoothly and time was flying by so fast, I actually didn’t get a chance to ask them …sorry.

It was a wonderful event and Drew very graciously agreed to record the audio so there’s going to be a podcast and a transcript available very soon. Watch this space.

When the day of dConstruct dawned, I was already in a good mood from Brighton SF. But nothing could have prepared me for what was to come.

I had the great honour and pleasure of introducing an amazing line-up of speakers. Seriously, every single speaker was absolutely superb. It was all killer, no filler.

Ben’s keynote set the scene perfectly. And boy, what a trooper! He really wasn’t a well chap, but with classic English stoicism and moustachioed stiff upper lip, he delivered the perfect opening for a day of playing with the future.

From there, it was just a non-stop delivery of brilliance from each speaker. After each talk, I kept using the words “awesome” and “mind-blowing”, but y’know what? They were awesome and mind-blowing!

Ben Hammersley. Jenn Lukas. Scott Jenson. Ariel Waldman. Seb Lee-Delisle. Lauren Beukes. Jason Scott. Tom Armitage.

And at the end …James Burke.

(this is the point at which I really needed to study the dreams/reality diagram because I was beginning to lose my grip on what was real)

James Burke

What can I say? I was really hoping it would be as good as an episode of Connections but what I got was like an entire season of Connections condensed into 45 minutes of brain-bending rapid-fire brilliance. It was mind-blowing. It was awesome. It broke my brain in the best possible way.

When James finished and the day was done, I was quite overcome. I was just so …happy! I had the privilege of hosting the smartest, most entertaining people I know. And I’m not just talking about the speakers.

At the after-party—and on Twitter—attendees told me just how much they enjoyed dConstruct 2012. I felt very happy, very proud, and kind of vindicated—it was something of a risky line-up and tickets were selling slower than in previous years, but boy, oh boy, that line-up really delivered the goods on the day.

Here’s one write-up of dConstruct. If you were there, I’d really appreciate it if you wrote down what you thought of the event. Drop me a line and point me to your blog post.

If you weren’t there …my commiserations. But here’s something that might serve as some consolation:

Thanks to Drew’s tireless work through the weekend, the audio from Friday’s conference is already online! Browse through the talks on the dConstruct archive or subscribe to a podcast of the talks on Huffduffer.

But you really had to be there.

Admiral Shovel and the Toilet Roll on Huffduffer

Detection

When I wrote about responsible responsive images a few months back, I outlined my two golden rules when evaluating the various techniques out there:

  1. The small image should be default.
  2. Don’t load images twice (in other words, don’t load the small images and the larger images).

I also described why that led to my dissatisfaction with most server-side device libraries for user-agent sniffing:

When you consider the way that I’m approaching responsive images, those libraries are over-engineered. They contain a massive list of mobile user-agent strings that I’ll never need. Remember, I’m taking a mobile-first approach and assuming a mobile browser by default. So if I’m going to overturn that assumption, all I need is a list of desktop user-agent strings.

I finished by asking:

Anybody fancy putting it together?

Well, it turns out that Brett Jankord is doing just that with a device-detection script called Categorizr:

Instead of assuming the device is a desktop, and detecting mobile and tablet device user agents, Categorizr is a mobile first based device detection. It assumes the device is mobile and sets up checks to see if it’s a desktop or tablet. Desktops are fairly easy to detect, the user agents are known, and are not changing anytime soon.

It isn’t ready for public consumption yet and there are plenty of known issues to iron out first, but I think the fundamental approach is spot-on:

By assuming devices are mobile from the beginning, Categorizr aims to be more future friendly. When new phones come out, you don’t need to worry if their new user agent is in your device detection script since devices are assumed mobile from the start.
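
Just to make that approach a bit more concrete, here’s a rough sketch of what a mobile-first check could look like. This is my own illustrative code (not Categorizr itself), and the user-agent patterns are assumptions rather than a definitive list:

    // Mobile-first device categorisation: assume "mobile" unless the
    // user-agent string matches a known desktop or tablet pattern.
    // The patterns below are illustrative, not exhaustive.
    function categorise(userAgent) {
      var ua = (userAgent || '').toLowerCase();

      // Tablets: a handful of well-known identifiers.
      if (/ipad|kindle|playbook|silk/.test(ua) ||
          (/android/.test(ua) && !/mobile/.test(ua))) {
        return 'tablet';
      }

      // Desktops: stable, well-known tokens that aren't changing any time soon.
      if (/windows nt|macintosh|x11|cros/.test(ua) && !/mobi/.test(ua)) {
        return 'desktop';
      }

      // Anything else, including devices that don't exist yet, defaults to mobile.
      return 'mobile';
    }

The important part is that final return statement: a brand new device with an unrecognised user-agent string gets the mobile experience by default.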

Responsible responsive images

I’m in Belfast right now for this year’s Build conference, so I am. I spent yesterday leading a workshop on responsive enhancement—the marriage of responsive design with progressive enhancement; a content-first approach to web design.

I spent a chunk of time in the afternoon going over the thorny challenges of responsive images. Jason has been doing a great job of rounding up all the options available to you when it comes to implementing responsive images:

  1. Responsive IMGs, Part 1,
  2. Responsive IMGs, Part 2—an in-depth look at techniques,
  3. Responsive IMGs, Part 3—the future of the img element.

Personally, I have two golden rules in mind when it comes to choosing a responsive image technique for a particular project:

  1. The small image should be default.
  2. Don’t load images twice (in other words, don’t load the small images and the larger images).

That first guideline simply stems from the mobile-first approach: instead of thinking of the desktop experience as the default, I’m assuming that people are using small screen, narrow bandwidth devices until proven otherwise.

Assuming a small-screen device by default, the problem is now how to swap out the small images for larger images on wider viewports …without downloading both images.

I like Mark’s simplified version of Scott’s original responsive image technique and I also like Andy’s contextual responsive images technique. They all share a common starting point: setting a cookie with JavaScript before any images have started loading. Then the cookie can be read on the server side to send the appropriate image (and remember, because the default is to assume a smaller screen, if JavaScript isn’t available the browser is given the safer fallback of small images).
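
As a minimal sketch of that starting point (the cookie name here is my own invention, and the server-side logic will vary from project to project), the crucial piece is a tiny inline script in the head of the document, before any img elements appear:

    // Inline in the head of the document, before any img elements:
    // record the screen width in a cookie so the server can decide which
    // size of image to send back. "screenwidth" is an illustrative cookie
    // name, not anyone's canonical implementation.
    document.cookie = 'screenwidth=' + screen.width + '; path=/';

On the server, the logic follows the mobile-first default: no cookie (or a cookie reporting a narrow screen) means the small image gets sent; only a cookie reporting a wide screen triggers the larger image.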

Yoav Weiss has been doing some research into preloaders, cookies and race conditions in browsers and found out that in some situations, it’s possible that images will begin to download before the JavaScript in the head of the document has a chance to set the cookie. This means that in some cases, on first visiting a page, desktop browsers like IE9 might get the small images instead of the larger images, thereby violating the second rule (though, again, mobile browsers will always get the smaller images, never the larger images).

Yoav concludes:

Different browsers act differently with regard to which resources they download before/after the head scripts are done loading and running. Furthermore, that behavior is not defined in any spec, and may change with every new release. We cannot and should not count on it.

The solution seems clear: we need to standardise on browser download behaviour …which is exactly what the HTML standard is doing (along with standardising error handling).

That’s why I was surprised by Jason’s conclusion that device detection is the future-friendly img option.

Don’t get me wrong: using a service like Sencha.io SRC (formerly TinySRC)—which relies on user-agent sniffing and a device library lookup—is a perfectly reasonable solution for responsive images …for now. But I wouldn’t call it future friendly; quite the opposite. If anything, it might be the most present-friendly technique.

One issue with relying on user-agent sniffing is the danger of false positives: a tablet may get incorrectly identified as a mobile phone, a mobile browser may get incorrectly identified as a desktop browser and so on. But those are edge cases and they’re actually few and far between …for now.

The bigger issue with relying on user-agent sniffing is that you are then entering into an arms race. You can’t just plug in a device library and forget about it. The library must be constantly maintained and kept up to date. Given the almost-exponential expansion of the device and browser landscape, that’s going to get harder and harder.

Disruption will only accelerate. The quantity and diversity of connected devices—many of which we haven’t imagined yet—will explode, as will the quantity and diversity of the people around the world who use them. Our existing standards, workflows, and infrastructure won’t hold up. Today’s onslaught of devices is already pushing them to the breaking point. They can’t withstand what’s ahead.

So while I consider user-agent sniffing to be an acceptable short-term solution, I don’t think it can scale to the future onslaught—not to mention the tricky issue of the licensing landscape around device libraries.

There’s another reason why I tend to steer clear of device libraries like WURFL and Device Atlas. When you consider the way that I’m approaching responsive images, those libraries are over-engineered. They contain a massive list of mobile user-agent strings that I’ll never need. Remember, I’m taking a mobile-first approach and assuming a mobile browser by default. So if I’m going to overturn that assumption, all I need is a list of desktop user-agent strings. That’s a much less ambitious undertaking. Such a library wouldn’t need to be kept updated quite as often as a mobile device listing.

Anybody fancy putting it together?

Ending September

September was quite a month. There were plenty of events that I attended right here in Brighton.

In the middle of all that, I went to Tennessee for Breaking Development and Mobilewood.

I finished the month with a trip to Italy for the inaugural From The Front conference. It was a great little grassroots affair. It was basically a free event—there was an ostensible cover charge of ten euros just to ensure that people didn’t sign up without showing up. That’s why I waived my usual speaking fee (as an aside, if you’re a conference organiser and you’re thinking about asking me to speak for free at an event that charges hundreds of dollars/pounds/euros to attendees …don’t).

I have to admit that the location of the event did make a difference. I jumped at the chance to return to Bologna. Jessica and I even managed to squeeze in a trip down to Florence. Pictures were taken.

The evening before travelling to Italy, before I packed my bag, I had a chat with Jen for her podcast, The Web Ahead.

5by5 | The Web Ahead #3: Jeremy Keith on Everything Web on Huffduffer

We talked about a lot of stuff from the nitty-gritty of responsive web design workflows and processes to being future friendly in the face of the mobile browser landscape. We also discussed long-term digital preservation and the web’s role as a storage medium for our collective culture. It sounds like a random grab-bag of topics, but in my mind all of this is connected.

I somehow managed to avoid even once mentioning a space elevator.