Journal tags: future


Future Sync 2020

I was supposed to be in Plymouth yesterday, giving the opening talk at this year’s Future Sync conference. Obviously, that train journey never happened, but the conference did.

The organisers gave us speakers the option of pre-recording our talks, which I jumped on. It meant that I wouldn’t be reliant on a good internet connection at the crucial moment. It also meant that I was available to provide additional context—mostly in the form of a deluge of hyperlinks—in the chat window that accompanied the livestream.

The whole thing went very smoothly indeed. Here’s the video of my talk. It was The Layers Of The Web, which I’ve only given once before, at Beyond Tellerrand Berlin last November (in the Before Times).

As well as answering questions in the chat room, people were also asking questions in Sli.do. But rather than answering those questions there, I was supposed to respond in a social medium of my choosing. I chose my own website, with copies syndicated to Twitter.

Here are those questions and answers…

The first few questions were about last year’s CERN project, which opens the talk:

Based on what you now know from the CERN 2019 WorldWideWeb Rebuild project—what would you have done differently if you had been part of the original 1989 Team?

I responded:

Actually, I think the original WWW project got things mostly right. If anything, I’d correct what came later: cookies and JavaScript—those two technologies (which didn’t exist on the web originally) are the source of tracking & surveillance.

The one thing I wish had been done differently is I wish that JavaScript were a same-origin technology from day one:

https://adactio.com/journal/16099

Next question:

How excited were you when you initially got the call for such an amazing project?

My predictable response:

It was an unbelievable privilege! I was so excited the whole time—I still can hardly believe it really happened!

https://adactio.com/journal/14803

https://adactio.com/journal/14821

Later in the presentation, I talked about service workers and progressive web apps. I got a technical question about that:

Is there a limit to the amount of local storage a PWA can use?

I answered:

Great question! Yes, there are limits, but we’re generally talking megabytes here. It varies from browser to browser and depends on the available space on the device.

But files stored using the Cache API are less likely to be deleted than files stored in the browser cache.

More worrying is the announcement from Apple that files will only be stored for a week of browser use:

https://adactio.com/journal/16619
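As a practical aside not in the original answer: supporting browsers expose a rough view of the current origin’s quota through `navigator.storage.estimate()`. A minimal sketch; the `describeStorage` helper is a hypothetical name for illustration, and the figures it reports are estimates that vary from browser to browser:

```javascript
// Report roughly how much of the origin's storage quota is in use.
// The describeStorage helper is a hypothetical name for illustration.
function describeStorage({ usage, quota }) {
  const percent = quota ? Math.round((usage / quota) * 100) : 0;
  return `Using ${usage} of ${quota} bytes (${percent}%)`;
}

// In a browser that supports the Storage API:
if (typeof navigator !== 'undefined' && navigator.storage?.estimate) {
  navigator.storage.estimate().then((estimate) => {
    console.log(describeStorage(estimate));
  });
}
```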

Finally, there was a question about the over-arching theme of the talk…

Great talk, Jeremy. Do you encounter push-back when using the term “Progressive Enhancement”?

My response:

Yes! …And that’s why I never once used the phrase “progressive enhancement” in my talk. 🙂

There’s a lot of misunderstanding of the term. Rather than correct it, I now avoid it:

https://adactio.com/journal/9195

Instead of using the phrase “progressive enhancement”, I now talk about the benefits and effects of the technique: resilience, universality, etc.

Future Sync Distributed 2020

Living Through The Future

You can listen to the audio version of Living Through The Future.

Usually when we talk about “living in the future”, it’s something to do with technology: smartphones, satellites, jet packs… But I’ve never felt more like I’m living in the future than during The Situation.

On the one hand, there’s nothing particularly futuristic about living through a pandemic. They’ve occurred throughout history and this one could’ve happened at any time. We just happen to have drawn the short straw in 2020. Really, this should feel like living in the past: an outbreak of a disease that disrupts everyone’s daily life? Nothing new about that.

But there’s something dizzyingly disconcerting about the dominance of technology. This is the internet’s time to shine. Think you’re going crazy now? Imagine what it would’ve been like before we had our network-connected devices to keep us company. We can use our screens to get instant updates about technologies of world-shaping importance …like beds and face masks. At the same time as we’re starting to worry about getting hold of fresh vegetables, we can still make sure that whatever meals we end up making, we can share them instantaneously with the entire planet. I think that, despite William Gibson’s famous invocation, I always figured that the future would feel pretty futuristic all ‘round—not lumpy with old school matters rubbing shoulders with technology so advanced that it’s indistinguishable from magic.

When I talk about feeling like I’m living in the future, I guess what I mean is that I feel like I’m living at a time that will become History with a capital H. I start to wonder what we’ll settle on calling this time period. The Covid Point? The Corona Pause? 2020-P?

At some point we settled on “9/11” for the attacks of September 11th, 2001 (being a fan of ISO-8601, I would’ve preferred 2001-09-11, but I’ll concede that it’s a bit of a mouthful). That was another event that, even at the time, clearly felt like part of History with a capital H. People immediately gravitated to using historical comparisons. In the USA, the comparison was Pearl Harbour. Outside of the USA, the comparison was the Cuban missile crisis.

Another comparison between 2001-09-11 and what we’re currently experiencing now is how our points of reference come from fiction. Multiple eyewitnesses in New York described the September 11th attacks as being “like something out of a movie.” For years afterwards, the climactic showdowns in superhero movies that demolished skyscrapers no longer felt like pure escapism.

For The Situation, there’s no shortage of prior art to draw upon for comparison. If anything, our points of reference should be tales of isolation like Robinson Crusoe. The mundane everyday tedium of The Situation can’t really stand up to comparison with the epic scale of science-fictional scenarios, but that’s our natural inclination. You can go straight to plague novels like Stephen King’s The Stand or Emily St. John Mandel’s Station Eleven. Or you can get really grim and cite Cormac McCarthy’s The Road. But you can go the other direction too and compare The Situation with the cozy catastrophes of John Wyndham like Day Of The Triffids (or just be lazy and compare it to any of the multitude of zombie apocalypses—an entirely separate kind of viral dystopia).

In years to come there will be novels set during The Situation. Technically they will be literary fiction—or even historical fiction—but they’ll feel like science fiction.

I remember the Chernobyl disaster having the same feeling. It was really happening, it was on the news, but it felt like scene-setting for a near-future dystopian apocalypse. Years later, I was struck when reading Wolves Eat Dogs by Martin Cruz Smith. In 2006, I wrote:

Halfway through reading the book, I figured out what it was: Wolves Eat Dogs is a Cyberpunk novel. It happens to be set in present-day reality but the plot reads like a science-fiction story. For the most part, the book is set in the post-apocalyptic landscape of Prypiat, near Chernobyl. This post-apocalyptic scenario just happens to be real.

The protagonist, Arkady Renko, is sent to this frightening hellish place following a somewhat far-fetched murder in Moscow. Killing someone with a minute dose of a highly radioactive material just didn’t seem like a very realistic assassination to me.

Then I saw the news about Alexander Litvinenko, the former Russian spy who died this week, quite probably murdered with a dose of polonium-210.

I’ve got the same tingling feeling about The Situation. Fact and fiction are blurring together. Past, present, and future aren’t so easy to differentiate.

I really felt it last week standing in the back garden, looking up at the International Space Station passing overhead on a beautifully clear and crisp evening. I try to go out and see the ISS whenever its flight path intersects with southern England. Usually I’d look up and try to imagine what life must be like for the astronauts and cosmonauts on board, confined to that habitat with nowhere to go. Now I look up and feel a certain kinship. We’re all experiencing a little dose of what that kind of isolation must feel like. Though, as the always-excellent Marina Koren points out:

The more experts I spoke with for this story, the clearer it became that, actually, we have it worse than the astronauts. Spending months cooped up on the ISS is a childhood dream come true. Self-isolating for an indefinite period of time because of a fast-spreading disease is a nightmare.

Whenever I look up at the ISS passing overhead I feel a great sense of perspective. “Look what we can do!”, I think to myself. “There are people living in space!”

Last week that feeling was still there but it was tempered with humility. Yes, we can put people in space, but here we are with our entire way of life put on pause by something so small and simple that it’s technically not even a form of life. It’s like we’re the Martians in H.G. Wells’s War Of The Worlds; all-conquering and formidable, but brought low by a dose of dramatic irony, a Virus Ex Machina.

Mirrorworld

Over on the Failed Architecture site, there’s a piece about Kevin Lynch’s 1960 book The Image Of The City. It’s kind of fun to look back at a work like that, from today’s vantage point of ubiquitous GPS and smartphones with maps that bestow God-like wayfinding. How much did Lynch—or any other futurist from the past—get right about our present?

Quite a bit, as it turns out.

Lynch invented the term ‘imageability’ to describe the degree to which the urban environment can be perceived as a clear and coherent mental image. Reshaping the city is one way to increase imageability. But what if the cognitive map were complemented by some external device? Lynch proposed that this too could strengthen the mental image and effectively support navigation.

Past visions of the future can be a lot of fun. Matt Novak’s Paleofuture blog is testament to that. Present visions of the future are rarely as enjoyable. But every so often, one comes along…

Kevin Kelly has a new piece in Wired magazine about Augmented Reality. He suggests we don’t call it AR. Sounds good to me. Instead, he proposes we use David Gelernter’s term “the mirrorworld”.

I like it! I feel like the term won’t age well, but that’s not the point. The term “cyberspace” hasn’t aged well either—it sounds positively retro now—but Gibson’s term served its purpose in prompting discussion and spurring excitement. I feel like Kelly’s “mirrorworld” could do the same.

Incidentally, the mirrorworld has already made an appearance in the William Gibson book Spook Country in the form of locative art:

Locative art, a melding of global positioning technology to virtual reality, is the new wrinkle in Gibson’s matrix. One locative artist, for example, plants a virtual image of F. Scott Fitzgerald dying at the very spot where, in fact, he had his Hollywood heart attack, and does the same for River Phoenix and his fatal overdose.

Yup, that sounds like the mirrorworld:

Time is a dimension in the mirrorworld that can be adjusted. Unlike the real world, but very much like the world of software apps, you will be able to scroll back.

Now look, normally I’m wary to the point of cynicism when it comes to breathless evocations of fantastical futures extrapolated from a barely functioning technology of today, but damn, if Kevin Kelly’s enthusiasm isn’t infectious! He invokes Borges. He acknowledges the challenges. But mostly he pumps up the excitement by baldly stating possible outcomes as though they are inevitabilities:

We will hyperlink objects into a network of the physical, just as the web hyperlinked words, producing marvelous benefits and new products.

When he really gets going, we enter into some next-level science-fictional domains:

The mirrorworld will be a world governed by light rays zipping around, coming into cameras, leaving displays, entering eyes, a never-ending stream of photons painting forms that we walk through and visible ghosts that we touch. The laws of light will govern what is possible.

And then we get sentences like this:

History will be a verb.

I kind of love it. I mean, I’m sure we’ll look back on it one day and laugh, shaking our heads at its naivety, but for right now, it’s kind of refreshing to read something so unabashedly hopeful and so wildly optimistic.

2001 + 50

The first ten minutes of my talk at An Event Apart Seattle consisted of me geeking out about science fiction. There was a point to it …I think. But I must admit it felt quite self-indulgent to ramble to a captive audience about some of my favourite works of speculative fiction.

The meta-narrative I was driving at was around the perils of prediction (and how that’s not really what science fiction is about). This is something that Arthur C. Clarke pointed out repeatedly, most famously in Hazards of Prophecy. Ironically, I used Clarke’s masterwork of a collaboration with Stanley Kubrick as a rare example of a predictive piece of sci-fi with a good hit rate.

When I introduced 2001: A Space Odyssey in my talk, I mentioned that it was fifty years old (making it even more of a staggering achievement, considering that humans hadn’t even reached the moon at that point). What I didn’t realise at the time was that it was fifty years old to the day. The film was released in American cinemas on April 2nd, 1968; I was giving my talk on April 2nd, 2018.

Over on Wired.com, Stephen Wolfram has written about his own personal relationship with the film. It’s a wide-ranging piece, covering everything from the typography of 2001 (see also: Typeset In The Future) right through to the nature of intelligence and our place in the universe.

When it comes to the technology depicted on-screen, he makes the same point that I was driving at in my talk—that, despite some successful extrapolations, certain real-world advances were not only unpredicted, but perhaps unpredictable. The mobile phone; the collapse of the Soviet Union …these are real-world events that are conspicuous by their absence in other great works of sci-fi like William Gibson’s brilliant Neuromancer.

But in his Wired piece, Wolfram also points out some acts of prediction that were so accurate that we don’t even notice them.

Also interesting in 2001 is that the Picturephone is a push-button phone, with exactly the same numeric button layout as today (though without the * and # [“octothorp”]). Push-button phones actually already existed in 1968, although they were not yet widely deployed.

To use the Picturephone in 2001, one inserts a credit card. Credit cards had existed for a while even in 1968, though they were not terribly widely used. The idea of automatically reading credit cards (say, using a magnetic stripe) had actually been developed in 1960, but it didn’t become common until the 1980s.

I’ve watched 2001 many, many, many times and I’m always looking out for details of the world-building …but it never occurred to me that push-button numeric keypads or credit cards were examples of predictive extrapolation. As time goes on, more and more of these little touches will become unnoticeable and unremarkable.

On the space shuttle (or, perhaps better, space plane) the cabin looks very much like a modern airplane—which probably isn’t surprising, because things like Boeing 737s already existed in 1968. But in a correct (at least for now) modern touch, the seat backs have TVs—controlled, of course, by a row of buttons.

Now I want to watch 2001: A Space Odyssey again. If I’m really lucky, I might get to see a 70mm print in a cinema near me this year.

A minority report on artificial intelligence

Want to feel old? Steven Spielberg’s Minority Report was released fifteen years ago.

It casts a long shadow. For a decade after the film’s release, it was referenced at least once at every conference relating to human-computer interaction. Unsurprisingly, most of the focus has been on the technology in the film. The hardware and interfaces in Minority Report came out of a think tank assembled in pre-production. It provided plenty of fodder for technologists to mock and praise in subsequent years: gestural interfaces, autonomous cars, miniature drones, airpods, ubiquitous advertising and surveillance.

At the time of the film’s release, a lot of the discussion centred on picking apart the plot. The discussions had the same tone as those around time-travel paradoxes, the kind thrown up by films like Looper and Interstellar. But Minority Report isn’t a film about time travel, it’s a film about prediction.

Or rather, the plot is about prediction. The film—like so many great works of cinema—is about seeing. It’s packed with images of eyes, visions, fragments, and reflections.

The theme of prediction was rarely referenced by technologists in the subsequent years. After all, that aspect of the story—as opposed to the gadgets, gizmos, and interfaces—was one rooted in a fantastical conceit; the idea of people with precognitive abilities.

But if you replace that human element with machines, the central conceit starts to look all too plausible. It’s suggested right there in the film:

It helps not to think of them as human.

To which the response is:

No, they’re so much more than that.

Suppose that Agatha, Arthur, and Dashiell weren’t people in a flotation tank, but banks of servers packed with neural nets: the kinds of machines that are already making predictions on trading stocks and shares, traffic flows, mortgage applications …and, yes, crime.

Precogs are pattern recognition filters, that’s all.

Rewatching Minority Report now, it holds up very well indeed. Apart from the misstep of the final ten minutes, it’s a fast-paced twisty noir thriller. For all the attention to detail in its world-building and technology, the idea that may yet prove to be most prescient is the concept of Precrime, introduced in the original Philip K. Dick short story, The Minority Report.

Minority Report works today as a commentary on Artificial Intelligence …which is ironic given that Spielberg directed a film one year earlier ostensibly about A.I. In truth, that film has little to say about technology …but much to say about humanity.

Like Minority Report, A.I. was very loosely based on an existing short story: Super-Toys Last All Summer Long by Brian Aldiss. It’s a perfectly-crafted short story that is deeply, almost unbearably, sad.

When I had the great privilege of interviewing Brian Aldiss, I tried to convey how much the story affected me.

Jeremy: …the short story is so sad, there’s such an incredible sadness to it that…

Brian: Well it’s psychological, that’s why. But I didn’t think it works as a movie; sadly, I have to say.

At the time of its release, the general consensus was that A.I. was a mess. It’s true. The film is a mess, but I think that, like Minority Report, it’s worth revisiting.

Watching now, A.I. feels like a horror film to me. The horror comes not—as we first suspect—from the artificial intelligence. The horror comes from the humans. I don’t mean the cruelty of the flesh fairs. I’m talking about the cruelty of Monica, who activates David’s unconditional love only to reject it (watching now, both scenes—the activation and the rejection—are equally horrific). Then there’s the cruelty of the people who created an artificial person capable of deep, never-ending love, without considering the implications.

There is no robot uprising in the film. The machines want only to fulfil their purpose. But by the end of the film, the human race is gone and the descendants of the machines remain. Based on the conduct of humanity that we’re shown, it’s hard to mourn our species’ extinction. For a film that was panned for being overly sentimental, it is a thoroughly bleak assessment of what makes us human.

The question of what makes us human underpins A.I., Minority Report, and the short stories that spawned them. With distance, it gets easier to brush aside the technological trappings and see the bigger questions beneath. As Al Robertson writes, it’s about leaving the future behind:

SF’s most enduring works don’t live on because they accurately predict tomorrow. In fact, technologically speaking they’re very often wrong about it. They stay readable because they think about what change does to people and how we cope with it.

Long betting

It has been exactly six years to the day since I instantiated this prediction:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

It is exactly five years to the day until the prediction condition resolves to a Boolean true or false.

If it resolves to true, The Bletchley Park Trust will receive $1000.

If it resolves to false, The Internet Archive will receive $1000.

Much as I would like Bletchley Park to get the cash, I’m hoping to lose this bet. I don’t want my pessimism about URL longevity to be rewarded.

So, to recap, the bet was placed on

02011-02-22

It is currently

02017-02-22

And the bet times out on

02022-02-22.

The Rational Optimist

As part of my ongoing obsession with figuring out how we evaluate technology, I finally got around to reading Matt Ridley’s The Rational Optimist. It was an exasperating read.

On the one hand, it’s a history of the progress of human civilisation. Like Steven Pinker’s The Better Angels Of Our Nature, it piles on the data demonstrating the upward trend in peace, wealth, and health. I know that’s counterintuitive, and it seems to fly in the face of what we read in the news every day. Mind you, The New York Times took some time out recently to acknowledge the trend.

Ridley’s thesis—and it’s a compelling one—is that cooperation and trade are the drivers of progress. As I read through his historical accounts of the benefits of open borders and the cautionary tales of small-minded insular empires that collapsed, I remember thinking, “Boy, he must be pretty upset about Brexit—his own country choosing to turn its back on trade agreements with its neighbours so that it could become a small, petty island chasing the phantom of self-sufficiency”. (Self-sufficiency, or subsistence living, as Ridley rightly argues throughout the book, correlates directly with poverty.)

But throughout these accounts, there are constant needling asides pointing to the perceived enemies of trade and progress: bureaucrats and governments, with their pesky taxes and rule of law. As the accounts enter the twentieth century, the gloves come off completely revealing a pair of dyed-in-the-wool libertarian fists that Ridley uses to pummel any nuance or balance. “Ah,” I thought, “if he cares more about the perceived evils of regulation than the proven benefits of trade, maybe he might actually think Brexit is a good idea after all.”

It was an interesting moment. Given the conflicting arguments in his book, I could imagine him equally well being an impassioned remainer as a vocal leaver. I decided to collapse this probability wave with a quick Google search, and sure enough …he’s strongly in favour of Brexit.

In theory, an author’s political views shouldn’t make any difference to a book about technology and progress. In practice, they barge into the narrative like boorish gatecrashers threatening to derail it entirely. The irony is that while Ridley is trying to make the case for rational optimism, his own personal political feelings are interspersed like a dusting of irrationality, undoing his own well-researched case.

It’s not just the argument that suffers. Those are the moments when the writing starts to get frothy, if not downright unhinged. There were a number of confusing and ugly sentences that pulled me out of the narrative and made me wonder where the editor was that day.

The last time I remember reading passages of such poor writing in a non-fiction book was Nassim Nicholas Taleb’s The Black Swan. In the foreword, Taleb provides a textbook example of the Dunning-Kruger effect by proudly boasting that he does not need an editor.

But there was another reason why I thought of The Black Swan while reading The Rational Optimist.

While Ridley’s anti-government feelings might have damaged his claim to rationality, surely his optimism is unassailable? Take, for example, his conclusions on climate change. He doesn’t (quite) deny that climate change is real, but argues persuasively that it won’t be so bad. After all, just look at the history of false pessimism that litters the twentieth century: acid rain, overpopulation, the Y2K bug. Those turned out okay, therefore climate change will be the same.

It’s here that Ridley succumbs to the trap that Taleb wrote about in his book: using past events to make predictions about inherently unpredictable future events. Taleb was talking about economics—warning of the pitfalls of treating economic data as though it followed a bell curve, when in fact it follows a power-law distribution.

Fine. That’s simply a logical fallacy, easily overlooked. But where Ridley really lets himself down is in the subsequent defence of fossil fuels. Or rather, in his attack on other sources of energy.

When recounting the mistakes of the naysayers of old, he points out that their fundamental mistake is to assume stasis. Hence their dire predictions of war, poverty, and famine. Ehrlich’s overpopulation scare, for example, didn’t account for the world-changing work of Borlaug’s green revolution (and Ridley rightly singles out Norman Borlaug for praise—possibly the single most important human being in history).

Yet when it comes to alternative sources of energy, they are treated as though they are set in stone, incapable of change. Wind and solar power are dismissed as too costly and inefficient. The Rational Optimist was written in 2008. Eight years ago, solar energy must have indeed looked like a costly investment. But things have changed in the meantime.

As Matt Ridley himself writes:

It is a common trick to forecast the future on the assumption of no technological change, and find it dire. This is not wrong. The future would indeed be dire if invention and discovery ceased.

And yet he fails to apply this thinking when comparing energy sources. If anything, his defence of fossil fuels feels grounded in a sense of resigned acceptance; a sense of …pessimism.

Matt Ridley rejects any hope of innovation from new ideas in the arena of energy production. I hope that he might take his own words to heart:

By far the most dangerous, and indeed unsustainable thing the human race could do to itself would be to turn off the innovation tap. Not inventing, and not adopting new ideas, can itself be both dangerous and immoral.

A wager on the web

Jason has written a great post about progressive web apps. It’s also a post about whether fears of the death of the web are justified.

Lately, I vacillate on whether the web is endangered or poised for a massive growth due to the web’s new capabilities. Frankly, I think there are indicators both ways.

So he applies Pascal’s wager. The hypothesis is that the web is under threat and progressive web apps are a way of fighting that threat.

  • If the hypothesis is incorrect and we don’t build progressive web apps, things continue as they are on the web (which is not great for users—they have to continue to put up with fragile, frustratingly slow sites).
  • If the hypothesis is incorrect and we do build progressive web apps, users get better websites.
  • If the hypothesis is correct and we do build progressive web apps, users get better websites and we save the web.
  • If the hypothesis is correct and we don’t build progressive web apps, the web ends up pining for the fjords.

Whether you see the web as threatened or see Chicken Little in people’s fears and whether you like progressive web apps or feel it is a stupid Google marketing thing, we can all agree that putting energy into improving the experience for the people using our sites is always a good thing.

Jason is absolutely correct. There are literally no downsides to us creating progressive web apps. Everybody wins.

But that isn’t the question that people have been tackling lately. None of these (excellent) blog posts disagree with the conclusion that building progressive web apps as originally defined would be a great move forward for the web:

The real question that comes out of those posts is whether it’s good or bad for the future of progressive web apps—and by extension, the web—to build stop-gap solutions that use some progressive web app technologies (Service Workers, for example) while failing to be progressive in other ways (only working on mobile devices, for example).
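For what it’s worth, the Service Worker part of that stack is itself a progressive enhancement: registration can be feature-detected, so browsers (and environments) without support simply carry on. A minimal sketch; the ‘/sw.js’ path and the helper name are placeholders for illustration:

```javascript
// Register a service worker only where the API exists.
// Browsers without support fall through and the site keeps working.
// '/sw.js' is a placeholder path; registerServiceWorker is a
// hypothetical helper name for illustration.
function registerServiceWorker(swUrl) {
  if (typeof navigator === 'undefined' || !('serviceWorker' in navigator)) {
    return null; // no support: nothing lost, nothing broken
  }
  return navigator.serviceWorker.register(swUrl).catch((error) => {
    // Registration failure is non-fatal; the site works without it.
    console.error('Service worker registration failed:', error);
    return null;
  });
}

registerServiceWorker('/sw.js');
```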

In this case, there are two competing hypotheses:

  1. In the short term, it’s okay to build so-called progressive web apps that have a fragile technology stack or only work on specific devices, because over time they’ll get improved and we’ll end up with proper progressive web apps in the long term.
  2. In the short term, we should build proper progressive web apps, and it’s a really bad idea to build so-called progressive web apps that have a fragile technology stack or only work on specific devices, because that encourages more people to build sub-par websites and progressive web apps become synonymous with door-slamming single-page apps in the long term.

The second hypothesis sounds pessimistic, and the first sounds optimistic. But the people arguing for the first hypothesis aren’t coming from a position of optimism. Take Christian’s post, for example, which I fundamentally disagree with:

End users deserve to have an amazing, form-factor specific experience. Let’s build those.

I think end users deserve to have an amazing experience regardless of the form-factor of their devices. Christian’s viewpoint—like Alex’s tweetstorm—is rooted in the hypothesis that the web is under threat and in danger. The conclusion that comes out of that—building mobile-only JavaScript-reliant progressive web apps is okay—is a conclusion reached through fear.

Never make any decision based on fear.

dConstruct 2015 podcast: Nick Foster

dConstruct 2015 is just ten days away. Time to draw the pre-conference podcast to a close and prepare for the main event. And yes, all the talks will be recorded and released in podcast form—just as with the previous ten dConstructs.

The honour of the final teaser falls to Nick Foster. We had a lovely chat about product design, design fiction, Google, Nokia, Silicon Valley and Derbyshire.

I hope you’ve enjoyed listening to these eight episodes. I certainly had a blast recording them. They’ve really whetted my appetite for dConstruct 2015—I think it’s going to be a magnificent day.

With the days until the main event about to tick over into single digits, this is your last chance to grab a ticket if you haven’t already got one. And remember, as a loyal podcast listener, you can use the discount code ‘ansible’ to get 10% off.

See you in the future …next Friday!

dConstruct 2015 podcast: Brian David Johnson

The newest dConstruct podcast episode features the indefatigable and effervescent Brian David Johnson. Together we pick apart the futures we are collectively making, probe the algorithmic structures of science fiction narratives, and pay homage to Asimovian robotic legal codes.

Brian’s enthusiasm is infectious. I have a strong hunch that his dConstruct talk will be both thought-provoking and inspiring.

dConstruct 2015 is getting close now. Our future approaches. Interviewing the speakers ahead of time has only increased my excitement and anticipation. I think this is going to be a truly unmissable event. So, uh, don’t miss it.

Grab your ticket today and use the code ‘ansible’ to take advantage of the 10% discount for podcast listeners.

dConstruct 2015 podcast: John Willshire

The latest dConstruct 2015 podcast episode is ready for your aural pleasure. This one’s a bit different. John Willshire came down to Brighton so that we could have our podcast chat face-to-face instead of over Skype.

It was fascinating to see the preparation that John is putting into his talk. He had labelled cards strewn across the table, each one containing a strand that he wants to try to weave into his talk. They also made for great conversation starters. That’s how we ended up talking about Interstellar and Man Of Steel, and the differing parenting styles contained therein. I don’t think I’ll ever be able to rid myself of the mental image of a giant holographic head of Michael Caine dispensing words of wisdom in the Fortress Of Solitude. “Rage, rage against the dying of the light, Kal-el!”

The sound quality of this episode is more “atmospheric”, given the recording conditions (you can hear Clearlefties and seagulls in the background) but a splendid time was had by both John and myself. I hope that you enjoy listening to it.

I have a feeling that after listening to this, you’re definitely going to want to see John’s dConstruct talk, so grab yourself a ticket, using the discount code ‘ansible’ to get 10% off.

dConstruct 2015 podcast: Josh Clark

On Monday, I launched a new little experiment—a podcast series of interviews with the lovely people who will be speaking at this year’s dConstruct. I’m very much looking forward to the event (it presses all my future-geekery buttons) and talking to the speakers ahead of time is just getting me even more excited.

I’m releasing the second episode of the podcast today. It’s a chat with the thoroughly charming Josh Clark. We discuss technology, magic, Harry Potter, and the internet of things.

If you want to have this and future episodes delivered straight to your earholes, subscribe to the podcast feed.

And don’t forget: as a loyal podcast listener, you get 10% off the ticket price of dConstruct. Use the discount code “ansible”. You’re welcome.

Podcasting the future

I’m very proud of the three dConstructs I put together: 2012, 2013, and 2014, but I don’t have the fortitude to do it indefinitely so I’m stepping back from the organisational duties this year. So dConstruct 2015 is in Andy’s hands.

Of course he’s only gone and organised exactly the kind of conference that I’d feed my own grandmother to the ravenous bugblatter beast of Traal to attend. I mean, the theme is Designing The Future, for crying out loud!

To say I’m looking forward to hearing what all those great speakers have to say is something of an understatement. In fact, I couldn’t wait until September. I’ve started pestering them already.

On the off-chance that other people might be interested in hearing me prod, cajole, and generally geek out about technology, sci-fi, and futurism, I’m taking the liberty of recording our conversations.

That’s right: there’s a podcast.

The episodes will be about half an hour or so in length, sometimes longer, sometimes shorter. There’s no set format or agenda. It’s all very free-form, which is a polite way of saying that I’m completely winging it.

The first episode features the magnificent Matt Novak, curator of the Paleofuture blog. We talk about past visions of the future, the boom and bust cycles of utopias and dystopias, the Jetsons, 2001: A Space Odyssey, and the Apollo programme.

If you like what you hear, you can subscribe to the podcast feed.

Needless to say, you should come to this year’s dConstruct on September 11th here in Brighton. As compensation for listening to my experiments in podcasting, I’m going to sweeten the deal. Use the discount code “ansible” to get 10% off the ticket price. Aw, yeah!

100 words 073

The future Earth we see in Interstellar is a post-apocalyptic society. The population of the planet has been reduced to just a fraction of its current level. There have been wars and food shortages. And now the planet is dying and the human race is on its way out.

But instead of showing a dog-eat-dog battle for survival in the wasteland, we see people just getting on. It goes against the conventional wisdom that presupposes that if our Hobbesian Leviathan of civilisation were to be destroyed, our lives would inevitably revert to being nasty, brutish and short.

Hope

Cennydd points to an article by Ev Williams about the pendulum swing between open and closed technology stacks, and how that pendulum doesn’t always swing back towards openness. Cennydd writes:

We often hear the idea that “open platforms always win in the end”. I’d like that: the implicit values of the web speak to my own. But I don’t see clear evidence of this inevitable supremacy, only beliefs and proclamations.

It’s true. I catch myself saying things like “I believe the open web will win out.” Statements like that worry my inner empiricist. Faith-based outlooks scare me, and rightly so. I like being able to back up my claims with data.

Only time will tell what data emerges about the eventual fate of the web, open or closed. But we can look to previous technologies and draw comparisons. That’s exactly what Tim Wu did in his book The Master Switch and Jonathan Zittrain did in The Future Of The Internet—And How To Stop It. Both make for uncomfortable reading because they challenge my belief. Wu points to radio and television as examples of systems that began as egalitarian decentralised tools that became locked down over time in ever-constricting cycles. Cennydd adds:

I’d argue this becomes something of a one-way valve: once systems become closed, profit potential tends to grow, and profit is a heavy entropy to reverse.

Of course there is always the possibility that this time is different. It may well be that fundamental architectural decisions in the design of the internet and the workings of the web mean that this particular technology has an inherent bias towards openness. There is some data to support this (and it’s an appealing thought), but again; only time will tell. For now it’s just one more supposition.

The real question—when confronted with uncomfortable ideas that challenge what you’d like to believe is true—is what do you do about it? Do you look for evidence to support your beliefs or do you discard your beliefs entirely? That second option looks like the most logical course of action, and it’s certainly one that I would endorse if there were proven facts to be acknowledged (like gravity, evolution, or vaccination). But I worry about mistaking an argument that is still being discussed for an argument that has already been decided.

When I wrote about the dangers of apparently self-evident truisms, I said:

These statements aren’t true. But they are repeated so often, as if they were truisms, that we run the risk of believing them and thus, fulfilling their promise.

That’s my fear. Only time will tell whether the closed or open forces will win the battle for the soul of the internet. But if we believe that centralised, proprietary, capitalistic forces are inherently unstoppable, then our belief will help make them so.

I hope that openness will prevail. Hope sounds like such a wishy-washy word, like “faith” or “belief”, but it carries with it a seed of resistance. Hope, faith, and belief all carry connotations of optimism, but where faith and belief sound passive, even downright complacent, hope carries the promise of action.

Margaret Atwood was asked about the futility of having hope in the face of climate change. She responded:

If we abandon hope, we’re cooked. If we rely on nothing but hope, we’re cooked. So I would say judicious hope is necessary.

Judicious hope. I like that. It feels like a good phrase to balance empiricism with optimism; data with faith.

The alternative is to give up. And if we give up too soon, we bring into being the very endgame we feared.

Cennydd finishes:

Ultimately, I vote for whichever technology most enriches humanity. If that’s the web, great. A closed OS? Sure, so long as it’s a fair value exchange, genuinely beneficial to company and user alike.

This is where we differ. Today’s fair value exchange is tomorrow’s monopoly, just as today’s revolutionary is tomorrow’s tyrant. I will fight against that future.

To side with whatever’s best for the end user sounds like an eminently sensible metric to judge a technology. But I’ve written before about where that mindset can lead us. I can easily imagine Asimov’s three laws of robotics rewritten to reflect the ethos of user-centred design, especially that first and most important principle:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

…rephrased as:

A product or interface may not injure a user or, through inaction, allow a user to come to harm.

Whether the technology driving the system behind that interface is open or closed doesn’t come into it. What matters is the interaction.

But in his later years Asimov revealed the zeroth law, overriding even the first:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

It may sound grandiose to apply this thinking to the trivial interfaces we’re building with today’s technologies, but I think it’s important to keep drilling down and asking uncomfortable questions (even if they challenge our beliefs).

That’s why I think openness matters. It isn’t enough to use whatever technology works right now to deliver the best user experience. If that short-term gain comes with a long-term price tag for our society, it’s not worth it.

I would much rather have an imperfect open system than a perfect proprietary one.

I have hope in an open web …judicious hope.

Forgetting again

In an article entitled The future of loneliness Olivia Laing writes about the promises and disappointments provided by the internet as a means of sharing and communicating. This isn’t particularly new ground and she readily acknowledges the work of Sherry Turkle in this area. The article is the vanguard of a forthcoming book called The Lonely City. I’m hopeful that the book won’t be just another baseless luddite reactionary moral panic as exemplified by the likes of Andrew Keen and Susan Greenfield.

But there’s one section of the article where Laing stops providing any data (or even anecdotal evidence) and presents a supposition as though it were unquestionably fact:

With this has come the slowly dawning realisation that our digital traces will long outlive us.

Citation needed.

I recently wrote a short list of three things that are not true, but are constantly presented as if they were beyond question:

  1. Personal publishing is dead.
  2. JavaScript is ubiquitous.
  3. Privacy is dead.

But I didn’t include the most pernicious and widespread lie of all:

The internet never forgets.

This truism is so pervasive that it can be presented as a fait accompli, without any data to back it up. If you were to seek out the data to back up the claim, you would find that the opposite is true—the internet is in a constant state of forgetting.

Laing writes:

Faced with the knowledge that nothing we say, no matter how trivial or silly, will ever be completely erased, we find it hard to take the risks that togetherness entails.

Really? Suppose I said my trivial and silly thing on Friendfeed. Everything that was ever posted to Friendfeed disappeared three days ago:

You will be able to view your posts, messages, and photos until April 9th. On April 9th, we’ll be shutting down FriendFeed and it will no longer be available.

What if I shared on Posterous? Or Vox (back when that domain name was a social network hosting 6 million URLs)? What about Pownce? Geocities?

These aren’t the exceptions—this is routine. And yet somehow, despite all the evidence to the contrary, we still keep a completely straight face and say “Be careful what you post online; it’ll be there forever!”

The problem here is a mismatch of expectations. We expect everything that we post online, no matter how trivial or silly, to remain forever. When instead it is callously destroyed, our expectation—which was fed by the “knowledge” that the internet never forgets—is turned upside down. That’s where the anger comes from; the mismatch between expected behaviour and the reality of this digital dark age.

Being frightened of an internet that never forgets is like being frightened of zombies or vampires. These things do indeed sound frightening, and there’s something within us that readily responds to them, but they bear no resemblance to reality.

If you want to imagine a truly frightening scenario, imagine an entire world in which people entrust their thoughts, their work, and pictures of their family to online services in the mistaken belief that the internet never forgets. Imagine the devastation when all of those trivial, silly, precious moments are wiped out. For some reason we have a hard time imagining that dystopia even though it has already played out time and time again.

I am far more frightened by an internet that never remembers than I am by an internet that never forgets.

And worst of all, by propagating the myth that the internet never forgets, we are encouraging people to focus in exactly the wrong area. Nobody worries about preserving what they put online. Why should they? They’re constantly being told that it will be there forever. The result is that their history is taken from them:

If we lose the past, we will live in an Orwellian world of the perpetual present, where anybody that controls what’s currently being put out there will be able to say what is true and what is not. This is a dreadful world. We don’t want to live in this world.

Brewster Kahle

100 words 005

I enjoy a good time travel yarn. Two of the most enjoyable temporal tales of recent years have been Rian Johnson’s film Looper and William Gibson’s book The Peripheral.

Mind you, the internal time travel rules of Looper are all over the place, whereas The Peripheral is wonderfully consistent.

Both share an interesting commonality in their settings. They are set in the future and …the future: two different time periods but neither of them are the present. Both works also share the premise that the more technologically advanced future would inevitably exploit the time period further down the light cone.

Ordinary plenty

Aaron asked a while back “What do we own?”

I love the idea of owning your content and then syndicating it out to social networks, photo sites, and the like. It makes complete sense… Web-based services have a habit of disappearing, so we shouldn’t rely on them. The only Web that is permanent is the one we control.

But he quite rightly points out that we never truly own our own domains: we rent them. And when it comes to our servers, most of us are renting those too.

It looks like print is a safer bet for long-term storage. Although when someone pointed out that print isn’t any guarantee of perpetuity either, Aaron responded:

Sure, print pieces can be destroyed, but important works can be preserved in places like the Beinecke

Ah, but there’s the crux—that adjective, “important”. Print’s asset—the fact that it is made of atoms, not bits—is also its weak point: there are only so many atoms to go around. And so we pick and choose what we save. Inevitably, we choose to save the works that we deem to be important.

The problem is that we can’t know today what the future value of a work will be. A future president of the United States is probably updating their Facebook page right now. The first person to set foot on Mars might be posting a picture to her Instagram feed at this very moment.

One of the reasons that I love the Internet Archive is that they don’t try to prioritise what to save—they save it all. That’s in stark contrast to many national archival schemes that only attempt to save websites from their own specific country. And because the Internet Archive isn’t a profit-driven enterprise, it doesn’t face the business realities that caused Google to back-pedal from its original mission. Or, as Andy Baio put it, never trust a corporation to do a library’s job.

But even the Internet Archive, wonderful as it is, suffers from the same issue that Aaron brought up with the domain name system—it’s centralised. As long as there is just one Internet Archive organisation, all of our preservation eggs are in one magnificent basket:

Should we be concerned that the technical expertise and infrastructure for doing this work is becoming consolidated in a single organization?

Which brings us back to Aaron’s original question. Perhaps it’s less about “What do we own?” and more about “What are we responsible for?” If we each take responsibility for our own words, our own photos, our own hopes, our own dreams, we might not be able to guarantee that they’ll survive forever, but we can still try everything in our power to keep them online. Maybe by acknowledging that responsibility to preserve our own works, instead of looking for some third party to do it for us, we’re taking the most important first step.

My words might not be as important as the great works of print that have survived thus far, but because they are digital, and because they are online, they can and should be preserved …along with all the millions of other words by millions of other historical nobodies like me out there on the web.

There was a beautiful moment in Cory Doctorow’s closing keynote at last year’s dConstruct. It was an aside to his main argument but it struck like a hammer. Listen in at the 20 minute mark:

They’re the raw stuff of communication. Same for tweets, and Facebook posts, and the whole bit. And this is where some cynic usually says, “Pah! This is about preserving all that rubbish on Facebook? All that garbage on Twitter? All those pictures of cats?” This is the emblem of people who want to dismiss all the stuff that happens on the internet.

And I’m supposed to turn around and say “No, no, there’s noble things on the internet too. There’s people talking about surviving abuse, and people reporting police violence, and so on.” And all that stuff is important but I’m going to speak for the banal and the trivial here for a moment.

Because when my wife comes down in the morning—and I get up first; I get up at 5am; I’m an early riser—when my wife comes down in the morning and I ask her how she slept, it’s not because I want to know how she slept. I sleep next to my wife. I know how my wife slept. The reason I ask how my wife slept is because it is a social signal that says:

I see you. I care about you. I love you. I’m here.

And when someone says something big and meaningful like “I’ve got cancer” or “I won” or “I lost my job”, the reason those momentous moments have meaning is because they’ve been built up out of this humus of a million seemingly-insignificant transactions. And if someone else’s insignificant transactions seem banal to you, it’s because you’re not the audience for that transaction.

The medieval scribes of Ireland, out on the furthermost edges of Europe, worked to preserve the “important” works. But occasionally they would also note down their own marginalia like:

Pleasant is the glint of the sun today upon these margins, because it flickers so.

Short observations of life in fewer than 140 characters. Like this lovely example written in ogham, a morse-like system of encoding the western alphabet in lines and scratches. It reads simply “latheirt”, which translates to something along the lines of “massive hangover.”

I’m glad that those “unimportant” words have also been preserved.

Centuries later, the Irish poet Patrick Kavanagh would write about the desire to “wallow in the habitual, the banal”:

Wherever life pours ordinary plenty.

Isn’t that a beautiful description of the web?

Interstelling

Jessica and I entered the basement of The Dukes at Komedia last weekend to listen to Sarah and her band Spacedog provide live musical accompaniment to short sci-fi films from the end of the nineteenth and start of the twentieth centuries.

It was part of the Cine City festival, which is still going on here in Brighton—Spacedog will also be accompanying a performance of John Wyndham’s The Midwich Cuckoos, and there’s going to be a screening of François Truffaut’s brilliant film version of Ray Bradbury’s Fahrenheit 451 in the atmospheric surroundings of Brighton’s former reference library. I might try to get along to that, although there’s a good chance that I might cry at my favourite scene. Gets me every time.

Those 100-year old sci-fi shorts featured familiar themes—time travel, monsters, expeditions to space. I was reminded of a recent gathering in San Francisco with some of my nerdiest of nerdy friends, where we discussed which decade might qualify as the golden age of science fiction cinema. The 1980s certainly punched above their weight—1982 and 1985 were particularly good years—but I also said that I think we’re having a bit of a sci-fi cinematic golden age right now. This year alone we’ve had Edge Of Tomorrow, Guardians Of The Galaxy, and Interstellar.

Ah, Interstellar!

If you haven’t seen it yet, now would be a good time to stop reading. Imagine that I’ve written the word “spoilers” in all-caps, followed by many many line breaks before continuing.

Ten days before we watched Spacedog accompanying silent black and white movies in a tiny basement theatre, Jessica and I watched Interstellar on the largest screen we could get to. We were in Seattle, which meant we had the pleasure of experiencing the film projected in 70mm IMAX at the Pacific Science Center, right by the space needle.

I really, really liked it. Or, at least, I’ve now decided that I really, really liked it. I wasn’t sure when I first left the cinema. There were many things that bothered me, and those things battled against the many, many things that I really enjoyed. But having thought about it more—and, boy, does this film encourage thought and discussion—I’ve been able to resolve quite a few of the issues I was having with the film.

I hate to admit that most of my initial questions were on the science side of things. I wish I could’ve switched off that part of my brain.

There’s an apocryphal story about an actor asking “Where’s the light coming from?”, and being told “Same place as the music.” I distinctly remember thinking that very same question during Interstellar. The first planetfall of the film lands the actors and the audience on a world in orbit around a black hole. So where’s the light coming from?

The answer turns out to be that the light is coming from the accretion disk of that black hole.

But wouldn’t the radiation from the black hole instantly fry any puny humans that approach it? Wouldn’t the planet be ripped apart by the gravitational tides?

Not if it’s a rapidly-spinning supermassive black hole with a “gentle” singularity.

These are nit-picky questions that I wish I wasn’t thinking of. But I like the fact that there are answers to those questions. It’s just that I need to seek out those answers outside the context of the movie—I should probably read Kip Thorne’s book. The movie gives hints at resolving those questions—there’s just one mention of the gentle singularity—but it’s got other priorities: narrative, plot, emotion.

Still, I wish that Interstellar had managed to answer my questions while the film was still happening. This is something that Inception managed brilliantly: for all its twistiness, you always know exactly what’s going on, which is no mean feat. I’m hoping and expecting that Interstellar will reward repeated viewings. I’m certainly really looking forward to seeing it again.

In the meantime, I’ll content myself with re-watching Inception, which makes a fascinating companion piece to Interstellar. Both films deal with time and gravity as malleable, almost malevolent forces. But whereas Cobb travels as far inward as it is possible for a human to go, Coop travels as far outward as it is possible for our species to go.

Interstellar is kind of a mess. There’s plenty of sub-par dialogue and strange narrative choices. But I can readily forgive all that because of the sheer ambition and imagination on display. I’m not just talking about the imagination and ambition of the film-makers—I’m talking about the ambition and imagination of the human race.

That’s at the heart of the film, and it’s a message I can readily get behind.

Before we even get into space, we’re shown a future that, by any reasonable definition, would be considered a dystopia. The human race has been reduced to a small fraction of its former population, technological knowledge has been lost, and the planet is dying. And yet, where this would normally be the perfect storm required to show roving bands of road warriors pillaging their way across the dusty landscape, here we get an agrarian society with no hint of violence. The nightmare scenario is not that the human race is wiped out through savagery, but that the human race dies out through a lack of ambition and imagination.

Religion isn’t mentioned once in this future, but Interstellar does feature a deus ex machina in the shape of a wormhole that saves the day for the human race. I really like the fact that this deus ex machina isn’t something that’s revealed at the end of the movie—it’s revealed very early on. The whole plot turns out to be a glorious mash-up of two paradoxes: the bootstrap paradox and the twin paradox.

The end result feels like a mixture of two different works by Arthur C. Clarke: The Songs Of Distant Earth and 2001: A Space Odyssey.

2001 is the more obvious work to compare it to, and the film readily invites that comparison. Many reviewers have been quick to point out that Interstellar doesn’t reach the same heights as Kubrick’s 2001. That’s a fair point. But then again, I’m not sure that any film can ever reach the bar set by 2001. I honestly think it’s as close to perfect as any film has ever come.

But I think it’s worth pointing out that when 2001 was released, it was not greeted with universal critical acclaim. Quite the opposite. Many reviewers found it tedious, cold, and baffling. It divided opinion greatly …much like Interstellar is doing now.

In some ways, Interstellar offers a direct challenge to 2001—what if mankind’s uplifting is not caused by benevolent alien beings, but by the distant descendants of the human race?

This is revealed as a plot twist, but it was pretty clearly signposted from early in the film. So, not much of a plot twist then, right?

Well, maybe not. What if Coop’s hypothesis—that the wormhole is the creation of future humans—isn’t entirely correct? He isn’t the only one who crosses the event horizon. He is accompanied by the robot TARS. In the end, the human race is saved by the combination of Coop the human’s connection to his daughter, and the analysis carried out by TARS. Perhaps what we’re witnessing there is a glimpse of the true future for our species: human-machine collaboration. After all, if humanity is going to transcend into a fifth-dimensional species at some future point, it’s unlikely to happen through biology alone. But if you combine the best of the biological—a parent’s love for their child—with the best of technology, then perhaps our post-human future becomes not only plausible, but inevitable.

Deus ex machina.

Thinking about the future of the species in this co-operative way helps alleviate the uncomfortable feeling I had that Interstellar was promoting a kind of Manifest Destiny for the human race …although I’m not sure that I’m any more comfortable with that being replaced by a benevolent technological determinism.

Polyfills and products

I was chatting about polyfills recently with Bruce and Remy—who coined the term:

A polyfill, or polyfiller, is a piece of code (or plugin) that provides the technology that you, the developer, expect the browser to provide natively. Flattening the API landscape if you will.

I mentioned that I think that one of the earliest examples of what we would today call a polyfill was the IE7 script by Dean Edwards.

Dean wrote this (amazing) piece of JavaScript back when Internet Explorer 6 was king of the hill and Microsoft had stopped development of their browser entirely. It was a pretty shitty time in browserland back then. While other browsers were steaming ahead with standards support, Dean’s script pulled IE6 up by its bootstraps and made it understand CSS2.1 features. Crucially, you didn’t have to write your CSS any differently for the IE7 script to work—the classic hallmark of a polyfill.
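To make that hallmark concrete, here’s a minimal sketch of the classic polyfill pattern (this is not Dean’s actual code, and `padStart` is just a stand-in feature for illustration): feature-detect first, then fill the gap only when the browser lacks it. Existing code that calls the feature doesn’t need to change at all.

```javascript
// A stand-in shim for String.prototype.padStart, pre-padding a string
// up to a target length with a (possibly repeated) pad string.
function padStartShim(targetLength, padString) {
  padString = padString === undefined ? ' ' : String(padString);
  var str = String(this);
  while (str.length < targetLength) {
    // Only take as much of the pad string as is still needed.
    str = padString.slice(0, targetLength - str.length) + str;
  }
  return str;
}

// The polyfill pattern: detect the missing feature, and only then
// install the shim. Calling code stays exactly the same either way.
if (!String.prototype.padStart) {
  String.prototype.padStart = padStartShim;
}
```

The exit strategy is built into the pattern: once every browser you support implements the feature natively, the `if` branch never runs, and the whole block can simply be deleted without touching any calling code.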

Scott has a great post over on the Filament Group blog asking To Picturefill, or not to Picturefill?. Therein, he raises the larger issue of when to use polyfills of any kind. After all, every polyfill you use is a little bit of a tax that the end user must pay with a download.

Polyfills typically come at a cost to users as well, since they require users to download and execute JavaScript in order to work. Sometimes, frequently even, that cost outweighs the benefits that the polyfill would bring. For that reason, the question of whether or not to use any polyfill should be taken seriously.

Scott takes a very thoughtful approach to using any polyfill, and I try to do the same. I feel that it’s important to have an exit strategy for every polyfill you decide to use. After all, the whole point of a polyfill is that it’s a stop-gap measure until a particular feature is more widely supported.

And that’s where I run into one of the issues of working at an agency. At Clearleft, our time working with a client usually lasts a few months. At the end of that time, we’ll have delivered whatever the client needs: sometimes that’s design work; sometimes it’s design and a front-end pattern library.

Every now and then we get to revisit a project—like with Code for America—but that’s the exception rather than the rule. We’ve had to get very, very good at handover precisely because we won’t be the ones maintaining the code that we deliver (though we always try to budget in time to revisit the developers who are working with the code to answer any questions they might have).

That makes it very tricky to include a polyfill in our deliverables. We’d need to figure out a way of also including a timeline for revisiting that polyfill and evaluating when it’s time to drop it. That’s not an impossible task, but it’s much, much easier if you’re a developer working on a product (as opposed to a developer working at an agency). If you’re going to be the same person working on the code in the future—as well as working on it right now—it gets a lot easier to plan for evaluating polyfill usage further down the line. Set a recurring item in your calendar and you should be all set.

It’s a similar situation with vendor prefixes. Vendor prefixes were never intended to be a long-lasting part of any style sheet. Like polyfills, they’re supposed to be used with an exit strategy in mind: when the time is right, remove the prefixed styles, leaving only the unprefixed standardised CSS. Again, that’s a lot easier to do if you’re working on a product and you know that you’ll be the one revisiting the CSS later on. That’s harder to do at an agency where you’re handing over CSS to someone else.

I’m quite reluctant to use any vendor prefixes at all—which is as it should be; vendor prefixes should not be used lightly. Sometimes they’re unavoidable, but that shouldn’t stop us thinking about how to remove them at a later date.

I’m mostly just thinking out loud here. I guess my point is that certain front-end development techniques and technologies feel like they’re better suited to product work rather than agency work. Although I’m sure there are plenty of counter-examples out there too of tools that really fit the agency model and are less useful for working on the same product over a long period.

But even though the agency world and the product world are very different in lots of ways, both of them require us to think about the future. How long will the code you’re writing today last? And do you have a plan for when it needs updating or replacing?