Journal tags: time


The Web History podcast

From day one, I’ve been a fan of Jay Hoffman’s project The History Of The Web—both the newsletter and the evolving timeline.

Recently Jay started publishing essays on web history over on CSS Tricks:

  1. Birth
  2. Browsers
  3. The Website
  4. Search

Round about that time, Chris floated the idea of having people record themselves reading blog posts. I immediately volunteered my services for the web history essays.

So now you can listen to me reading Jay’s words:

  1. Birth
  2. Browsers
  3. The Website
  4. Search

Each chapter is round about half an hour long so that’s a solid two hours or so of me yapping.

Should you wish to take the audio with you wherever you go, I’ve made a podcast feed for you. Pop that in your podcatching software of choice. Here it is on Apple Podcasts. Here it is on Spotify.

And if you just can’t get enough of my voice, there’s always the Clearleft podcast …although that’s mostly other people talking, thank goodness.

T E N Ǝ T

Jessica and I went to the cinema yesterday.

Normally this wouldn’t be a big deal, but in our current circumstances, it was something of a momentous decision that involved a lot of risk assessment and weighing of the odds. We’ve been out and about a few times, but always to outdoor locations: the beach, a park, or a pub’s beer garden. For the first time, we were evaluating whether or not to enter an indoor environment, which, given what we now know about the transmission of COVID-19, is certainly riskier than being outdoors.

But this was a cinema, so in theory, nobody should be talking (or singing or shouting), and everyone would be wearing masks and keeping their distance. Time was also on our side. We were considering a Monday afternoon showing—definitely not primetime. Looking at the website for the (wonderful) Duke of York’s cinema, we could see which seats were already taken. Less than an hour before the start time for the film, there were just a handful of seats occupied. A cinema that can seat a triple-digit number of people was going to be seating a single-digit number of viewers.

We got tickets for the front row. Personally, I love sitting in the front row, especially in the Duke of York’s where there’s still plenty of room between the front row and the screen. But I know that it’s generally considered an undesirable spot by most people. Sure enough, the closest people to us were many rows back. Everyone was wearing masks and we kept them on for the duration of the film.

The film was Tenet. We weren’t about to enter an enclosed space for just any ol’ film. It would have to be pretty special—a new Star Wars film, or Denis Villeneuve’s Dune …or a new Christopher Nolan film. We knew it would look good on the big screen. We also knew it was likely to be spoiled for us if we didn’t see it soon enough.

At this point I am sounding the spoiler horn. If you have not seen Tenet yet, abandon ship at this point.

I really enjoyed this film. I understand the criticism that has been levelled at it—too cold, too clinical, too confusing—but I still enjoyed it immensely. I do think you need to be able to enjoy feeling confused if this is going to be a pleasurable experience. The payoff is that there’s an equally enjoyable feeling when things start slotting into place.

The closest film in Christopher Nolan’s back catalogue to Tenet is Inception in terms of twistiness and what it asks of the audience. But in some ways, Tenet is like an inverted version of Inception. In Inception, the ideas and the plot are genuinely complex, but Nolan does a great job in making them understandable—quite a feat! In Tenet, the central conceit and even the overall plot is, in hindsight, relatively straightforward. But Nolan has made it seem more twisty and convoluted than it really is. The ten-minute battle at the end, for example, is filled with hard-to-follow twists and turns, but in actuality, it literally doesn’t matter.

The pitch for the mood of this film is that it’s in the spy genre, in the same way that Inception is in the heist genre. Though there’s an argument to be made that Tenet is more of a heist movie than Inception. But in terms of tone, yeah, it’s going for James Bond.

Even at the very end of the credits, when the title of the film rolled into view, it reminded me of the Bond films that would tease “The end of (this film). But James Bond will return in (next film).” Wouldn’t it have been wonderful if the very end of Tenet’s credits finished with “The end of Tenet. But the protagonist will return in …Tenet.”

The pleasure I got from Tenet was not the same kind of pleasure I get from watching a Bond film, which is a simpler, more basic kind of enjoyment. The pleasure I got from Tenet was more like the kind of enjoyment I get from reading smart sci-fi, the kind that posits a “what if?” scenario and isn’t afraid to push your mind in all kinds of uncomfortable directions to contemplate the ramifications.

Like I said, the central conceit—objects or people travelling backwards through time (from our perspective)—isn’t actually all that complex, but the fun comes from all the compounding knock-on effects that build on that one premise.

In the film, and in interviews about the film, everyone is at pains to point out that this isn’t time travel. But that’s not true. In fact, I would argue that Tenet is one of the few examples of genuine time travel. What I mean is that most so-called time-travel stories are actually more like time teleportation. People jump from one place in time to another instantaneously. There are only a few examples I can think of where people genuinely travel.

The grandaddy of all time travel stories, The Time Machine by H.G. Wells, is one example. There are vivid descriptions of the world outside the machine playing out in fast-forward. But even here, there’s an implication that from outside the machine, the world cannot perceive the time machine (which would, from that perspective, look slowed down to the point of seeming completely still).

The most internally-consistent time-travel story is Primer. I suspect that the Venn diagram of people who didn’t like Tenet and people who wouldn’t like Primer is a circle. Again, it’s a film where the enjoyment comes from feeling confused, but where your attention will be rewarded and your intelligence won’t be insulted.

In Primer, the protagonists literally travel in time. If you want to go five hours into the past, you have to spend five hours in the box (the time machine).

In Tenet, the time machine is a turnstile. If you want to travel five hours into the past, you need only enter the turnstile for a moment, but then you have to spend the next five hours travelling backwards (which, from your perspective, looks like being in a world where cause and effect are reversed). After five hours, you go in and out of a turnstile again, and voila!—you’ve time travelled five hours into the past.

Crucially, if you decide to travel five hours into the past, then you have always done so. And in the five hours prior to your decision, a version of you (apparently moving backwards) would be visible to the world. There is never a version of events where you aren’t travelling backwards in time. There is no “first loop”.

That brings us to the fundamental split in categories of time travel (or time jump) stories: many worlds vs. single timeline.

In a many-worlds story, the past can be changed. Well, technically, you spawn a different universe in which events unfold differently, but from your perspective, the effect would be as though you had altered the past.

The best example of the many-worlds category in recent years is William Gibson’s The Peripheral. It genuinely reinvents the genre of time travel. First of all, no thing travels through time. In The Peripheral only information can time travel. But given telepresence technology, that’s enough. The Peripheral is time travel for the remote worker (once again, William Gibson proves to be eerily prescient). But the moment that any information travels backwards in time, the timeline splits into a new “stub”. So the many-worlds nature of its reality is front and centre. But that doesn’t stop the characters engaging in classic time travel behaviour—using knowledge of the future to exert control over the past.

Time travel stories are always played with a stacked deck of information. The future has power over the past because of the asymmetric nature of information distribution—there’s more information in the future than in the past. Whether it’s through sports results, the stock market or technological expertise, the future can exploit the past.

Information is at the heart of the power games in Tenet too, but there’s a twist. The repeated mantra here is “ignorance is ammunition.” That flies in the face of most time travel stories where knowledge—information from the future—is vital to winning the game.

It turns out that information from the future is vital to winning the game in Tenet too, but the reason why ignorance is ammunition comes down to the fact that Tenet is not a many-worlds story. It is very much a single timeline.

Having a single timeline makes for time travel stories that are like Greek tragedies. You can try travelling into the past to change the present but in doing so you will instead cause the very thing you set out to prevent.

The meat’n’bones of a single timeline time travel story—and this is at the heart of Tenet—is the question of free will.

The most succinct (and disturbing) single-timeline time-travel story that I’ve read is by Ted Chiang in his recent book Exhalation. It’s called What’s Expected Of Us. It was originally published as a single page in Nature magazine. In that single page is a distillation of the metaphysical crisis that even a limited amount of time travel would unleash in a single-timeline world…

There’s a box, the Predictor. It’s very basic, like Claude Shannon’s Ultimate Machine. It has a button and a light. The button activates the light. But this machine, like an inverted object in Tenet, is moving through time differently to us. In this case, it’s very specific and localised. The machine is just a few seconds in the future relative to us. Cause and effect seem to be reversed. With a normal machine, you press the button and then the light flashes. But with the Predictor, the light flashes and then you press the button. You can try to fool it but you won’t succeed. If the light flashes, you will press the button no matter how much you tell yourself that you won’t (likewise if you try to press the button before the light flashes, you won’t succeed). That’s it. In one succinct experiment with time, it is demonstrated that free will doesn’t exist.

Tenet has a similarly simple object to explain inversion. It’s a bullet. In an exposition scene we’re shown how it travels backwards in time. The protagonist holds his hand above the bullet, expecting it to jump into his hand as has just been demonstrated to him. He is told “you have to drop it.” He makes the decision to “drop” the bullet …and the bullet flies up into his hand.

This is a brilliant bit of sleight of hand (if you’ll excuse the choice of words) on Nolan’s part. It seems to imply that free will really matters. Only by deciding to “drop” the bullet does the bullet then fly upward. But here’s the thing: the protagonist had no choice but to decide to drop the bullet. We know that he had no choice because the bullet flew up into his hand. The bullet was always going to fly up into his hand. There is no timeline where the bullet doesn’t fly up into his hand, which means there is no timeline where the protagonist doesn’t decide to “drop” the bullet. The decision is real, but it is inevitable.

The lesson in this scene is the exact opposite of what it appears. It appears to show that agency and decision-making matter. The opposite is true. Free will cannot, in any meaningful sense, exist in this world.

This means that there was never really any threat. People from the future cannot change the past (or wipe it out) because it would’ve happened already. At one point, the protagonist voices this conjecture. “Doesn’t the fact that we’re here now mean that they don’t succeed?” Neil deflects the question, not because of uncertainty (we realise later) but because of certainty. It’s absolutely true that the people in the future can’t succeed because they haven’t succeeded. But the protagonist—at this point in the story—isn’t ready to truly internalise this. He needs to still believe that he is acting with free will. As that Ted Chiang story puts it:

It’s essential that you behave as if your decisions matter, even though you know that they don’t.

That’s true for the audience watching the film. If we were to understand too early that everything will work out fine, then there would be no tension in the film.

As ever with Nolan’s films, they are themselves metaphors for films. The first time you watch Tenet, ignorance is your ammunition. You believe there is a threat. By the end of the film you have more information. Now if you re-watch the film, you will experience it differently, armed with your prior knowledge. But the film itself hasn’t changed. It’s the same linear flow of sequential scenes being projected. Everything plays out exactly the same. It’s you who have been changed. The first time you watch the film, you are like the protagonist at the start of the movie. The second time you watch it, you are like the protagonist at the end of the movie. You see the bigger picture. You understand the inevitability.

The character of Neil has had more time to come to terms with a universe without free will. What the protagonist begins to understand at the end of the film is what Neil has known for a while. He has seen this film. He knows how it ends. It ends with his death. He knows that it must end that way. At the end of the film we see him go to meet his death. Does he make the decision to do this? Yes …but he was always going to make the decision to do this. Just as the protagonist was always going to decide to “drop” the bullet, Neil was always going to decide to go to his death. It looks like a choice. But Neil understands at this point that the choice is pre-ordained. He will go to his death because he has gone to his death.

At the end, the protagonist—and the audience—understands. Everything played out exactly as it had to. The people in the future were hoping that reality allowed for many worlds, where the past could be changed. Luckily for us, reality turns out to be a single timeline. But the price we pay is that we come to understand, truly understand, that we have no free will. This is the kind of knowledge we wish we didn’t have. Ignorance was our ammunition and by the end of the film, it is spent.

Nolan has one other piece of misdirection up his sleeve. He implies that the central question at the heart of this time-travel story is the grandfather paradox. Our descendents in the future are literally trying to kill their grandparents (us). But if they succeed, then they can never come into existence.

But that’s not the paradox that plays out in Tenet. The central paradox is the bootstrap paradox, named for the Heinlein short story, By His Bootstraps. Information in this film is transmitted forwards and backwards through time, without ever being created. Take the phrase “Tenet”. In subjective time, the protagonist first hears of this phrase—and this organisation—when he is at the start of his journey. But the people who tell him this received the information via a subjectively older version of the protagonist who has travelled to the past. The protagonist starts the Tenet organisation (and phrase) in the future because the organisation (and phrase) existed in the past. So where did the phrase come from?

This paradox—the bootstrap paradox—remains after the grandfather paradox has been dealt with. The grandfather paradox was a distraction. The bootstrap paradox can’t be resolved, no matter how many times you watch the same film.

So Tenet has three instances of misdirection in its narrative:

  • Inversion isn’t time travel (it absolutely is).
  • Decisions matter (they don’t; there is no free will).
  • The grandfather paradox is the central question (it’s not; the bootstrap paradox is the central question).

I’m looking forward to seeing Tenet again. Though it can never be the same as that first time. Ignorance can never again be my ammunition.

I’m very glad that Jessica and I decided to go to the cinema to see Tenet. But who am I kidding? Did we ever really have a choice?

Trad time

Fifteen years ago, I went to the Willie Clancy Summer School in Miltown Malbay:

I’m back from the west of Ireland. I was sorry to leave. I had a wonderful, music-filled time.

I’m not sure why it took me a decade and a half to go back, but that’s what I did last week. Myself and Jessica once again immersed ourselves in Irish traditional music. I’ve written up a trip report over on The Session.

On the face of it, fifteen years is a long time. Last time I made the trip to county Clare, I was taking pictures on a point-and-shoot camera. I had a phone with me, but it had a T9 keyboard that I could use for texting and not much else. Also, my hair wasn’t grey.

But in some ways, fifteen years feels like the blink of an eye.

I spent my mornings at the Willie Clancy Summer School immersed in the history of Irish traditional music, with Paddy Glackin as a guide. We were discussing tradition and change in generational timescales. There was plenty of talk about technology, but we were as likely to discuss the influence of the phonograph as the influence of the internet.

Outside of the classes, there was a real feeling of lengthy timescales too. On any given day, I would find myself listening to pre-teen musicians at one point, and septuagenarian masters at another.

Now that I’m back in the Clearleft studio, I’m finding it weird to adjust back into the shorter timescales of working on the web. Progress is measured in weeks and months. Technologies are deemed outdated after just a year or two.

The one bridging point I have between these two worlds is The Session. It’s been going in one form or another for over twenty years. And while it’s very much on and of the web, it also taps into a longer tradition. Over time it has become an enormous repository of tunes, for which I feel a great sense of responsibility …but in a good way. It’s not something I take lightly. It’s also something that gives me great satisfaction, in a way that’s hard to achieve in the rapidly moving world of the web. It’s somewhat comparable to the feelings I have for my own website, where I’ve been writing for eighteen years. But whereas adactio.com is very much focused on me, thesession.org is much more of a community endeavour.

I question sometimes whether The Session is helping or hindering the Irish music tradition. “It all helps”, Paddy Glackin told me. And I have to admit, it was very gratifying to meet other musicians during Willie Clancy week who told me how much the site benefits them.

I think I benefit from The Session more than anyone though. It keeps me grounded. It gives me a perspective that I don’t think I’d otherwise get. And in a time when it feels entirely right to question whether the internet is even providing a net gain to our world, I take comfort in being part of a project that I think uses the very best attributes of the World Wide Web.

Timing out

Service workers are great for creating a good user experience when someone is offline. Heck, the book I wrote about service workers is literally called Going Offline.

But in some ways, the offline experience is relatively easy to handle. It’s a binary situation; either you’re online or you’re offline. What’s more challenging—and probably more common—is the situation that Jake calls Lie-Fi. That’s when technically you’ve got a network connection …but it’s a shitty connection, like one bar of mobile signal. In that situation, because there’s technically a connection, the user gets a slow frustrating experience. Whatever code you’ve got in your service worker for handling offline situations will never get triggered. When you’re handling fetch events inside a service worker, there’s no automatic time-out.

But you can make one.

That’s what I’ve done recently here on adactio.com. Before showing you what I added to my service worker script to make that happen, let me walk you through my existing strategy for handling offline situations.

Service worker strategies

Alright, so in my service worker script, I’ve got a block of code for handling requests from fetch events:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    // Do something with this request.
});

I’ve got two strategies in my code. One is for dealing with requests for pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
}

By adding an else clause I can have a different strategy for dealing with requests for anything else—images, style sheets, scripts, and so on:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
} else {
    // Code for handling everything else.
}

For page requests, I’m going to try to go to the network first:

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        return responseFromFetch;
    })

My logic is:

When someone requests a page, try to fetch it from the network.

If that doesn’t work, we’re in an offline situation. That triggers the catch clause. That’s where I have my offline strategy: show a custom offline page that I’ve previously cached (during the install event):

.catch( fetchError => {
    return caches.match('/offline');
})
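
That custom offline page gets cached ahead of time, in the install event. Here’s a minimal sketch of what that might look like (the cache name 'static' and the list of files are placeholders, not the actual code running on this site):

addEventListener('install', installEvent => {
    installEvent.waitUntil(
        // Pre-cache the custom offline page (and anything it needs).
        caches.open('static')
        .then( staticCache => {
            return staticCache.addAll([
                '/offline',
                '/path/to/stylesheet.css'
            ]);
        })
    );
});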

Now my logic has been expanded to this:

When someone requests a page, try to fetch it from the network, but if that doesn’t work, show a custom offline page instead.

So my overall code for dealing with requests for pages looks like this:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
            return responseFromFetch;
        })
        .catch( fetchError => {
            return caches.match('/offline');
        })
    );
}

Now I can fill in the else statement that handles everything else—images, style sheets, scripts, and so on. Here my strategy is different. I’m looking in my caches first, and I only fetch the file from the network if the file can’t be found in any cache:

caches.match(request)
.then( responseFromCache => {
    return responseFromCache || fetch(request);
})

Here’s all that fetch-handling code put together:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

Good.

Cache as you go

Now I want to introduce an extra step in the part of the code where I deal with requests for pages. Whenever I fetch a page from the network, I’m going to take the opportunity to squirrel it away in a cache. I’m calling that cache “pages”. I’m imaginative like that.

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        const copy = responseFromFetch.clone();
        try {
            fetchEvent.waitUntil(
                caches.open('pages')
                .then( pagesCache => {
                    return pagesCache.put(request, copy);
                })
            )
        } catch(error) {
            console.error(error);
        }
        return responseFromFetch;
    })

You’ll notice that I can’t put the response itself (responseFromFetch) into the cache. That’s a stream that I only get to use once. Instead I need to make a copy:

const copy = responseFromFetch.clone();

That’s what gets put in the pages cache:

fetchEvent.waitUntil(
    caches.open('pages')
    .then( pagesCache => {
        return pagesCache.put(request, copy);
    })
)

Now my logic for page requests has an extra piece to it:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, show a custom offline page instead.

Here’s my updated fetch-handling code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                const copy = responseFromFetch.clone();
                try {
                    fetchEvent.waitUntil(
                        caches.open('pages')
                        .then( pagesCache => {
                            return pagesCache.put(request, copy);
                        })
                    )
                } catch(error) {
                    console.error(error);
                }
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

I call this the cache-as-you-go pattern. The more pages someone views on my site, the more pages they’ll have cached.

Now that there’s an ever-growing cache of previously visited pages, I can update my offline fallback. Currently, I reach straight for the custom offline page:

.catch( fetchError => {
    return caches.match('/offline');
})

But now I can try looking for a cached copy of the requested page first:

.catch( fetchError => {
    return caches.match(request)
    .then( responseFromCache => {
        return responseFromCache || caches.match('/offline');
    })
});

Now my offline logic is expanded:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead.

I can also access this ever-growing cache of pages from my custom offline page to show people which pages they can revisit, even if there’s no internet connection.
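
As a rough sketch, a script on that offline page could list the contents of the pages cache something like this (the cachedpages element is made up for illustration):

// On the custom offline page: list the URLs that are sitting in the
// 'pages' cache so that people can revisit them while offline.
caches.open('pages')
.then( pagesCache => {
    return pagesCache.keys();
})
.then( requests => {
    const list = document.getElementById('cachedpages');
    requests.forEach( request => {
        const listItem = document.createElement('li');
        const link = document.createElement('a');
        link.href = request.url;
        link.textContent = request.url;
        listItem.appendChild(link);
        list.appendChild(listItem);
    });
});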

So far, so good. Everything I’ve outlined so far is a good robust strategy for handling offline situations. Now I’m going to deal with the lie-fi situation, and it’s that cache-as-you-go strategy that sets me up nicely.

Timing out

I want to throw this addition into my logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

The first thing I’m going to do is rewrite my code a bit. If the fetch event is for a page, I’m going to respond with a promise:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        new Promise( resolveWithResponse => {
            // Code for handling page requests.
        })
    );
}

Promises are kind of weird things to get your head around. They’re tailor-made for doing things asynchronously. You can set up two parameters: a success condition and a failure condition. If the success condition is executed, then we say the promise has resolved. If the failure condition is executed, then the promise rejects.

In my re-written code, I’m calling the success condition resolveWithResponse (and I haven’t bothered with a failure condition, tsk, tsk). I’m going to use resolveWithResponse in my promise everywhere that I used to have a return statement:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                fetch(request)
                .then( responseFromFetch => {
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

By itself, rewriting my code as a promise doesn’t change anything. Everything’s working the same as it did before. But now I can introduce the time-out logic. I’m going to put this inside my promise:

const timer = setTimeout( () => {
    caches.match(request)
    .then( responseFromCache => {
        if (responseFromCache) {
            resolveWithResponse(responseFromCache);
        }
    })
}, 3000);

If a request takes longer than three seconds (3000 milliseconds), then that code will execute. At that point, the promise attempts to resolve with a response from the cache instead of waiting for the network. If there is a cached response, that’s what the user now gets. If there isn’t, then the wait continues for the network.

The last thing left for me to do is cancel the countdown to timing out if a network response does return within three seconds. So I put this in the then clause that’s triggered by a successful network response:

clearTimeout(timer);

I also add the clearTimeout statement to the catch clause that handles offline situations. Here’s the final code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                const timer = setTimeout( () => {
                    caches.match(request)
                    .then( responseFromCache => {
                        if (responseFromCache) {
                            resolveWithResponse(responseFromCache);
                        }
                    })
                }, 3000);
                fetch(request)
                .then( responseFromFetch => {
                    clearTimeout(timer);
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    clearTimeout(timer);
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

That’s the JavaScript translation of this logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

For everything else, try finding a cached version first, otherwise fetch it from the network.

Pros and cons

As with all service worker enhancements to a website, this strategy will do absolutely nothing for first-time visitors. If you’ve never visited my site before, you’ve got nothing cached. But the more you return to the site, the more your cache is primed for speedy retrieval.

I think that serving up a cached copy of a page when the network connection is flaky is a pretty good strategy …most of the time. If we’re talking about a blog post on this site, then sure, there won’t be much that the reader is missing out on—a fixed typo or ten; maybe some additional webmentions at the end of a post. But if we’re talking about the home page, then a reader with a flaky network connection might think there’s nothing new to read when they’re served up a stale version.

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.” I’d also need some way in the service worker to identify any requests originating from that interface element and make sure they always go out to the network.

I think that should be doable somehow. If you can think of a way to do it, please share it. Write a blog post and send me the link.
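
For what it’s worth, here’s one direction that might work—very much a sketch, completely untested, and with made-up names: when the service worker falls back to the cache for a page, it could send a message to the client it has just served (the fetch event’s resultingClientId identifies that client), and a script on the page could listen for that message and reveal the interface element.

// In the service worker, after resolving with a cached page:
// tell the newly-created client that it got a cached copy.
function notifyClientOfCachedResponse(fetchEvent) {
    clients.get(fetchEvent.resultingClientId)
    .then( client => {
        if (client) {
            client.postMessage({cached: true, url: fetchEvent.request.url});
        }
    });
}

// In a script on the page: listen for that message and reveal a
// (hypothetical) "this page might be stale" interface element.
navigator.serviceWorker.addEventListener('message', messageEvent => {
    if (messageEvent.data && messageEvent.data.cached) {
        document.getElementById('stalewarning').removeAttribute('hidden');
    }
});

The “check for a fresher version” button could then add something the service worker looks out for—a query string, say—so that those requests always go out to the network.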

But even without the option to over-ride the time-out, I’m glad that I’m at least doing something to handle the lie-fi situation. Perhaps I should write a sequel to Going Offline called Still Online But Only In Theory Because The Connection Sucks.

Timelines of the web

Recreating the original WorldWideWeb browser was an exercise in digital archaeology. With a working NeXT machine in the room, Kimberly was able to examine the source code for the first ever browser and discover a treasure trove within. Like this gem in HTUtils.h:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

Sure enough, by June of 1992 port 80 was documented as being officially assigned to the World Wide Web (Gopher got port 70). Jean-François Groff—who worked on the World Wide Web project with Tim Berners-Lee—told us that this was a moment they were very pleased about. It felt like this project of theirs was going places.

Jean-François also told us that the WorldWideWeb browser/editor was kind of like an advanced prototype. The idea was to get something up and running as quickly as possible. Well, the NeXT operating system had a very robust Text Object, so the path of least resistance for Tim Berners-Lee was to take the existing word-processing software and build a hypertext component on top of it. Likewise, instead of creating a brand new format, he used the existing SGML format and added one new piece: linking with A tags.

So the WorldWideWeb application was kind of like a word processor and document viewer mashed up with hypertext. Ted Nelson complains to this day that the original sin of the web was that it borrowed this page-based metaphor. But Nelson’s Project Xanadu, originally proposed in 1974, wouldn’t become a working reality until 2014—a gap of forty years. Whereas Tim Berners-Lee proposed his system in March 1989 and had working code within a year. There’s something to be said for being pragmatic and working with what you’ve got.

The web was also a mashup of ideas. Hypertext existed long before the web—Ted Nelson coined the term in 1963. There were conferences and academic discussions devoted to hypertext and hypermedia. But almost all the existing hypertext systems—including Tim Berners-Lee’s own ENQUIRE system from the early 80s—were confined to a local machine. Meanwhile networked computers were changing everything. First there was the ARPANET, then the internet. Tim Berners-Lee’s ambitious plan was to mash up hypertext with networks.

Going into our recreation of WorldWideWeb at CERN, I knew I wanted to convey this historical context somehow.

The World Wide Web officially celebrates its 30th birthday in March of this year. It’s kind of an arbitrary date: it’s the anniversary of the publication of Information Management: A Proposal. Perhaps a more accurate date would be the day the first website—and first web server—went online. But still. Let’s roll with this date of March 12, 1989. I thought it would be interesting not only to look at what’s happened between 1989 and 2019, but also to look at what happened between 1959 and 1989.

So now I’ve got two time cones that converge in the middle: 1959 – 1989 and 1989 – 2019. For the first time period, I made categories of influences: formats, hypertext, networks, and computing. For the second time period, I catalogued notable results: browsers, servers, and the evolution of HTML.

I did a little bit of sketching and quickly realised that these converging timelines could be represented somewhat like particle collisions. Once I had that idea in my head, I knew how I would be spending my time during the hack week.

Rather than jumping straight into the collider visualisation, I took some time to make a solid foundation to build on. I wanted to be sure that the timeline itself would be understandable even if it were, say, viewed in the first ever web browser.

Progressive enhancement. Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

I marked up each timeline as an ordered list of h-events:

<li class="h-event y1968">
  <a href="https://en.wikipedia.org/wiki/NLS_%28computer_system%29" class="u-url">
    <time class="dt-start" datetime="1968-12-09">1968</time>
    <abbr class="p-name" title="oN-Line System">NLS</abbr>
  </a>
</li>

With the markup in place, I could concentrate on making it look halfway decent. For small screens, the layout is very basic—just a series of lists. When the screen gets wide enough, I lay those lists out horizontally one on top of the other. In this view, you can more easily see when events coincide. For example, ENQUIRE, Usenet, and Smalltalk all happen in 1980. But the real beauty comes when the screen is wide enough to display everything at once. You can see how there was an explosion of activity in the early 90s. In 1994 alone, we get the release of Netscape Navigator, the creation of HTTPS, and the launch of Amazon.com.

The whole thing is powered by CSS transforms and positioning. Each year on a timeline has its own class that gets moved to the correct chronological point using calc(). I wanted to use translateX() but I couldn’t get the maths to work for that, so I had to use plain ol’ left and right:

.y1968 {
  left: calc((1968 - 1959) * (100%/30) - 5em);
}

For events before 1989, it’s the distance of the event from 1959. For events after 1989, it’s the distance of the event from 2019:

.y2014 {
  right: calc((2019 - 2014) * (100%/30) - 5em);
}

(Each h-event has a width of 5em so that’s where the extra bit at the end comes from.)

I had to do some tweaking for legibility: bunches of events happening around the same time period needed to be separated out so that they didn’t overlap too much.

As a finishing touch, I added a few little transitions when the page loaded so that the timeline fans out from its centre point.

Et voilà!

I fiddled with the content a bit after peppering Robert Cailliau with questions over lunch. And I got some very valuable feedback from Jean-François. Some examples he provided:

1971: Unix man pages, one of the first instances of writing documents with a markup language that is interpreted live by a parser before being presented to the user.

1980: Usenet News, because it was THE everyday discussion medium by the time we created the web technology, and the Web first embraced news as a built-in information resource, then various platforms built on the web rendered it obsolete.

1982: Literary Machines, Ted Nelson’s book which was on our desk at all times

I really, really enjoyed building this “collider” timeline. It was a chance for me to smash together my excitement for web history with my enjoyment of using the raw materials of the web: HTML and CSS in this case.

The timeline pales in comparison to the achievement of the rest of the team in recreating the WorldWideWeb application but I was just glad to be able to contribute a little something to the project.

Hello WorldWideWeb.

Notifications

I’ve written before about how I use apps on my phone:

If I install an app on my phone, the first thing I do is switch off all notifications. That saves battery life and sanity.

The only time my phone is allowed to ask for my attention is for phone calls, SMS, or FaceTime (all rare occurrences). I initiate every other interaction—Twitter, Instagram, Foursquare, the web. My phone is a tool that I control, not the other way around.

To me, this seems like a perfectly sensible thing to do. I was surprised by how others thought it was radical and extreme.

I’m always shocked when I’m out and about with someone who has their phone set up to notify them of any activity—a mention on Twitter, a comment on Instagram, or worst of all, an email. The thought of receiving a notification upon receipt of an email gives me the shivers. Allowing those kinds of notifications would feel like putting shackles on my time and attention. Instead, I think I’m applying an old-school RSS mindset to app usage: pull rather than push.

Don’t get me wrong: I use apps on my phone all the time: Twitter, Instagram, Swarm (though not email, except in direst emergency). Even without enabling notifications, I still have to fight the urge to fiddle with my phone—to check to see if anything interesting is happening. I’d like to think I’m in control of my phone usage, but I’m not sure that’s entirely true. But I do know that my behaviour would be a lot, lot worse if notifications were enabled.

I was a bit horrified when Apple decided to port this notification model to the desktop. There doesn’t seem to be any way of removing the “notification tray” altogether, but I can at least go into System Preferences and make sure that absolutely nothing is allowed to pop up an alert while I’m trying to accomplish some other task.

It’s the same on iOS—you can control notifications from Settings—but there’s an added layer within the apps themselves. If you have notifications disabled, the apps encourage you to enable them. That’s fine …at first. Being told that I could and should enable notifications is a perfectly reasonable part of the onboarding process. But with some apps I’m told that I should enable notifications Every. Single. Time.

Of the apps I use, Instagram and Swarm are the worst offenders (I don’t have Facebook or Snapchat installed so I don’t know whether they’re as pushy). This behaviour seems to have worsened recently. The needling has been dialed up in recent updates to the apps. It doesn’t matter how often I dismiss the dialogue, it reappears the next time I open the app.

Initially I thought this might be a bug. I’ve submitted bug reports to Instagram and Swarm, but I’m starting to think that they see my bug as their feature.

In the grand scheme of things, it’s not a big deal, but I would appreciate some respect for my deliberate choice. It gets pretty wearying over the long haul. To use a completely inappropriate analogy, it’s like a recovering alcoholic constantly having to rebuff “friends” asking if they’re absolutely sure they don’t want a drink.

I don’t think there’s malice at work here. I think it’s just that I’m an edge-case scenario. They’ve thought about the situation where someone doesn’t have notifications enabled, and they’ve come up with a reasonable solution: encourage that person to enable notifications. After all, who wouldn’t want notifications? That question, if it’s asked at all, is only asked rhetorically.

I’m trying to do the healthy thing here (or at least the healthier thing) in being mindful of my app usage. They sure aren’t making it easy.

The model that web browsers use for notifications seems quite sensible in comparison. If you arrive on a site that asks for permission to send you notifications (without even taking you out to dinner first) then you have three options: allow, block, or dismiss. If you choose “block”, that site will never be able to ask that browser for permission to enable notifications. Ever. (Oh, how I wish I could apply that browser functionality to all those sites asking me to sign up for their newsletter!)
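
From a site’s point of view, that whole interaction boils down to a single API call—the browser draws the permission prompt and remembers a “block” so the site can’t ask again. A minimal sketch:

// Ask the browser for permission to show notifications.
// The promise resolves to 'granted' (allow), 'denied' (block),
// or 'default' (the prompt was dismissed).
Notification.requestPermission()
.then( permission => {
    if (permission === 'granted') {
        new Notification('Thanks for opting in!');
    }
});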

That must seem like the stuff of nightmares for growth-hacking disruptive startups looking to make their graphs go up and to the right, but it’s a wonderful example of truly user-centred design. In that situation, the browser truly feels like a user agent.

Month maps

One of the topics I enjoy discussing at Indie Web Camps is how we can use design to display activity over time on personal websites. That’s how I ended up with sparklines on my site—it was a direct result of a discussion at Indie Web Camp Nuremberg a year ago:

During the discussion at Indie Web Camp, we started looking at how silos design their profile pages to see what we could learn from them. Looking at my Twitter profile, my Instagram profile, my Untappd profile, or just about any other profile, it’s a mixture of bio and stream, with the addition of stats showing activity on the site—signs of life.

Perhaps the most interesting visual example of my activity over time is on my GitHub profile. Halfway down the page there’s a calendar heatmap that uses colour to indicate the amount of activity. What I find interesting is that it’s using two axes of time over a year: days of the month across the X axis and days of the week down the Y axis.

I wanted to try something similar, but showing activity by time of day down the Y axis. A month of activity feels like the right range to display, so I set about adding a calendar heatmap to monthly archives. I already had the data I needed—timestamps of posts. That’s what I was already using to display sparklines. I wrote some code to loop over those timestamps and organise them by day and by hour. Then I spit out a table with days for the columns and clumps of hours for the rows.
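
Here’s a rough sketch in JavaScript of the kind of grouping involved (not the actual code on my site), assuming an array of Unix timestamps for the month; the three-hour clump size is just an example:

// Group post timestamps (in seconds) into a grid:
// one column per day of the month, one row per clump of hours.
function groupByDayAndHour(timestamps, hoursPerClump = 3) {
    const grid = {};
    timestamps.forEach( timestamp => {
        const date = new Date(timestamp * 1000);
        const day = date.getDate();
        const clump = Math.floor(date.getHours() / hoursPerClump);
        grid[day] = grid[day] || {};
        grid[day][clump] = (grid[day][clump] || 0) + 1;
    });
    // e.g. grid[14][2] === 3 means three posts on the 14th between 06:00 and 09:00
    return grid;
}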

Calendar heatmap on Dribbble

I’m using colour (well, different shades of grey) to indicate the relative amounts of activity, but I decided to use size as well. So it’s also a bubble chart.

It doesn’t work very elegantly on small screens: the table is clipped horizontally and can be swiped left and right. Ideally the visualisation itself would change to accommodate smaller screens.

Still, I kind of like the end result. Here’s last month’s activity on my site. Here’s the same time period ten years ago. I’ve also added month heatmaps to the monthly archives for my journal, links, and notes. They’re kind of like an expanded view of the sparklines that are shown with each month.

From one year ago, here’s the daily distribution of my journal posts, links, and notes.

And then here’s the daily distribution of everything in that month all together.

I realise that the data being displayed is probably only of interest to me, but then, that’s one of the perks of having your own website—you can do whatever you feel like.

Long betting

It has been exactly six years to the day since I instantiated this prediction:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

It is exactly five years to the day until the prediction condition resolves to a Boolean true or false.

If it resolves to true, The Bletchley Park Trust will receive $1000.

If it resolves to false, The Internet Archive will receive $1000.

Much as I would like Bletchley Park to get the cash, I’m hoping to lose this bet. I don’t want my pessimism about URL longevity to be rewarded.

So, to recap, the bet was placed on

02011-02-22

It is currently

02017-02-22

And the bet times out on

02022-02-22.

dConstruct 2015 podcast: Ingrid Burrington

The dConstruct podcast episodes are coming thick and fast. Hot on the heels of the inaugural episode with Matt Novak and the sophomore episode with Josh Clark comes the third in the series: the one with Ingrid Burrington.

This was a fun meeting of minds. We geeked out about the physical infrastructure of the internet and time-travel narratives, from The Terminator to The Peripheral. During the episode, I sounded the spoiler warning in case you haven’t read that book, but we didn’t actually end up giving anything away.

I really enjoyed this chat with Ingrid. I hope you’ll enjoy listening to it.

Oh, and now you can subscribe to the dConstruct 2015 podcast directly from iTunes.

And remember, as a podcast listener, you get 10% off the ticket price for dConstruct using the discount code “ansible.”

100 words 005

I enjoy a good time travel yarn. Two of the most enjoyable temporal tales of recent years have been Rian Johnson’s film Looper and William Gibson’s book The Peripheral.

Mind you, the internal time travel rules of Looper are all over the place, whereas The Peripheral is wonderfully consistent.

Both share an interesting commonality in their settings. They are set in the future and …the future: two different time periods but neither of them are the present. Both works also share the premise that the more technologically advanced future would inevitably exploit the time period further down the light cone.

Wibbly-wobbly timey-wimey stuff

I met up with Remy a few months back to try to help him finalise the line-up for this year’s Full Frontal conference. Remy puts a lot of thought into crafting a really solid line-up. He was in a good position too: the conference was already sold out so he didn’t have to worry about having a big-name speaker to put bums on seats—he could concentrate entirely on finding just the right speaker for the final talk.

He described the kind of “big picture” talk he was looking for, and I started naming some names and giving him some ideas of people to contact.

Imagine my surprise then, when—while we were both in New York for Brooklyn Beta—I received a lengthy email from Remy (pecked out on his phone), saying that he had decided who he wanted to do the closing talk at Full Frontal. He wanted me to do it.

Now, this was just a couple of weeks ago so my first thought was “No way! I don’t have enough time to prepare a talk.” It takes me quite a while to prepare a new presentation.

But then he described—in quite some detail—what he wanted me to talk about …and it’s exactly the kind of stuff that I really enjoy geeking out about: long-term thinking, digital preservation, and all that jazz. So I said yes.

That’s why I’ve spent the last couple of weeks quietly freaking out, attempting to marshall my thoughts and squeeze them into Keynote. The title of my talk is Time. Pretentious? Moi?

I’m trying to pack a lot into this presentation. I’ve already had to kill some of my darlings and drop some of the more esoteric stuff, but damn it, it’s hard to still squeeze everything in.

I’ve been immersed in research and link-making, reading and huffduffing all things time-related. In the course of my hypertravels, I discovered that there’s an entire event devoted to “the origins, evolution, and future of public time.” It’s called Time For Everyone and it’s taking place in California …at exactly the same time as Full Frontal.

Here’s the funny thing: the description for the event is exactly the same as the description I gave Remy for my talk:

This thing all things devours:
Birds, beasts, trees, flowers;
Gnaws iron, bites steel;
Grinds hard stones to meal;
Slays king, ruins town,
And beats high mountain down.

If you’re coming along to Full Frontal next Friday, I hope you’ll be in a receptive mood. I also hope that Remy won’t mind that what I’m going to present isn’t exactly what he asked for …but I think it’s interesting stuff.

I just wish I had more time.

Long time

A few years back, I was on a road trip in the States with my friend Dan. We drove through Maryland and Virginia to the sites of American Civil War battles—Gettysburg, Antietam. I was reading Tom Standage’s magnificent book The Victorian Internet at the time. When I was done with the book, I passed it on to Dan. He loved it. A few years later, he sent me a gift: a glass telegraph insulator.

Glass telegraph insulator from New York

Last week I received another gift from Dan: a telegraph key.

Telegraph key

It’s lovely. If my knowledge of basic electronics were better, I’d hook it up to an Arduino and tweet with it.

Dan came over to the UK for a visit last month. We had a lovely time wandering around Brighton and London together. At one point, we popped into the National Portrait Gallery. There was one painting he really wanted to see: the portrait of Samuel Pepys.

Pepys

“Were you reading the online Pepys diary?”, I asked.

“Oh, yes!”, he said.

“I know the guy who did that!”

The “guy who did that” is, of course, the brilliant Phil Gyford.

Phil came down to Brighton and gave a Skillswap talk all about the ten-year long project.

The diary of Samuel Pepys: Telling a complex story online on Huffduffer

Now Phil has restarted the diary. He wrote a really great piece about what it’s like overhauling a site that has been online for a decade. Given that I spent a lot of my time last year overhauling The Session (which has been online in some form or another since the late nineties), I can relate to his perspective on trying to choose long-term technologies:

Looking ahead, how will I feel about this Django backend in ten years’ time? I’ve no idea what the state of the platform will be in a decade.

I was thinking about switching The Session over to Django, but I decided against it in the end. I figured that the pain involved in trying to retrofit an existing site (as opposed to starting a brand new project) would be too much. So the site is still written in the very uncool LAMP stack: Linux, Apache, MySQL, and PHP.

Mind you, Marco Arment makes the point in his Webstock talk that there’s a real value to using tried and tested “boring” technologies.

One area where I’ve found myself becoming increasingly wary over time is the use of third-party APIs. I say that with a heavy heart—back at dConstruct 2006 I was talking all about The Joy of API. But Yahoo, Google, Twitter …they’ve all deprecated or backtracked on their offerings to developers.

Anyway, this is something that has been on my mind a lot lately: evaluating technologies and services in terms of their long-term benefit instead of just their short-term hit. It’s something that we need to think about more as developers, and it’s certainly something that we need to think about more as users.

Compared with genuinely long-term projects like the 10,000-year Clock of the Long Now, making something long-lasting on the web shouldn’t be all that challenging. The real challenge is acknowledging that this is even an issue. As Phil puts it:

I don’t know how much individuals and companies habitually think about this. Is it possible to plan for how your online service will work over the next ten years, never mind longer?

As my Long Bet illustrates, I can be somewhat pessimistic about the longevity of our web creations:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

But I really hope I lose that bet. Maybe I’ll suggest to Matt (my challenger on the bet) that we meet up on February 22nd, 2022 at the Long Now Salon. It doesn’t exist yet. But give it time.

A question of time

Some of the guys at work occasionally provide answers to .net magazine’s “big question” feature. When they told me about the latest question that landed in their inboxes, I felt I just had to stick my oar in and provide my answer.

I’m publishing my response here, so that if they decide not to publish it in the magazine or on the website (or if they edit it down), I’ve got a public record of my stance on this very important topic.

The question is:

If you could send a message back to a younger designer or developer self, what would it say? What professional advice would you give a younger you?

This is my answer:

Rather than send a message back to my younger self, I would destroy the message-sending technology immediately. The potential for universe-ending paradoxes is too great.

I know that it would be tempting to give some sort of knowledge of the future to my younger self, but it would be the equivalent of attempting to kill Hitler—that never ends well.

Any knowledge I supplied to my past self would cause my past self to behave differently, thereby either:

  1. destroying the timeline that my present self inhabits (assuming a branching many-worlds multiverse) or
  2. altering my present self, possibly to the extent that the message-sending technology never gets invented. Instant paradox.

But to answer your question, if I could send a message back to a younger designer or developer self, the professional advice I would give would be:

Jeremy,

When, at some point in the future, you come across the technology capable of sending a message like this back to your past self, destroy it immediately!

But I know that you will not heed this advice. If you did, you wouldn’t be reading this.

On the other hand, I have no memory of ever receiving this message, so perhaps you did the right thing after all.

Jeremy

Of Time and the Network and the Long Bet

When I went to Webstock, I prepared a new presentation called Of Time And The Network:

Our perception and measurement of time has changed as our civilisation has evolved. That change has been driven by networks, from trade routes to the internet.

I was pretty happy with how it turned out. It was a 40 minute talk that was pretty evenly split between the past and the future. The first 20 minutes spanned from 5,000 years ago to the present day. The second 20 minutes looked towards the future, first in years, then decades, and eventually in millennia. I was channelling my inner James Burke for the first half and my inner Jason Scott for the second half, when I went off on a digital preservation rant.

You can watch the video and I had the talk transcribed so you can read the whole thing.

It’s also on Huffduffer, if you’d rather listen to it.

Adactio: Articles—Of Time And The Network on Huffduffer

Webstock: Jeremy Keith

During the talk, I pointed to my prediction on the Long Bets site:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

I made the prediction on February 22nd last year (a terrible day for New Zealand). The prediction will reach fruition on 02022-02-22 …I quite like the alliteration of that date.

Here’s how I justified the prediction:

“Cool URIs don’t change” wrote Tim Berners-Lee in 01999, but link rot is the entropy of the web. The probability of a web document surviving in its original location decreases greatly over time. I suspect that even a relatively short time period (eleven years) is too long for a resource to survive.

Well, during his excellent Webstock talk Matt announced that he would accept the challenge. He writes:

Though much of the web is ephemeral in nature, now that we have surpassed the 20 year mark since the web was created and gone through several booms and busts, technology and strategies have matured to the point where keeping a site going with a stable URI system is within reach of anyone with moderate technological knowledge.

The prediction has now officially been added to the list of bets.

We’re playing for $1000. If I win, that money goes to the Bletchley Park Trust. If Matt wins, it goes to The Internet Archive.

The sysadmin for the Long Bets site is watching this bet with great interest. I am, of course, treating this bet in much the same way that Paul Gilster is treating this optimistic prediction about interstellar travel: I would love to be proved wrong.

The detailed terms of the bet have been set as follows:

On February 22nd, 2022 from 00:01 UTC until 23:59 UTC,
entering the characters http://www.longbets.org/601 into the address bar of a web browser or command line tool (like curl)
OR
using a web browser to follow a hyperlink that points to http://www.longbets.org/601
MUST
return an HTML document that still contains the following text:
“The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.”
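
On the day itself, the check could be as simple as something along these lines (just a sketch; any command that fetches the document and looks for the quoted sentence would do):

    # fetch the URL and check that the returned document still contains the predicted text
    curl -s http://www.longbets.org/601 | grep "will no longer be available in eleven years"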

The suspense is killing me!

Timeless

When I first heard that Hixie had removed all traces of the time element from the ongoing HTML spec, my knee-jerk reaction was “This is a really bad idea!” But I decided not to jump in without first evaluating the arguments for and against the element’s removal. That’s what I’ve been doing over the past week and my considered response is:

This is a really bad idea!

The process by which the element was removed is quite disturbing:

  1. Hixie (as a contributor) opens a bug proposing that the time element be replaced with the more general data element.
  2. Lots of people respond, almost unanimously pointing out the problems with that proposal.
  3. Hixie (as the editor) goes ahead and does exactly what he wanted anyway.

Technically that’s exactly how the WHATWG process works. The editor does whatever he wants:

This is not a consensus-based approach — there’s no guarantee that everyone will be happy! There is also no voting.

Most of the time, this works pretty well. It might not be fair but it seems to work more efficiently than the W3C’s consensus-based approach. But in this case the editor’s unilateral decision is fundamentally at odds with the most important HTML design principle, the priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors; which in turn should be given more weight than costs to implementors; which should be given more weight than costs to authors of the spec itself, which should be given more weight than those proposing changes for theoretical reasons alone.

The specifier (Hixie) is riding roughshod over the concerns of authors.

I’m particularly concerned by the uncharacteristically muddy thinking behind Hixie’s decision. There are two separate issues here:

  1. Is the time element useful?
  2. Do we need a more general data element?

Hixie conflates these two questions.

He begins by bizarrely making the claim that time hasn’t had much uptake. This is demonstrably false. It’s already shipping in Drupal builds and WordPress templates, as well as having parser support in services and at least one browser. If anything, time is one of the more commonly used and understood elements in HTML5.

There’s a very good reason for that: it fulfils a need that authors have had for a long time—the ability to make a timestamp that is human and machine-readable at the same time. That’s the use case that the microformats community has been trying to solve with the abbr design pattern and the value-class pattern. Those solutions are okay, but not nearly as elegant and intuitive as having a dedicated time element.
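
To give a rough idea of the difference (the date and class name here are purely illustrative):

    <!-- the abbr design pattern: the machine-readable date hides in a title attribute -->
    <abbr class="published" title="2011-11-02T18:00Z">6pm, 2nd November</abbr>

    <!-- the same timestamp with a dedicated time element -->
    <time class="published" datetime="2011-11-02T18:00Z">6pm, 2nd November</time>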

Crucially the time element didn’t just specify the mechanism for encoding a machine-readable timestamp, it also defined the format, namely a subset of ISO 8601. That’s exactly the kind of unambiguous documentation that makes a specification useful.

Hixie correctly points out that there are cases for human and machine-readable data other than dates and times. He incorrectly jumps to the conclusion that the time element is therefore a failure.

I think he’s right that there probably should be a dedicated element for marking up this kind of data. We already have the meta element, but the fact that it’s a standalone element makes it tricky to explicitly associate the human-readable text. So the introduction of a new data element may very well turn out to be a good idea. But it does not need to be introduced at the expense of the more specific time element.
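
Something along these lines, perhaps (the values are made up, and I’m assuming the value attribute from the proposal):

    <!-- the specific element for dates and times -->
    <time datetime="2011-11-02">2nd November 2011</time>

    <!-- the proposed generic element for other machine-readable values -->
    <data value="9.99">nine pounds ninety-nine</data>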

I think there’s a comparison to be made with sectioning content. We’ve got the generic section element but then we also have the more specific nav, aside and article elements that can be thought of as specialised forms of section. By Hixie’s logic, there’s no reason to have nav, aside and article when a section element has the same effect on the outlining algorithm. But we have those elements because they cover very common use cases.

Hixie’s reductio ad absurdum argument is that if there is a special element for timestamps, we should also have an element for every other possible piece of machine-readable data. If that’s the case, we should also have an infinite number of sectioning content elements: blogpost, comment, chapter, etc. Instead we have just the four elements—section, article, aside and nav—because they represent the most common use cases …perhaps 80%?

The use case that the time element satisfies (human and machine-readable timestamps) is very, very common. The actual mechanism may vary (the time element itself, the abbr pattern, etc.) but the cow path is very much in need of paving.

There’s also a disturbingly Boolean trait to Hixie’s logic. He lumps all machine-readable data into the same bucket. If he had paid attention to Tantek’s research on abstracting microformat vocabularies, he would have seen that it’s a bit more fine-grained than that. There’s a difference between straightforward name/value pairs (like fn or summary), URL values (like photo) and timestamps (like dtstart). The thing that distinguishes timestamps is that they have an existing predefined machine-readable format.
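
A quick hCard sketch makes the distinction clearer (the details are purely illustrative): fn is a plain name/value pair, photo takes a URL, and bday is a timestamp with a predefined machine-readable format.

    <div class="vcard">
      <span class="fn">Samuel Pepys</span>
      <img class="photo" src="http://example.com/pepys.jpg" alt="Samuel Pepys">
      <abbr class="bday" title="1633-02-23">23rd February 1633</abbr>
    </div>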

To strengthen his position Hixie introduced two strawmen into the discussion, claiming that the time element was intended to allow easier styling of dates and times with CSS and to also allow conversion of HTML documents into Atom. Those use cases are completely tangential to the fundamental reason for the existence of the time element. Hixie seems to have forgotten that he himself once had it enshrined in the spec that the purpose of the element was to allow users to add events to their calendars. I pointed out that this was an example usage of the more general pattern of machine-readable dates and times so the text of the spec was updated accordingly:

This element is intended as a way to encode modern dates and times in a machine-readable way so that, for example, user agents can offer to add birthday reminders or scheduled events to the user’s calendar.

No mention of styling. No mention of converting documents to Atom.

I’m not the only one who is perplexed by Hixie’s bullheaded behaviour. Steve Faulkner has entered a revert request on the W3C side of things (he’s a braver man than me: the byzantine W3C process scares me off). Ben summed up the situation nicely on Twitter. You can find plenty of other reactions on Twitter by searching for the hashtag #occupyhtml5. Bruce has written down his thoughts and the follow-on comments are worth reading. This reaction from Stephanie is both heartbreaking and completely understandable:

I have been pretty frustrated by this change. In fact, when I read the entire thread on the debacle, it nearly made me want to give up on teaching, and using HTML5. What’s the point when there’s really no discussion? A single person brings something up. Great arguments are made. And then he just does what he originally wanted to do anyway.

I’d like to think that a concerted campaign could sway Hixie but I don’t hold out much hope. Usually there’s only one way to get through to him and that’s by presenting data. Rightly so. In this case however, Hixie is ignoring the data completely. He’s also wilfully violating the fundamental design principles behind HTML5.

So what can we do? Well, just as with the incorrectly-defined semantics of the cite element we can make a stand and simply carry on using the time element in our web pages. If we do, then we’ll see more parsers and browsers implementing support for the time element. The fact that our documentation has been ripped away makes this trickier but it’s such a demonstrably useful addition to HTML that we cannot afford to throw it away based on the faulty logic of one person.

Hixie once said:

The reality is that the browser vendors have the ultimate veto on everything in the spec, since if they don’t implement it, the spec is nothing but a work of fiction. So they have a lot of influence—I don’t want to be writing fiction, I want to be writing a spec that documents the actual behaviour of browsers.

Keep using the time element.