Tags: time

Notifications

I’ve written before about how I use apps on my phone:

If I install an app on my phone, the first thing I do is switch off all notifications. That saves battery life and sanity.

The only time my phone is allowed to ask for my attention is for phone calls, SMS, or FaceTime (all rare occurrences). I initiate every other interaction—Twitter, Instagram, Foursquare, the web. My phone is a tool that I control, not the other way around.

To me, this seems like a perfectly sensible thing to do. I was surprised by how others thought it was radical and extreme.

I’m always shocked when I’m out and about with someone who has their phone set up to notify them of any activity—a mention on Twitter, a comment on Instagram, or worst of all, an email. The thought of receiving a notification upon receipt of an email gives me the shivers. Allowing those kinds of notifications would feel like putting shackles on my time and attention. Instead, I think I’m applying an old-school RSS mindset to app usage: pull rather than push.

Don’t get me wrong: I use apps on my phone all the time: Twitter, Instagram, Swarm (though not email, except in direst emergency). Even without enabling notifications, I still have to fight the urge to fiddle with my phone—to check to see if anything interesting is happening. I’d like to think I’m in control of my phone usage, but I’m not sure that’s entirely true. But I do know that my behaviour would be a lot, lot worse if notifications were enabled.

I was a bit horrified when Apple decided to port this notification model to the desktop. There doesn’t seem to be any way of removing the “notification tray” altogether, but I can at least go into System Preferences and make sure that absolutely nothing is allowed to pop up an alert while I’m trying to accomplish some other task.

It’s the same on iOS—you can control notifications from Settings—but there’s an added layer within the apps themselves. If you have notifications disabled, the apps encourage you to enable them. That’s fine …at first. Being told that I could and should enable notifications is a perfectly reasonable part of the onboarding process. But with some apps I’m told that I should enable notifications Every. Single. Time.

Instagram Swarm

Of the apps I use, Instagram and Swarm are the worst offenders (I don’t have Facebook or Snapchat installed so I don’t know whether they’re as pushy). The needling seems to have been dialled up in recent updates: it doesn’t matter how often I dismiss the dialogue, it reappears the next time I open the app.

Initially I thought this might be a bug. I’ve submitted bug reports to Instagram and Swarm, but I’m starting to think that they see my bug as their feature.

In the grand scheme of things, it’s not a big deal, but I would appreciate some respect for my deliberate choice. It gets pretty wearying over the long haul. To use a completely inappropriate analogy, it’s like a recovering alcoholic constantly having to rebuff “friends” asking if they’re absolutely sure they don’t want a drink.

I don’t think there’s malice at work here. I think it’s just that I’m an edge-case scenario. They’ve thought about the situation where someone doesn’t have notifications enabled, and they’ve come up with a reasonable solution: encourage that person to enable notifications. After all, who wouldn’t want notifications? That question, if it’s asked at all, is only asked rhetorically.

I’m trying to do the healthy thing here (or at least the healthier thing) in being mindful of my app usage. They sure aren’t making it easy.

The model that web browsers use for notifications seems quite sensible in comparison. If you arrive on a site that asks for permission to send you notifications (without even taking you out to dinner first) then you have three options: allow, block, or dismiss. If you choose “block”, that site will never be able to ask that browser for permission to enable notifications. Ever. (Oh, how I wish I could apply that browser functionality to all those sites asking me to sign up for their newsletter!)

That must seem like the stuff of nightmares for growth-hacking disruptive startups looking to make their graphs go up and to the right, but it’s a wonderful example of truly user-centred design. In that situation, the browser truly feels like a user agent.
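For what it’s worth, this is roughly what that model looks like from a site’s point of view (a sketch using the modern promise-based API; older browsers used a callback, and the wording of the messages here is mine):

```typescript
// The browser, not the site, owns the permission state. A site can ask,
// but once the user picks "block" the question can never be asked again.
async function offerNotifications(): Promise<void> {
  if (!("Notification" in window)) {
    return;                                   // no notification support at all
  }
  if (Notification.permission === "denied") {
    return;                                   // the user said no—that's final
  }
  if (Notification.permission === "default") {
    // "default" covers both "never asked" and "dismissed":
    // the only states in which prompting is allowed.
    const decision = await Notification.requestPermission();
    if (decision !== "granted") {
      return;
    }
  }
  new Notification("Something happened", {
    body: "…and you explicitly asked to know about it.",
  });
}
```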

Month maps

One of the topics I enjoy discussing at Indie Web Camps is how we can use design to display activity over time on personal websites. That’s how I ended up with sparklines on my site—it was a direct result of a discussion at Indie Web Camp Nuremberg a year ago:

During the discussion at Indie Web Camp, we started looking at how silos design their profile pages to see what we could learn from them. Looking at my Twitter profile, my Instagram profile, my Untappd profile, or just about any other profile, it’s a mixture of bio and stream, with the addition of stats showing activity on the site—signs of life.

Perhaps the most interesting visual example of my activity over time is on my GitHub profile. Halfway down the page there’s a calendar heatmap that uses colour to indicate the amount of activity. What I find interesting is that it’s using two axes of time over a year: weeks of the year across the X axis and days of the week down the Y axis.

I wanted to try something similar, but showing activity by time of day down the Y axis. A month of activity feels like the right range to display, so I set about adding a calendar heatmap to monthly archives. I already had the data I needed—timestamps of posts. That’s what I was already using to display sparklines. I wrote some code to loop over those timestamps and organise them by day and by hour. Then I spat out a table with days for the columns and clumps of hours for the rows.

Calendar heatmap on Dribbble

I’m using colour (well, different shades of grey) to indicate the relative amounts of activity, but I decided to use size as well. So it’s also a bubble chart.
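Stripped right down, the bucketing logic amounts to something like this (a sketch for illustration, not the actual code on my site; the three-hour clump size is an assumption):

```typescript
// Sketch: bucket post timestamps into a grid of day-of-month columns
// and clumped-hour rows, ready to be spat out as a table.
const HOURS_PER_CLUMP = 3;               // assumed clump size

function bucketByDayAndHour(timestamps: Date[]): number[][] {
  const rows = 24 / HOURS_PER_CLUMP;     // 8 rows of three-hour clumps
  const cols = 31;                       // enough columns for any month
  const counts = Array.from({ length: rows }, () =>
    new Array<number>(cols).fill(0)
  );

  for (const t of timestamps) {
    const row = Math.floor(t.getHours() / HOURS_PER_CLUMP);
    const col = t.getDate() - 1;         // days of the month are 1-indexed
    counts[row][col] += 1;
  }
  return counts;
}

// Each count then maps to a shade of grey (and a bubble size) relative
// to the busiest cell, e.g. scale = count / Math.max(...counts.flat()).
```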

It doesn’t work very elegantly on small screens: the table is clipped horizontally and can be swiped left and right. Ideally the visualisation itself would change to accommodate smaller screens.

Still, I kind of like the end result. Here’s last month’s activity on my site. Here’s the same time period ten years ago. I’ve also added month heatmaps to the monthly archives for my journal, links, and notes. They’re kind of like an expanded view of the sparklines that are shown with each month.

From one year ago, here’s the daily distribution of my journal posts, my links, and my notes.

And then here’s the daily distribution of everything in that month all together.

I realise that the data being displayed is probably only of interest to me, but then, that’s one of the perks of having your own website—you can do whatever you feel like.

Long betting

It has been exactly six years to the day since I instantiated this prediction:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

It is exactly five years to the day until the prediction condition resolves to a Boolean true or false.

If it resolves to true, The Bletchley Park Trust will receive $1000.

If it resolves to false, The Internet Archive will receive $1000.

Much as I would like Bletchley Park to get the cash, I’m hoping to lose this bet. I don’t want my pessimism about URL longevity to be rewarded.

So, to recap, the bet was placed on

02011-02-22

It is currently

02017-02-22

And the bet times out on

02022-02-22.

dConstruct 2015 podcast: Ingrid Burrington

The dConstruct podcast episodes are coming thick and fast. Hot on the heels of the inaugural episode with Matt Novak and the sophomore episode with Josh Clark comes the third in the series: the one with Ingrid Burrington.

This was a fun meeting of minds. We geeked out about the physical infrastructure of the internet and time-travel narratives, from The Terminator to The Peripheral. During the episode, I sounded the spoiler warning in case you haven’t read that book, but we didn’t actually end up giving anything away.

I really enjoyed this chat with Ingrid. I hope you’ll enjoy listening to it.

Oh, and now you can subscribe to the dConstruct 2015 podcast directly from iTunes.

And remember, as a podcast listener, you get 10% off the ticket price for dConstruct using the discount code “ansible.”

100 words 005

I enjoy a good time travel yarn. Two of the most enjoyable temporal tales of recent years have been Rian Johnson’s film Looper and William Gibson’s book The Peripheral.

Mind you, the internal time travel rules of Looper are all over the place, whereas The Peripheral is wonderfully consistent.

Both share an interesting commonality in their settings. They are set in the future and …the future: two different time periods but neither of them is the present. Both works also share the premise that the more technologically advanced future would inevitably exploit the time period further down the light cone.

Wibbly-wobbly timey-wimey stuff

I met up with Remy a few months back to try to help him finalise the line-up for this year’s Full Frontal conference. Remy puts a lot of thought into crafting a really solid line-up. He was in a good position too: the conference was already sold out so he didn’t have to worry about having a big-name speaker to put bums on seats—he could concentrate entirely on finding just the right speaker for the final talk.

He described the kind of “big picture” talk he was looking for, and I started naming some names and giving him some ideas of people to contact.

Imagine my surprise then, when—while we were both in New York for Brooklyn Beta—I received a lengthy email from Remy (pecked out on his phone), saying that he had decided who he wanted to do the closing talk at Full Frontal. He wanted me to do it.

Now, this was just a couple of weeks ago so my first thought was “No way! I don’t have enough time to prepare a talk.” It takes me quite a while to prepare a new presentation.

But then he described—in quite some detail—what he wanted me to talk about …and it’s exactly the kind of stuff that I really enjoy geeking out about: long-term thinking, digital preservation, and all that jazz. So I said yes.

That’s why I’ve spent the last couple of weeks quietly freaking out, attempting to marshal my thoughts and squeeze them into Keynote. The title of my talk is Time. Pretentious? Moi?

I’m trying to pack a lot into this presentation. I’ve already had to kill some of my darlings and drop some of the more esoteric stuff, but damn it, it’s still hard to squeeze everything in.

I’ve been immersed in research and link-making, reading and huffduffing all things time-related. In the course of my hypertravels, I discovered that there’s an entire event devoted to “the origins, evolution, and future of public time.” It’s called Time For Everyone and it’s taking place in California …at exactly the same time as Full Frontal.

Here’s the funny thing: the description for the event is exactly the same as the description I gave Remy for my talk:

This thing all things devours:
Birds, beasts, trees, flowers;
Gnaws iron, bites steel;
Grinds hard stones to meal;
Slays king, ruins town,
And beats high mountain down.

If you’re coming along to Full Frontal next Friday, I hope you’ll be in a receptive mood. I also hope that Remy won’t mind that what I’m going to present isn’t exactly what he asked for …but I think it’s interesting stuff.

I just wish I had more time.

Long time

A few years back, I was on a road trip in the States with my friend Dan. We drove through Maryland and Virginia to the sites of American Civil War battles—Gettysburg, Antietam. I was reading Tom Standage’s magnificent book The Victorian Internet at the time. When I was done with the book, I passed it on to Dan. He loved it. A few years later, he sent me a gift: a glass telegraph insulator.

Glass telegraph insulator from New York

Last week I received another gift from Dan: a telegraph key.

Telegraph key

It’s lovely. If my knowledge of basic electronics were better, I’d hook it up to an Arduino and tweet with it.

Dan came over to the UK for a visit last month. We had a lovely time wandering around Brighton and London together. At one point, we popped into the National Portrait Gallery. There was one painting he really wanted to see: the portrait of Samuel Pepys.

Pepys

“Were you reading the online Pepys diary?”, I asked.

“Oh, yes!”, he said.

“I know the guy who did that!”

The “guy who did that” is, of course, the brilliant Phil Gyford.

Phil came down to Brighton and gave a Skillswap talk all about the ten-year-long project.

The diary of Samuel Pepys: Telling a complex story online on Huffduffer

Now Phil has restarted the diary. He wrote a really great piece about what it’s like overhauling a site that has been online for a decade. Given that I spent a lot of my time last year overhauling The Session (which has been online in some form or another since the late nineties), I can relate to his perspective on trying to choose long-term technologies:

Looking ahead, how will I feel about this Django backend in ten years’ time? I’ve no idea what the state of the platform will be in a decade.

I was thinking about switching The Session over to Django, but I decided against it in the end. I figured that the pain involved in trying to retrofit an existing site (as opposed to starting a brand new project) would be too much. So the site is still written in the very uncool LAMP stack: Linux, Apache, MySQL, and PHP.

Mind you, Marco Arment makes the point in his Webstock talk that there’s a real value to using tried and tested “boring” technologies.

One area where I’ve found myself becoming increasingly wary over time is the use of third-party APIs. I say that with a heavy heart—back at dConstruct 2006 I was talking all about The Joy of API. But Yahoo, Google, Twitter …they’ve all deprecated or backtracked on their offerings to developers.

Anyway, this is something that has been on my mind a lot lately: evaluating technologies and services in terms of their long-term benefit instead of just their short-term hit. It’s something that we need to think about more as developers, and it’s certainly something that we need to think about more as users.

Compared with genuinely long-term projects like the 10,000-year Clock of the Long Now, making something long-lasting on the web shouldn’t be all that challenging. The real challenge is acknowledging that this is even an issue. As Phil puts it:

I don’t know how much individuals and companies habitually think about this. Is it possible to plan for how your online service will work over the next ten years, never mind longer?

As my Long Bet illustrates, I can be somewhat pessimistic about the longevity of our web creations:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

But I really hope I lose that bet. Maybe I’ll suggest to Matt (my challenger on the bet) that we meet up on February 22nd, 2022 at the Long Now Salon. It doesn’t exist yet. But give it time.

A question of time

Some of the guys at work occasionally provide answers to .net magazine’s “big question” feature. When they told me about the latest question that landed in their inboxes, I felt I just had to stick my oar in and provide my answer.

I’m publishing my response here, so that if they decide not to publish it in the magazine or on the website (or if they edit it down), I’ve got a public record of my stance on this very important topic.

The question is:

If you could send a message back to your younger designer or developer self, what would it say? What professional advice would you give a younger you?

This is my answer:

Rather than send a message back to my younger self, I would destroy the message-sending technology immediately. The potential for universe-ending paradoxes is too great.

I know that it would be tempting to give some sort of knowledge of the future to my younger self, but it would be the equivalent of attempting to kill Hitler—that never ends well.

Any knowledge I supplied to my past self would cause my past self to behave differently, thereby either:

  1. destroying the timeline that my present self inhabits (assuming a branching many-worlds multiverse) or
  2. altering my present self, possibly to the extent that the message-sending technology never gets invented. Instant paradox.

But to answer your question, if I could send a message back to a younger designer or developer self, the professional advice I would give would be:

Jeremy,

When, at some point in the future, you come across the technology capable of sending a message like this back to your past self, destroy it immediately!

But I know that you will not heed this advice. If you did, you wouldn’t be reading this.

On the other hand, I have no memory of ever receiving this message, so perhaps you did the right thing after all.

Jeremy

Of Time and the Network and the Long Bet

When I went to Webstock, I prepared a new presentation called Of Time And The Network:

Our perception and measurement of time has changed as our civilisation has evolved. That change has been driven by networks, from trade routes to the internet.

I was pretty happy with how it turned out. It was a 40 minute talk that was pretty evenly split between the past and the future. The first 20 minutes spanned from 5,000 years ago to the present day. The second 20 minutes looked towards the future, first in years, then decades, and eventually in millennia. I was channeling my inner James Burke for the first half and my inner Jason Scott for the second half, when I went off on a digital preservation rant.

You can watch the video and I had the talk transcribed so you can read the whole thing.

It’s also on Huffduffer, if you’d rather listen to it.

Adactio: Articles—Of Time And The Network on Huffduffer

Webstock: Jeremy Keith

During the talk, I pointed to my prediction on the Long Bets site:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

I made the prediction on February 22nd last year (a terrible day for New Zealand). The prediction will reach fruition on 02022-02-22 …I quite like the alliteration of that date.

Here’s how I justified the prediction:

“Cool URIs don’t change” wrote Tim Berners-Lee in 01999, but link rot is the entropy of the web. The probability of a web document surviving in its original location decreases greatly over time. I suspect that even a relatively short time period (eleven years) is too long for a resource to survive.

Well, during his excellent Webstock talk Matt announced that he would accept the challenge. He writes:

Though much of the web is ephemeral in nature, now that we have surpassed the 20 year mark since the web was created and gone through several booms and busts, technology and strategies have matured to the point where keeping a site going with a stable URI system is within reach of anyone with moderate technological knowledge.

The prediction has now officially been added to the list of bets.

We’re playing for $1000. If I win, that money goes to the Bletchley Park Trust. If Matt wins, it goes to The Internet Archive.

The sysadmin for the Long Bets site is watching this bet with great interest. I am, of course, treating this bet in much the same way that Paul Gilster is treating this optimistic prediction about interstellar travel: I would love to be proved wrong.

The detailed terms of the bet have been set as follows:

On February 22nd, 2022 from 00:01 UTC until 23:59 UTC,
entering the characters http://www.longbets.org/601 into the address bar of a web browser or command line tool (like curl)
OR
using a web browser to follow a hyperlink that points to http://www.longbets.org/601
MUST
return an HTML document that still contains the following text:
“The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.”

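Just for fun, here’s roughly how the check could be automated on the day (a sketch using fetch; the URL and the quoted text come straight from the terms above, everything else is illustrative):

```typescript
// Resolve the prediction: does the original URL still return a document
// containing the original prediction text?
const PREDICTION_URL = "http://www.longbets.org/601";
const PREDICTION_TEXT =
  "The original URL for this prediction (www.longbets.org/601) " +
  "will no longer be available in eleven years.";

async function urlStillAvailable(): Promise<boolean> {
  try {
    const response = await fetch(PREDICTION_URL);
    if (!response.ok) {
      return false;                      // no document came back
    }
    const html = await response.text();
    return html.includes(PREDICTION_TEXT);
  } catch {
    return false;                        // the request failed outright
  }
}

// true  => the URL survived, the prediction resolves false, and I happily lose.
// false => link rot wins and Bletchley Park gets the money.
```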
The suspense is killing me!

Timeless

When I first heard that Hixie had removed all traces of the time element from the ongoing HTML spec, my knee-jerk reaction was “This is a really bad idea!” But I decided not to jump in without first evaluating the arguments for and against the element’s removal. That’s what I’ve been doing over the past week and my considered response is:

This is a really bad idea!

The process by which the element was removed is quite disturbing:

  1. Hixie (as a contributor) opens a bug proposing that the time element be replaced with the more general data element.
  2. Lots of people respond, almost unanimously pointing out the problems with that proposal.
  3. Hixie (as the editor) goes ahead and does exactly what he wanted anyway.

Technically that’s exactly how the WHATWG process works. The editor does whatever he wants:

This is not a consensus-based approach — there’s no guarantee that everyone will be happy! There is also no voting.

Most of the time, this works pretty well. It might not be fair but it seems to work more efficiently than the W3C’s consensus-based approach. But in this case the editor’s unilateral decision is fundamentally at odds with the most important HTML design principle, the priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors; which in turn should be given more weight than costs to implementors; which should be given more weight than costs to authors of the spec itself, which should be given more weight than those proposing changes for theoretical reasons alone.

The specifier (Hixie) is riding roughshod over the concerns of authors.

I’m particularly concerned by the uncharacteristically muddy thinking behind Hixie’s decision. There are two separate issues here:

  1. Is the time element useful?
  2. Do we need a more general data element?

Hixie conflates these two questions.

He begins by bizarrely making the claim that time hasn’t had much uptake. This is demonstrably false. It’s already shipping in Drupal builds and WordPress templates as well as having parser support in services and at least one browser. If anything, time is one of the more commonly used and understood elements in HTML5.

There’s a very good reason for that: it fulfils a need that authors have had for a long time—the ability to make a timestamp that is human and machine-readable at the same time. That’s the use case that the microformats community has been trying to solve with the abbr design pattern and the value-class pattern. Those solutions are okay, but not nearly as elegant and intuitive as having a dedicated time element.

Crucially the time element didn’t just specify the mechanism for encoding a machine-readable timestamp, it also defined the format, namely a subset of ISO 8601. That’s exactly the kind of unambiguous documentation that makes a specification useful.
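To make that concrete, here’s the kind of thing the element enables (the markup, the date, and the script are illustrative examples of mine, not anything from the spec):

```typescript
// Human-readable on the surface, machine-readable underneath:
//   <p>Published <time datetime="2011-11-02T20:00Z">last Wednesday evening</time>.</p>
// A user agent or parser can read the unambiguous ISO 8601 value without
// having to guess at the human-readable phrasing around it.
for (const el of Array.from(
  document.querySelectorAll<HTMLTimeElement>("time[datetime]")
)) {
  const when = new Date(el.dateTime);      // reflects the datetime attribute
  if (!Number.isNaN(when.getTime())) {
    // e.g. offer to add the event or birthday to the user's calendar
    console.log(`${el.textContent} → ${when.toISOString()}`);
  }
}
```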

Hixie correctly points out that there are cases for human and machine-readable data other than dates and times. He incorrectly jumps to the conclusion that the time element is therefore a failure.

I think he’s right that there probably should be a dedicated element for marking up this kind of data. We already have the meta element but the fact that it’s a standalone element makes it tricky to explicitly associate it with the human-readable text. So the introduction of a new data element may very well turn out to be a good idea. But it does not need to be introduced at the expense of the more specific time element.

I think there’s a comparison to be made with sectioning content. We’ve got the generic section element but then we also have the more specific nav, aside and article elements that can be thought of as specialised forms of section. By Hixie’s logic, there’s no reason to have nav, aside and article when a section element has the same effect on the outlining algorithm. But we have those elements because they cover very common use cases.

Hixie’s reductio ad absurdum argument is that if there is a special element for timestamps, we should also have an element for every other possible piece of machine-readable data. If that’s the case, we should also have an infinite number of sectioning content elements: blogpost, comment, chapter, etc. Instead we have just the four elements—section, article, aside and nav—because they represent the most common use cases …perhaps 80%?

The use case that the time element satisfies (human and machine-readable timestamps) is very, very common. The actual mechanism may vary (the time element itself, the abbr pattern, etc.) but the cow path is very much in need of paving.

There’s also a disturbingly Boolean trait to Hixie’s logic. He lumps all machine-readable data into the same bucket. If he had paid attention to Tantek’s research on abstracting microformat vocabularies, he would have seen that it’s a bit more fine-grained than that. There’s a difference between straightforward name/value pairs (like fn or summary), URL values (like photo) and timestamps (like dtstart). The thing that distinguishes timestamps is that they have an existing predefined machine-readable format.

To strengthen his position Hixie introduced two strawmen into the discussion, claiming that the time element was intended to allow easier styling of dates and times with CSS and to also allow conversion of HTML documents into Atom. Those use cases are completely tangential to the fundamental reason for the existence of the time element. Hixie seems to have forgotten that he himself once had it enshrined in the spec that the purpose of the element was to allow users to add events to their calendars. I pointed out that this was an example usage of the more general pattern of machine-readable dates and times so the text of the spec was updated accordingly:

This element is intended as a way to encode modern dates and times in a machine-readable way so that, for example, user agents can offer to add birthday reminders or scheduled events to the user’s calendar.

No mention of styling. No mention of converting documents to Atom.

I’m not the only one who is perplexed by Hixie’s bullheaded behaviour. Steve Faulkner has entered a revert request on the W3C side of things (he’s a braver man than me: the byzantine W3C process scares me off). Ben summed up the situation nicely on Twitter. You can find plenty of other reactions on Twitter by searching for the hashtag #occupyhtml5. Bruce has written down his thoughts and the follow-on comments are worth reading. This reaction from Stephanie is both heartbreaking and completely understandable:

I have been pretty frustrated by this change. In fact, when I read the entire thread on the debacle, it nearly made me want to give up on teaching, and using HTML5. What’s the point when there’s really no discussion? A single person brings something up. Great arguments are made. And then he just does what he originally wanted to do anyway.

I’d like to think that a concerted campaign could sway Hixie but I don’t hold out much hope. Usually there’s only one way to get through to him and that’s by presenting data. Rightly so. In this case however, Hixie is ignoring the data completely. He’s also wilfully violating the fundamental design principles behind HTML5.

So what can we do? Well, just as with the incorrectly-defined semantics of the cite element we can make a stand and simply carry on using the time element in our web pages. If we do, then we’ll see more parsers and browsers implementing support for the time element. The fact that our documentation has been ripped away makes this trickier but it’s such a demonstrably useful addition to HTML that we cannot afford to throw it away based on the faulty logic of one person.

Hixie once said:

The reality is that the browser vendors have the ultimate veto on everything in the spec, since if they don’t implement it, the spec is nothing but a work of fiction. So they have a lot of influence—I don’t want to be writing fiction, I want to be writing a spec that documents the actual behaviour of browsers.

Keep using the time element.