Journal tags: text

Backup

I’m standing on a huge stage in a giant hangar-like room already filled with at least a thousand people. More are arriving. I’m due to start speaking in a few minutes. But there’s a problem with my laptop. It connects to the external screen, then disconnects, then connects, then disconnects. The technicians are on the stage with me, quickly swapping out adaptors and cables as they try to figure out a fix.

This is a pretty standard stress dream for me. Except this wasn’t a dream. This was happening for real at the giant We Are Developers World Congress in Berlin last week.

In the run-up to the event, the organisers had sent out emails about providing my slide deck ahead of time so it could go on a shared machine. I understand why this makes life easier for the people running the event, but it can be a red flag for speakers. It’s never quite the same as presenting from your own laptop with its familiar layout of the presentation display in Keynote.

Fortunately, the organisers also said that I could present from my own laptop if I wanted to, so that’s what I opted for.

One week before the talk in Berlin I was in Amsterdam for CSS Day. During a break between talks I was catching up with Michelle. We ended up swapping conference horror stories around technical issues (prompted by some of our fellow speakers having issues with Keynote on the brand new M1 laptops).

Michelle told me about a situation where she was supposed to be presenting from her own laptop, but because of last-minute technical issues, all the talks were being transferred to a single computer via USB sticks.

“But the fonts!” I said. “Yes”, Michelle responded. Even though she had put the fonts on the USB stick, things got muddled in the rush. If you open the Keynote file before installing the fonts, Keynote performs font substitution and then it’s too late. That’s exactly what happened to Michelle’s slides: her code examples got mangled.

“You know”, I said, “I was thinking about having a back-up version of my talks that’s made entirely out of images—export every slide as an image, then make a new deck by importing all those images.”

“I’ve done that”, said Michelle. “But there isn’t a quick way to do it.”

I was still thinking about our conversation when I was on the Eurostar train back to England. I had plenty of time to kill with spotty internet connectivity. And that huge Berlin event was less than a week away.

I opened up the Keynote file of the Berlin presentation. I selected File, Export to, Images.

Then I created a new blank deck ready for the painstaking work that Michelle had warned me about. I figured I’d have to drag in each image individually. The presentation had 89 slides.

But I thought it was worth trying a shortcut first. I selected all of the images in Finder. Then I dragged them over to the far left column in Keynote, the one that shows the thumbnails of all the slides.

It worked!

Each image was now its own slide. I selected all 89 slides and applied my standard transition: a one second dissolve.

That was pretty much it. I now had a version of my talk that had no fonts whatsoever.

If you’re going to try this, it works best if you don’t have too many transitions within slides. Like, let’s say you’ve got three words that you introduce—by clicking—one by one. You could have one slide with all three words, each one with its own build effect. But the other option is to have three slides: each one like the previous slide but with one more word added. If you use that second technique, then the exporting and importing will work smoothly.

Oh, and if you have lots and lots of notes, you’ll have to manually copy them over. My notes tend to be fairly minimal—a few prompts and the occasional time check (notes that say “5 minutes” or “10 minutes” so I can gauge how my pacing is going).

Back to that stage in Berlin. The clock is ticking. My laptop is misbehaving.

One of the other speakers who will be on later in the day was hoping to test his laptop too. It’s Håkon. His presentation includes in-browser demos that won’t work on a shared machine. But he doesn’t get a chance to test his laptop just yet—my little emergency has taken precedence.

“Luckily”, I tell him, “I’ve got a backup of my presentation that’s just images to avoid any font issues.” He points out the irony: we spent years battling against the practice of text-as-images on the web and now here we are using that technique once again.

My laptop continues to misbehave. It connects, it disconnects, connects, disconnects. We’re going to have to run the presentation from the house machine. I’m handed a USB stick. I put my images-only version of the talk on there. I’m handed a clicker (I can’t use my own clicker with the house machine). I’m quickly ushered backstage while the MC announces my talk, a few minutes behind schedule.

It works. It feels a little strange not being able to look at my own laptop, but the on-stage monitors have the presentation display including my notes. The unfamiliar clicker feels awkward but hopefully nobody notices. I deliver my talk and it seems to go over well.

I think I’ll be making image-only versions of all my talks from now on. Hopefully I won’t ever need them, but just knowing that the backup is there is reassuring.

Mind you, if you’re the kind of person who likes to fiddle with your slides right up until the moment of presenting, then this technique won’t be very useful for you. But for me, not being able to fiddle with my slides after a certain point is a feature, not a bug.

When should there be a declarative version of a JavaScript API?

I feel like it’s high time I revived some interest in my proposal for button type="share". Last I left it, I was gathering use cases and they seem to suggest that the most common use case for the Web Share API is sharing the URL of the current page.

If you want to catch up on the history of this proposal, here’s what I’ve previously written:

Remember, my proposal isn’t to replace the JavaScript API, it’s to complement it with a declarative option. The declarative option doesn’t need to be as fully featured as the JavaScript API, but it should be able to cover the majority use case. I think this should hold true of most APIs.

A good example is the Constraint Validation API. For the most common use cases, the required attribute and input types like “email”, “url”, and “number” have you covered. If you need more power, reach for the JavaScript API.
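To make that concrete, here’s a rough sketch of both routes—the declarative attributes for the common case, and the JavaScript API when you need something custom (the id and the error message here are just examples, not anything from the spec):

<input type="email" required id="work-email">

var input = document.getElementById('work-email');
input.addEventListener('input', function () {
  if (input.validity.typeMismatch) {
    // Override the browser's default message with a custom one.
    input.setCustomValidity('Please enter a valid email address.');
  } else {
    // Clear the custom message so the field can become valid again.
    input.setCustomValidity('');
  }
});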

A bad example is the Geolocation API. The most common use case is getting the user’s current location. But there’s no input type="geolocation" (or button type="geolocation"). Your only choice is to use JavaScript. It feels heavy-handed.
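For comparison, here’s a sketch of the boilerplate you’re obliged to write for even the simplest “where am I?” case—there’s nothing declarative to fall back on:

// The JavaScript-only route: no input or button type exists for this.
if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(function (position) {
    console.log(position.coords.latitude, position.coords.longitude);
  }, function (error) {
    console.error('Could not get a location: ' + error.message);
  });
}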

I recently got an email from Taylor Hunt who has come up with a good litmus test for JavaScript APIs that should have a complementary declarative option:

I’ve been thinking about how a lot of recently-proposed APIs end up having to deal with what Chrome devrel’s been calling the “user gesture/activation budget”, and wondering if that’s a good indicator of when something should have been HTML in the first place.

I think he’s onto something here!

Think about any API that requires a user gesture. Often the documentation or demo literally shows you how to generate a button in JavaScript in order to add an event handler to it in order to use the API. Surely that’s an indication that a new button type could be minted?

The Web Share API is a classic example. You can’t invoke the API after an event like the page loading. You have to invoke the API after a user-initiated event like, oh, I don’t know …clicking on a button!
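Here’s roughly what that looks like in practice—a sketch where the button (and its class name) is something you have to provide yourself:

// You supply the button; the .share class name here is made up.
var button = document.querySelector('button.share');
if (button) {
  button.addEventListener('click', function () {
    if (navigator.share) {
      navigator.share({
        title: document.title,
        url: document.location.href
      });
    }
  });
}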

The Fullscreen API has the same restriction. You can’t make the browser go fullscreen unless you’re responding to user gesture, like a click. So why not have button type="fullscreen" in HTML to encapsulate that? And again, the fallback in non-supporting browsers is predictable—it behaves like a regular button—so this is trivial to polyfill. I should probably whip up a polyfill to demonstrate this.
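Something along these lines might do it—a rough sketch rather than a battle-tested polyfill, and button type="fullscreen" is of course the hypothetical value being proposed here:

// In non-supporting browsers the button already falls back to behaving
// like a regular button; this just layers the fullscreen behaviour on top.
document.addEventListener('click', function (event) {
  if (!event.target.closest) return;
  var button = event.target.closest('button[type="fullscreen"]');
  if (button && document.fullscreenEnabled && !document.fullscreenElement) {
    document.documentElement.requestFullscreen();
  }
});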

I can’t find a list of all the JavaScript APIs that require a user gesture, but I know there’s more that I’m just not thinking of. I’d love to see if they’d all fit this pattern of being candidates for a new button type value.

The only potential flaw in this thinking is that some APIs that require a user gesture might also require a secure context (either being served over HTTPS or localhost). But as far as I know, HTML has never had the concept of features being restricted by context. An element is either supported or it isn’t.

That said, there is some prior art here. If you use input type="password" in a non-secure context—like a page being served over HTTP—the browser updates the interface to provide scary warnings. Perhaps browsers could do something similar for any new button types that complement secure-context JavaScript APIs.

Prediction

Arthur C. Clarke once said:

Trying to predict the future is a discouraging and hazardous occupation because the prophet invariably falls between two stools. If his predictions sound at all reasonable, you can be quite sure that in 20 or at most 50 years, the progress of science and technology has made him seem ridiculously conservative. On the other hand, if by some miracle a prophet could describe the future exactly as it was going to take place, his predictions would sound so absurd, so far-fetched, that everybody would laugh him to scorn.

But I couldn’t resist responding to a recent request for augury. Eric asked An Event Apart speakers for their predictions for the coming year. The responses have been gathered together and published, although it’s in the form of a PDF for some reason.

Here’s what I wrote:

This is probably more of a hope than a prediction, but 2021 could be the year that the ponzi scheme of online tracking and surveillance begins to crumble. People are beginning to realize that it’s far too intrusive, that it just doesn’t work most of the time, and that good ol’-fashioned contextual advertising would be better. Right now, it feels similar to the moment before the sub-prime mortgage bubble collapsed (a comparison made in Tim Hwang’s recent book, Subprime Attention Crisis). Back then people thought “Well, these big banks must know what they’re doing,” just as people have thought, “Well, Facebook and Google must know what they’re doing”…but that confidence is crumbling, exposing the shaky stack of cards that props up behavioral advertising. This doesn’t mean that online advertising is coming to an end—far from it. I think we might see a golden age of relevant, content-driven advertising. Laws like Europe’s GDPR will play a part. Apple’s recent changes to highlight privacy-violating apps will play a part. Most of all, I think that people will play a part. They will be increasingly aware that there’s nothing inevitable about tracking and surveillance and that the web works better when it respects people’s right to privacy. The sea change might not happen in 2021 but it feels like the water is beginning to swell.

Still, predicting the future is a mug’s game with as much scientific rigour as astrology, reading tea leaves, or haruspicy.

Much like behavioural advertising.

Authentication

Two-factor authentication is generally considered A Good Thing™️ when you’re logging in to some online service.

The word “factor” here basically means “kind” so you’re doing two kinds of authentication. Typical factors are:

  • Something you know (like a password),
  • Something you have (like a phone or a USB key),
  • Something you are (biometric Black Mirror shit).

Asking for a password and an email address isn’t two-factor authentication. They’re two pieces of identification, but they’re the same kind (something you know). Same goes for supplying your fingerprint and your face: two pieces of information, but of the same kind (something you are).

None of these kinds of authentication are foolproof. All of them can change. All of them can be spoofed. But when you combine factors, it gets a lot harder for an attacker to breach both kinds of authentication.

The most common kind of authentication on the web is password-based (something you know). When a second factor is added, it’s often connected to your phone (something you have).

Every security bod I’ve talked to recommends using an authenticator app for this if that option is available. Otherwise there’s SMS—short message service, or text message to most folks—but SMS has a weakness. Because it’s tied to a phone number, technically you’re only proving that you have access to a SIM (subscriber identity module), not a specific phone. In the US in particular, it’s all too easy for an attacker to use social engineering to get a number transferred to a different SIM card.

Still, authenticating with SMS is an option as a second factor of authentication. When you first sign up to a service, as well as providing the first-factor details (a password and a username or email address), you also verify your phone number. Then when you subsequently attempt to log in, you input your password and on the next screen you’re told to input a string that’s been sent by text message to your phone number (I say “string” but it’s usually a string of numbers).

There’s an inevitable friction for the user here. But then, there’s a fundamental tension between security and user experience.

In the world of security, vigilance is the watchword. Users need to be aware of their surroundings. Is this web page being served from the right domain? Is this email coming from the right address? Friction is an ally.

But in the world of user experience, the opposite is true. “Don’t make me think” is the rallying cry. Friction is an enemy.

With SMS authentication, the user has to manually copy the numbers from the text message (received in a messaging app) into a form on a website (in a different app—a web browser). But if the messaging app and the browser are on the same device, it’s possible to improve the user experience without sacrificing security.

If you’re building a form that accepts a passcode sent via SMS, you can use the autocomplete attribute with a value of “one-time-code”. For a six-digit passcode, your input element might look something like this:

<input type="text" maxlength="6" inputmode="numeric" autocomplete="one-time-code">

With one small addition to one HTML element, you’ve saved users some tedious drudgery.

There’s one more thing you can do to improve security, but it’s not something you add to the HTML. It’s something you add to the text message itself.

Let’s say your website is example.com and the text message you send reads:

Your one-time passcode is 123456.

Add this to the end of the text message:

@example.com #123456

So the full message reads:

Your one-time passcode is 123456.

@example.com #123456

The first line is for humans. The second line is for machines. Using the @ symbol, you’re telling the device to only pre-fill the passcode for URLs on the domain example.com. Using the # symbol, you’re telling the device the value of the passcode. Combine this with autocomplete="one-time-code" in your form and the user shouldn’t have to lift a finger.

I’m fascinated by these kinds of emergent conventions in text messages. Remember that the @ symbol and # symbol in Twitter messages weren’t ideas from Twitter—they were conventions that users started and the service then adopted.

It’s a bit different with the one-time code convention as there is a specification brewing from representatives of both Google and Apple.

Tess is leading from the Apple side and she’s got another iron in the fire to make security and user experience play nicely together using the convention of the /.well-known directory on web servers.

You can add a URL for /.well-known/change-password which redirects to the form a user would use to update their password. Browsers and password managers can then use this information if they need to prompt a user to update their password after a breach. I’ve added this to The Session.
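The server-side part is nothing more than a redirect. Here’s a minimal sketch using Node (whatever your stack, the principle is the same; the /account/password path is made up for illustration):

const http = require('http');

http.createServer(function (request, response) {
  if (request.url === '/.well-known/change-password') {
    // Send browsers and password managers to the real password form.
    response.writeHead(302, { Location: '/account/password' });
    response.end();
    return;
  }
  // …the rest of your routing goes here
  response.end();
}).listen(8080);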

Oh, and on that page where users can update their password, the autocomplete attribute is your friend again:

<input type="password" autocomplete="new-password">

If you want them to enter their current password first, use this:

<input type="password" autocomplete="current-password">

All of the things I’ve mentioned—the autocomplete attribute, origin-bound one-time codes in text messages, and a well-known URL for changing passwords—have good browser support. But even if they were only supported in one browser, they’d still be worth adding. These additions do absolutely no harm to browsers that don’t yet support them. That’s progressive enhancement.

Clean advertising

Imagine if you were told that fossil fuels were the only way of extracting energy. It would be an absurd claim. Not only are other energy sources available—solar, wind, geothermal, nuclear—fossil fuels aren’t even the most efficient source of energy. To say that you can’t have energy without burning fossil fuels would be pitifully incorrect.

And yet when it comes to online advertising, we seem to have meekly accepted that you can’t have effective advertising without invasive tracking. But nothing could be further from the truth. Invasive tracking is to online advertising as fossil fuels are to energy production—an outmoded inefficient means of getting substandard results.

Before the onslaught of third party cookies and scripts, online advertising was contextual. If I searched for property insurance, I was likely to see an advertisement for property insurance. If I was reading an article about pet food, I was likely to be served an advertisement for pet food.

Simply put, contextual advertising ensured that the advertising that accompanied content could be relevant and timely. There was no big mystery about it: advertisers just needed to know what the content was about and they could serve up the appropriate advertisement. Nice and straightforward.

Too straightforward.

What if, instead of matching the advertisement to the content, we could match the advertisement to the person? Regardless of what they were searching for or reading, they’d be served advertisements that were relevant to them not just in that moment, but relevant to their lifestyles, thoughts and beliefs? Of course that would require building up dossiers of information about each person so that their profiles could be targeted and constantly updated. That’s where cross-site tracking comes in, with third-party cookies and scripts.

This is behavioural advertising. It has all but eliminated contextual advertising. It has become so pervasive that online advertising and behavioural advertising have become synonymous. Contextual advertising is seen as laughably primitive compared with the clairvoyant powers of behavioural advertising.

But there’s a problem with behavioural advertising. A big problem.

It doesn’t work.

First of all, it relies on mind-reading powers by the advertising brokers—Facebook, Google, and the other middlemen of ad tech. For all the apocryphal folk tales of spooky second-guessing in online advertising, it mostly remains rubbish.

Forget privacy: you’re terrible at targeting anyway:

None of this works. They are still trying to sell me car insurance for my subway ride.

Have you actually paid attention to what advertisements you’re served? Maciej did:

I saw a lot of ads for GEICO, a brand of car insurance that I already own.

I saw multiple ads for Red Lobster, a seafood restaurant chain in America. Red Lobster doesn’t have any branches in San Francisco, where I live.

Finally, I saw a ton of ads for Zipcar, which is a car sharing service. These really pissed me off, not because I have a problem with Zipcar, but because they showed me the algorithm wasn’t even trying. It’s one thing to get the targeting wrong, but the ad engine can’t even decide if I have a car or not! You just showed me five ads for car insurance.

And yet in the twisted logic of ad tech, all of this would be seen as evidence that they need to gather even more data with even more invasive tracking and surveillance.

It turns out that bizarre logic is at the very heart of behavioural advertising. I highly recommend reading the in-depth report from The Correspondent called The new dot com bubble is here: it’s called online advertising:

It’s about a market of a quarter of a trillion dollars governed by irrationality.

The benchmarks that advertising companies use – intended to measure the number of clicks, sales and downloads that occur after an ad is viewed – are fundamentally misleading. None of these benchmarks distinguish between the selection effect (clicks, purchases and downloads that are happening anyway) and the advertising effect (clicks, purchases and downloads that would not have happened without ads).

Suppose someone told you that they keep tigers out of their garden by turning on their kitchen light every evening. You might think their logic is flawed, but they’ve been turning on the kitchen light every evening for years and there hasn’t been a single tiger in the garden the whole time. That’s the logic used by ad tech companies to justify trackers.

Tracker-driven behavioural advertising is bad for users. The advertisements are irrelevant most of the time, and on the few occasions where the advertising hits the mark, it just feels creepy.

Tracker-driven behavioural advertising is bad for advertisers. They spend their hard-earned money on invasive ad tech that results in no more sales or brand recognition than if they had relied on good ol’ contextual advertising.

Tracker-driven behavioural advertising is very bad for the web. Megabytes of third-party JavaScript are injected at exactly the wrong moment to make for the worst possible performance. And if that doesn’t ruin the user experience enough, there are still invasive overlays and consent forms to click through (which, ironically, gets people mad at the legislation—like GDPR—instead of the underlying reason for these annoying overlays: unnecessary surveillance and tracking by the site you’re visiting).

Tracker-driven behavioural advertising is good for the middlemen doing the tracking. Facebook and Google are two of the biggest players here. But that doesn’t mean that their business models need to be permanently anchored to surveillance. The very monopolies that make them kings of behavioural advertising—the biggest social network and the biggest search engine—would also make them titans of contextual advertising. They could pivot from an invasive behavioural model of advertising to a privacy-respecting contextual advertising model.

The incumbents will almost certainly resist changing something so fundamental. It would be like expecting an energy company to change their focus from fossil fuels to renewables. It won’t happen quickly. But I think that it may eventually happen …if we demand it.

In the meantime, we can all play our part. Just as we can do our bit for the environment at an individual level by sorting our recycling and making green choices in our day to day lives, we can all do our bit for the web too.

The least we can do is block third-party cookies. Some browsers are now doing this by default. That’s good.

Blocking third-party JavaScript is a bit trickier. That requires a browser extension. Most of these extensions to block third-party tracking are called ad blockers. That’s a shame. The issue is not with advertising. The issue is with tracking.

Alas, because this software is labelled under ad blocking, it has led to the ludicrous situation of an ethical argument being made to allow surveillance and tracking! It goes like this: websites need advertising to survive; if you block the ads, then you are denying these sites revenue. That argument would make sense if we were talking about contextual advertising. But it makes no sense when it comes to behavioural advertising …unless you genuinely believe that online advertising has to be behavioural, which means that online advertising has to track you to be effective. Such a belief would be completely wrong. But that doesn’t stop it being widely held.

To argue that there is a moral argument against blocking trackers is ridiculous. If anything, there’s a moral argument to be made for installing anti-tracking software for yourself, your friends, and your family. Otherwise we are collectively giving up our privacy for a business model that doesn’t even work.

It’s a shame that advertisers will lose out if tracking-blocking software prevents their ads from loading. But that’s only going to happen in the case of behavioural advertising. Contextual advertising won’t be blocked. Contextual advertising is also more lightweight than behavioural advertising. Contextual advertising is far less creepy than behavioural advertising. And crucially, contextual advertising works.

That shouldn’t be a controversial claim: the idea that people would be interested in adverts that are related to the content they’re currently looking at. The greatest trick the ad tech industry has pulled is convincing the world that contextual relevance is somehow less effective than some secret algorithm fed with all our data that’s supposed to be able to practically read our minds and know us better than we know ourselves.

Y’know, if this mind-control ray really could give me timely relevant adverts, I might possibly consider paying the price with my privacy. But as it is, YouTube still hasn’t figured out that I’m not interested in Top Gear or football.

The next time someone is talking about the necessity of advertising on the web as a business model, ask for details. Do they mean contextual or behavioural advertising? They’ll probably laugh at you and say that behavioural advertising is the only thing that works. They’ll be wrong.

I know it’s hard to imagine a future without tracker-driven behavioural advertising. But there are no good business reasons for it to continue. It was once hard to imagine a future without oil or coal. But through collective action, legislation, and smart business decisions, we can make a cleaner future.

Saving forms

I added a long-overdue enhancement to The Session recently. Here’s the scenario…

You’re on a web page with a comment form. You type your well-considered thoughts into a textarea field. But then something happens. Maybe you accidentally navigate away from the page or maybe your network connection goes down right when you try to submit the form.

This is a textbook case for storing data locally on the user’s device …at least until it has safely been transmitted to the server. So that’s what I set about doing.

My first decision was choosing how to store the data locally. There are multiple APIs available: sessionStorage, IndexedDB, localStorage. It was clear that sessionStorage wasn’t right for this particular use case: I needed the data to be saved across browser sessions. So it was down to IndexedDB or localStorage. IndexedDB is the more versatile and powerful—because it’s asynchronous—but localStorage is nice and straightforward so I decided on that. I’m not sure if that was the right decision though.

Alright, so I’m going to store the contents of a form in localStorage. It accepts key/value pairs. I’ll make the key the current URL. The value will be the contents of that textarea. I can store other form fields too. localStorage technically only stores a single string per key, but that string can be a JSON-encoded object, so in reality you can store multiple values with one key (just remember to stringify the object going in and parse it coming back out).
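If you haven’t done that JSON round trip before, it looks something like this (the field names are purely illustrative):

var textarea = document.querySelector('textarea');
var key = window.location.href;

// Storing: one key, multiple values, serialised as a single string.
localStorage.setItem(key, JSON.stringify({
  comment: textarea.value,
  updated: Date.now()
}));

// Retrieving: parse the string back into an object.
var saved = JSON.parse(localStorage.getItem(key));
console.log(saved.comment);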

Now I know what I’m going to store (the textarea contents) and how I’m going to store it (localStorage). The next question is when should I do it?

I could play it safe and store the comment whenever the user presses a key within the textarea. But that seems like overkill. It would be more efficient to only save when the user leaves the current page for any reason.

Alright then, I’ll use the unload event. No! Bad Jeremy! If I use that then the browser can’t reliably add the current page to the cache it uses for faster back-forwards navigations. The page life cycle is complicated.

So beforeunload then? Well, maybe. But modern browsers also support a pagehide event that looks like a better option.

In either case, just adding a listener for the event could screw up the caching of the page for back-forwards navigations. I should only listen for the event if I know that I need to store the contents of the textarea. And in order to know if the user has interacted with the textarea, I’m back to listening for key presses again.

But wait a minute! I don’t have to listen for every key press. If the user has typed anything, that’s enough for me. I only need to listen for the first key press in the textarea.

Handily, addEventListener accepts an object of options. One of those options is called “once”. If I set that to true, then the event listener is only fired once.

So I set up a cascade of event listeners. If the user types anything into the textarea, that fires an event listener (just once) that then adds the event listener for when the page is unloaded—and that’s when the textarea contents are put into localStorage.

I’ve abstracted my code into a gist. Here’s what it does:

  1. Cut the mustard. If this browser doesn’t support localStorage, bail out.
  2. Set the localStorage key to be the current URL.
  3. If there’s already an entry for the current URL, update the textarea with the value in localStorage.
  4. Write a function to store the contents of the textarea in localStorage but don’t call the function yet.
  5. The first time that a key is pressed inside the textarea, start listening for the page being unloaded.
  6. When the page is being unloaded, invoke that function that stores the contents of the textarea in localStorage.
  7. When the form is submitted, remove the entry in localStorage for the current URL.

That last step isn’t something I’m doing on The Session. Instead I’m relying on getting something back from the server to indicate that the form was successfully submitted. If you can do something like that, I’d recommend that instead of listening to the form submission event. After all, something could still go wrong between the form being submitted and the data being received by the server.
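If you’d rather see the shape of it than click through to the gist, here’s a rough sketch of those steps (including that last one)—not the exact code I’m running, and it assumes a page with a single form containing a single textarea:

(function () {
  // 1. Cut the mustard.
  if (!window.localStorage) return;

  var form = document.querySelector('form');
  var textarea = form.querySelector('textarea');

  // 2. The key is the current URL.
  var key = window.location.href;

  // 3. Restore any previously saved value.
  var saved = localStorage.getItem(key);
  if (saved) {
    textarea.value = saved;
  }

  // 4. A function to store the contents (not called yet).
  function storeContents() {
    localStorage.setItem(key, textarea.value);
  }

  // 5. On the first key press, start listening for the page being hidden…
  textarea.addEventListener('keydown', function () {
    // 6. …and store the contents when that happens.
    window.addEventListener('pagehide', storeContents);
  }, { once: true });

  // 7. On submission, clear the stored entry.
  form.addEventListener('submit', function () {
    localStorage.removeItem(key);
  });
})();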

Still, this bit of code is better than nothing. Remember, it’s intended as an enhancement. You should be able to drop it into any project and improve the user experience a little bit. Ideally, no one will ever notice it’s there—it’s the kind of enhancement that only kicks in when something goes wrong. A little smidgen of resilient web design. A defensive enhancement.

Connections

Fourteen years ago, I gave a talk at the Reboot conference in Copenhagen. It was called In Praise of the Hyperlink. For the most part, it was a gushing love letter to hypertext, but it also included this observation:

For a conspiracy theorist, there can be no better tool than a piece of technology that allows you to arbitrarily connect information. That tool now exists. It’s called the World Wide Web and it was invented by Sir Tim Berners-Lee.

You know those “crazy walls” that are such a common trope in TV shows and movies? The detectives enter the lair of the unhinged villain and discover an overwhelming wall that’s like looking at the inside of that person’s head. It’s not the stuff itself that’s unnerving; it’s the red thread that connects the stuff.

Red thread. Blue hyperlinks.

When I spoke about the World Wide Web, hypertext, apophenia, and conspiracy theorists back in 2006, conspiracy theories could still be considered mostly harmless. It was the domain of Dan Brown potboilers and UFO enthusiasts with posters on their walls declaring “I Want To Believe”. But even back then, 9/11 truthers were demonstrating a darker side to the fun and games.

There’s always been a gamification angle to conspiracy theories. Players are rewarded with the same dopamine hits for “doing the research” and connecting unrelated topics. Now that’s been weaponised into QAnon.

In his newsletter, Dan Hon wrote QAnon looks like an alternate reality game. You remember ARGs? The kind of designed experience where people had to cooperate in order to solve the puzzle.

Being a part of QAnon involves doing a lot of independent research. You can imagine the onboarding experience in terms of being exposed to some new phrases, Googling those phrases (which are specifically coded enough to lead to certain websites, and certain information). Finding something out, doing that independent research will give you a dopamine hit. You’ve discovered something, all by yourself. You’ve achieved something. You get to tell your friends about what you’ve discovered because now you know a secret that other people don’t. You’ve done something smart.

We saw this in the games we designed. Players love to be the first person to do something. They love even more to tell everyone else about it. It’s like Crossfit. 

Dan’s brother Adrian also wrote about this connection: What ARGs Can Teach Us About QAnon:

There is a vast amount of information online, and sometimes it is possible to solve “mysteries”, which makes it hard to criticise people for trying, especially when it comes to stopping perceived injustices. But it’s the sheer volume of information online that makes it so easy and so tempting and so fun to draw spurious connections.

This is something that Molly Sauter has been studying for years now, like in her essay The Apophenic Machine:

Humans are storytellers, pattern-spotters, metaphor-makers. When these instincts run away with us, when we impose patterns or relationships on otherwise unrelated things, we call it apophenia. When we create these connections online, we call it the internet, the web circling back to itself again and again. The internet is an apophenic machine.

I remember interviewing Lauren Beukes back in 2012 about her forthcoming book about a time-travelling serial killer:

Me: And you’ve written a time-travel book that’s set entirely in the past.

Lauren: Yes. The book ends in 1993 and that’s because I did not want to have to deal with Kirby the heroine getting some access to CCTV cameras and uploading the footage to 4chan and having them solve the mystery in four minutes flat.

By the way, I noticed something interesting about the methodology behind conspiracy theories—particularly the open-ended never-ending miasma of something like QAnon. It’s no surprise that the methodology is basically an inversion of the scientific method. It’s the Texas sharpshooter fallacy writ large. Well, you know the way that I’m always going on about design principles and the way that good design principles should be reversible? Conspiracy theories take universal principles and invert them. Take Occam’s razor:

Do not multiply entities without necessity.

That’s what they want you to think! Wake up, sheeple! The success of something like QAnon—or a well-designed ARG—depends on a mindset that rigorously inverts Occam’s razor:

Multiply entities without necessity!

That’s always been the logic of conspiracy theories from faked moon landings to crop circles. I remember well when the circlemakers came clean and showed exactly how they had been making their beautiful art. Conspiracy theorists—just like cultists—don’t pack up and go home in the face of evidence. They double down. There was something almost pitiable about the way the crop circle UFO crowd were bending over backwards to reject proof and instead apply the inversion of Occam’s razor to come up with even more outlandish explanations to encompass the circlemakers’ confession.

Anyway, I recommend reading what Dan and Adrian have written about the shared psychology of QAnon and Alternate Reality Games, not least because they also suggest some potential course corrections.

I think the best way to fight QAnon, at its roots, is with a robust social safety net program. This not-a-game is being played out of fear, out of a lack of safety, and it’s meeting peoples’ needs in a collectively, societally destructive way.

I want to add one more red thread to this crazy wall. There’s a book about conspiracy theories that has become more and more relevant over time. It’s also wonderfully entertaining. Here’s my recommendation from that Reboot presentation in 2006:

For a real hot-tub of conspiracy theory pleasure, nothing beats Foucault’s Pendulum by Umberto Eco.

…luck rewarded us, because, wanting connections, we found connections — always, everywhere, and between everything. The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…

Web talk

At the start of this month I was in Amsterdam for a series of back-to-back events: Indie Web Camp Amsterdam, View Source, and Fronteers. That last one was where Remy and I debuted the talk we’d been working on.

The Fronteers folk have been quick off the mark so the video is already available. I’ve also published the text of the talk here:

How We Built The World Wide Web In Five Days

This was a fun talk to put together. The first challenge was figuring out the right format for a two-person talk. It quickly became clear that Remy’s focus would be on the events of the five days we spent at CERN, whereas my focus would be on the history of computing, hypertext, and networks leading up to the creation of the web.

Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.

That worked remarkably well. The talk starts with me describing the creation of CERN in the 1950s. Then Remy talks about the first day of the hack week. I then talk about events in the 1960s. Remy talks about the second day at CERN. This continues until we join up about half way through the talk: I’ve arrived at the moment that Tim Berners-Lee first published the proposal for the World Wide Web, and Remy has arrived at the point of having running code.

At this point, the presentation switches gears and turns into a demo. I do not have the fortitude to do a live demo, so this was all down to Remy. He did it flawlessly. I have so much respect for people brave enough to do live demos, and do them well.

But the talk doesn’t finish there. There’s a coda about our return to CERN a month after the initial hack week. This was an opportunity for both of us to close out the talk with our hopes and dreams for the World Wide Web.

I know I’m biased, but I thought the structure of the presentation worked really well: two interweaving timelines culminating in a demo and finishing with the big picture.

There was a forcing function on preparing this presentation: Remy was moving house, and I was already going to be away speaking at some other events. That limited the amount of time we could be in the same place to practice the talk. In the end, I think that might have helped us make the most of that time.

We were both feeling the pressure to tell this story well—it means so much to us. Personally, I found that presenting with Remy made me up my game. Like I said:

It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.

This talk could have easily turned into a boring slideshow of “what we did on our holidays”, but I think we managed to successfully avoid that trap. We’re both proud of this talk and we’d love to give it again some time. If you’d like it at your event, get in touch.

In the meantime, you can read the text, watch the video, or look at the slides (but the slides really don’t make much sense in isolation).

Timelines of the web

Recreating the original WorldWideWeb browser was an exercise in digital archaeology. With a working NeXT machine in the room, Kimberly was able to examine the source code for the first ever browser and discover a treasure trove within. Like this gem in HTUtils.h:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

Sure enough, by June of 1992 port 80 was documented as being officially assigned to the World Wide Web (Gopher got port 70). Jean-François Groff—who worked on the World Wide Web project with Tim Berners-Lee—told us that this was a moment they were very pleased about. It felt like this project of theirs was going places.

Jean-François also told us that the WorldWideWeb browser/editor was kind of like an advanced prototype. The idea was to get something up and running as quickly as possible. Well, the NeXT operating system had a very robust Text Object, so the path of least resistance for Tim Berners-Lee was to take the existing word-processing software and build a hypertext component on top of it. Likewise, instead of creating a brand new format, he used the existing SGML format and added one new piece: linking with A tags.

So the WorldWideWeb application was kind of like a word processor and document viewer mashed up with hypertext. Ted Nelson complains to this day that the original sin of the web was that it borrowed this page-based metaphor. But Nelson’s Project Xanadu, originally proposed in 1974, wouldn’t become a working reality until 2014—a gap of forty years. Whereas Tim Berners-Lee proposed his system in March 1989 and had working code within a year. There’s something to be said for being pragmatic and working with what you’ve got.

The web was also a mashup of ideas. Hypertext existed long before the web—Ted Nelson coined the term in 1963. There were conferences and academic discussions devoted to hypertext and hypermedia. But almost all the existing hypertext systems—including Tim Berners-Lee’s own ENQUIRE system from the early 80s—were confined to a local machine. Meanwhile networked computers were changing everything. First there was the ARPANET, then the internet. Tim Berners-Lee’s ambitious plan was to mash up hypertext with networks.

Going into our recreation of WorldWideWeb at CERN, I knew I wanted to convey this historical context somehow.

The World Wide Web officially celebrates its 30th birthday in March of this year. It’s kind of an arbitrary date: it’s the anniversary of the publication of Information Management: A Proposal. Perhaps a more accurate date would be the day the first website—and first web server—went online. But still. Let’s roll with this date of March 12, 1989. I thought it would be interesting not only to look at what’s happened between 1989 and 2019, but also to look at what happened between 1959 and 1989.

So now I’ve got two time cones that converge in the middle: 1959 – 1989 and 1989 – 2019. For the first time period, I made categories of influences: formats, hypertext, networks, and computing. For the second time period, I catalogued notable results: browsers, servers, and the evolution of HTML.

I did a little bit of sketching and quickly realised that these converging timelines could be represented somewhat like particle collisions. Once I had that idea in my head, I knew how I would be spending my time during the hack week.

Rather than jumping straight into the collider visualisation, I took some time to make a solid foundation to build on. I wanted to be sure that the timeline itself would be understandable even if it were, say, viewed in the first ever web browser.

Progressive enhancement. Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

I marked up each timeline as an ordered list of h-events:

<li class="h-event y1968">
  <a href="https://en.wikipedia.org/wiki/NLS_%28computer_system%29" class="u-url">
    <time class="dt-start" datetime="1968-12-09">1968</time>
    <abbr class="p-name" title="oN-Line System">NLS</abbr>
  </a>
</li>

With the markup in place, I could concentrate on making it look halfway decent. For small screens, the layout is very basic—just a series of lists. When the screen gets wide enough, I lay those lists out horizontally one on top of the other. In this view, you can more easily see when events coincide. For example, ENQUIRE, Usenet, and Smalltalk all happen in 1980. But the real beauty comes when the screen is wide enough to display everything at once. You can see how there was an explosion of activity in the early 90s. In 1994 alone, we get the release of Netscape Navigator, the creation of HTTPS, and the launch of Amazon.com.
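The mechanics of that wide-screen layout look roughly like this—a sketch in which the breakpoint and the .timeline class are illustrative, though the 5em width on each h-event is real (more on that below):

/* Wide screens: lay each list out as a horizontal strip. */
@media all and (min-width: 50em) {
  .timeline ol {
    position: relative;
    height: 6em;
    list-style: none;
  }
  .timeline .h-event {
    position: absolute;
    width: 5em;
  }
}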

The whole thing is powered by CSS transforms and positioning. Each year on a timeline has its own class that gets moved to the correct chronological point using calc(). I wanted to use translateX() but I couldn’t get the maths to work for that, so I had to use plain ol’ left and right:

.y1968 {
  left: calc((1968 - 1959) * (100%/30) - 5em);
}

For events before 1989, it’s the distance of the event from 1959. For events after 1989, it’s the distance of the event from 2019:

.y2014 {
  right: calc((2019 - 2014) * (100%/30) - 5em);
}

(Each h-event has a width of 5em so that’s where the extra bit at the end comes from.)

I had to do some tweaking for legibility: bunches of events happening around the same time period needed to be separated out so that they didn’t overlap too much.

As a finishing touch, I added a few little transitions when the page loaded so that the timeline fans out from its centre point.

Et voilà!

Progressive enhancement. Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

I fiddled with the content a bit after peppering Robert Cailliau with questions over lunch. And I got some very valuable feedback from Jean-François. Some examples he provided:

1971: Unix man pages, one of the first instances of writing documents with a markup language that is interpreted live by a parser before being presented to the user.

1980: Usenet News, because it was THE everyday discussion medium by the time we created the web technology, and the Web first embraced news as a built-in information resource, then various platforms built on the web rendered it obsolete.

1982: Literary Machines, Ted Nelson’s book which was on our desk at all times.

I really, really enjoyed building this “collider” timeline. It was a chance for me to smash together my excitement for web history with my enjoyment of using the raw materials of the web: HTML and CSS in this case.

The timeline pales in comparison to the achievement of the rest of the team in recreating the WorldWideWeb application but I was just glad to be able to contribute a little something to the project.

Hello WorldWideWeb.

Links, tags, and feeds

A little while back, I switched from using Chrome as my day-to-day browser to using Firefox. I could feel myself getting a bit too comfortable with one particular browser, and that’s not good. I reckon it’s good to shake things up a little every now and then. Besides, there really isn’t that much difference once you’ve transferred over bookmarks and cookies.

Unfortunately I’m being bitten by this little bug in Firefox. It causes some of my bookmarklets to fail on certain sites with strict Content Security Policies (and CSPs shouldn’t affect bookmarklets). I might have to switch back to Chrome because of this.

I use bookmarklets throughout the day. There’s the Huffduffer bookmarklet, of course, for whenever I come across a podcast episode or other piece of audio that I want to listen to later. But there’s also my own home-rolled bookmarklet for posting links to my site. It doesn’t do anything clever—it grabs the title and URL of the currently open page and pre-populates a form in a new window, leaving me to add a short description and some tags.
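If you’ve never made one, a bookmarklet like that is only a line or two of JavaScript wrapped up in a javascript: URL. Here’s the general shape—the endpoint and parameter names here are placeholders for illustration, not the real thing:

javascript:(function () {
  // Open the posting form with the current page's title and URL pre-filled.
  window.open(
    'https://example.com/links/new' +
    '?title=' + encodeURIComponent(document.title) +
    '&url=' + encodeURIComponent(document.location.href)
  );
})();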

If you’re reading this, then you’re familiar with the “journal” section of adactio.com, but the “links” section is where I post the most. Here, for example, are all the links I posted yesterday. It varies from day to day, but there’s generally a handful.

Should you wish to keep track of everything I’m linking to, there’s a twitterbot you can follow called @adactioLinks. It uses a simple IFTTT recipe to poll my RSS feed of links and send out a tweet whenever there’s a new entry.

Or you can drink straight from the source and subscribe to the RSS feed itself, if you’re still rocking it old-school. But if RSS is your bag, then you might appreciate a way to filter those links…

All my links are tagged. Heavily. This is because all my links are “notes to future self”, and all my future self has to do is ask “what would past me have tagged that link with?” when I’m trying to find something I previously linked to. I end up using my site’s URLs as an interface:

At the front-end gatherings at Clearleft, I usually wrap up with a quick tour of whatever I’ve added that week to:

Well, each one of those tags also has a corresponding RSS feed:

…and so on.

That means you can subscribe to just the links tagged with something you’re interested in. Here’s the full list of tags if you’re interested in seeing the inside of my head.

This also works for my journal entries. If you’re only interested in my blog posts about frontend development, you might want to subscribe to:

Here are all the tags from my journal.

You can even mix them up. For everything I’ve tagged with “typography”—whether it’s links, journal entries, or articles—the URL is:

The corresponding RSS feed is:

You get the idea. Basically, if something on my site is a list of items, chances are there’s a corresponding RSS feed. Sometimes there might even be a JSON feed. Hack some URLs to see.

Meanwhile, I’ll be linking, linking, linking…

An associative trail

Every now and then, I like to revisit Vannevar Bush’s classic article from the July 1945 edition of the Atlantic Monthly called As We May Think in which he describes a theoretical machine called the memex.

A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. Otherwise it looks like an ordinary desk.

1945! Apart from its analogue rather than digital nature, it’s a remarkably prescient vision. In particular, there’s the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Many decades later, Anne Washington ponders what a legal memex might look like:

My legal Memex builds a network of the people and laws available in the public records of politicians and organizations. The infrastructure for this vision relies on open data, free access to law, and instantaneous availability.

As John Sheridan from the UK’s National Archives points out, hypertext is the perfect medium for laws:

Despite the drafter’s best efforts to create a narrative structure that tells a story through the flow of provisions, legislation is intrinsically non-linear content. It positively lends itself to a hypertext based approach. The need for legislation to escape the confines of the printed form predates all the major innovators and innovations in hypertext, from Vannevar Bush’s vision in “As We May Think”, to Ted Nelson’s coining of the term “hypertext”, through to Berners-Lee’s breakthrough world wide web. I like to think that Nelson’s concept of transclusion was foreshadowed several decades earlier by the textual amendment (where one Act explicitly alters – inserts, omits or amends – the text of another Act, an approach introduced to UK legislation at the beginning of the 20th century).

That’s from a piece called Deeply Intertwingled Laws. The verb “to intertwingle” was another one of Ted Nelson’s neologisms.

There’s an associative trail from Vannevar Bush to Ted Nelson that takes some other interesting turns…

Picture a new American naval recruit in 1945, getting ready to ship out to the Pacific to fight against the Japanese. Just as the ship is leaving the harbour, word comes through that the war is over. And so instead of fighting across the islands of the Pacific, this young man finds himself in a hut in the Philippines, reading whatever is to hand. There’s a copy of The Atlantic Monthly, the one with an article called As We May Think. The sailor was Douglas Engelbart, and a few years later when he was deciding how he wanted to spend the rest of his life, that article led him to pursue the goal of augmenting human intellect. He gave the mother of all demos, featuring NLS, a working hypermedia system.

Later, thanks to Bill Atkinson, we’d get another system called HyperCard. It was advertised with the motto Freedom to Associate, in an advertising campaign that directly referenced Vannevar Bush.

And now I’m using the World Wide Web, a hypermedia system that takes in the whole planet, to create an associative trail. In this post, I’m linking (without asking anyone for permission) to six different sources, and in doing so, I’m creating a unique associative trail. And because this post has a URL (that won’t change), you are free to take it and make it part of your own associative trail on your digital memex.

Designing the Patterns Day site

Patterns Day is not one of Clearleft’s slick’n’smooth conferences like dConstruct or UX London. It’s more of a spit’n’sawdust affair, like Responsive Day Out.

You can probably tell from looking at the Patterns Day website that it wasn’t made by a crack team of designers and developers—it’s something I threw together over the course of a few days. I had a lot of fun doing it.

I like designing in the browser. That’s how I ended up designing Resilient Web Design, The Session, and Huffduffer back in the day. But there’s always the initial problem of the blank page. I mean, I had content to work with (the information about the event), but I had no design direction.

My designery colleagues at Clearleft were all busy on client projects so I couldn’t ask any of them to design a website, but I thought perhaps they’d enjoy a little time-limited side exercise in producing ideas for a design direction. Initially I was thinking they could all get together for a couple of hours, lock themselves in a room, and bash out some ideas as though it were a mini hack farm. Coordinating calendars proved too tricky for that. So Jon came up with an alternative: a baton relay.

Remember Layer Tennis? I once did the commentary for a Layer Tennis match and it was a riot—simultaneously terrifying and rewarding.

Anyway, Jon suggested something kind of like that, but instead of a file being batted back and forth between two designers, the file would be passed along from designer to designer. Each designer gets one art board in a Sketch file. You get to see what the previous designers have done, leaving you to either riff on that or strike off in a new direction.

The only material I supplied was an early draft of text for the website, some photos of the first confirmed speakers, and some photos I took of repeating tiles when I was in Porto (patterns, see?). I made it clear that I wasn’t looking for pages or layouts—I was interested in colour, typography, texture and “feel.” Style tiles, yes; comps, no.

Jon

Jon’s art board.

Jon kicks things off and immediately sets the tone with bright, vibrant colours. You can already see some elements that made it into the final site like the tiling background image of shapes, and the green-bordered text block. There are some interesting logo ideas in there too, some of them riffing on LEGO, others riffing on illustrations from Christopher Alexander’s book, A Pattern Language. Then there’s the typeface: Avenir Next. I like it.

James G

James G’s art board.

Jimmy G is up next. He concentrates on the tiles idea. You can see some of the original photos from Porto in the art board, alongside his abstracted versions. I think they look great, and I tried really hard to incorporate them into the site, but I couldn’t quite get them to sit with the other design elements. Looking at them now, I still want to get them into the site …maybe I’ll tinker with the speaker portraits to get something more like what James shows here.

Ed

Ed’s art board.

Ed picks up the baton and immediately iterates through a bunch of logo ideas. There’s something about the overlapping text that I like, but I’m not sure it fits for this particular site. I really like the effect of the multiple borders though. With a bit more time, I’d like to work this into the site.

James B

Batesy’s art board.

Batesy is the final participant. He has some other nice ideas in there, like the really subtle tiling background that also made its way into the final site (but I’ll pass on the completely illegible text on the block of bright green). James works through two very different ideas for the logo. One of them feels a bit too busy and chaotic for me, but the other one …I like it a lot.

I immediately start thinking “Hmm …how could I make this work in a responsive way?” This is exactly the impetus I needed. At this point I start diving into CSS. Not only do I have some design direction, I’m champing at the bit to play with some of these ideas. The exercise was a success!

Feel free to poke around the Patterns Day site. And while you’re there, pick up a ticket for the event too.

Talking about hypertext

#CSSday starts off with a great history lesson of our industry by @adactio

I’ve just published a transcript of the talk I gave at the HTML Special that preceded CSS Day a couple of weeks back. I’ve also recorded an audio version for your huffduffing pleasure.

It’s not like the usual talks I give. The subject matter was assigned to me, Mission Impossible style. PPK wanted each speaker to give an entire talk on just one HTML element. He offered me the best element of them all: the A element.

There were a few different directions I could’ve taken it. I could’ve tried to make it practical, but I quickly dismissed that idea. Instead I went in the completely opposite direction, making it as pretentious as possible. I figured a talk about hypertext could afford to be winding and circuitous, building on some of the ideas I wrote about in my piece for The Manual a few years back. It’s quite self-indulgent of me, but I used it as an opportunity to geek out about some of my favourite things; from Borges, Babbage, and Bletchley to Leibniz, Lovelace, and Licklider.

I wouldn’t usually write out an entire talk word-for-word in advance, but somehow it felt right for this one. In fact, my talk preparation this time ‘round was very similar to the process Charlotte recently wrote about:

  1. Get everything out of my head and onto a mind map.
  2. Write chunks of content in short bursts—this was when I was buddying up with Paul.
  3. Put together a slide deck of visuals to support the narrative.
  4. Practice delivering the talk so I don’t look like I’m just reading off a screen.

It takes me a long time to prepare talks. As the deadline for this one approached, I was getting quite panicked. It was touch and go there for a while, but I managed to get it done in time.

I’m pleased with how it turned out. On the day, I had fun delivering it. People seemed to like it too, which was gratifying.

Although with this kind of talk, it was inevitable that I wouldn’t be able to please everyone.

I guess this talk was a one-off affair. That said, if you’re putting on an event and you think this subject matter would be appropriate, let me know. I’d be more than happy to deliver it again.

Taking an online book offline

Application Cache is—as Jake so infamously described—not a good API. It was specced and shipped before developers had a chance to figure out what they really needed, and so AppCache turned out to be frustrating at best and downright dangerous in some situations. Its over-zealous caching combined with its byzantine cache invalidation ensured it was never going to become a mainstream technology.

There are very few use-cases for AppCache, but I think I hit upon one of them. Six years ago, A Book Apart published HTML5 For Web Designers. A year and a half later, I put the book online. The contents are never going to change. There’s a second edition of the book out now, but if you want to read all the extra bits that Rachel added, you’re going to have to buy the book. The website for the original book is static and unchanging. That’s what made it such a good candidate for using AppCache. I could just set it and forget it.

Except that’s no longer true. AppCache is being deprecated and browsers are starting to withdraw support. Chrome is already making sure that AppCache—like geolocation—no longer works on sites that aren’t served over HTTPS. That’s for the best. In retrospect, those APIs should never have been allowed over unsecured HTTP.

I mentioned that I spent the weekend switching all my book websites over to HTTPS, so AppCache should continue to work …for now. It’s only a matter of time before AppCache is removed completely from many of the browsers that currently support it.

Seeing as I’ve got the HTML5 For Web Designers site running on HTTPS now, I might as well go all out and make it a progressive web app. By far the biggest barrier to making a progressive web app is that first step of setting up HTTPS. It’s gotten cheaper—thanks to Let’s Encrypt and Certbot—but it still involves mucking around in the command line with root access; I never wanted to become a sysadmin. But once that’s finally all set up, the other technological building blocks—a Service Worker and a manifest file—are relatively easy.

In this case, the Service Worker is using a straightforward bit of logic:

  • On installation, cache absolutely everything: HTML, CSS, images.
  • When anything is requested, grab it from the cache.
  • If it isn’t in the cache, try the network.
  • If the network doesn’t work, show an offline page (or image).
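
A minimal sketch of that logic in a Service Worker script might look something like this. The cache name and the list of files are placeholders rather than the site’s actual assets, so treat it as an illustration of the approach, not the exact worker running on the site.

  // On installation, cache absolutely everything (placeholder file list).
  const cacheName = 'book-cache-v1';
  const filesToCache = [
    '/',
    '/css/styles.css',
    '/images/cover.jpg',
    '/offline.html'
  ];

  addEventListener('install', (event) => {
    event.waitUntil(
      caches.open(cacheName).then((cache) => cache.addAll(filesToCache))
    );
  });

  // On fetch: try the cache first, then the network, then the offline page.
  addEventListener('fetch', (event) => {
    event.respondWith(
      caches.match(event.request)
        .then((cached) => cached || fetch(event.request))
        .catch(() => caches.match('/offline.html'))
    );
  });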

Basically I’m reproducing AppCache’s overzealous approach. It works for this site because the content is never going to change. I hope that this time, I really can just set it and forget it. I want the site to be an historical artefact, available at the same URL for at least my lifetime. I don’t want to have to maintain it or revisit it every few years to swap out one API for another.

Which brings me back to the way AppCache is being deprecated…

The Firefox team are very eager to ditch AppCache as soon as possible. On the one hand, that’s commendable. They’re rightly proud of shipping Service Workers and they want to encourage people to use the better technology instead. But it sure stings for the suckers (like me) who actually went and built stuff using AppCache.

In a weird way, I think this rush to deprecate AppCache might actually hurt the adoption of Service Workers. Let me explain…

At last year’s Edge Conference, Nolan Lawson gave a great presentation on storing data in the browser. He enumerated the many ways—past and present—that we could store data locally: WebSQL, Local Storage, IndexedDB …the list goes on. He also posed the question: why aren’t more people using insert-name-of-latest-API-here? To me it seemed obvious why more people weren’t diving into using the latest and greatest option for local data storage. It was because they had been burned before. The developers who rushed into trying previous solutions end up being mocked for their choice. “Still using that ol’ thing? Pffftt!”

You can see that same attitude on display from Mozilla as they push towards removing AppCache. Like in a comment that refers to developers using AppCache in production as “the angry hordes”. Reminds me of something Tom said:

In that same Mozilla thread, Soledad echoes Tom’s point:

As a member of the devrel team: I think that this should be better addressed in a blog post that someone from the team responsible for switching AppCache off should write, so everyone can understand the reasons and ask questions to those people.

I’d rather warn people beforehand, pointing them to that post and help them with migration paths than apply emergency mitigation strategies when a lot of people find their stuff stopped working in the newer Firefox…

Bravo! That same approach should have also been taken by the Chrome team when it came to their thread about punishing display:browser in manifest files. There was absolutely no communication with developers about this major decision. I only found out about it because Paul happened to mention it to me.

I was genuinely shocked by this:

Withholding the “add to home screen” prompt like that has a whiff of blackmail about it.

I can confirm that smell. When I was making the manifest file for HTML5 For Web Designers, I really wanted to put display: browser because I want people to be able to copy and paste URLs (for the book, for individual chapters, and for sections within chapters). But knowing that if I did that, Android users would never see the “add to home screen” prompt made me question that decision. I felt strong-armed into declaring display: standalone. And no, I’m not mollified by hand-waving reassurances that the Chrome team will figure out some solution for this. Figure out the solution first, then punish the saps like me who want to use display: browser to allow people to share URLs.
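
For context, the choice comes down to a single line in the web app manifest, the small JSON file linked from each page. A minimal manifest might look something like this (the values are placeholders rather than the site’s actual manifest); change the display value from standalone to browser and you get copy-and-pasteable URLs, but, as described above, Chrome then withholds the add-to-home-screen prompt.

  {
    "name": "HTML5 For Web Designers",
    "start_url": "/",
    "display": "standalone"
  }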

Anyway, the website for HTML5 For Web Designers is now using AppCache and Service Workers. The AppCache part will probably be needed for quite a while yet to provide offline support on iOS. Apple are really dragging their heels on Service Worker support, with at least one WebKit engineer actively looking for reasons not to implement it.

There’s a lot of talk about making apps work offline, but I think it’s just as important that we consider making information work offline. Books are a great example of this. To use the tired transport tropes, the website for a book is something you might genuinely want to access when you’re on a plane, or in the underground, or out at sea.

I really, really like progressive web apps. But I also think it’s important that we don’t fall into the trap of just trying to imitate native apps on the web. I love the idea of taking the best of the web—like information being permanently available at a URL—and marrying that up with the best of native—like offline access. I also like the idea of taking the best of books—a tome of thought—and marrying it up with the best of the web—hypertext.

I’d love to see more experimentation around online/offline hypertext/books. For now, you can visit HTML5 For Web Designers, add it to your home screen, and revisit it whenever and wherever you like.

Relinkification

On Jessica’s recommendation, I read a piece on the Guardian website called The eeriness of the English countryside:

Writers and artists have long been fascinated by the idea of an English eerie – ‘the skull beneath the skin of the countryside’. But for a new generation this has nothing to do with hokey supernaturalism – it’s a cultural and political response to contemporary crises and fears

I liked it a lot. One of the reasons I liked it was not just for the text of the writing, but the hypertext of the writing. Throughout the piece there are links off to other articles, books, and blogs. For me, this enriches the piece and it set me off down some rabbit holes of hyperlinks with fascinating follow-ups waiting at the other end.

Back in 2010, Scott Rosenberg wrote a series of three articles over the course of two months called In Defense of Hyperlinks:

  1. Nick Carr, hypertext and delinkification,
  2. Money changes everything, and
  3. In links we trust.

They’re all well worth reading. The whole thing was kicked off with a well-rounded debunking of Nicholas Carr’s claim that hyperlinks harm text. Instead, Rosenberg finds that hyperlinks within a text embiggen the writing …providing they’re done well:

I see links as primarily additive and creative. Even if it took me a little longer to read the text-with-links, even if I had to work a bit harder to get through it, I’d come out the other side with more meat and more juice.

Links, you see, do so much more than just whisk us from one Web page to another. They are not just textual tunnel-hops or narrative chutes-and-ladders. Links, properly used, don’t just pile one “And now this!” upon another. They tell us, “This relates to this, which relates to that.”

The difference between a piece of writing being part of the web and a piece of writing being merely on the web is something I talked about a few years back in a presentation called Paranormal Interactivity at ‘round about the 15 minute mark:

Imagine if you were to take away all the regular text and only left the hyperlinks on Wikipedia, you could still get the gist, right? Every single link there is like a wormhole to another part of this “choose your own adventure” game that we’re playing every day on the web. I love that. I love the way that Wikipedia uses links.

That ability of the humble hyperlink to join concepts together lies at the heart of Tim Berners-Lee’s World Wide Web …and Ted Nelson’s Project Xanadu, and Douglas Engelbart’s Dynamic Knowledge Environments, and Vannevar Bush’s idea of the Memex. All of those previous visions of a hyperlinked world were—in many ways—superior to the web. But the web shipped. It shipped with brittle, one-way linking, but it shipped. And now anyone can create a connection between two ideas by linking to resources that represent those ideas. All you need is an HTML document that contains some A elements with href attributes, and a URL to act as that document’s address.

Like the one you’re accessing now.

Not only can I link to that article on the Guardian’s website, I can also pair it up with other related links, like Warren Ellis’s talk from dConstruct 2014:

Inventing the next twenty years, strategic foresight, fictional futurism and English rural magic: Warren Ellis attempts to convince you that they are all pretty much the same thing, and why it was very important that some people used to stalk around village hedgerows at night wearing iron goggles.

There is definitely the same feeling of “the eeriness of the English countryside” in Warren’s talk. If you haven’t listened to it yet, set aside some time. It is enticing and disquieting in equal measure …like many of the works linked to from the piece on the Guardian.

There’s another link I’d like to make, and it happens to be to another dConstruct speaker.

From that Guardian piece:

Yet state surveillance is no longer testified to in the landscape by giant edifices. Instead it is mostly carried out by software programs running on computers housed in ordinary-looking government buildings, its sources and effects – like all eerie phenomena – glimpsed but never confronted.

James Bridle has been confronting just that. His recent series The Nor took him on a tour of a parallel, obfuscated English countryside. He returned with three pieces of hypertext:

  1. All Cameras Are Police Cameras,
  2. Living in the Electromagnetic Spectrum, and
  3. Low Latency.

I love being able to do this. I love being able to add strands to this world-wide web of ours. Not only can I say “this idea reminds me of another idea”, but I can point to both ideas. It’s up to you whether you follow those links.

Manual

I wrote a thing. I wrote it for The Manual, Andy’s rather lovely tertial dead-tree publication.

The thing I wrote is punnily titled As We May Link (and if you get the reference to what I’m punning on, I think you’re going to like it). It’s quite different to most of the other articles I’ve written. My usual medium is hypertext, but I knew that The Manual was going to be published by pressing solutions of pigments onto thinly-sliced pieces of vegetation.

There’s a lot of fetishisation around that particular means of production, often involving the emotional benefit of consuming the final product in the bathtub, something I’ve never understood—surely water is as unsuitable an environment for paper-based analogue systems as it is for silicon-based digital devices?

In any case, I wanted to highlight the bound—in both senses of the word—nature of The Manual’s medium. So I wrote about hypertext …but without being able to use any hyperlinks; an exercise in dancing about architecture, as whoever the hell it was so eloquently put it.

I’m pleased with how it turned out, though I suspect that’s entirely down to Carolyn’s editing skills combined with a lovely illustration by Rob Bailey.

If you’d like to read what I wrote (and what Tiffani wrote and what Ethan wrote and what all the other contributors wrote), then you can order The Manual online …but you’ll have to wait until it is delivered to you over a network of roads and other meatspace transportation routes. The latency is terrible, but the bandwidth is excellent: when you finally have the book in your hands, you’ll find that it contains an astonishing number of atoms.

That’s assuming there’s no packet loss.

Clarification

I feel I need to clarify my last post and make a general point about One Web.

When I say…

we need to think about publishing content that people want while adapting the display of that content according to the abilities of the person’s device.

…I do not mean we need to adjust the layout of existing desktop sites for other devices. Quite the opposite. I mean we need to stop building desktop-specific sites.

I think there’s a misconception that responsive web design is a “get out of jail free” card: instead of designing for different devices, all you have to do is reflow your existing pages so that they fit fine on any screen, right?

Wrong.

If you have a desktop-specific site—and, let’s face it, that covers over 90% of the web—the first step is to replace it with a device-agnostic site. Not mobile-specific, not desktop-specific, not tablet-specific, but centred instead around the person visiting the site and the content that they want.

That’s what I meant when I said:

Most of the time, creating a separate mobile website is simply a cop-out.

It’s an acknowledgement that the existing desktop-specific site is too bloated and crufty.

Luke’s idea of Mobile First is a good thought exercise to start designing from the content out, but the name is a little misleading. It could just as easily be Print First or Any-Device-Other-Than-The-Desktop First.

Here’s what I’m getting at: we act as though mobile is a new problem, and that designing for older devices—like desktop and laptop computers—is a solved problem. I’m saying that the way we’ve been designing for the desktop is fundamentally flawed. Yes, mobile is a whole new domain, but what it really does is show just how bad our problem-solving has been up ‘till now.

Further reading: It’s About People, Not Devices by Bryan and Stephanie Rieger.

Context

I swear there’s some kind of quantum entanglement going on between Ethan’s brain and mine. Demonstrating spooky action at a distance, just as I was jotting down my half-assed caveat related to responsive design, he publishes a sharp and erudite explanation of what responsive design is and isn’t attempting to do. He uses fancy learnin’ words and everything:

When I’m speaking or writing about responsive design, I try to underline something with great, big, Sharpie-esque strokes: responsive design is not about “designing for mobile.” But it’s not about “designing for the desktop,” either. Rather, it’s about adopting a more flexible, device-agnostic approach to designing for the web. Fluid grids, flexible images, and media queries are the tools we use to get a bit closer to that somewhat abstract-sounding philosophy. And honestly, a more unified, less fragmented approach resonates with my understanding of the web on a fairly profound level.

Meanwhile Mark has written a beautiful encapsulation of the sea change that responsive design is a part of:

Embrace the fluidity of the web. Design layouts and systems that can cope with whatever environment they may find themselves in. But the only way we can do any of this is to shed ways of thinking that have been shackles around our necks. They’re holding us back.

Start designing from the content out, rather than the canvas in.

Both Ethan and Mark are writing books. I can’t wait to get my hands on them.

In the meantime, I wanted to take an opportunity to clear up some misunderstandings I keep seeing coming up again and again in relation to responsive web design. So put the kettle on and make a nice cup of tea while I try to gather my thoughts into some kind of coherence.

Breaking it down

To paraphrase Donald Rumsfeld, web design is filled with quite a few known unknowns. Here are three of them:

  1. Viewport: the dimensions of the browser that a person uses to access your content.
  2. Bandwidth: the speed of the network connection that a person uses to access your content.
  3. Context: the environment from which a person accesses your content.

Viewport

With the proliferation of mobile devices, tablets and every other kind of browsing device imaginable, there’s a high number of possible viewport sizes. But here’s the thing: that’s always been the case.

For over a decade, we have pretended that there’s a mythical perfect size that every person will be using. To start, that size was 640x480, then it was 800x600, then 1024x768 …but this magical ideal dimension was always a phantom. People have always been visiting our websites with browsers open to varying dimensions of width and height—the rise of “mobile” has simply thrown that fact into sharp relief.

The increasing proliferation of different-sized devices and browsers means that we can no longer cling to the consensual hallucination of the “ideal” viewport size.

Fortunately this is a solved problem. Liquid layouts were a good first step. Once you add media queries into the mix it’s possible to successfully deal with a wide range of viewport sizes.

Simply put, responsive web design solves the viewport question.

But that’s just one of three issues.

Bandwidth

Using either media queries or JavaScript, we can test for a person’s viewport size and adapt our layouts accordingly; there is no equivalent test for the speed of a person’s network connection.

This sucks.

The Filament Group are experimenting with responsive images and I hope to see a lot more experimentation in this area. But when it comes to serving up different-sized media to different people, we are forced to make an assumption. The assumption is that a small viewport equates to narrow bandwidth.

‘Tain’t necessarily so. If I’m using my iPod Touch I’m surfing with a fairly small screen but I’m not doing it over 3G or Edge—same goes for anyone idly browsing on the iPhone or iPad on their work or home connection.

Likewise, just because I’m using my laptop doesn’t mean I’m connected with a fat pipe. When I took the train from Seattle to Portland there was WiFi available …of a sort. And many’s the hotel connection that pushes the boundaries of advertising itself as “high speed.”

Once again, the “solution” to this problem for the past decade has been to ignore it. Just as with viewport size, we engage in a consensual hallucination of ideal bandwidth. Just as with viewport size, the proliferation of new devices is highlighting a problem that was always there. Unlike viewport size, the bandwidth issue is a much tougher nut to crack.

Responsive web design doesn’t directly solve the bandwidth question. I suspect that the solutions will involve a mixture of server-side and client-side trickery, most likely involving some clever handling of nice-to-have content. I’ll be keeping an eye on the work of Steve Souders.

In the meantime, the best we can do is stop assuming a best-case scenario for bandwidth.

Context

You don’t know what a person is doing when they visit your website. It’s possible to figure out what viewport size they are accessing your content with, and it might even be possible to figure out how fast their network connection is, but short of clairvoyance there’s no way of knowing whether someone is in a hurry or looking to spend some time hanging out on your site.

Once again, this has always been the case. Once again, we have up ‘till now ignored the problem by pretending the person visiting our website—the same person with the perfect viewport size and the fast internet connection—doesn’t mind being served up dollops and dollops of so-called “content”, very little of which is directly relevant to them.

The rise of services like Readability and Safari’s Reader mode demonstrate that the overabundance of page cruft is being interpreted as damage and routed around.

The context problem—figuring out what a person is doing at the moment they visit a site—is really, really hard.

Responsive web design does not solve the context problem. It doesn’t even attempt to. The context problem is a very different issue to the viewport problem—which responsive web design does solve. As Mark put it:

It’s making sure your layout doesn’t look crap on diff. sized screens. Nothing more.

He was responding to Brian and Kevin who I think may have misunderstood the problems that responsive web design is trying to solve. Brian wrote:

anyone that claims “responsive design” as a best practice clearly has never actually tried to support multiple contexts or devices.

Those are two different issues: contexts and devices. The device issue breaks down into viewport size and bandwidth. Responsive design is certainly a best practice when tackling the viewport issue. But Brian’s right: responsive design does not solve the problem of different contexts. Nor did it ever claim to.

As I’ve said before:

The choice is not between using media queries and creating a dedicated mobile site; the choice is between using media queries and doing nothing at all.

If responsive design were being sold as a solution to the context problem, I too would be annoyed. But that’s not the case.

The mythical mobile context

As with the viewport issue and the bandwidth issue, the context issue—which has always been there—is now at the fore with the rise of mobile devices. As well as trying to figure out what a person wants when they visit a website, we now have to think about where they are, where they are going and where they have just been.

This is by far the toughest problem.

Bizarrely, this is the very known unknown that I see addressed as though it were solved. “Someone visits your site with a mobile device therefore they are in a rush, walking down the street, hurriedly trying to find your phone number!”

Really?

The data does not support this. All those people with mobile devices sitting on a train or sitting in a cafe or lounging on the sofa at home; they are all in a very different context to the imaginary persona of the mobile user rushing hither and thither.

We have once again created a consensual hallucination. Just as we generated a mythical desktop user with the perfect viewport size, a fast connection and an infinite supply of attention, we have now generated a mythical mobile user who has a single goal and no attention span.

More and more people are using mobile devices as a primary means of accessing the web. The number is already huge and it’s increasing every day. I don’t think they’re all using their devices just to look up the opening times of restaurants. They are looking for the same breadth and richness of experience that they’ve come to expect from the web on other devices.

Hence the frustration with mobile-optimised sites that remove content that’s available on the desktop-optimised version.

Rather than creating one site for an imaginary desktop user and another for an imaginary mobile user, we need to think about publishing content that people want while adapting the display of that content according to the abilities of the person’s device. That’s why I’m in favour of universal design and the One Web approach.

That’s also why responsive web design can be such a powerful tool. But make no mistake: responsive web design is there to help solve the viewport problem, not the context problem.

Update: Of course the usual caveat applies. Also, here’s some clarification about what I’m suggesting.

Revving up

I was away in Berlin for a few days, delivering a workshop to the good people at Aperto. I had a good time, made even better by some excellent Spring weather and the opportunity to meet up with Anthony and Colin while I was there.

I came home to find that, in my absence, rev="canonical" usage has gone stratospheric. First off, there are the personal sites like CollyLogic and Bokardo. Then there are the bigger fish:

Excellent! I’d just like to add one piece of advice to anyone implementing or thinking of implementing rev="canonical": if you are visibly linking to the short url of the current page, please remember to use rev="canonical" on that A element as well as on any LINK element you’ve put in the HEAD of your document. Likewise, for the coders out there, if you are thinking of implementing a rev="canonical" parser—and let’s face it, that’s a nice piece of low-hanging fruit to hack together—please remember to also check for rev attributes on A elements as well as on LINK elements. If anything, I would prioritise human-visible claims of canonicity over invisible metacrap.
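
For what it’s worth, the client-side half of such a parser is only a few lines of DOM scripting. Here’s a rough sketch that runs in the page itself, bookmarklet-style; the function name is my own and this isn’t anybody’s production code.

  // Find this page's preferred short URL, giving the human-visible
  // A element priority over the invisible LINK in the head.
  function findShortUrl() {
    var link = document.querySelector('a[rev~="canonical"][href]') ||
               document.querySelector('link[rev~="canonical"][href]');
    return link ? link.href : null;
  }

A server-side parser would do much the same thing after fetching the page and parsing its HTML.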

Actually, there’s a whole bunch of nice metacrapital things you can do with your visible hyperlinks. If you link to an RSS feed in the BODY of your document, use the same rel values that you would use if you linked to the feed from a LINK element in the HEAD. If you link to an MP3 file, use the type attribute to specify the right mime-type (audio/mpeg). The same goes for linking to Word documents, PDFs and any other documents that aren’t served up with a mime-type of text/html. So, for example, here on my site, when I link to the RSS feed from the sidebar, I’m using type and rel attributes: href="/journal/rss" rel="alternate" type="application/rss+xml". I’m also quite partial to the hreflang attribute but I don’t get the chance to use that very often—this post being an exception.

The rev="canonical" convention makes a nice addition to the stable of nice semantic richness that can be added to particular flavours of hyperlinks. But it isn’t without its critics. The main thrust of the argument against this usage is that the rev attribute currently doesn’t appear in the HTML5 spec. I’ve even seen people use the past tense to refer to an as-yet unfinished specification: the rev attribute was taken out of the HTML5 spec.

As is so often the case with HTML5, the entire justification for dropping rev seems to be based on a decision made by one person. To be fair, the decision was based on available data from 2005. In light of recent activity and the sheer number of documents that are now using rev="canonical"—Flickr alone accounts for millions—I would hope that the HTML5 community will have the good sense to re-evaluate that decision. The document outlining the design principles of HTML5 states:

When a practice is already widespread among authors, consider adopting it rather than forbidding it or inventing something new.

The unbelievable speed of adoption of rev="canonical" shows that it fulfils a real need. If the HTML5 community ignore this development, not only would they not be paving a cowpath, they would be refusing to acknowledge that a well-trodden cowpath even exists.

The argument against rev seems to be that it can be confusing and could result in people using it incorrectly. By that argument, new elements like header and footer should be kept out of any future specification for the same reason. I’ve already come across confusion on the part of authors who thought that these new elements could only be used once per document. Fortunately, the spec explains their meaning.

The whole point of having a spec is to explain the meaning of elements and attributes, be it for authors or user-agents. Without a spec to explain what they mean, elements like P and A don’t make any intuitive sense. It’s no different for attributes like href or rev. To say that rev isn’t a good attribute because it requires you to read the spec is like saying that in order to write English, you need to understand the language. It’s neither a good nor bad thing, it’s just a statement of the bleedin’ obvious.

Now go grab yourself the very handy bookmarklet that Simon has written for auto-discovering short urls.

Shrtr

In one of those instances of convergent online evolution, the subject of URL shorteners has been popping up a lot lately. You know; TinyURL, bit.ly, tr.im, and the like. I suspect a lot of this talk was prompted by the launch of the DiggBar and its accompanying short URL service that serves up your content in an iframe—time to break out that frame-busting JavaScript you haven’t needed for years.

David Weiss writes about the security implications of URL shortening services. Meanwhile, Joshua Schachter talks about the danger of link rot:

The worst problem is that shortening services add another layer of indirection to an already creaky system. A regular hyperlink implicates a browser, its DNS resolver, the publisher’s DNS server, and the publisher’s website. With a shortening service, you’re adding something that acts like a third DNS resolver, except one that is assembled out of unvetted PHP and MySQL, without the benevolent oversight of luminaries like Dan Kaminsky and St. Postel.

Dave Winer agrees:

We need to prepare for the day when N of the URL shorteners go out of business. When that happens a large part of the web will die. It will not be a good day.

Take the case of Twitter. Messages on Twitter are archived and addressable. If those messages contain links, they are shortened using TinyURL. If TinyURL were to disappear, it would leave a swamp of unresolved endpoints. Jason Kottke has a modest proposal:

In cases where shortening is necessary, Twitter should automatically use a shortener of their own. That way, users know what they’re getting and as long as Twitter is around, those links stay alive.

That would definitely work for that particular case. Of course Twitter could disappear, taking its archive of messages with it, but that’s a different situation. The loss of shortened URLs would be tightly coupled to the loss of the original messages.

But Twitter is just one example. What about the rest of us? Right now, if someone wants to pass around a shortened version of one of my URLs, they could use any one of the many URL shortening services out there. The result is potentially a score of different short URLs leading to the same endpoint. If some of those services disappear, link rot spreads.

Ideally, I should be able to specify a desired short URL for my content. This is something that Dopplr is already doing with its dplr.it domain.

Kellan says that they’re also putting together a URL shortener over at Flickr. He’s thinking about how to specify a short URL for a document: some way of specifying here’s the short URL for this page in the same way that we can specify here’s the stylesheet for this page or here’s the RSS feed for this page.

The rel attribute is used for stylesheets and RSS feeds so perhaps that’s the way to go. Something along the lines of rel="alternate shorter" in the same way that we can point to an alternate stylesheet with rel="alternate stylesheet". But in this case, we’re actually pointing to the same resource but with a different URL. So maybe something like rel="alternate shorter self" would be more accurate. Heck, we could probably throw the bookmark value in there too: rel="alternate shorter self bookmark".

Kevin pointed out on Twitter that rev (reverse relationship) would be more suitable than rel.

Google introduced rel="canonical" recently. It’s a way of pointing from an alternate URL back to the canonical URL of the current document: the relationship of the linked document to the current document is “canonical”.

If you’re linking from the canonical URL to an alternate URL (like, say, a shortened URL), you could use rev="canonical": the relationship of the current document to the linked document is “canonical”.

This certainly seems to be the more semantically correct way of pointing to a shortened URL. Alas, rev is a beleaguered little emo attribute: no-one understands it. At least, that’s the claim of the HTML5 community, who plan to drop it completely.

Personally, I share Paul’s intuitions:

HTML is a living language and the HTML5 WG should behave more like the OED rather than the French Government.

So if enough of us publish documents using ARIA roles, accesskey or rev attributes, they will not go gentle into that good night.

Should the idea of distributed, rather than centralised, URL shortening take off, I can imagine a situation where short URL auto-discovery is as commonplace as feed auto-discovery. So if I paste a link into a microblogging site like Twitter, or choose to “Mail this page” from my browser, then the website or mail client could check the head of the document for a preferred short URL. It’s a little bit like OpenID delegation: I could either create my own URL shortening service or specify a provider I trust.
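
To make that concrete, here’s a rough sketch of what the consuming side might do, assuming the client has already fetched the page and parsed it into a DOM document. It checks for the rev="canonical" pattern first and falls back to a rel value containing shorter, as floated above; the function name, the fallback order and the use of the URL constructor are my own choices, not any agreed convention.

  // Given a parsed document and its address, guess the preferred short URL.
  function discoverShortUrl(doc, pageUrl) {
    var candidate =
      doc.querySelector('link[rev~="canonical"][href], a[rev~="canonical"][href]') ||
      doc.querySelector('link[rel~="shorter"][href]');
    if (!candidate) {
      return null;
    }
    // Resolve a relative href against the page's own URL.
    return new URL(candidate.getAttribute('href'), pageUrl).toString();
  }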

Josh is already playing around with shortened links back to posts on his blog. Now suppose he also specifies the short URL (using rev="canonical") on those blog posts…

Update: Kellan has now implemented rev="canonical" auto-discovery.

Update 2: …and Dopplr have duly implemented rev="canonical" which works a treat with Kellan’s auto-discovery tool. Here’s an example.

Update 3: This just keeps getting better. Now there’s a blog devoted to rev="canonical" which has already documented not one, but two WordPress plugins.