Tags: web

Dark mode

I had a very productive time at Indie Web Camp Amsterdam. The format really lends itself to getting the most out of a weekend—one day of discussions followed by one day of hands-on making and doing. You should definitely come along to Indie Web Camp Brighton on October 19th and 20th to experience it for yourself.

By the end of the “doing” day, I had something fun to demo—a dark mode for my website.

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.

Applying the dark mode styles is pretty straightforward in theory. You put the styles inside this media query:

@media (prefers-color-scheme: dark) {
...
}

Rather than over-riding every instance of a colour in my style sheet, I decided I’d do a little bit of refactoring first and switch to using CSS custom properties (or variables, if you will).

:root {
  --background-color: #fff;
  --text-color: #333;
  --link-color: #b52;
}
body {
  background-color: var(--background-color);
  color: var(--text-color);
}
a {
  color: var(--link-color);
}

Then I can over-ride the custom properties without having to touch the already-declared styles:

@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #111416;
    --text-color: #ccc;
    --link-color: #f96;
  }
}

All in all, I have about a dozen custom properties for colours—variations for text, backgrounds, and interface elements like links and buttons.

By using custom properties and the prefers-color-scheme media query, I was 90% of the way there. But the devil is in the details.

I have SVGs of sparklines on my homepage. The SVG has a hard-coded colour value in the stroke attribute of the path element that draws the sparkline. Fortunately, this can be over-ridden in the style sheet:

svg.activity-sparkline path {
  stroke: var(--text-color);
}

The real challenge came with the images I use in the headers of my pages. They’re JPEGs with white corners on one side and white gradients on the other.

header images

I could make them PNGs to get transparency, but the file size would shoot up—they’re photographic images (with a little bit of scan-line treatment) so JPEGs (or WEBPs) are the better format. Then I realised I could use CSS to recreate the two effects:

  1. For the cut-out triangle in the top corner, there’s clip-path.
  2. For the gradient, there’s …gradients!

background-image: linear-gradient(
  to right,
  transparent 50%,
  var(--background-color) 100%
);

Oh, and I noticed that when I applied the clip-path for the corners, it had no effect in Safari. It turns out that after half a decade of support, it still only exists with a -webkit prefix. That’s just ridiculous. At this point we should be burning vendor prefixes with fire. I can’t believe that Apple still ships standardised CSS properties that only work with a prefix.
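Putting those pieces together, the clip-path rule ends up looking something like this. The selector and the polygon coordinates here are illustrative placeholders rather than my actual values, but the shape of the rule is the same:

.header-image {
  /* Cut a triangle out of the top corner of the image. */
  /* Safari still needs the prefixed version. */
  -webkit-clip-path: polygon(0 0, 90% 0, 100% 10%, 100% 100%, 0 100%);
  clip-path: polygon(0 0, 90% 0, 100% 10%, 100% 100%, 0 100%);
}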

In order to apply the CSS clip-path and gradient, I needed to save out the images again, this time without the effects baked in. I found the original Photoshop file I used to export the images. But I don’t have a copy of Photoshop any more. I haven’t had a copy of Photoshop since Adobe switched to their Mafia model of pricing. A quick bit of searching turned up Photopea, which is pretty much an entire recreation of Photoshop in the browser. I was able to open my old PSD file and re-export my images.

LEGO clone trooper, Brighton bandstand, Scaffolding, Tokyo, Florence

Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop. I could probably do those raster scan lines with CSS if I were smart enough.

dark mode

This is what I demo’d at the end of Indie Web Camp Amsterdam, and I was pleased with the results. But fate had an extra bit of good timing in store for me.

The very next day at the View Source conference, Melanie Richards gave a fantastic talk called The Tailored Web: Effectively Honoring Visual Preferences (seriously, conference organisers, you want this talk on your line-up). It was packed with great insights and advice on implementing dark mode, like this little gem for adjusting images:

@media (prefers-color-scheme: dark) {
  img {
    filter: brightness(.8) contrast(1.2);
  }
}

Melanie also pointed out that you can indicate the presence of dark mode styles to browsers, although the mechanism is yet to shake out. You can do it in CSS:

:root {
  color-scheme: light dark;
}

But you can also do it in HTML:

<meta name="supported-color-schemes" content="light dark">

That allows browsers to swap out replaced content: interface elements like form fields and dropdowns.

Oh, and one other thing I added after the fact was swapping out map imagery by using the picture element to point to darker map tiles:

<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.mapbox.com/styles/v1/mapbox/dark-v10/static...">
<img src="https://api.mapbox.com/styles/v1/mapbox/outdoors-v10/static..." alt="map">
</picture>

light map, dark map

So now I’ve got a dark mode for my website. Admittedly, it’s for just one of the eight style sheets. I’ve decided that, while I’ll update my default styles at every opportunity, I’m going to preserve the other skins untouched, like the historical museum pieces they are.

If you’re on the latest version of iOS, go ahead and toggle the light and dark options in your system preferences to flip between this site’s colour schemes.

Travel talk

It’s been a busy two weeks of travelling and speaking. Last week I spoke at Finch Conf in Edinburgh, Code Motion in Madrid, and Generate CSS in London. This week I was at Indie Web Camp, View Source, and Fronteers, all in Amsterdam.

The Edinburgh-Madrid-London whirlwind wasn’t ideal. I gave the opening talk at Finch Conf, then immediately jumped in a taxi to get to the airport to fly to Madrid, so I missed all the excellent talks. I had FOMO for a conference I actually spoke at.

I did get to spend some time at Code Motion in Madrid, but that was a waste of time. It was one of those multi-track events where the trade show floor is prioritised over the talks (and the speakers don’t get paid). I gave my talk to a mostly empty room—the classic multi-track experience. On the plus side, I had a wonderful time with Jessica exploring Madrid’s many tapas delights. The food and drink made up for the sub-par conference.

I flew back from Madrid to the UK, and immediately went straight to London to deliver the closing talk of Generate CSS. So once again, I didn’t get to see any of the other talks. That’s a real shame—it sounds like they were all excellent.

The day after Generate though, I took the Eurostar to Amsterdam. That’s where I’ve been ever since. There were just as many events as in the previous week, but because they were all in Amsterdam, I could savour them properly, instead of spending half my time travelling.

Indie Web Camp Amsterdam was excellent, although I missed out on the afternoon discussions on the first day because I popped over to the Mozilla Tech Speakers event happening at the same time. I was there to offer feedback on lightning talks. I really, really enjoyed it.

I’d really like to do more of this kind of thing. There aren’t many activities I feel qualified to give advice on, but public speaking is an exception. I’ve got plenty of experience that I’m eager to share with up-and-coming speakers. Also, I got to see some really great lightning talks!

Then it was time for View Source. There was a mix of talks, panels, and breakout conversation corners. I saw some fantastic talks by people I hadn’t seen speak before: Melanie Richards, Ali Spittal, Sharell Bryant, and Tejas Kumar. I gave the closing keynote, which was warmly received—that’s always very gratifying.

After one day of rest, it was time for Fronteers. This was where myself and Remy gave the joint talk we’ve been working on.

Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.

I’m happy to say that it went off without a hitch. Remy definitely had the tougher task—he did a live demo. Needless to say, he did it flawlessly. It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.

I’ve got some more speaking engagements ahead of me. Most of them are in Europe so I’m going to do my utmost to travel to them by train. Flying is usually more convenient but it’s terrible for my carbon footprint. I’m feeling pretty guilty about that Madrid trip; I need to make amends.

I’ll be travelling to France next week for Paris Web. Taking the Eurostar is a no-brainer for that one. Straight after that Jessica and I will be going to Frankfurt for the book fair. Taking the train from Paris to Frankfurt will be nice and straightforward.

I’ll be back in Brighton for Indie Web Camp on the weekend of October 19th and 20th—you should come!—and then I’ll be heading off to Antwerp for Full Stack Fest. Anywhere in Belgium is easily reachable by train so that’ll be another Eurostar journey.

After that, it gets a little trickier. I’ll be going to Berlin for Beyond Tellerrand but I’m not sure I can make it work by train. Same goes for Web Clerks in Vienna. Cities that far east are tough to get to by train in a reasonable amount of time (although I realise that, compared to many others, I have the luxury of spending time travelling by train).

Then there are the places that I can only get to by plane. There’s the United States. I’ll be speaking at An Event Apart in San Francisco in December. A flight is unavoidable. Last time we went to the States, Jessica and I travelled by ocean liner. But that isn’t any better for the environment, given the low-grade fuel burned by ships.

And then there’s Ireland. I make trips back there to see my mother, but there’s no alternative to flying or taking a ferry—neither are ideal for the environment. At least I can offset the carbon from my flights; the travel equivalent to putting coins in the swear jar.

Don’t get me wrong—I’m not moaning about the amount of travel involved in going to conferences and workshops. It’s fantastic that I get to go to new and interesting places. That’s something I hope I never take for granted. But I can’t ignore the environmental damage I’m doing. I’ll be making more of an effort to travel by train to Europe’s many excellent web events. While I’m at it, I can ask Paul for his trainspotter expertise.

Going offline with microformats

For the offline page on my website, I’ve been using a mixture of the Cache API and the localStorage API. My service worker script uses the Cache API to store copies of pages for offline retrieval. But I used the localStorage API to store metadata about the page—title, description, and so on. Then, my offline page would rifle through the pages stored in a cache, and retrieve the corresponding metadata from localStorage.

It all worked fine, but as soon as I read Remy’s post about the forehead-slappingly brilliant technique he’s using, I knew I’d be switching my code over. Instead of using localStorage—or any other browser API—to store and retrieve metadata, he uses the pages themselves! Using the Cache API, you can examine the contents of the pages you’ve stored, and get at whatever information you need:

I realised I didn’t need to store anything. HTML is the API.

Refactoring the code for my offline page felt good for a couple of reasons. First of all, I was able to remove a dependency—localStorage—and simplify the JavaScript. That always feels good. But the other reason for the warm fuzzies is that I was able to use data instead of metadata.

Many years ago, Cory Doctorow wrote a piece called Metacrap. In it, he enumerates the many issues with metadata—data about data. The source of many problems is when the metadata is stored separately from the data it describes. The data may get updated, without a corresponding update happening to the metadata. Metadata tends to rot because it’s invisible—out of sight and out of mind.

In fact, that’s always been at the heart of one of the core principles behind microformats. Instead of duplicating information—once as data and again as metadata—repurpose the visible data; mark it up so its meta-information is directly attached to the information itself.

So if you have a person’s contact details on a web page, rather than repeating that information somewhere else—in the head of the document, say—you could instead attach some kind of marker to indicate which bits of the visible information are contact details. In the case of microformats, that’s done with class attributes. You can mark up a page that already has your contact information with classes from the h-card microformat.

Here on my website, I’ve marked up my blog posts, articles, and links using the h-entry microformat. These classes explicitly mark up the content to say “this is the title”, “this is the content”, and so on. This makes it easier for other people to repurpose my content. If, for example, I reply to a post on someone else’s website, and ping them with a webmention, they can retrieve my post and know which bit is the title, which bit is the content, and so on.
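To give a concrete picture, a stripped-down post marked up with h-entry looks something like this (a simplified sketch for illustration; my actual markup has more classes and more going on):

<article class="h-entry">
  <h1 class="p-name">Title of the post</h1>
  <time class="dt-published" datetime="2019-10-06T12:00:00+01:00">October 6th, 2019</time>
  <div class="e-content">
    <p>The content of the post goes here.</p>
  </div>
</article>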

When I read Remy’s post about using the Cache API to retrieve information directly from cached pages, I knew I wouldn’t have to do much work. Because all of my posts are already marked up with h-entry classes, I could use those hooks to create a nice offline page.

The markup for my offline page looks like this:

<h1>Offline</h1>
<p>Sorry. It looks like the network connection isn’t working right now.</p>
<div id="history">
</div>

I’ll populate that “history” div with information from a cache called “pages” that I’ve created using the Cache API in my service worker.
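My actual service worker script does a fair bit more than this, but purely for context, here’s a minimal sketch of how HTML responses might end up in a cache called “pages” in the first place. The details of your own fetch handler will differ:

// A stripped-down sketch of the service worker side of things,
// not my full fetch handler.
addEventListener('fetch', event => {
  const request = event.request;
  const accept = request.headers.get('Accept') || '';
  if (request.method === 'GET' && accept.includes('text/html')) {
    event.respondWith(
      fetch(request)
      .then(response => {
        // Stash a copy of the page for later offline use.
        const copy = response.clone();
        caches.open('pages')
        .then(cache => cache.put(request, copy));
        return response;
      })
      .catch(() => caches.match(request))
    );
  }
});

Anyway, back to the code for the offline page itself.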

I’m going to use async/await to do this because there are lots of steps that rely on the completion of the step before. “Open this cache, then get the keys of that cache, then loop through the pages, then…” All of those thens would lead to some serious indentation without async/await.

Async functions don’t have to have a name, but giving this one a name makes it easier to refer to. I’m calling it listPages, just like Remy is doing. I’m making the listPages function execute immediately:

(async function listPages() {
...
})();

Now for the code to go inside that immediately-invoked function.

I create an array called browsingHistory that I’ll populate with the data I’ll use for that “history” div.

const browsingHistory = [];

I’m going to be parsing web pages later on, so I’m going to need a DOM parser. I give it the imaginative name of …parser.

const parser = new DOMParser();

Time to open up my “pages” cache. This is the first await statement. When the cache is opened, this promise will resolve and I’ll have access to this cache using the variable …cache (again with the imaginative naming).

const cache = await caches.open('pages');

Now I get the keys of the cache—that’s a list of all the page requests in there. This is the second await. Once the keys have been retrieved, I’ll have a variable that’s got a list of all those pages. You’ll never guess what I’m calling the variable that stores the keys of the cache. That’s right …keys!

const keys = await cache.keys();

Time to get looping. I’m getting each request in the list of keys using a for/of loop:

for (const request of keys) {
...
}

Inside the loop, I pull the page out of the cache using the match() method of the Cache API. I’ll store what I get back in a variable called response. As with everything involving the Cache API, this is asynchronous so I need to use the await keyword here.

const response = await cache.match(request);

I’m not interested in the headers of the response. I’m specifically looking for the HTML itself. I can get at that using the text() method. Again, it’s asynchronous and I want this promise to resolve before doing anything else, so I use the await keyword. When the promise resolves, I’ll have a variable called html that contains the body of the response.

const html = await response.text();

Now I can use that DOM parser I created earlier. I’ve got a string of text in the html variable. I can generate a Document Object Model from that string using the parseFromString() method. This isn’t asynchronous so there’s no need for the await keyword.

const dom = parser.parseFromString(html, 'text/html');

Now I’ve got a DOM, which I have creatively stored in a variable called …dom.

I can poke at it using DOM methods like querySelector. I can test to see if this particular page has an h-entry on it by looking for an element with a class attribute containing the value “h-entry”:

if (dom.querySelector('.h-entry h1.p-name')) {
...
}

In this particular case, I’m also checking to see if the h1 element of the page is the title of the h-entry. That’s so that index pages (like my home page) won’t get past this if statement.

Inside the if statement, I’m going to store the data I retrieve from the DOM. I’ll save the data into an object called …data!

const data = new Object;

Well, the first piece of data isn’t actually in the markup: it’s the URL of the page. I can get that from the request variable in my for loop.

data.url = request.url;

I’m going to store the timestamp for this h-entry. I can get that from the datetime attribute of the time element marked up with a class of dt-published.

data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));

While I’m at it, I’m going to grab the human-readable date from the innerText property of that same time.dt-published element.

data.published = dom.querySelector('.h-entry .dt-published').innerText;

The title of the h-entry is in the innerText of the element with a class of p-name.

data.title = dom.querySelector('.h-entry .p-name').innerText;

At this point, I am actually going to use some metacrap instead of the visible h-entry content. I don’t output a description of the post anywhere in the body of the page, but I do put it in the head in a meta element. I’ll grab that now.

data.description = dom.querySelector('meta[name="description"]').getAttribute('content');

Alright. I’ve got a URL, a timestamp, a publication date, a title, and a description, all retrieved from the HTML. I’ll stick all of that data into my browsingHistory array.

browsingHistory.push(data);

My if statement and my for/of loop are finished at this point. Here’s how the whole loop looks:

for (const request of keys) {
  const response = await cache.match(request);
  const html = await response.text();
  const dom = parser.parseFromString(html, 'text/html');
  if (dom.querySelector('.h-entry h1.p-name')) {
    const data = new Object;
    data.url = request.url;
    data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
    data.published = dom.querySelector('.h-entry .dt-published').innerText;
    data.title = dom.querySelector('.h-entry .p-name').innerText;
    data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
    browsingHistory.push(data);
  }
}

That’s the data collection part of the code. Now I’m going to take all that yummy information and output it onto the page.

First of all, I want to make sure that the browsingHistory array isn’t empty. There’s no point going any further if it is.

if (browsingHistory.length) {
...
}

Within this if statement, I can do what I want with the data I’ve put into the browsingHistory array.

I’m going to arrange the data by date published. I’m not sure if this is the right thing to do. Maybe it makes more sense to show the pages in the order in which you last visited them. I may end up removing this at some point, but for now, here’s how I sort the browsingHistory array according to the timestamp property of each item within it:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I’m going to concatenate some strings. This is the string of HTML text that will eventually be put into the “history” div. I’m storing the markup in a string called …markup (my imagination knows no bounds).

let markup = '<p>But you still have something to read:</p>';

I’m going to add a chunk of markup for each item of data.

browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

With my markup assembled, I can now insert it into the “history” part of my offline page. I’m using the handy insertAdjacentHTML() method to do this.

document.getElementById('history').insertAdjacentHTML('beforeend', markup);

Here’s what my finished JavaScript looks like:

<script>
(async function listPages() {
  const browsingHistory = [];
  const parser = new DOMParser();
  const cache = await caches.open('pages');
  const keys = await cache.keys();
  for (const request of keys) {
    const response = await cache.match(request);
    const html = await response.text();
    const dom = parser.parseFromString(html, 'text/html');
    if (dom.querySelector('.h-entry h1.p-name')) {
      const data = new Object;
      data.url = request.url;
      data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
      data.published = dom.querySelector('.h-entry .dt-published').innerText;
      data.title = dom.querySelector('.h-entry .p-name').innerText;
      data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
      browsingHistory.push(data);
    }
  }
  if (browsingHistory.length) {
    browsingHistory.sort( (a,b) => {
      return b.timestamp - a.timestamp;
    });
    let markup = '<p>But you still have something to read:</p>';
    browsingHistory.forEach( data => {
      markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
    });
    document.getElementById('history').insertAdjacentHTML('beforeend', markup);
  }
})();
</script>

I’m pretty happy with that. It’s not too long but it’s still quite readable (I hope). It shows that the Cache API and the h-entry microformat are a match made in heaven.

If you’ve got an offline strategy for your website, and you’re using h-entry to mark up your content, feel free to use that code.

If you don’t have an offline strategy for your website, there’s a book for that.

Geneva Copenhagen Amsterdam

Back in the late 2000s, I used to go to Copenhagen every year for an event called Reboot. It was a fun, eclectic mix of talks and discussions, but alas, the last one was over a decade ago.

It was organised by Thomas Madsen-Mygdal. I hadn’t seen Thomas in years, but then, earlier this year, our paths crossed when I was back at CERN for the 30th anniversary of the web. He got a real kick out of the browser recreation project I was part of.

A few months ago, I got an email from Thomas about the new event he’s running in Copenhagen called Techfestival. He was wondering if there was some way of making the WorldWideWeb project part of the event. We ended up settling on having a stand—a modern computer running a modern web browser running a recreation of the first ever web browser from almost three decades ago.

So I showed up at Techfestival and found that the computer had been set up in a Shoreditchian shipping container. I wasn’t exactly sure what I was supposed to do, so I just hung around nearby until someone wandering by would pause and start tentatively approaching the stand.

If you’re at Techfestival.co in Copenhagen, drop in to this shipping container where I’ll be demoing WorldWideWeb.cern.ch

“Would you like to try the time machine?” I asked. Nobody refused the offer. I explained that they were looking at a recreation of the world’s first web browser, and then showed them how they could enter a URL to see how the oldest web browser would render a modern website.

Lots of people entered facebook.com or google.com, but some people had their own websites, either personal or for their business. They enjoyed seeing how well (or not) their pages held up. They’d take photos of the screen.

People asked lots of questions, which I really enjoyed answering. After a while, I was able to spot the themes that came up frequently. Some people were confusing the origin story of the internet with the origin story of the web, so I was more than happy to go into detail on either or both.

The experience helped me clarify in my own mind what was exciting and interesting about the birth of the web—how much has changed, and how much has stayed the same.

All of this is very useful fodder for a conference talk I’m putting together. This will be a joint talk with Remy at the Fronteers conference in Amsterdam in a couple of weeks. We’re calling the talk How We Built the World Wide Web in Five Days:

The World Wide Web turned 30 years old this year. To mark the occasion, a motley group of web nerds gathered at CERN, the birthplace of the web, to build a time machine. The first ever web browser was, confusingly, called WorldWideWeb. What if we could recreate the experience of using it …but within a modern browser! Join (Je)Remy on a journey through time and space and code as they excavate the foundations of Tim Berners-Lee’s gloriously ambitious and hacky hypertext system that went on to conquer the world.

Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.

We’ve been honing the material and doing some run-throughs at the Clearleft HQ at 68 Middle Street this week. The talk has a somewhat unusual structure with two converging timelines. I think it’s going to work really well, but I won’t know until we actually deliver the talk in Amsterdam. I’m excited—and a bit nervous—about it.

Whether it’s in a shipping container in Copenhagen or on a stage in Amsterdam, I’m starting to realise just how much I enjoy talking about web history.

The Weight of the WWWorld is Up to Us by Patty Toland

It’s Patty Toland’s first time at An Event Apart! She’s from the fantabulous Filament Group. They’re dedicated to making the web work for everyone.

A few years ago, a good friend of Patty’s had a medical diagnosis that required everyone to pull together. Another friend shared an article about how not to say the wrong thing. This is ring theory. In a moment of crisis, the person involved is in the centre. You need to understand where you are in this ring structure, and only ever help and comfort inwards and dump concerns and problems outwards.

At the same time, Patty spent time with her family at the beach. Everyone reads the same books together. There was a book about a platoon leader in Vietnam. 80% of the story was literally a litany of stuff—what everyone was carrying. This was peppered with the psychic and emotional loads that they were carrying.

A month later there was a lot of coverage of Syrian refugees arriving in Europe. People were outraged to see refugees carrying smartphones as though that somehow showed they weren’t in a desperate situation. But smartphones are absolutely a necessity in that situation, and most of the phones were less expensive, lower-end devices. Refugeeinfo.eu was a useful site for people in crisis, but the navigation was designed to require JavaScript.

When people think about mobile, they think about freedom and mobility. But with that JavaScript decision, the developers piled baggage on to the users.

There was a common assertion that slow networks were a third-world challenge. Remember Facebook’s network challenges? They always talked about new markets in India and Africa. The implication is that this isn’t our problem in, say, Omaha or New York.

Pew Research provided a lot of data back then that showed that this thinking was wrong. Use of cell phones, especially smartphones and tablets, escalated dramatically in the United States. There was a trend towards mobile-only usage. This was in low-income households—about one third of the population. Among 5,400 panelists, 15% did not have a JavaScript-enabled device.

Pew Research provided updated data this year. The research shows an increase in those trends. Half of the population access the web primarily on mobile. The cost of a broadband subscription is too expensive for many people. Sometimes broadband access simply isn’t available.

There’s a term called “the homework gap.” Two thirds of teachers assign broadband-dependent homework, while one third of students have no access to broadband.

At most 37% of people have unlimited data. Most people run out of data on a frequent basis.

Speed also varies wildly. 4G doesn’t really mean anything. The data is all over the place.

This shows that network issues are definitely not just a third world challenge.

On the 25th anniversary of the web, Tim Berners-Lee said the web’s potential was only just beginning to be glimpsed. Everyone has a role to play to ensure that the web serves all of humanity. In his contract for the web, Tim outlined what governments, companies, and users need to do. This reminded Patty of ring theory. The user is at the centre. Designers and developers are in the next circle out. Then there’s the circle of companies. Then there are platforms, browsers, and frameworks. Finally there’s the outer circle of governments.

Are we helping in or dumping in? If you look at the data for the average web page size (2 megabytes), we are definitely dumping in. The size of third-party JavaScript has octupled.

There’s no way for a user to know before clicking a link how big and bloated the page is going to be. Even if they abandon the page load, they’ve still used (and wasted) a lot of data.

Third party scripts—like ads—are really bad at dumping in (to use the ring theory model). The best practices for ads suggest that up to 100 additional HTTP requests is totally acceptable. Unbelievable! It doesn’t matter how performant you’ve made a site when this crap gets piled on top of it.

In 2018, the internet’s data centres alone may already have had the same carbon footprint as all global air travel. This will probably triple in the next seven years. The amount of carbon it takes to train a single AI algorithm is more than the entire life cycle of a car. Then there’s fucking Bitcoin. A single Bitcoin transaction could power 21 US households. It is designed to use—specifically, waste—more and more energy over time.

What should we be doing?

Accessibility should be at the heart of what we build. Plan, test, educate, and advocate. If advocacy doesn’t work, fear can be a motivator. There’s an increase in accessibility lawsuits.

Our websites should be as light as possible. Ask, measure, monitor, and optimise. RequestMap is a great tool for visualising requests. You can see the size and scale of third-party requests. You can also see when images are far, far bigger than they need to be.

Take a critical eye to everything and pare everything down. Set performance budgets—file size budgets, for example. Optimise images, subset custom fonts, lazyload images and videos, get third-party tools out of the critical path (or out completely), and seek out lighter frameworks.

Test on real devices that real people are using. See Alex Russell’s data on the differences between the kind of devices we use and typical low-end devices. We literally need to stop drowning people in JavaScript.

Push the boundaries. See the amazing work that Adrian Holovaty did with Soundslice. He had to make on-the-fly sheet music generation work on old iPads that musicians like to use. He recommends keeping old devices around to see how poorly your product works on them.

If you have some power, then your job is to empower somebody else.

—Toni Morrison

Register for Indie Web Camp Brighton 2019

Back at the end of May, I wrote:

We’re going to have an Indie Web Camp in Brighton on October 19th and 20th. I realise that’s quite a way off, but I’m giving you plenty of advance warning so you can block out that weekend (and plan travel if you’re coming from outside Brighton).

I hope you’ve got those dates marked in your calendar. Now it’s time for the next step: register for the event. Registration is free, but we need to know numbers in advance, so if you’re planning to come, please grab yourself a ticket there.

It’s going to be a lot of fun!

If you’ve never been to an Indie Web Camp before, you should definitely come! It’s indescribably fun and inspiring. The first day—Saturday—is a BarCamp-style day of discussions to really get the ideas flowing. Then the second day—Sunday—is all about designing, building, and making. The whole thing wraps up with demos.

Check out the previous Brighton Indie Web Camps from 2014, 2015, and 2016.

See you at 68 Middle Street on Saturday, October 19th for Indie Web Camp Brighton 2019!

Discrete replies

Earlier this year, at Indie Web Camp Düsseldorf, I got replies working on my own site. That is to say, I can host a reply on my site to something on another site.

The classic example is Twitter. In fact, if you look at all my replies, most of them are responding to tweets (I also syndicate these replies to Twitter so they show up there just like regular tweet replies).

I’m really, really glad I got replies working. I’ve been using this functionality quite a bit, and it feels really good to own my content this way.

At the time, I wrote:

So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.

I decided not to include them on my home page feed after all. You’ll still see them if you go to the notes section of my site, but I decided that they were overwhelming my home page a bit. They also don’t show up in my RSS feed.

I’m really happy that I’m hosting my replies, and that I’ve got URLs for all of them, but I don’t think I want to give them the same priority as blog posts, links, and regular notes.

Trad time

Fifteen years ago, I went to the Willie Clancy Summer School in Miltown Malbay:

I’m back from the west of Ireland. I was sorry to leave. I had a wonderful, music-filled time.

I’m not sure why it took me a decade and a half to go back, but that’s what I did last week. Myself and Jessica once again immersed ourselves in Irish traditional music. I’ve written up a trip report over on The Session.

On the face of it, fifteen years is a long time. Last time I made the trip to county Clare, I was taking pictures on a point-and-shoot camera. I had a phone with me, but it had a T9 keyboard that I could use for texting and not much else. Also, my hair wasn’t grey.

But in some ways, fifteen years feels like the blink of an eye.

I spent my mornings at the Willie Clancy Summer School immersed in the history of Irish traditional music, with Paddy Glackin as a guide. We were discussing tradition and change in generational timescales. There was plenty of talk about technology, but we were as likely to discuss the influence of the phonograph as the influence of the internet.

Outside of the classes, there was a real feeling of lengthy timescales too. On any given day, I would find myself listening to pre-teen musicians at one point, and septuagenarian masters at another.

Now that I’m back in the Clearleft studio, I’m finding it weird to adjust back in to the shorter timescales of working on the web. Progress is measured in weeks and months. Technologies are deemed outdated after just a year or two.

The one bridging point I have between these two worlds is The Session. It’s been going in one form or another for over twenty years. And while it’s very much on and of the web, it also taps into a longer tradition. Over time it has become an enormous repository of tunes, for which I feel a great sense of responsibility …but in a good way. It’s not something I take lightly. It’s also something that gives me great satisfaction, in a way that’s hard to achieve in the rapidly moving world of the web. It’s somewhat comparable to the feelings I have for my own website, where I’ve been writing for eighteen years. But whereas adactio.com is very much focused on me, thesession.org is much more of a community endeavour.

I question sometimes whether The Session is helping or hindering the Irish music tradition. “It all helps”, Paddy Glackin told me. And I have to admit, it was very gratifying to meet other musicians during Willie Clancy week who told me how much the site benefits them.

I think I benefit from The Session more than anyone though. It keeps me grounded. It gives me a perspective that I don’t think I’d otherwise get. And in a time when it feels entirely right to question whether the internet is even providing a net gain to our world, I take comfort in being part of a project that I think uses the very best attributes of the World Wide Web.

Toast

Shockwaves rippled across the web standards community recently when it appeared that Google Chrome was unilaterally implementing a new element called toast. It turns out that’s not the case, but the confusion is understandable.

First off, this all kicked off with the announcement of “intent to implement”. That makes it sound like Google are intending to, well, …implement this. In fact “intent to implement” really means “intend to mess around with this behind a flag”. The language is definitely confusing and this is something that will hopefully be addressed.

Secondly, Chrome isn’t going to ship a toast element. Instead, this is a proposal for a custom element currently called std-toast. I’m assuming that, should the experiment prove successful, it’s not a foregone conclusion that the final element will be called toast (minus the sexually-transmitted-disease prefix). If this turns out to be a useful feature, there will surely be a discussion between implementers about the naming of the finished element.

This is the ideal candidate for a web component. It makes total sense to create a custom element along the lines of std-toast. At first I was confused about why this was happening inside of a browser instead of first being created as a standalone web component, but it turns out that there’s been a fair bit of research looking at existing implementations in libraries and web components. So this actually looks like a good example of paving an existing cowpath.

But it didn’t come across that way. The timing of announcements felt like this was something that was happening without prior discussion. Terence Eden writes:

It feels like a Google-designed, Google-approved, Google-benefiting idea which has been dumped onto the Web without any consideration for others.

I know that isn’t the case. And I know how many dedicated people have worked hard on this proposal.

Adrian Roselli also remarks on the optics of this situation:

To be clear, while I think there is value in minting a native HTML element to fill a defined gap, I am wary of the approach Google has taken. A repo from a new-to-the-industry Googler getting a lot of promotion from Googlers, with Googlers on social media doing damage control for the blowback, WHATWG Googlers handling questions on the repo, and Google AMP strongly supporting it (to reduce its own footprint), all add up to raise alarm bells with those who advocated for a community-driven, needs-based, accessible web.

Dave Cramer made a similar point:

But my concern wasn’t so much about the nature of the new elements, but of how we learned about them and what that says about how web standardization works.

So there’s a general feeling (outside of Google) that there’s something screwy here about the order of events. A lot of discussion and research seems to have happened in isolation before announcing the intent to implement:

It does not appear that any discussions happened with other browser vendors or standards bodies before the intent to implement.

Why is this a problem? Google is seeking feedback on a solution, not on how to solve the problem.

Going back to my early confusion about putting a web component directly into a browser, this question on Discourse echoes my initial reaction:

Why not release std-toast (and other elements in development) as libraries first?

It turns out that std-toast and other in-browser web components are part of an idea called layered APIs. In theory this is an initiative in the spirit of the extensible web manifesto.

The extensible web movement focused on exposing low-level APIs to developers: the fetch API, the cache API, custom elements, Houdini, and all of those other building blocks. Layered APIs, on the other hand, focuses on high-level features …like, say, an HTML element for displaying “toast” notifications.

Layered APIs is an interesting idea, but I’m worried that it could be used to circumvent discussion between implementers. It’s a route to unilaterally creating new browser features first and standardising after the fact. I know that’s how many features already end up in browsers, but I think that the sooner that authors, implementers, and standards bodies get a say, the better.

I certainly don’t think this is a good look for Google given the debacle of AMP’s “my way or the highway” rollout. I know that’s a completely different team, but the external perception of Google amongst developers has been damaged by the AMP project’s anti-competitive abuse of Google’s power in search.

Right now, a lot of people are jumpy about Microsoft’s move to Chromium for Edge. My friends at Microsoft have been reassuring me that while it’s always a shame to reduce browser engine diversity, this could actually be a good thing for the standards process: Microsoft could theoretically keep Google in check when it comes to what features are introduced to the Chromium engine.

But that only works if there is some kind of standards process. Layered APIs in general—and std-toast in particular—hint at a future where a single browser vendor can plough ahead on their own. I sincerely hope that’s a misreading of the situation and that this has all been an exercise in miscommunication and misunderstanding.

Like Dave Cramer says:

I hear a lot about how anyone can contribute to the web platform. We’ve all heard the preaching about incubation, the Extensible Web, working in public, paving the cowpaths, and so on. But to an outside observer this feels like Google making all the decisions, in private, and then asking for public comment after the feature has been designed.

The World-Wide Work

I’ve been to a lot of events and I’ve seen a lot of talks. I find that, even after all this time, I always get something out of every presentation I see. Kudos to anyone who’s got the guts to get up on stage and share their thoughts.

But there are some talks that are genuinely special. When they come along, it’s a real privilege to be in the room. Wilson’s talk, When We Build, was one of those moments. There are some others that weren’t recorded, but will always stay with me.

Earlier this year, I had the great honour of opening the New Adventures conference in Nottingham. I definitely felt a lot of pressure, and I did my utmost to set the scene for the day. The final talk of the day was delivered by my good friend Ethan. He took it to another level.

Like I said at the time:

Look, I could gush over how good Ethan’s talk was, or try to summarise it, but there’s really no point. I’ll just say that I felt the same sense of being present at something genuinely important that I felt when I was in the room for his original responsive web design talk at An Event Apart back in 2010. When the video is released, you really must watch it.

Well, the video has been released and you really must watch it. Don’t multitask. Don’t fast forward. Set aside some time and space, and then take it all in.

The subject matter, the narrative structure, the delivery, and the message come together in a unique way.

If, having watched the presentation, you want to dive deeper into any of Ethan’s references, check out the reading list that accompanies the talk.

I mentioned that I felt under pressure to deliver a good opener for New Adventures. I know that Ethan was really feeling the pressure too. He needn’t have worried. He delivered one of the best conference talks I’ve ever seen.

Thank you, Ethan.

Indie web events in Brighton

Homebrew Website Club is a regular gathering of people getting together to tinker on their own websites. It’s a play on the original Homebrew Computer Club from the ’70s. It shares a similar spirit of sharing and collaboration.

Homebrew Website Clubs happen at various locations: London, San Francisco, Portland, Nuremberg, and more. They’re usually held every second Wednesday.

I started running Homebrew Website Club Brighton a while back. I tried the “every second Wednesday” thing, but it was tricky to make that work. People found it hard to keep track of which Wednesdays were Homebrew days and which weren’t. And if you missed one, then it would potentially be weeks between attending.

So I’ve made it a weekly gathering. On Thursdays. That’s mostly because Thursdays work for me: that’s one of the evenings when Jessica has her ballet class, so it’s the perfect time for me to spend a while in the company of fellow website owners.

If you’re in Brighton and you have your own website (or you want to have your own website), you should come along. It’s every Thursday from 6pm to 7:30pm ‘round at the Clearleft studio on 68 Middle Street. Add it to your calendar.

There might be a Thursday when I’m not around, but it’s highly likely that Homebrew Website Club Brighton will happen anyway because either Trys, Benjamin or Cassie will be here.

(I’m at Homebrew Website Club Brighton right now, writing this. Remy is here too, working on some very cool webmention stuff.)

There’s something else you should add to your calendar. We’re going to have an Indie Web Camp in Brighton on October 19th and 20th. I realise that’s quite a way off, but I’m giving you plenty of advance warning so you can block out that weekend (and plan travel if you’re coming from outside Brighton).

If you’ve never been to an Indie Web Camp before, you should definitely come! It’s indescribably fun and inspiring. The first day—Saturday—is a BarCamp-style day of discussions to really get the ideas flowing. Then the second day—Sunday—is all about designing, building, and making. The whole thing wraps up with demos.

It’s been a while since we’ve had an Indie Web Camp in Brighton. You can catch up on the Brighton Indie Web Camps we had in 2014, 2015, and 2016. Since then I’ve been to Indie Web Camps in Berlin, Nuremberg, and Düsseldorf, but it’s going to be really nice to bring it back home.

Indie Web Camp UK attendees, Indie Web Camp Brighton group photo, IndieWebCampBrighton2016

The event will be free to attend, but I’ll set up an official ticket page on Ti.to to keep track of who’s coming. I’ll let you know when that’s up and ready. In the meantime, you can register your interest in attending on the 2019 Indie Webcamp Brighton page on the Indie Web wiki.

Replies

Last week was a bit of an event whirlwind. In the space of seven days I was at Indie Web Camp, Beyond Tellerrand, and Accessibility Club in Düsseldorf, followed by a train ride to Utrecht for Frontend United. Phew!

Indie Web Camp Düsseldorf was—as always—excellent. Once again, Sipgate generously gave us the use of their lovely, lovely space for the weekend. We had one day of really thought-provoking discussions, followed by a day of heads-down hacking and making.

I decided it was time for me to finally own my replies. For a while now, I’ve been posting notes on my own site and syndicating to Twitter. But whenever I replied to someone else’s tweet, I did it from Twitter. I wanted to change that.

From a coding point of view, it wasn’t all that tricky. The real challenges were to do with the interface. I needed to add another field for the URL I’m replying to …but I didn’t want my nice and minimal posting interface to get too cluttered. I ended up putting the new form field inside a details element with a summary of “Reply to” so that the form field would be hidden by default, and toggled open by hitting that “Reply to” text:

<details>
    <summary>
        <label for="replyto">Reply to</label>
    </summary>
    <input type="url" id="replyto" name="replyto">
</details>

I sent my first test reply to a post on Aaron’s website. Aaron was sitting next to me at the time.

Once that was all working, I sent my first reply to a tweet. It was a response to a tweet from Tantek. Tantek was also sitting next to me at the time.

I spent most of the day getting that Twitter syndication to work. I had something to demo, but I foolishly decided to risk it all by attempting to create a bookmarklet so that I could post directly from a tweet page (instead of hopping back to my own site in a different tab). By cannibalising the existing bookmarklet I use for posting links, I just about managed to get it working in time for the end of day demos.
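The general shape of a bookmarklet like that is pretty simple. Something along these lines would do it, though the posting URL and the parameter name here are placeholders rather than my actual endpoint:

javascript:(function () {
  // Open the posting interface with the current page's URL
  // pre-filled as the reply-to value.
  window.open('https://example.com/notes/new?replyto=' + encodeURIComponent(location.href));
})();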

So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.

Then again, I kind of like how wonderfully random and out-of-context they look. You can browse through all my replies so far.

I’m glad I got this set up. Now when Andy posts stuff on Twitter, I’m custodian of my responses:

@AndyBudd: Who are your current “Design Heroes”?

adactio.com: I would say Falcor from Neverending Story, the big flying dog.

Cool goal

One evening last month, during An Event Apart Seattle, a bunch of the speakers were gathered in the bar in the hotel lobby, shooting the breeze and having a nightcap before the next day’s activities. In a quasi-philosophical mode, the topic of goals came up. Not the sporting variety, but life and career goals.

As everyone related (confessed?) their goals, I had to really think hard. I don’t think I have any goals. I find it hard enough to think past the next few months, much less form ideas about what I might want to be doing in a decade. But then I remembered that I did once have a goal.

Back in the ’90s, when I was living in Germany and first starting to make websites, there was a website I would check every day for inspiration: Project Cool’s Cool Site Of The Day. I resolved that my life’s goal was to one day have a website I made be the cool site of the day.

About a year later, to my great shock and surprise, I achieved my goal. An early iteration of Jessica’s site—complete with whizzy DHTML animations—was the featured site of the day on Project Cool. I was overjoyed!

I never bothered to come up with a new goal to supersede that one. Maybe I should’ve just retired there and then—I had peaked.

Megan Sapnar Ankerson wrote an article a few years back about How coolness defined the World Wide Web of the 1990s:

The early web was simply teeming with declarations of cool: Cool Sites of the Day, the Night, the Week, the Year; Cool Surf Spots; Cool Picks; Way Cool Websites; Project Cool Sightings. Coolness awards once besieged the web’s virtual landscape like an overgrown trophy collection.

It’s a terrific piece that ponders the changing nature of the web, and the changing nature of that word: cool.

Perhaps the word will continue to fall out of favour. Tim Berners-Lee may have demonstrated excellent foresight when he added this footnote to his classic document, Cool URIs don’t change—still available at its original URL, of course:

Historical note: At the end of the 20th century when this was written, “cool” was an epithet of approval particularly among young, indicating trendiness, quality, or appropriateness.

Other people’s weeknotes

Paul is writing weeknotes. Here’s his latest.

Amy is writing weeknotes. Here’s her latest.

Aegir is writing weeknotes. Here’s his latest.

Nat is writing weeknotes. Here’s their latest.

Alice is writing weeknotes. Here’s her latest.

Mark is writing weeknotes. Here’s his latest.

I enjoy them all.

What a day that was

I woke up in Geneva. The celebrations to mark the 30th anniversary of the World Wide Web were set to begin early in the morning.

It must be said, March 12th 1989 is kind of an arbitrary date. Maybe the date that the first web page went online should mark the birth of the web (though the exact date might be hard to pin down). Or maybe it should be August 6th, 1991—the date that Tim Berners-Lee announced the web to the world (well, to the alt.hypertext mailing list at least). Or you could argue that it should be April 30th, 1993, the date when the technology of the web was officially put into the public domain.

In the end, March 12, 1989 is as good a date as any to mark the birth of the web. The date that Tim Berners-Lee shared his proposal. That’s when the work began.

Exactly thirty years later, myself, Martin, and Remy are registered and ready to attend the anniversary event in the very same room where the existence of the Higgs boson was announced. There’s coffee, and there are croissants, but despite the presence of Lou Montulli, there are no cookies.

Happy birthday, World Wide Web! Love, One third of the https://worldwideweb.cern.ch team at CERN.

The doors to the auditorium open and we find some seats together. The morning’s celebrations include great panel discussions, and an interview with Tim Berners-Lee himself. In the middle of it all, they show a short film about our hack week recreating the very first web browser.

It was surreal. There we were, at CERN, in the same room as the people who made the web happen, and everyone’s watching a video of us talking about our fun project. It was very weird and very cool.

Afterwards, there was cake. And a NeXT machine—the same one we had in the room during our hack week. I feel a real attachment to that computer.

A NeXT machine from 1989 running the WorldWideWeb browser and my laptop in 2019 running https://worldwideweb.cern.ch

We chatted with lots of lovely people. I had the great pleasure of meeting Peggie Rimmer. It was her late husband, Mike Sendall, who gave Tim Berners-Lee the time (and budget) to pursue his networked hypertext project. Peggie found Mike’s copy of Tim’s proposal in a cupboard years later. This was the copy that Mike had annotated with his now-famous verdict, “vague but exciting”. Angela has those words tattooed on her arm—Peggie got a kick out of that.

Eventually, Remy and I had to say our goodbyes. We had to get to the airport to catch our flight back to London. Taxi, airport, plane, tube; we arrived at the Science Museum in time for the evening celebrations. We couldn’t have been far behind Tim Berners-Lee. He was making a 30 hour journey from Geneva to London to Lagos. We figured seeing him at two out of those three locations was plenty.

This guy again! I think I’m being followed.

By the end of the day we were knackered but happy. The day wasn’t all sunshine and roses. There was a lot of discussion about the negative sides of the web, and what could be improved. A lot of that was from Sir Tim himself. But mostly it was a time to think about just how transformative the web has been in our lives. And a time to think about the next thirty years …and the web we want.

T minus one

I’m back at CERN.

I’m back at CERN because tomorrow, March 12th, 2019, is exactly thirty years on from when Tim Berners-Lee submitted his original “vague but exciting” Information Management: A Proposal. Tomorrow morning, bright and early, there’s an event at CERN called Web@30.

Thanks to my negligible contribution to the recreation of the WorldWideWeb browser, I’ve wrangled an invitation to attend. Remy and Martin are here too, and I know that the rest of the team are with us in spirit.

I’m so excited about this! I’m such a nerd for web history, it’s going to be like Christmas for me.

If you’re up early enough, you can watch the event on a livestream. The whole thing will be over by mid-morning. Then, Remy and I will take an afternoon flight back to England …just in time for the evening event at London’s Science Museum.

Design sprint?

Our hack week at CERN to reproduce the WorldWideWeb browser was five days long. That’s also the length of a design sprint. So …was what we did a design sprint?

I’m going to say no.

On the surface, our project has all the hallmarks of a design sprint. A group of people who don’t normally work together were thrown into an intense week of problem-solving and building, culminating in a tangible, testable output. But when you look closer, the journey itself was quite different. A design sprint is typically broken into five phases, each one mapped onto a day of work:

  1. Understand and Map
  2. Demos and Sketch
  3. Decide and Storyboard
  4. Prototype
  5. Test

Gathered at CERN, hunched over laptops.

There was certainly plenty of understanding, sketching, and prototyping involved in our hack week at CERN, but we knew going in what the output would be at the end of the week. That’s not the case with most design sprints: figuring out what you’re going to make is half the work. In our case, we knew what needed to be produced; we just had to figure out how. Our process looked more like this:

  1. Understand and Map
  2. Research and Sketch
  3. Build
  4. Build
  5. Build

Now you could say that it’s a kind of design sprint, but I think there’s value in reserving the term “design sprint” for the specific five-day process. As it is, there’s enough confusion between the term “sprint” in its agile sense and “design sprint”.

Timelines of the web

Recreating the original WorldWideWeb browser was an exercise in digital archaeology. With a working NeXT machine in the room, Kimberly was able to examine the source code for the first ever browser and discover a treasure trove within. Like this gem in HTUtils.h:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

Sure enough, by June of 1992 port 80 was documented as being officially assigned to the World Wide Web (Gopher got port 70). Jean-François Groff—who worked on the World Wide Web project with Tim Berners-Lee—told us that this was a moment they were very pleased about. It felt like this project of theirs was going places.

Jean-François also told us that the WorldWideWeb browser/editor was kind of like an advanced prototype. The idea was to get something up and running as quickly as possible. Well, the NeXT operating system had a very robust Text Object, so the path of least resistance for Tim Berners-Lee was to take the existing word-processing software and build a hypertext component on top of it. Likewise, instead of creating a brand new format, he used the existing SGML format and added one new piece: linking with A tags.

So the WorldWideWeb application was kind of like a word processor and document viewer mashed up with hypertext. Ted Nelson complains to this day that the original sin of the web was that it borrowed this page-based metaphor. But Nelson’s Project Xanadu, originally proposed in 1974, wouldn’t become a working reality until 2014—a gap of forty years. Whereas Tim Berners-Lee proposed his system in March 1989 and had working code within a year. There’s something to be said for being pragmatic and working with what you’ve got.

The web was also a mashup of ideas. Hypertext existed long before the web—Ted Nelson coined the term in 1963. There were conferences and academic discussions devoted to hypertext and hypermedia. But almost all the existing hypertext systems—including Tim Berners-Lee’s own ENQUIRE system from the early 80s—were confined to a local machine. Meanwhile networked computers were changing everything. First there was the ARPANET, then the internet. Tim Berners-Lee’s ambitious plan was to mash up hypertext with networks.

Going into our recreation of WorldWideWeb at CERN, I knew I wanted to convey this historical context somehow.

The World Wide Web officially celebrates its 30th birthday in March of this year. It’s kind of an arbitrary date: it’s the anniversary of the publication of Information Management: A Proposal. Perhaps a more accurate date would be the day the first website—and first web server—went online. But still. Let’s roll with this date of March 12, 1989. I thought it would be interesting not only to look at what’s happened between 1989 and 2019, but also to look at what happened between 1959 and 1989.

So now I’ve got two time cones that converge in the middle: 1959 – 1989 and 1989 – 2019. For the first time period, I made categories of influences: formats, hypertext, networks, and computing. For the second time period, I catalogued notable results: browsers, servers, and the evolution of HTML.

I did a little bit of sketching and quickly realised that these converging timelines could be represented somewhat like particle collisions. Once I had that idea in my head, I knew how I would be spending my time during the hack week.

Rather than jumping straight into the collider visualisation, I took some time to make a solid foundation to build on. I wanted to be sure that the timeline itself would be understandable even if it were, say, viewed in the first ever web browser.

Progressive enhancement. Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

I marked up each timeline as an ordered list of h-events:

<li class="h-event y1968">
  <a href="https://en.wikipedia.org/wiki/NLS_%28computer_system%29" class="u-url">
    <time class="dt-start" datetime="1968-12-09">1968</time>
    <abbr class="p-name" title="oN-Line System">NLS</abbr>
  </a>
</li>

With the markup in place, I could concentrate on making it look halfway decent. For small screens, the layout is very basic—just a series of lists. When the screen gets wide enough, I lay those lists out horizontally, one on top of the other. In this view, you can more easily see when events coincide. For example, ENQUIRE, Usenet, and Smalltalk all happen in 1980. But the real beauty comes when the screen is wide enough to display everything at once. You can see how there was an explosion of activity in the early 90s. In 1994 alone, we get the release of Netscape Navigator, the creation of HTTPS, and the launch of Amazon.com.
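
Here’s a rough sketch of how that kind of layout switch could work—the .timeline class name and the breakpoint value are placeholders for illustration, not the actual code:

.timeline .h-event {
  /* narrow screens: each timeline is just a list, one event per line */
  display: block;
}

@media all and (min-width: 50em) {
  /* hypothetical breakpoint: enough room to lay each timeline out horizontally */
  .timeline {
    position: relative;
  }
  .timeline .h-event {
    position: absolute;
    width: 5em;
  }
}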

The whole thing is powered by CSS transforms and positioning. Each year on a timeline has its own class that gets moved to the correct chronological point using calc(). I wanted to use translateX() but I couldn’t get the maths to work for that, so I had to use plain ol’ left and right:

.y1968 {
  /* 1968 is nine years after the 1959 start point; each year is 1/30 of the width, and each event is 5em wide */
  left: calc((1968 - 1959) * (100%/30) - 5em);
}

For events before 1989, it’s the distance of the event from 1959. For events after 1989, it’s the distance of the event from 2019:

.y2014 {
  /* 2014 is five years before the 2019 end point, measured from the right-hand edge */
  right: calc((2019 - 2014) * (100%/30) - 5em);
}

(Each h-event has a width of 5em so that’s where the extra bit at the end comes from.)

I had to do some tweaking for legibility: bunches of events happening around the same time period needed to be separated out so that they didn’t overlap too much.
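
A nudge might look something like this; the year and the exact offset are made up for illustration:

.y1993 {
  /* hypothetical tweak: pull this event in a little so it doesn't sit on top of its neighbour */
  right: calc((2019 - 1993) * (100%/30) - 3.5em);
}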

As a finishing touch, I added a few little transitions when the page loaded so that the timeline fans out from its centre point.
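
Here’s a sketch of one way that kind of fan-out can be done. The .influences and .results wrappers and the loading class are placeholder names, not the actual markup; the idea is that the loading class sits on the root element and gets removed by a line of JavaScript once the page is ready:

.h-event {
  /* animate the horizontal offsets */
  transition: left 0.75s ease-out, right 0.75s ease-out;
}

/* while loading, gather every event at the 1989 centre point */
.loading .influences .h-event {
  left: calc(100% - 5em);
}
.loading .results .h-event {
  right: calc(100% - 5em);
}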

Et voilà!

Progressive enhancement. Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

I fiddled with the content a bit after peppering Robert Cailliau with questions over lunch. And I got some very valuable feedback from Jean-François. Some examples he provided:

1971: Unix man pages, one of the first instances of writing documents with a markup language that is interpreted live by a parser before being presented to the user.

1980: Usenet News, because it was THE everyday discussion medium by the time we created the web technology, and the Web first embraced news as a built-in information resource, then various platforms built on the web rendered it obsolete.

1982: Literary Machines, Ted Nelson’s book which was on our desk at all times.

I really, really enjoyed building this “collider” timeline. It was a chance for me to smash together my excitement for web history with my enjoyment of using the raw materials of the web: HTML and CSS in this case.

The timeline pales in comparison to the achievement of the rest of the team in recreating the WorldWideWeb application, but I was just glad to be able to contribute a little something to the project.

Hello WorldWideWeb.

WorldWideWeb

Nine people came together at CERN for five days and made something amazing. I still can’t quite believe it.

Coming into this, I thought it was hugely ambitious to try to not only recreate the experience of using the first ever web browser (called WorldWideWeb, later Nexus), but to also try to document the historical context of the time. Now that it’s all done, I’m somewhat astounded that we managed to achieve both.

Want to see the final result? Here you go:

worldwideweb.cern.ch

That’s the website we built. The call to action is hard to miss:

Launch WorldWideWeb

Behold! A simulation of using the first ever web browser, recreated inside your web browser.

Now you could try clicking around on the links on the opening document—remembering that you need to double-click on links to activate them—but you’ll quickly find that most of them don’t work. They’re long gone. So it’s probably going to be more fun to open a new page to use as your starting point. Here’s how you do that:

  1. Select Document from the menu options on the left.
  2. A new menu will pop open. Select Open from full document reference.
  3. Type a URL, like, say https://adactio.com
  4. Press that lovely chunky Open button.

You are now surfing the web through a decades-old interface. Double click on a link to open it. You’ll notice that it opens in a new window. You’ll also notice that there’s no way of seeing the current URL. Back then, the idea was that you would navigate primarily by clicking on links, creating your own “associative trails”, as first envisioned by Vannevar Bush.

But the WorldWideWeb application wasn’t just a browser. It was a Hypermedia Browser/Editor.

  1. From that Document menu you opened, select New file…
  2. Type the name of your file; something like test.html
  3. Start editing the heading and the text.
  4. In the main WorldWideWeb menu, select Links.
  5. Now focus the window with the document you opened earlier (adactio.com).
  6. With that window’s title bar in focus, choose Mark all from the Links menu.
  7. Go back to your test.html document, and highlight a piece of text.
  8. With that text highlighted, click on Link to marked from the Links menu.

If you want, you can even save the hypertext document you created. Under the Document menu there’s an option to Save a copy offline (this is the one place where the wording of the menu item isn’t exactly what was in the original WorldWideWeb application). Save the file so you can open it up in a text editor and see what the markup would’ve looked like.

I don’t know about you, but I find this utterly immersive and fascinating. Imagine what it must’ve been like to browse, create, and edit like this. Hypertext existed before the web, but it was confined to your local hard drive. Here, for the first time, you could create links across networks!

After five days time-travelling back thirty years, I have a new-found appreciation for what Tim Berners-Lee created. But equally, I’m in awe of what my friends created thirty years later.

Remy did all the JavaScript for the recreated browser …in just five days!

Kimberly was absolutely amazing, diving deep into the original source code of the application on the NeXT machine we borrowed. She uncovered some real gems.

Of course Mark wanted to make sure the font was as accurate as possible. He and Brian went down quite a rabbit hole, and with remote help from David Jonathan Ross, they ended up recreating entire families of fonts.

John exhaustively documented UI patterns that Angela turned into marvelous HTML and CSS.

Through it all, Craig and Martin put together the accompanying website. Personally, I think the website is freaking awesome—it’s packed with fascinating information! Check out the family tree of browsers that Craig made.

What a team!

Back at CERN

We got the band back together.

In September of 2013, I had the great pleasure and privilege of going to CERN with a bunch of very smart people. I’m not sure how I managed to slip by. We were there to recreate the experience of using the line-mode browser. As I wrote at the time:

Just to be clear, the line-mode browser wasn’t the world’s first web browser. That honour goes to Tim Berners-Lee’s WorldWideWeb programme. But whereas WorldWideWeb only ran on NeXT machines, the line-mode browser worked cross-platform and was, therefore, instrumental in demonstrating the power of the web as a universally-accessible medium.

In the run-up to the 30th anniversary of the original (vague but exciting) proposal for what would become the World Wide Web, we’ve been invited back to try to recreate the experience of using that first web browser, the one that only ever ran on NeXT machines.

I missed the first day due to travel madness—flying back from Interaction 19 in Seattle during snowmageddon to Heathrow and then to Geneva—but by the time I arrived, my hackmates had already made a great start in identifying the objectives:

  1. Give people an understanding of the user experience of the WorldWideWeb browser.
  2. Demonstrate that a read/write philosophy was there from the beginning.
  3. Give context—what was going on at the time?

That second point is crucial. WorldWideWeb wasn’t just a web browser; it was a browser/editor. That’s by far the biggest change in terms of the original vision of the web and what we ended up getting from Mosaic onwards.

Remy is working hard on the first point. He documented the first day and now on the second day, he’s made enormous progress already.

I’m focusing on point number three. I want to show the historical context for the World Wide Web. Here’s my plan…

Seeing as we’re coming up on the thirtieth anniversary, I thought it would be interesting to take the year of the proposal (1989) and look back in a time cone of thirty years previous to that at the influences on Tim Berners-Lee. I also want to look at what has happened with the web in the thirty years since the proposal. So the date of the proposal will be a centre point, with the timespan of 1959-1989 converging on it from the past, and the timespan of 1989-2019 diverging from it into the future. I hope it could make for a nice visualisation. Maybe I could try to get it to look like data from a particle collision.

We’re here till the weekend and everyone else has already made tremendous progress. Kimberly has been hacking the Gibson …well, that’s what it looked like when she was deep in the code of the NeXT machine we’ve borrowed from Musée Bolo (merci beaucoup!).

We took a little time out for a tour of the data centre. Oh, and at lunch time, we sat with Robert Cailliau and grilled him with questions about the birth of the web. Quite a day!

Now it’s time for me to hit the hay and prepare for another day of hacking in this extraordinary place.