Journal tags: amp



Portals and giant carousels

I posted something recently that I think might be categorised as a “shitpost”:

Most single page apps are just giant carousels.

Extreme, yes, but perhaps there’s a nugget of truth to it. And it seemed to resonate:

I’ve never actually seen anybody justify SPA transitions with actual business data. They generally don’t seem to increase sales, conversion, or retention.

For some reason, for SPAs, managers are all of a sudden allowed to make purely emotional arguments: “it feels snappier”

If businesses were run rationally, when somebody asks for an order of magnitude increase in project complexity, the onus would be on them to prove that it proportionally improves business results.

But I’ve never actually seen that happen in a software business.

A single page app architecture makes a lot of sense for interaction-heavy sites with lots of state to maintain. But I’ve seen plenty of sites built as single page apps even though there’s little to no interactivity or state management. For some people, it’s the default way of building anything on the web, even a brochureware site.

It seems like there’s a consensus that single page apps may have long initial loading times, but then they have quick transitions between “pages” …just like a carousel really. But I don’t know if that consensus is based on reality. Whether you’re loading a page of HTML or loading a chunk of JSON, you’re still making a network request that will take time to resolve.

The argument for loading a chunk of JSON is that you don’t have to make any requests for the associated CSS and JavaScript—they’re already loaded. Whereas if you request a page of HTML, that HTML will also request CSS and JavaScript.

Leaving aside the fact that this is literally what the browser cache takes care of, I’ve seen some circular reasoning around this:

  1. We need to create a single page app because our assets, like our JavaScript dependencies, are so large.
  2. Why are the JavaScript dependencies so large?
  3. We need all that JavaScript to create the single page app functionality.

To be fair, in the past, the experience of going from page to page used to feel a little herky-jerky, even if the response times were quick. You’d get a flash of a blank white page between navigations. But that’s no longer the case. Browsers now perform something called “paint holding” which eliminates the herky-jerkiness.

So now if your pages are a reasonable size, there’s no practical difference in user experience between full page refreshes and single page app updates. Navigate around The Session if you want to see paint holding in action. Switching to a single page app architecture wouldn’t improve the user experience one jot.


If I were controlling everything with JavaScript, then I’d also have control over how to transition between the “pages” (or carousel items, if you prefer). There’s currently no way to do that with full page changes.

This is the problem that Jake set out to address in his proposal for navigation transitions a few years back:

Having to reimplement navigation for a simple transition is a bit much, often leading developers to use large frameworks where they could otherwise be avoided. This proposal provides a low-level way to create transitions while maintaining regular browser navigation.

I love this proposal. It focuses on user needs. It also asks why people reach for JavaScript frameworks instead of using what browsers provide. People reach for JavaScript frameworks because browsers don’t yet provide some functionality: components like tabs or accordions; DOM diffing; control over styling complex form elements; navigation transitions. The JavaScript frameworks solving those problems today should be seen as the R&D departments for the web standards of tomorrow. (And conversely, I strongly believe that the aim of any good JavaScript framework should be to make itself redundant.)

I linked to Jake’s excellent proposal in my shitpost saying:

bucketloads of JavaScript wouldn’t be needed if navigation transitions were available in browsers

But then I added—and I almost didn’t—this:

(not portals)

Now you might be asking yourself what Paul said out loud:

Excuse my ignorance but… WTF are portals!?

I replied with a link to the portals proposal and what I thought was an example use case:

Portals are a proposal from Google that would help their AMP use case (it would allow a web page to be pre-rendered, kind of like an iframe).

That was based on my reading of the proposal:

…show another page as an inset, and then activate it to perform a seamless transition to a new state, where the formerly-inset page becomes the top-level document.

It sounded like Google’s top stories carousel. And the proposal goes into a lot of detail around managing cross-origin requests. Again, that strikes me as something that would be more useful for a search engine than a single page app.

But Jake was not happy with my description. I didn’t intend to besmirch portals by mentioning Google AMP in the same sentence, but I can see how the transitive property of ickiness would apply. Because Google AMP is a nasty monopolistic project that harms the web and is an embarrassment to many open web advocates within Google, drawing any kind of comparison to AMP is kind of like Godwin’s Law for web stuff. I know that makes it sound like I’m comparing Google AMP to Hitler, and just to be clear, I’m not (though I have myself been called a fascist by one of the lead engineers on AMP).

Clearly, emotions run high when Google AMP is involved. I regret summoning its demonic presence.

After chatting with Jake some more, I tried to find a better use case to describe portals. Reading the proposal, portals sound a lot like “spicy iframes”. So here’s a different use case that I ran past Jake: say you’re on a website that has an iframe embedded in it—like a YouTube video, for example. With portals, you’d have the ability to transition the iframe to a fully-fledged page smoothly.

But Jake told me that even though the proposal talks a lot about iframes and cross-origin security, portals are conceptually more like using rel="prerender" …but then having scripting control over how the pre-rendered page becomes the current page.

Put like that, portals sound more like Jake’s original navigation transitions proposal. But I have to say, I never would’ve understood that use case just from reading the portals proposal. I get that the proposal is aimed more at implementors than authors, but in its current form, it doesn’t seem to address the use case of single page apps.

Kenji said:

we haven’t seen interest from SPA folks in portals so far.

I’m not surprised! He goes on:

Maybe, they are happy / benefits aren’t clear yet.

From my own reading of the portals proposal, I think the benefits are definitely not clear. It’s almost like the opposite of Jake’s original proposal for navigation transitions. Whereas that was grounded in user needs and real-world examples, the portals proposal seems to have jumped to the intricacies of implementation without covering the user needs.

Don’t get me wrong: if portals somehow end up leading to a solution more like Jake’s navigation transitions proposals, then I’m all for that. That’s the end result I care about. I’d love it if people had a lightweight option for getting the perceived benefits of single page apps without the costly overhead in performance that comes with JavaScripting all the things.

I guess the web I want includes giant carousels.

Design maturity on the Clearleft podcast

The latest episode of the Clearleft podcast is zipping through the RSS tubes towards your podcast-playing software of choice. This is episode five, the penultimate episode of this first season.

This time the topic is design maturity. Like the episode on design ops, this feels like a hefty topic where the word “scale” will inevitably come up.

I talked to my fellow Clearlefties Maite and Andy about their work on last year’s design effectiveness report. But to get the big-scale picture, I called up Aarron over at Invision.

What a great guest! I already had plans to get Aarron on the podcast to talk about his book, Designing For Emotion—possibly a topic for next season. But for the current episode, we didn’t even mention it. It was design maturity all the way.

I had a lot of fun editing the episode together. I decided to intersperse some samples. If you’re familiar with Bladerunner and Thunderbirds, you’ll recognise the audio.

The whole thing comes out at a nice 24 minutes in length.

Have a listen and see what you make of it.

Dark mode revisited

I added a dark mode to my website a while back. It was a fun thing to do during Indie Web Camp Amsterdam last year.

I tied the colour scheme to the operating system level. If you choose a dark mode in your OS, my website will adjust automatically thanks to the prefers-color-scheme: dark media query.

But I’ve seen notes from a few friends, not about my site specifically, but about how they like having an explicit toggle for dark mode (as well as the media query). Whenever I read those remarks, I’d think “I’m really not sure I’ve got time to deal with adding that kind of toggle to my site.”

But then I realised, “Jeremy, you absolute muffin! You’ve had a theme switcher on your website for almost two decades now!”

Doh! I had forgotten about that theme switcher. It dates back to the early days of CSS. I wanted my site to be a demonstration of how you could apply different styles to the same underlying markup (this was before the CSS Zen Garden came along). Those themes are very dated now, but if you like you can view my site with a Zeldman theme or a sci-fi theme.

To offer a dark-mode theme for my site, all I had to do was take the default stylesheet, pull out the custom properties from the prefers-color-scheme: dark media query, and done. It took less than five minutes.
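The dark theme’s style sheet, in other words, is just the default style sheet with the dark values declared unconditionally instead of inside the media query. Something like this, using the custom properties from my original dark mode write-up (which you’ll find further down this page):

:root {
  --background-color: #111416;
  --text-color: #ccc;
  --link-color: #f96;
}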

So if you want to view my site in dark mode, it’s one of the options in the “Customise” dropdown on every page of the website.

Sass and clamp

CSS got some pretty nifty features recently. There’s the min() and max() functions. If you use them for, say, width you can use one rule where previously you would’ve needed to use two (a width declaration followed by either min-width or max-width). But they can also be applied to font-size! That’s very nifty—we’ve never had min-font-size or max-font-size properties.
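For example (the values here are arbitrary), these two declarations:

width: 100%;
max-width: 40em;

…can be collapsed into one:

width: min(100%, 40em);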

There’s also the clamp() function. That allows you to set a minimum size, a default size, and a maximum size. Again, it can be used for lengths, like width, or for font-size.

Over on The Session, I’ve had some media queries in place for a while now that would increase the font-size for larger screens. It’s nothing crucial, just a nice-to-have so that on wide screens, the font is bumped up accordingly. I realised I could replace all those media queries with one clamp() statement, thanks to the vw (viewport width) unit:

font-size: clamp(1rem, 1.333vw, 1.5rem);

By default, the font-size is 1.333vw (1.333% of the viewport width), but it will never get smaller than 1rem and it will never get larger than 1.5rem.

That works, but there’s a bit of an issue with using raw vw units like that. If someone is on a wide screen and they try to adjust the font size, nothing will happen. The viewport width doesn’t change when you bump the font size up or down.

The solution is to mix in some kind of unit that does respond to the font size being bumped up or down (like, say, the rem unit). Handily, clamp() allows you to combine units, just like calc(). So I can do this:

font-size: clamp(1rem, 0.5rem + 0.666vw, 1.5rem);

The result is much the same as my previous rule, but now—thanks to the presence of that 0.5rem value—the font size responds to being adjusted by the user.

You could use a full 1rem in that default value:

font-size: clamp(1rem, 1rem + 0.333vw, 1.5rem);

…but if you do that, the minimum size (1rem) will never be reached—the default value will always be larger. So in effect it’s no different than saying:

font-size: min(1rem + 0.333vw, 1.5rem);

I mentioned this to Chris just the other day.

Anyway, I got the result I wanted. I wanted the font size to stay at the browser default size (usually 16 pixels) until the screen was larger than around 1200 pixels. From there, the font size gets gradually bigger, until it hits one and a half times the browser default (which would be 24 pixels if the default size started at 16). I decided to apply it to the :root element (which is html) using percentages:

:root {
  font-size: clamp(100%, 50% + 0.666vw, 150%);
}

(My thinking goes like this: if we take a screen width of 1200 pixels, then 1vw would be 12 pixels: 1200 divided by 100. So for a font size of 16 pixels, that would be 1.333vw. But because I’m combining it with half of the default font size—50% of 16 pixels = 8 pixels—I need to cut the vw value in half as well: 50% of 1.333vw = 0.666vw.)

So I’ve got the CSS rule I want. I dropped it into the top of my file and…

I got an error.

There was nothing wrong with my CSS. The problem was that I was dropping it into a Sass file (.scss).

Perhaps I am showing my age. Do people even use Sass any more? I hear that post-processors usurped Sass’s dominance (although no-one’s ever been able to explain to me why they’re different to pre-processors like Sass; they both process something you’ve written into something else). Or maybe everyone’s just writing their CSS in JS now. I hear that’s a thing.

The Session is a looooong-term project so I’m very hesitant to use any technology that won’t stand the test of time. When I added Sass into the mix, back in—I think—2012 or so, I wasn’t sure whether it was the right thing to do, from a long-term perspective. But it did offer some useful functionality so I went ahead and used it.

Now, eight years later, it was having a hard time dealing with the new clamp() function. Specifically, it didn’t like the values being calculated through the addition of multiple units. I think it was clashing with Sass’s in-built ability to add units together.
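In case anyone else hits the same wall, there are a couple of commonly-suggested workarounds. I can’t vouch for every version of Sass, but wrapping the middle value in calc(), or interpolating it as a string, should stop Sass from attempting the unit arithmetic itself:

font-size: clamp(100%, calc(50% + 0.666vw), 150%);

Or:

font-size: clamp(100%, #{"50% + 0.666vw"}, 150%);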

I started to ask myself whether I should still be using Sass. I looked at which features I was using…

Variables. Well, now we’ve got CSS custom properties, which are even more powerful than Sass variables because they can be updated in real time. Sass variables are like const. CSS custom properties are like let.
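That real-time quality means you can update a custom property from a script and every style that references it will update too. Try doing that with a Sass variable:

document.documentElement.style.setProperty('--text-color', '#ccc');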

Mixins. These can be very useful, but now there’s a lot that you can do just in CSS with calc(). The built-in darken() and lighten() functions are handy though when it comes to colours.

Nesting. I’ve never been a fan. I know it can make the source files look tidier but I find it can sometimes obfuscate what your final selectors are going to look like. So this wasn’t something I was using much anyway.

Multiple files. Ah! This is the thing I would miss most. Having separate .scss files for separate interface elements is very handy!

But globbing a bunch of separate .scss files into one .css file isn’t really a Sass task. That’s what build tools are for. In fact, that’s what I was already doing with my JavaScript files; I write them as individual .js files that then get concatenated into one .js file using Grunt.

(Yes, this project uses Grunt. I told you I was showing my age. But, you know what? It works. Though seeing as I’m mostly using it for concatenation, I could probably replace it with a makefile. If I’m going to use old technology, I might as well go all the way.)
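For the curious, the concatenation part of a Grunt file really is tiny. It’s something along these lines, using the grunt-contrib-concat plug-in (the file paths are made up):

module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      scripts: {
        src: ['js/src/*.js'], // the individual .js files
        dest: 'js/global.js' // the single concatenated file
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.registerTask('default', ['concat']);
};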

I swapped out Sass variables for CSS custom properties, mixins for calc(), and removed what little nesting I was doing. Then I stripped the Sass parts out of my Grunt file and replaced them with some concatenation and minification tasks. All of this makes no difference to the actual website, but it means I’ve got one less dependency …and I can use clamp()!

Remember a little while back when I was making a dark mode for my site? I made this observation:

Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop.

It feels like something similar has happened with tools like Sass. Sass was the hare. CSS is the tortoise. Sass blazed the trail, but now native CSS can achieve much the same result.

It’s like when we used to need something like jQuery to do DOM Scripting succinctly using CSS selectors. Then we got things like querySelector() in JavaScript so we no longer needed the trailblazer.

I’ve said it before and I’ll say it again, the goal of any good library should be to get so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.

You could argue that this is what happened with Flash. It certainly happened with jQuery and Sass. I’m pretty sure we’ll see the same cycle play out with frameworks like React.

Indie Web Camp London 2020

Do you have plans for the weekend of March 14th and 15th?

If you live anywhere near London, might I suggest that you sign up for Indie Web Camp.

Cheuk and Ana are putting it together with assistance from Calum. As always, there will be one day of Barcamp-style discussions, followed by a fun hands-on day of making.

If you’re wondering whether this is for you, ask yourself if any of these situations apply:

  • You don’t have your own website yet, but you want one.
  • You have your own website, but you need some help with it.
  • You have some ideas about the independent web.
  • You have your own website but you never seem to find the time to update it.
  • You’d like to help other people with their websites.

If you recognise yourself in any one of those scenarios, then you should definitely come along to Indie Web Camp London 2020!

On this day

I’m in San Francisco to speak at An Event Apart, which kicks off tomorrow. But I arrived a few days early so that I could attend Indie Web Camp SF.

Yesterday was the discussion day. Most of the attendees were seasoned indie web campers, so quite a few of the discussions went deep on some of the building blocks. It was a good opportunity to step back and reappraise technology decisions.

Today is the day for making, tinkering, fiddling, and hacking. I had a few different ideas of what to do, mostly around showing additional context on my blog posts. I could, for instance, show related posts—other blog posts (or links) that have similar tags attached to them.

But I decided that a nice straightforward addition would be to show a kind of “on this day” context. After all, I’ve been writing blog posts here for eighteen years now; chances are that if I write a blog post on any given day, there will be something in the archives from that same day in previous years.

So that’s what I’ve done. I’ll be demoing it shortly here at Indie Web Camp, but you can see it in action now. If you look at the page for this blog post, you should see a section at the end with the heading “Previously on this day”. There you’ll see links to other posts I’ve written on December 8th in years gone by.

It’s quite a mixed bag. There’s a post about when I used to have a webcam from sixteen years ago. There’s a report from the Flash On The Beach conference from thirteen years ago (I wrote that post while I was in Berlin). And five years ago, I was writing about markup patterns for web components.

I don’t know if anyone other than me will find this feature interesting (but as it’s my website, I don’t really care). Personally, I find it fascinating to see how my writing has changed, both in terms of subject matter and tone.

Needless to say, the further back in time you go, the more chance there is that the links in my blog posts will no longer work. That’s a real shame. But then it’s a pleasant surprise when I find something that I linked to that is still online after all this time. And I can take comfort from the fact that if anyone has ever linked to anything I’ve written on my website, then those links still work.

Indy maps

Remember when I wrote about adding travel maps to my site at the recent Indie Web Camp Brighton? I must confess that the last line I wrote was an attempt to catch a fish from the river of the lazy web:

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

In the spirit of Cunningham’s Law, I was hoping that somebody was going to respond with “It’s totally possible to use Stamen’s watercolour tiles for static maps, dumbass—look!” (to which my response would have been “thank you very much!”).

Alas, no such response was forthcoming. The hoped-for schooling never forthcame.

Still, I couldn’t quite let go of the idea of using those lovely watercolour maps somewhere on my site. But I had decided that dynamic maps would have been overkill for my archive pages:

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles.

Then I had a thought. What if I keep the static maps on my archive pages, but make them clickable? Then, on the other end of that link, I can have the dynamic version. In other words, what if I had a separate URL just for the dynamic maps?

This seemed like a good plan to me, so while I was travelling by Eurostar—the only way to travel—back from the lovely city of Antwerp where I had been speaking at Full Stack Europe, I started hacking away on making the dynamic maps even more dynamic. After all, now that they were going to have their own pages, I could go all out with any fancy features I wanted.

I kept coming back to my original goal:

I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I found a plug-in for Leaflet.js that animates polylines—thanks, Iván! With a bit of wrangling, I was able to get it to animate between the lat/lon points of whichever archive section the map was in. Rather than have it play out automatically, I also added a control so that you can start and stop the animation. While I was at it, I decided to make that “play/pause” button do something else too. Ahem.
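If you fancy doing something similar, the gist of it looks something like this. I’m assuming Iván’s Leaflet.Polyline.SnakeAnim plug-in here, and the co-ordinates and selectors are placeholders:

var map = L.map('map');
L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);

// The plug-in adds a snakeIn() method to polylines.
var route = L.polyline([
  [50.82, -0.14],
  [48.86, 2.35],
  [51.22, 4.40]
], {snakeSpeed: 200}).addTo(map);
map.fitBounds(route.getBounds());

// Start the animation from a button instead of autoplaying it.
document.querySelector('.play').addEventListener('click', function () {
  route.snakeIn();
});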

If you’d like to see the maps in action, click the “play” button on any of these maps:

You get the idea. It’s all very silly really. It’s right up there with the time I made my sparklines playable. But that’s kind of the point. It’s my website so I can do whatever I want with it, no matter how silly.

First of all, the research department for this website (that’s me) came up with the idea. Then that had to be sold in to upper management (that’s me too). A team was spun up to handle design and development (consisting of me and me). Finally, the finished result went live thanks to the tireless efforts of the ops group (that would be me). Any feedback should be directed at the marketing department (no idea who that is).

Indy web

It was Indie Web Camp Brighton on the weekend. After a day of thought-provoking discussions, I thoroughly enjoyed spending the second day tinkering on my website.

For a while now, I’ve wanted to add maps to my monthly archive pages (to accompany the calendar heatmaps I added at a previous Indie Web Camp). Whenever I post anything to my site—a blog post, a note, a link—it’s timestamped and geotagged. I thought it would be fun to expose that in a glanceable way. A map seems like the right medium for that, but I wanted to avoid the obvious route of dropping a load of pins on a map. Instead I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I talked to Aaron about this and his advice was that a client-side JavaScript embedded map would be the easiest option. But that seemed like overkill to me. This map didn’t need to be pannable or zoomable; just glanceable. So I decided to see how far I could get with a static map. I timeboxed two hours for it.

After two hours, I admitted defeat.

I was able to find the kind of static maps I wanted from Mapbox—I’m already using them for my check-ins. I could even add a polyline, which is exactly what I wanted. But instead of passing latitude and longitude co-ordinates for the points on the polyline, the docs explain that I needed to provide …cue ominous thunder and lightning… The Encoded Polyline Algorithm Format.

Go to that link. I’ll wait.

Did you read through the eleven steps of instructions? Did you also think it was a piss take?

  1. Take the initial signed value.
  2. Multiply it by 1e5.
  3. Convert that decimal value to binary.
  4. Left-shift the binary value one bit.
  5. If the original decimal value is negative, invert this encoding.
  6. Break the binary value out into 5-bit chunks.
  7. Place the 5-bit chunks into reverse order.
  8. OR each value with 0x20 if another bit chunk follows.
  9. Convert each value to decimal.
  10. Add 63 to each value.
  11. Convert each value to its ASCII equivalent.

This was way beyond my brain’s pay grade. But surely someone else had written the code I needed? I did some Duck Duck Going and found a piece of PHP code to do the encoding. It didn’t work. I Ducked Ducked and Went some more. I found a different piece of PHP code. That didn’t work either.

At this point, my allotted time was up. If I wanted to have something to demo by the end of the day, I needed to switch gears. So I did.

I used Leaflet.js to create the maps I wanted using client-side JavaScript. Here’s the JavaScript code I wrote.

It waits until the page has finished loading, then it searches for any instances of the h-geo microformat (a way of encoding latitude and longitude coordinates in HTML). If there are three or more, it generates a script element to pull in the Leaflet library, and a corresponding style element. Then it draws the map with the polyline on it. I ended up using Stamen’s beautiful watercolour map tiles.
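Here’s a rough sketch of that logic. It’s not the exact code: I’m assuming the co-ordinates are published as data elements with p-latitude and p-longitude classes, and the tile URL and map element are stand-ins:

window.addEventListener('load', function () {
  var geos = document.querySelectorAll('.h-geo');
  if (geos.length < 3) return;

  // Only pull in Leaflet's style sheet and script when they're needed.
  var style = document.createElement('link');
  style.rel = 'stylesheet';
  style.href = 'https://unpkg.com/leaflet/dist/leaflet.css';
  document.head.appendChild(style);

  var script = document.createElement('script');
  script.src = 'https://unpkg.com/leaflet/dist/leaflet.js';
  script.onload = function () {
    var points = Array.prototype.map.call(geos, function (geo) {
      return [
        parseFloat(geo.querySelector('.p-latitude').getAttribute('value')),
        parseFloat(geo.querySelector('.p-longitude').getAttribute('value'))
      ];
    });
    // Assumes an element with the ID "map" is on the page.
    var map = L.map('map');
    L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);
    map.fitBounds(L.polyline(points).addTo(map).getBounds());
  };
  document.head.appendChild(script);
});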

Had some fun at Indie Web Camp Brighton on the weekend messing around with @Stamen’s lovely watercolour map tiles. (I was trying to create Indiana Jones style travel maps for my site …a different kind of Indy web.)

That’s what I demoed at the end of the day.

But I wasn’t happy with it.

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles. I made sure that it didn’t hold up the loading of the rest of the page, but it still felt wasteful.

So after Indie Web Camp, I went back to investigate static maps again. This time I did finally manage to find some PHP code for encoding lat/lon coordinates into a polyline that worked. Finally I was able to construct URLs for a static map image that displays a line connecting multiple points.
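For the record, here’s roughly what those eleven steps boil down to, translated into JavaScript. It’s a sketch rather than the exact PHP I ended up using:

function encodeSignedNumber(num) {
  // Left-shift one bit; invert the bits if the value is negative.
  var sgn = num << 1;
  if (num < 0) {
    sgn = ~sgn;
  }
  // Emit 5-bit chunks, flagging continuation with 0x20,
  // then add 63 to land on printable ASCII characters.
  var encoded = '';
  while (sgn >= 0x20) {
    encoded += String.fromCharCode((0x20 | (sgn & 0x1f)) + 63);
    sgn >>= 5;
  }
  return encoded + String.fromCharCode(sgn + 63);
}

function encodePolyline(points) {
  // Multiply each co-ordinate by 1e5 and round; each point is
  // then encoded as the difference from the previous point.
  var lastLat = 0, lastLon = 0, result = '';
  points.forEach(function (point) {
    var lat = Math.round(point[0] * 1e5);
    var lon = Math.round(point[1] * 1e5);
    result += encodeSignedNumber(lat - lastLat) + encodeSignedNumber(lon - lastLon);
    lastLat = lat;
    lastLon = lon;
  });
  return result;
}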

I’ve put these maps on any of the archive pages that also have calendar heat maps. Some examples:

If you go back much further than that, the maps start to trail off. That’s because I wasn’t geotagging everything from the start.

I’m pretty happy with the final results. It’s certainly far more responsible from a performance point of view. Oh, and I’ve also got the maps inside a picture element so that I can swap out the tiles if you switch to dark mode.

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

Something for the weekend

Your weekends are valuable. Spend them wisely. I have some suggestions on how you might spend next weekend, October 19th and 20th, depending on where you are in the world.

If you’re in the bay area, or anywhere near San Francisco, I highly recommend that you go to Science Hack Day—two days of science, hacking, and fun. This will be the last one in San Francisco so don’t miss your chance.

If you’re in the south of England, or anywhere near Brighton, come along to Indie Web Camp. Saturday will feature discussions on owning your data. Sunday will be a day of doing. I’ve written about previous Indie Web Camps before, and I really can’t recommend it highly enough!

Do me a favour and register for a spot—it’s free—so I’ve got some idea of numbers. Looking forward to seeing you there!

Dark mode

I had a very productive time at Indie Web Camp Amsterdam. The format really lends itself to getting the most of a weekend—one day of discussions followed by one day of hands-on making and doing. You should definitely come along to Indie Web Camp Brighton on October 19th and 20th to experience it for yourself.

By the end of the “doing” day, I had something fun to demo—a dark mode for my website.

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.

Applying the dark mode styles is pretty straightforward in theory. You put the styles inside this media query:

@media (prefers-color-scheme: dark) {

Rather than over-riding every instance of a colour in my style sheet, I decided I’d do a little bit of refactoring first and switch to using CSS custom properties (or variables, if you will).

:root {
  --background-color: #fff;
  --text-color: #333;
  --link-color: #b52;
}
body {
  background-color: var(--background-color);
  color: var(--text-color);
}
a {
  color: var(--link-color);
}

Then I can over-ride the custom properties without having to touch the already-declared styles:

@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #111416;
    --text-color: #ccc;
    --link-color: #f96;
  }
}

All in all, I have about a dozen custom properties for colours—variations for text, backgrounds, and interface elements like links and buttons.

By using custom properties and the prefers-color-scheme media query, I was 90% of the way there. But the devil is in the details.

I have SVGs of sparklines on my homepage. The SVG has a hard-coded colour value in the stroke attribute of the path element that draws the sparkline. Fortunately, this can be over-ridden in the style sheet:

svg.activity-sparkline path {
  stroke: var(--text-color);
}

The real challenge came with the images I use in the headers of my pages. They’re JPEGs with white corners on one side and white gradients on the other.

header images

I could make them PNGs to get transparency, but the file size would shoot up—they’re photographic images (with a little bit of scan-line treatment) so JPEGs (or WEBPs) are the better format. Then I realised I could use CSS to recreate the two effects:

  1. For the cut-out triangle in the top corner, there’s clip-path.
  2. For the gradient, there’s …gradients!
background-image: linear-gradient(
  to right,
  transparent 50%,
  var(--background-color) 100%
);
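The cut-out corner might look something like this. The exact co-ordinates depend on the size and position of the triangle, so these are illustrative:

clip-path: polygon(0 0, calc(100% - 2em) 0, 100% 2em, 100% 100%, 0 100%);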

Oh, and I noticed that when I applied the clip-path for the corners, it had no effect in Safari. It turns out that after half a decade of support, it still only exists with a -webkit- prefix. That’s just ridiculous. At this point we should be burning vendor prefixes with fire. I can’t believe that Apple still ships standardised CSS properties that only work with a prefix.

In order to apply the CSS clip-path and gradient, I needed to save out the images again, this time without the effects baked in. I found the original Photoshop file I used to export the images. But I don’t have a copy of Photoshop any more. I haven’t had a copy of Photoshop since Adobe switched to their Mafia model of pricing. A quick bit of searching turned up Photopea, which is pretty much an entire recreation of Photoshop in the browser. I was able to open my old PSD file and re-export my images.

LEGO clone trooper, Brighton bandstand, scaffolding, Tokyo, Florence

Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop. I could probably do those raster scan lines with CSS if I were smart enough.

dark mode

This is what I demo’d at the end of Indie Web Camp Amsterdam, and I was pleased with the results. But fate had an extra bit of good timing in store for me.

The very next day at the View Source conference, Melanie Richards gave a fantastic talk called The Tailored Web: Effectively Honoring Visual Preferences (seriously, conference organisers, you want this talk on your line-up). It was packed with great insights and advice on implementing dark mode, like this little gem for adjusting images:

@media (prefers-color-scheme: dark) {
  img {
    filter: brightness(.8) contrast(1.2);
  }
}

Melanie also pointed out that you can indicate the presence of dark mode styles to browsers, although the mechanism is yet to shake out. You can do it in CSS:

:root {
  color-scheme: light dark;
}

But you can also do it in HTML:

<meta name="color-scheme" content="light dark">

That allows browsers to swap out replaced content: interface elements like form fields and dropdowns.

Oh, and one other addition I added after the fact was swapping out map imagery by using the picture element to point to darker map tiles:

<picture>
  <source media="(prefers-color-scheme: dark)" srcset="">
  <img src="" alt="map">
</picture>

light map, dark map

So now I’ve got a dark mode for my website. Admittedly, it’s for just one of the eight style sheets. I’ve decided that, while I’ll update my default styles at every opportunity, I’m going to preserve the other skins as they are, like the historical museum pieces they are.

If you’re on the latest version of iOS, go ahead and toggle the light and dark options in your system preferences to flip between this site’s colour schemes.

Travel talk

It’s been a busy two weeks of travelling and speaking. Last week I spoke at Finch Conf in Edinburgh, Code Motion in Madrid, and Generate CSS in London. This week I was at Indie Web Camp, View Source, and Fronteers, all in Amsterdam.

The Edinburgh-Madrid-London whirlwind wasn’t ideal. I gave the opening talk at Finch Conf, then immediately jumped in a taxi to get to the airport to fly to Madrid, so I missed all the excellent talks. I had FOMO for a conference I actually spoke at.

I did get to spend some time at Code Motion in Madrid, but that was a waste of time. It was one of those multi-track events where the trade show floor is prioritised over the talks (and the speakers don’t get paid). I gave my talk to a mostly empty room—the classic multi-track experience. On the plus side, I had a wonderful time with Jessica exploring Madrid’s many tapas delights. The food and drink made up for the sub-par conference.

I flew back from Madrid to the UK, and immediately went straight to London to deliver the closing talk of Generate CSS. So once again, I didn’t get to see any of the other talks. That’s a real shame—it sounds like they were all excellent.

The day after Generate though, I took the Eurostar to Amsterdam. That’s where I’ve been ever since. There were just as many events as in the previous week, but because they were all in Amsterdam, I could savour them properly, instead of spending half my time travelling.

Indie Web Camp Amsterdam was excellent, although I missed out on the afternoon discussions on the first day because I popped over to the Mozilla Tech Speakers event happening at the same time. I was there to offer feedback on lightning talks. I really, really enjoyed it.

I’d really like to do more of this kind of thing. There aren’t many activities I feel qualified to give advice on, but public speaking is an exception. I’ve got plenty of experience that I’m eager to share with up-and-coming speakers. Also, I got to see some really great lightning talks!

Then it was time for View Source. There was a mix of talks, panels, and breakout conversation corners. I saw some fantastic talks by people I hadn’t seen speak before: Melanie Richards, Ali Spittal, Sharell Bryant, and Tejas Kumar. I gave the closing keynote, which was warmly received—that’s always very gratifying.

After one day of rest, it was time for Fronteers. This was where myself and Remy gave the joint talk we’ve been working on:

Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.

I’m happy to say that it went off without a hitch. Remy definitely had the tougher task—he did a live demo. Needless to say, he did it flawlessly. It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.

I’ve got some more speaking engagements ahead of me. Most of them are in Europe so I’m going to do my utmost to travel to them by train. Flying is usually more convenient but it’s terrible for my carbon footprint. I’m feeling pretty guilty about that Madrid trip; I need to make amends.

I’ll be travelling to France next week for Paris Web. Taking the Eurostar is a no-brainer for that one. Straight after that Jessica and I will be going to Frankfurt for the book fair. Taking the train from Paris to Frankfurt will be nice and straightforward.

I’ll be back in Brighton for Indie Web Camp on the weekend of October 19th and 20th—you should come!—and then I’ll be heading off to Antwerp for Full Stack Fest. Anywhere in Belgium is easily reachable by train so that’ll be another Eurostar journey.

After that, it gets a little trickier. I’ll be going to Berlin for Beyond Tellerrand but I’m not sure I can make it work by train. Same goes for Web Clerks in Vienna. Cities that far east are tough to get to by train in a reasonable amount of time (although I realise that, compared to many others, I have the luxury of spending time travelling by train).

Then there are the places that I can only get to by plane. There’s the United States. I’ll be speaking at An Event Apart in San Francisco in December. A flight is unavoidable. Last time we went to the States, Jessica and I travelled by ocean liner. But that isn’t any better for the environment, given the low-grade fuel burned by ships.

And then there’s Ireland. I make trips back there to see my mother, but there’s no alternative to flying or taking a ferry—neither are ideal for the environment. At least I can offset the carbon from my flights; the travel equivalent to putting coins in the swear jar.

Don’t get me wrong—I’m not moaning about the amount of travel involved in going to conferences and workshops. It’s fantastic that I get to go to new and interesting places. That’s something I hope I never take for granted. But I can’t ignore the environmental damage I’m doing. I’ll be making more of an effort to travel by train to Europe’s many excellent web events. While I’m at it, I can ask Paul for his trainspotter expertise.

Opening up the AMP cache

I have a proposal that I think might alleviate some of the animosity around Google AMP. You can jump straight to the proposal or get some of the back story first…

The AMP format

Google AMP is exactly the kind of framework I’d like to get behind. Unlike most front-end frameworks, its components take a declarative approach—no knowledge of JavaScript required. I think Lea’s excellent Mavo is the only other major framework that takes this inclusive approach. All the configuration happens in markup, and all the styling happens in CSS. Excellent!

But I cannot get behind AMP.

Instead of competing on its own merits, AMP is unfairly propped up by the search engine of its parent company, Google. That makes it very hard to evaluate whether AMP is being used on its own merits. Instead, the evidence suggests that most publishers of AMP pages are doing so because they feel they have to, rather than because they want to. That’s a real shame, because as a library of web components, AMP seems pretty good. But there’s just no way to evaluate AMP-the-format without taking into account AMP-the-ecosystem.

The AMP ecosystem

Google AMP ostensibly exists to make the web faster. Initially the focus was specifically on mobile performance, but that distinction has since fallen by the wayside. The idea is that by using AMP’s web components, your pages will be speedy. Though, as Andy Davies points out, this isn’t always the case:

This is where I get confused… only have an AMP site yet it’s performance is awful from a user perspective - isn’t AMP supposed to prevent this?

See also: Google AMP lowered our page speed, and there’s no choice but to use it:

According to Google’s own Page Speed Insights audit (which Google recommends to check your performance), the AMP version of articles got an average performance score of 87. The non-AMP versions? 95.

Publishers who already have fast web pages—like The Guardian—are still compelled to make AMP versions of their stories because of the search benefits reserved for AMP. As Terence Eden reported from a meeting of the AMP advisory committee:

We heard, several times, that publishers don’t like AMP. They feel forced to use it because otherwise they don’t get into Google’s news carousel — right at the top of the search results.

Some people felt aggrieved that all the hard work they’d done to speed up their sites was for nothing.

The Google AMP team are at pains to point out that AMP is not a ranking factor in search. That’s true. But it is unfairly privileged in other ways. Only AMP pages can appear in the Top Stories carousel …which appears above any other search results. As I’ve said before:

Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.

From A letter about Google AMP:

Content that “opts in” to AMP and the associated hosting within Google’s domain is granted preferential search promotion, including (for news articles) a position above all other results.

That’s not the only way that AMP pages get preferential treatment. It turns out that the secret to the speed of AMP pages isn’t the web components. It’s the prerendering.

The AMP cache

If you’ve ever seen an AMP page in a list of search results, you’ll have noticed the little lightning icon. If you’ve ever tapped on that search result, you’ll have noticed that the page loads blazingly fast!

That’s not down to AMP-the-format, alas. That’s down to the fact that the page has been prerendered by Google before you even went to it. If any page were prerendered that way, it would load blazingly fast. But currently, this privilege is reserved for AMP pages only.

If, after tapping through to that AMP page, you looked at the address bar of your browser, you might have noticed something odd. Even though you might have thought you were visiting The Washington Post, or The New York Times, the URL of the (blazingly fast) page you’re looking at is still under Google’s domain. That’s because Google hosts any AMP pages that it prerenders.

Google calls this “the AMP cache”, but it would be better described as “AMP hosting”. The web page sent down the wire is hosted on Google’s domain.

Here’s that AMP letter again:

When a user navigates from Google to a piece of content Google has recommended, they are, unwittingly, remaining within Google’s ecosystem.

Through gritted teeth, I will refer to this as “the AMP cache”, because that’s what everyone else calls it. But make no mistake, Google is hosting—not caching—these pages.

But why host the pages on a Google domain? Why not prerender the original URLs?

Prerendering and privacy

Scott summed up the situation with AMP nicely:

The pitch I think site owners are hearing is: let us host your pages on our domain and we’ll promote them in search results AND preload them so they feel “instant.” To opt-in, build pages using this component syntax.

But perhaps we could de-couple the AMP format from the AMP cache.

That’s what Terence suggests:

My recommendation is that Google stop requiring that organisations use Google’s proprietary mark-up in order to benefit from Google’s promotion.

The AMP letter, too:

Instead of granting premium placement in search results only to AMP, provide the same perks to all pages that meet an objective, neutral performance criterion such as Speed Index.

Scott reiterates:

It’s been said before but it would be so good for the web if pages with a Lighthouse score over say, 90 could get into that top search result area, even if they’re not built using Google’s AMP framework. Feels wrong to have to rebuild/reproduce an already-fast site just for SEO.

This was also what I was calling for. But then Malte pointed out something that stumped me. Privacy.

Here’s the problem…

Let’s say Google do indeed prerender already-fast pages when they’re listed in search results. You, a search user, type something into Google. A list of results comes back. Google begins pre-rendering some of them. But you don’t end up clicking through to those pages. Nonetheless, the servers those pages are hosted on have received a GET request coming from a Google search. Those publishers now know that a particular (cookied?) user could have clicked through to their site. That’s very different from knowing when someone has actually arrived at a particular site.

And that’s why Google host all the AMP pages that they prerender. Given the privacy implications of prerendering non-Google URLs, I must admit that I see their point.

Still, it’s a real shame to miss out on the speed benefit of prerendering:

Prerendering AMP documents leads to substantial improvements in page load times. Page load time can be measured in different ways, but they consistently show that prerendering lets users see the content they want faster. For now, only AMP can provide the privacy preserving prerendering needed for this speed benefit.

A modest proposal

Why is Google’s AMP cache just for AMP pages? (Y’know, apart from the obvious answer that it’s in the name.)

What if Google were allowed to host non-AMP pages? Google search could then prerender those pages just like it currently does for AMP pages. There would be no privacy leaks; everything would happen on the same Google domain, just as currently happens with AMP pages.

Don’t get me wrong: I’m not suggesting that Google should make a 1:1 model of the web just to prerender search results. I think that the implementation would need to have two important requirements:

  1. Hosting needs to be opt-in.
  2. Only fast pages should be prerendered.

Opting in

Currently, by publishing a page using the AMP format, publishers give implicit approval to Google to host that page on Google’s servers and serve up this Google-hosted version from search results. This has always struck me as being legally iffy. I’ve looked in the AMP documentation to try to find any explicit granting of hosting permission (e.g. “By linking to this JavaScript file, you hereby give Google the right to serve up our copies of your content.”), but no luck. So even with the current situation, I think a clear opt-in for hosting would be beneficial.

This could be a meta element. Maybe something like:

<meta name="caches-allowed" content="google">

This would have the nice benefit of allowing comma-separated values:

<meta name="caches-allowed" content="google, yandex">

(The name is just a strawman, by the way—I’m not suggesting that this is what the final implementation would actually look like.)

If not a meta element, then perhaps this could be part of robots.txt? Although my feeling is that this needs to happen on a document-by-document basis rather than site-wide.

Many people will, quite rightly, never want Google—or anyone else—to host and serve up their content. That’s why it’s so important that this behaviour needs to be opt-in. It’s kind of appalling that the current hosting of AMP pages is opt-in-by-proxy-sort-of.

Criteria for prerendering

Which pages should be blessed with hosting and prerendering? The fast ones. That’s sorta the whole point of AMP. But right now, there’s a lot of resentment by people with already-fast websites who quite rightly feel they shouldn’t have to use the AMP format to benefit from the AMP ecosystem.

Page speed is already a ranking factor. It doesn’t seem like too much of a stretch to extend its benefits to hosting and prerendering. As mentioned above, there are already a few possible metrics to use:

  • Page Speed Index
  • Lighthouse
  • Web Page Test

Ah, but what if a page has a good score when it’s indexed, but then gets worse afterwards? Not a problem! The version of the page that’s measured is the same version of the page that gets hosted and prerendered. Google can confidently say “This page is fast!” After all, they’re the ones serving up the page.

That does raise the question of how often Google should check back with the original URL to see if it has changed/worsened/improved. The answer to that question is however long it currently takes to check back in on AMP pages:

Each time a user accesses AMP content from the cache, the content is automatically updated, and the updated version is served to the next user once the content has been cached.


This proposal does not solve the problem with the address bar. You’d still find yourself looking at a page from The Washington Post or The New York Times, but seeing a completely different URL in your browser. That’s not good, for all the reasons outlined in the AMP letter.

In fact, this proposal could potentially make the situation worse. It would allow even more sites to be impersonated by Google’s URLs. Where currently only AMP pages are bad actors in terms of URL confusion, opening up the AMP cache would allow equal opportunity URL confusion.

What I’m suggesting is definitely not a long-term solution. The long-term solutions currently being investigated are technically tricky and will take quite a while to come to fruition—web packages and signed exchanges. In the meantime, what I’m proposing is a stopgap solution that’s technically a lot simpler. But it won’t solve all the problems with AMP.

This proposal solves one problem—AMP pages being unfairly privileged in search results—but does nothing to solve the other, perhaps more serious problem: the erosion of site identity.


Currently, Google can assess whether a page should be hosted and prerendered by checking to see if it’s a valid AMP page. That test would need to be widened to include a different measurement of performance, but those measurements already exist.

I can see how this assessment might not be as quick as checking for AMP validity. That might affect whether non-AMP pages could be measured quickly enough to end up in the Top Stories carousel, which is, by its nature, time-sensitive. But search results are not necessarily as time-sensitive. Let’s start there.


Currently, AMP pages can be prerendered without fetching anything other than the markup of the AMP page itself. All the CSS is inline. There are no initial requests for other kinds of content like images. That’s because there are no img elements on the page: authors must use amp-img instead. The image itself isn’t loaded until the user is on the page.
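To illustrate (the attribute values here are placeholders): instead of a regular img element, an AMP page uses the amp-img component, which reserves space up front from the declared dimensions and defers the actual image request:

<amp-img src="photo.jpg" width="600" height="400" layout="responsive" alt="A photo"></amp-img>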

If the AMP cache were to be opened up to non-AMP pages, then any content required for prerendering would also need to be hosted on that same domain. Otherwise, there’s privacy leakage.

This definitely introduces an extra level of complexity. Paths to assets within the markup might need to be re-written to point to the Google-hosted equivalents. There would almost certainly need to be a limit on the number of assets allowed. Though, for performance, that’s no bad thing.

Make no mistake, figuring out what to do about assets—style sheets, scripts, and images—is very challenging indeed. Luckily, there are very smart people on the Google AMP team. If that brainpower were to focus on this problem, I am confident they could solve it.


  1. Prerendering of non-Google URLs is problematic for privacy reasons, so Google needs to be able to host pages in order to prerender them.
  2. Currently, that’s only done for pages using the AMP format.
  3. The AMP cache—and with it, prerendering—should be decoupled from the AMP format, and opened up to other fast web pages.

There will be technical challenges, but hopefully nothing insurmountable.

I honestly can’t see what Google have to lose here. If their goal is genuinely to reward fast pages, then opening up their AMP cache to fast non-AMP pages will actively encourage people to make fast web pages (without having to switch over to the AMP format).

I’ve deliberately kept the details vague—what the opt-in should look like; what the speed measurement should be; how to handle assets—I’m sure smarter folks than me can figure that stuff out.

I would really like to know what other people think about this proposal. Obviously, I’d love to hear from members of the Google AMP team. But I’d also love to hear from publishers. And I’d very much like to know what people in the web performance community think about this. (Write a blog post and send me a webmention.)

What am I missing here? What haven’t I thought of? What are the potential pitfalls (and are they any worse than the current acrimonious situation with Google AMP)?

I would really love it if someone with a fast website were in a position to say, “Hey Google, I’m giving you permission to host this page so that it can be prerendered.”

I would really love it if someone with a slow website could say, “Oh, shit! We’d better make our existing website faster or Google won’t host our pages for prerendering.”

And I would dearly love to finally be able to embrace AMP-the-format with a clear conscience. But as long as prerendering is joined at the hip to the AMP format, the injustice of the situation only harms the AMP project.

Google, open up the AMP cache.

Passenger’s log, Queen Mary 2, August 2019

Passenger’s log, day one: Sunday, August 11, 2019

We took the surprisingly busy train from Brighton to Southampton, with our plentiful luggage in tow. As well as the clothes we’d need for three weeks of hot summer locations in the United States, Jessica and I were also carrying our glad rags for the shipboard frou-frou evenings.

Once the train arrived in Southampton, we transferred our many bags into the back of a taxi and made our way to the terminal. It looked like all the docks were occupied, either with cargo ships, cruise ships, or—in the case of the Queen Mary 2—the world’s last ocean liner to be built.

Check in. Security. Then it was time to bid farewell to dry land as we boarded the ship. We settled into our room—excuse me, stateroom—on the eighth deck. That’s the deck that also has the lifeboats, but our balcony is handily positioned between two boats, giving us a nice clear view.

We’d be sailing in a few hours, so that gave us plenty of time to explore the ship. We grabbed a surprisingly tasty bite to eat in the buffet restaurant, and then went out on deck (the promenade deck is deck seven, just one deck below our room).

It was a blustery day. All weekend, the UK newspaper headlines had been full of dramatic stories of high winds. Not exactly sailing weather. But the Queen Mary 2 is solid, sturdy, and just downright big, so once we were underway, the wind was hardly noticeable …indoors. Out on the deck, it could get pretty breezy.

By pure coincidence, we happened to be sailing on a fortuitous day: the meeting of the queens. The Queen Elizabeth, the Queen Victoria, and the Queen Mary 2 were all departing Southampton at the same time. It was a veritable Cunard convoy. With the yacht race on as well, it was a very busy afternoon in the Solent.

We stayed out on the deck as our ship powered out of Southampton, and around the Isle of Wight, passing a refurbished Palmerston sea fort on the way.

Alas, Jessica had a migraine brewing all day, so we weren’t in the mood to dive into any social activities. We had a low-key dinner from the buffet—again, surprisingly tasty—and retired for the evening.

Passenger’s log, day two: Monday, August 12, 2019

Jessica’s migraine passed like a fog bank in the night, and we woke to a bright, blustery day. The Queen Mary 2 was just passing the Scilly Isles, marking the traditional start of an Atlantic crossing.

Breakfast was blissfully quiet and chilled out—we elected to try the somewhat less-trafficked Carinthia lounge; the location of a decent espresso-based coffee (for a price). Then it was time to feed our minds.

We watched a talk on the Bolshoi Ballet, filled with shocking tales of scandal. Here I am on holiday, and I’m sitting watching a presentation as though I were at a conference. The presenter in me approved of some of the stylistic choices: tasteful transitions in Keynote, and suitably legible typography for on-screen quotes.

Soon after that, there was a question-and-answer session with a dance teacher from the English National Ballet. We balanced out the arts with some science by taking a trip to the planetarium, where the dulcet voice of Neil deGrasse Tyson told the tale of dark matter. A malfunctioning projector somewhat tainted the experience, leaving a segment of the dome unilluminated.

It was a full morning of activities, but after lunch, there was just one time and place that mattered: sign-ups for the week’s ballet workshops would take place at 3pm on deck two. We wandered by at 2pm, and there was already a line! Jessica quickly took her place in the queue, hoping that she’d make it into the workshops, which have a capacity of just 30 people. The line continued to grow. The Cunard staff were clearly not prepared for the level of interest in these ballet workshops. They quickly introduced some emergency measures: this line would only be for the next two days’ workshops, rather than the whole week. So there’d be more queueing later in the week for anyone looking to take more than one workshop.

Anyway, the most important outcome was that Jessica did manage to sign up for a workshop. After all that standing in line, Jessica was ready for a nice sit down so we headed to the area designated for crafters and knitters. As Jessica worked on the knitting project she had brought along, we had our first proper social interactions of the voyage, getting to know the other makers. There was much bonding over the shared love of the excellent Ravelry website.

Next up: a pub quiz at sea in a pub at sea. I ordered the flight of craft beers and we put our heads together for twenty quickfire trivia questions. We came third.

After that, we rested up for a while in our room, before donning our glad rags for the evening’s gala dinner. I bought a tuxedo just for this trip, and now it was time to put it into action. Jessica donned a ballgown. We both looked the part for the black-and-white themed evening.

We headed out for pre-dinner drinks in the ballroom, complete with big band. At one entrance, there was a receiving line to meet the captain. Having had enough of queueing for one day, we went in the other entrance. With glasses of sparkling wine in hand, we surveyed our fellow dressed-up guests who were looking in equal measure dashingly cool and slightly uncomfortable.

After some amusing words from the captain, it was time for dinner. Having missed the proper sit-down dinner the evening before, this was our first time finding out what table we had. We were bracing ourselves for an evening of being sociable, chit-chatting with whoever we’d been seated with. Your table assignment was the same for the whole week, so you’d better get on well with your tablemates. If you’re stuck with a bunch of obnoxious Brexiteers, tough luck; you just have to suck it up. Much like Brexit.

We were shown to our table, which was …a table for two! Oh, the relief! Even better, we were sitting quite close to the table of ballet dancers. From our table, Jessica could creepily stalk them, and observe them behaving just like mere mortals.

We settled in for a thoroughly enjoyable meal. I opted for an array of pale-coloured foods: Cullen skink, followed by seared scallops, accompanied by a Chablis Premier Cru. All this while wearing a bow tie, to the sounds of a string quartet. It felt like peak Titanic.

After dinner, we had a nightcap in the elegant Chart Room bar before calling it a night.

Passenger’s log, day three: Tuesday, August 13

We were woken early by the ship’s horn. This wasn’t the seven-short-and-one-long blast that would signal an emergency. This was more like the sustained booming of a foghorn. In fact, it effectively was a foghorn, because we were in fog.

Below us was the undersea mountain range of the Maxwell Fracture Zone. Outside was a thick Atlantic fog. And inside, we were nursing some slightly sore heads from the previous evening’s intake of wine.

But as a nice bonus, we had an extra hour of sleep. As long as the ship is sailing west, the clocks get put back by an hour every night. Slowly but surely, we’ll get on New York time. Sure beats jetlag.

After a slow start, we sauntered downstairs for some breakfast and a decent coffee. Then, to blow out the cobwebs, we walked a circuit of the promenade deck, thereby swapping out bed head for deck head.

It was then time for Jessica and me to briefly part ways. She went to watch the ballet dancers in their morning practice. I went to a lecture by Charlie Barclay from the Royal Astronomical Society, and most edifying it was too (I wonder if I can convince him to come down to give a talk at Brighton Astro sometime?).

After the lecture was done, I tracked down Jessica in the theatre, where she was enraptured by the dancers doing their company class. We stayed there as it segued into the dancers doing a dress rehearsal for their upcoming performance. It was fascinating, not least because it was clear that the dancers were having to cope with being on a slightly swaying moving vessel. That got me wondering: has ballet ever been performed on a ship before? For all I know, it might have been a common entertainment back in the golden age of ocean liners.

We slipped out of the dress rehearsal when hunger got the better of us, and we managed to grab a late lunch right before the buffet closed. After that, we decided it was time to check out the dog kennels up on the twelfth deck. There are 24 dogs travelling on the ship. They are all good dogs. We met Dillinger, a good dog on his way to a new life in Vancouver. Poor Dillinger was struggling with the circumstances of the voyage. But it’s better than being in the cargo hold of an airplane.

While we were up there on the top of the ship, we took a walk around the observation deck right above the bridge. The wind made that quite a tricky perambulation.

The rest of our day was quite relaxed. We did the pub quiz again. We got exactly the same score as we did the day before. We had a nice dinner, although this time a tuxedo was not required (but a jacket still was). Lamb for me; beef for Jessica; a bottle of Gigondas for both of us.

After dinner, we retired to our room, putting our clocks and watches back an hour before climbing into bed.

Passenger’s log, day four: Wednesday, August 14, 2019

After a good night’s sleep, we were sauntering towards breakfast when a ship’s announcement was made. This is unusual. Ship’s announcements usually happen at noon, when the captain gives us an update on the journey and our position.

This announcement was dance-related. Contradicting the listed 5pm time, sign-ups for the next ballet workshops would be happening at 9am …which was in ten minutes’ time. Registration was on deck two. There we were, examining the breakfast options on deck seven. Cue a frantic rush down the stairwells and across the ship, not helped by me confusing our relative position to fore and aft. But we made it. Jessica got in line, and she was able to register for the workshop she wanted. Crisis averted.

We made our way back up to breakfast, and our daily dose of decent coffee. Then it was time for a lecture that was equally fascinating for me and Jessica. It was Physics En Pointe by Dr. Merritt Moore, ballet dancer and quantum physicist. This was a scene-setting talk, with her describing her life’s journey so far. She’ll be giving more talks throughout the voyage, so I’m hoping for some juicy tales of quantum entanglement (she works in quantum optics, generating entangled photons).

After that, it was time for Jessica’s first workshop. It was a general ballet technique workshop, and they weren’t messing around. I sat off to the side, with a view out on the middle of the Atlantic ocean, tinkering with some code for The Session, while Jessica and the other students were put through their paces.

Then it was time to briefly part ways again. While Jessica went to watch the ballet dancers doing their company class, I was once again attending a lecture by Charles Barclay of the Royal Astronomical Society. This time it was archaeoastronomy …or maybe it was astroarchaeology. Either way, it was about how astronomical knowledge was passed on in pre-writing cultures, with a particular emphasis on neolithic sites like Avebury.

When the lecture was done, I rejoined Jessica and we watched the dancers finish their company class. Then it was time for lunch. We ate from the buffet, but deliberately avoided the heavier items, opting for a relatively light salad and sushi combo. This good deed would later be completely undone with a late afternoon cake snack.

We went to one more lecture. Three in one day! It really is like being at a conference. This one, by John Cooper, was on the Elizabethan settlers of Roanoke Island. So in one day, I managed to get a dose of history, science, and culture.

With the day’s workshops and lectures done, it was once again time to put on our best garb for the evening’s gala dinner. All tux’d up, I escorted Jessica downstairs. Tonight was the premiere of the ballet performance. But before that, we wandered around drinking champagne and looking fabulous. I even sat at an otherwise empty blackjack table and promptly lost some money. I was a rubbish gambler, but—and this is important—I was a rubbish gambler wearing a tuxedo.

We got good seats for the ballet and settled in for an hour’s entertainment. There were six pieces, mostly classical. Some Swan Lake, some Nutcracker, and some Le Corsaire. But there was also something more modern in there—a magnificent performance from Akram Khan’s Dust. We had been to see Dust at Sadler’s Wells, but I had forgotten quite how powerful it is.

After the performance, we had a quick cocktail, and then dinner. The sommelier is getting chattier and chattier with us each evening. I think he approves of our wine choices. This time, we left the vineyards of France, opting for a Pinot Noir from Central Otago.

After one or two nightcaps, we went back to our cabin and before crashing out, we set our clocks back an hour.

Passenger’s log, day five: Thursday, August 15, 2019

We woke to another foggy morning. The Queen Mary 2 was now sailing through the shallower waters of the Grand Banks of Newfoundland. Closer and closer to North America.

This would be my fifth day with virtually no internet access. I could buy WiFi internet access at exorbitant satellite prices, but I hadn’t felt any need to do that. I could also get a maritime mobile phone signal—very slow and very expensive.

I’ve been keeping my phone in airplane mode. Once a day, I connect to the mobile network and check just one website—The Session—just to make sure nothing’s on fire there. Fortunately, because I made the site, I know that the data transfer will be minimal. Each page of HTML is between 30K and 90K. There are no images to speak of. And because I’ve got the site’s service worker installed on my phone, I know that CSS and JavaScript are coming straight from a cache.

I’m not missing Twitter. I’m certainly not missing email. The only thing that took some getting used to was not being able to look things up. On the first few days of the crossing, both Jessica and I found ourselves reaching for our phones to look up something about ships or ballet or history …only to remember that we were enveloped in a fog of analogue ignorance, with no sign of terra firma digitalis.

It makes the daily quiz quite challenging. Every morning, twenty questions are listed on sheets of paper that appear at the entrance to the library. This library, by the way, is the largest at sea. As Jessica noted, you can tell a lot about the on-board priorities when the ship’s library is larger than the ship’s casino.

Answers to the quiz are to be handed in by 4pm. In the event of a tie, the team who hands in their answers earliest wins. You’re not supposed to use the internet, but you are positively encouraged to look up answers in the library. Jessica and I have been enjoying this old-fashioned investigative challenge.

With breakfast done before 9am, we had a good hour to spend in the library researching answers to the day’s quiz before Jessica needed to be at her 10am ballet workshop. Jessica got started with the research, but I quickly nipped downstairs to grab a couple of tickets for the planetarium show later that day.

Tickets for the planetarium shows are released every morning at 9am. I sauntered downstairs and arrived at the designated ticket-release location a few minutes before nine, where I waited for someone to put the tickets out. When no tickets appeared five minutes after nine, I wasn’t too worried. But when there were still no tickets at ten past nine, I grew concerned. By quarter past nine, I was getting a bit miffed. Had someone forgotten their planetarium ticket duties?

I found a crewmember at a nearby desk and asked if anyone was going to put out planetarium tickets. No, I was told. The tickets all went shortly after 9am. But I’ve been here since before 9am, I said! Then it dawned on me. The ship’s clocks didn’t go back last night after all. We just assumed they did, and dutifully changed our watches and phones accordingly.

Oh, crap—Jessica’s workshop! I raced back up five decks to the library where Jessica was perusing reference books at her leisure. I told her the bad news. We dashed down to the workshop ballroom anyway, but of course the class was now well underway. After all the frantic dashing and patient queueing that Jessica did yesterday to secure her place in the workshop! Our plans for the day were undone by our being too habitual with our timepieces. No ballet workshop. No planetarium show. I felt like such an idiot.

Well, we still had a full day of activities. There was a talk with ballet dancer James Streeter (during which we found out that the captain had deployed all the ship’s stabilisers during the previous evening’s performance). We once again watched the ballet dancers doing their company class for an hour and a half. We went for afternoon tea, complete with string quartet and beautiful view out on the ocean, now mercifully free of fog.

We attended another astronomy lecture, this time on eclipses. But right before the lecture was about to begin, there was a ship-wide announcement. It wasn’t midday, so this had to be something unusual. The captain informed us that a passenger was seriously ill, and the Canadian coastguard was going to attempt a rescue. The ship was diverting closer to Newfoundland to get in helicopter range. The helicopter wouldn’t be landing, but instead attempting a tricky airlift in about twenty minutes’ time. And so we were told to literally clear the decks. I assume the rescue was successful, and I hope the patient recovers.

After that exciting interlude, things returned to normal. The lecture on eclipses was great, focusing in particular on the magnificent 2017 solar eclipse across America.

It’s funny—Jessica and I are on this crossing because it was a fortunate convergence of ballet and being on a ship. And in 2017 we were in Sun Valley, Idaho because of a fortunate convergence of ballet and experiencing a total eclipse of the sun.

I’m starting to sense a theme here.

Anyway, after all the day’s dancing and talks were done, we sat down to dinner, where Jessica could once again surreptitiously spy on the dancers at a nearby table. We cemented our bond with the sommelier by ordering a bottle of the excellent Lebanese Château Musar.

When we got back to our room, there was a note waiting for us. It was an invitation for Jessica to take part in the next day’s ballet workshop! And, looking at the schedule for the next day, there were going to be repeats of the planetarium shows we missed today. All’s well that ends well.

Before going to bed, we did not set our clocks back.

Passenger’s log, day six: Friday, August 16, 2019

This morning was balletastic:

  • Jessica’s ballet workshop.
  • Watching the ballet dancers doing their company class.
  • Watching a rehearsal of the ballet performance.

The workshop was quite something. Jennie Harrington—who retired from dancing with Dust—took the 30 or so attendees through some of the moves from Akram Khan’s masterpiece. It looked great!

While all this was happening inside the ship, the weather outside was warming up. As we travel further south, the atmosphere is getting balmier. I spent an hour out on a deckchair, dozing and reading.

At one point, a large aircraft buzzed us—the Canadian coastguard perhaps? We can’t be that far from land. I think we’re still in international waters, but these waters have a Canadian accent.

After soaking up the salty sea air out on the bright deck, I entered the darkness of the planetarium, having successfully obtained tickets that morning by not having my watch on a different time to the rest of the ship.

That evening, there was a gala dinner with a 1920s theme. Jessica really looked the part—like a real flapper. I didn’t really make an effort. I just wore my tuxedo again. It was really fun wandering the ship and seeing all the ornate outfits, especially during the big band dance after dinner. I felt like I was in a photo on the wall of the Overlook Hotel.

Dressed for the 1920s.

Passenger’s log, day seven: Saturday, August 17, 2019

Today was the last full day of the voyage. Tomorrow we disembark.

We had a relaxed day, with the usual activities: a lecture or two; sitting in on the ballet company class.

Instead of getting a buffet lunch, we decided to do a sit-down lunch in the restaurant. That meant sitting at a table with other people, which could’ve been awkward, but turned out to be fine. But now that we’ve done the small talk, that’s probably all our social capital used up.

The main event today was always going to be the reprise and final performance from the English National Ballet. It was an afternoon performance this time. It was as good, if not better, the second time around. Bravo!

Best of all, after the performance, Jessica got to meet James Streeter and Erina Takahashi. Their performance from Dust was amazing, and we gushed with praise. They were very gracious and generous with their time. Needless to say, Jessica was very, very happy.

Shortly before the ballet performance, the captain made another unscheduled announcement. This time it was about a mechanical issue. There was a potential fault that needed to be investigated, which required stopping the ship for a while. Good news for the ballet dancers!

Jessica and I spent some time out on the deck while the ship was stopped. It was a lot warmer out there compared to just a day or two before. It was quite humid too—that’ll help us start to acclimatise for New York.

We could tell that we were getting closer to land. There were more ships on the horizon. From the number of tankers we saw today, the ship must have passed close to a shipping lane.

We’re going to have a very early start tomorrow—although luckily the clocks will go back an hour again. So we did as much of our re-packing as we could this evening.

With the packing done, we still had some time to kill before dinner. We wandered over to the swanky Commodore Club cocktail bar at the fore of the ship. Our timing was perfect. There were two free seats positioned right by a window looking out onto the beautiful sunset we were sailing towards. The combination of ocean waves, gorgeous sunset, and very nice drinks ensured we were very relaxed when we made our way down to dinner.

Sailing into the sunset.

At the entrance of the dining hall—and at the entrance of any food-bearing establishment on board—there are automatic hand sanitiser dispensers. And just in case the automated solution isn’t enough, there’s also a person standing there with a bottle of hand sanitiser, catching your eye and just daring you to refuse an anti-bacterial benediction. As the line of smartly dressed guests enters the restaurant, this dutiful dispenser of cleanliness anoints the hands of each one; a priest of hygiene delivering a slightly sticky sacrament.

The paranoia is justified. A ship is a potential petri dish at sea. In my hometown of Cobh in Ireland, the old cemetery is filled with the bodies of foreign sailors whose ships were quarantined in the harbour at the first sign of cholera or smallpox. While those diseases aren’t likely to show up on the Queen Mary 2, if norovirus were to break out on the ship, it could potentially spread quickly. Hence the war on hand-based microbes.

Maybe it’s because I’ve just finished reading Ed Yong’s excellent book I Contain Multitudes, but I can’t help but wonder about our microbiomes on board this ship. Given enough time, would the microbiomes of the passengers begin to sync up? Maybe on a longer voyage, but this crossing almost certainly doesn’t afford enough time for gut synchronisation. This crossing is almost done.

Passenger’s log, day eight: Sunday, August 18, 2019

Jessica and I got up at 4:15am. This is an extremely unusual occurrence for us. But we were about to experience something very out of the ordinary.

We dressed, looked unsuccessfully for coffee, and made our way on to the observation deck at the top of the ship. Land ho! The lights of New Jersey were shining off the port side of the ship. The lights of Long Island were shining off the starboard side. And dead ahead was the string of lights marking the Verrazano-Narrows Bridge.

The Queen Mary 2 was deliberately designed to pass under this bridge …just. The bridge has a clearance of 228 feet. The Queen Mary 2 is 236.2 feet, keel to funnel. That’s a difference of just 8.2 feet. Believe me, that doesn’t look like much when you’re on the top deck of the ship, standing right by the tallest mast.

The distant glow of New York was matched by the more localised glow of mobile phone screens on the deck. Passengers took photos constantly. Sometimes they took photos with flash, demonstrating a fundamental misunderstanding of how you photograph distant objects.

The distant object that everyone was taking pictures of was getting less and less distant. The Statue of Liberty was coming up on our port side.

I probably should’ve felt more of a stirring at the sight of this iconic harbour sculpture. The familiarity of its image might have dulled my appreciation. But not far from the statue was a dark area, one of the few pieces of land without lights. This was Ellis Island. If the Statue of Liberty was a symbol of welcome for your tired, your poor, your huddled masses yearning to breathe free, then Ellis Island was where the immigration rubber met the administrative road. This was where countless Irish migrants first entered the United States of America, bringing with them their songs, their stories, and their unhealthy appreciation for potatoes.

Before long, the sun was rising and the Queen Mary 2 was parallel parking at the Red Hook terminal in Brooklyn. We went back belowdecks and gathered our bags from our room. Rather than avail of baggage assistance—which would require us to wait a few hours before disembarking—we opted for “self help” disembarkation. Shortly after 7am, our time on board the Queen Mary 2 was at an end. We were in the first group of passengers off the ship, and we sailed through customs and immigration.

Within moments of being back on dry land, we were in a cab heading for our hotel in Tribeca. The cab driver took us over the Brooklyn Bridge, explaining along the way how a cash payment would really be better for everyone in this arrangement. I didn’t have many American dollars, but after a bit of currency haggling, we agreed that I could give him the last of the Canadian dollars I had in my wallet from my recent trip to Vancouver. He’s got family in Canada, so this is a win-win situation.

It being a Sunday morning, there was no traffic to speak of. We were at our hotel in no time. I assumed we wouldn’t be able to check in for hours, but at least we’d be able to leave our bags there. I was pleasantly surprised when I was told that they had a room available! We checked in, dropped our bags, and promptly went in search of coffee and breakfast. We were tired, sure, but we had no jetlag. That felt good.

I connected to the hotel’s WiFi and went online for the first time in eight days. I had a lot of spam to delete, mostly about cryptocurrencies. I was back in the 21st century.

After a week at sea, where the empty horizon was visible in all directions, I was now in a teeming mass of human habitation where distant horizons are rare indeed. After New York, I’ll be heading to Saint Augustine in Florida, then Chicago, and finally Boston. My arrival into Manhattan marks the beginning of this two week American odyssey. But this also marks the end of my voyage from Southampton to New York, and with it, this passenger’s log.

Register for Indie Web Camp Brighton 2019

Back at the end of May, I wrote:

We’re going to have an Indie Web Camp in Brighton on October 19th and 20th. I realise that’s quite a way off, but I’m giving you plenty of advance warning so you can block out that weekend (and plan travel if you’re coming from outside Brighton).

I hope you’ve got those dates marked in your calendar. Now it’s time for the next step: register for the event. Registration is free, but we need to know numbers in advance, so if you’re planning to come, please grab yourself a ticket there.

It’s going to be a lot of fun!

If you’ve never been to an Indie Web Camp before, you should definitely come! It’s indescribably fun and inspiring. The first day—Saturday—is a BarCamp-style day of discussions to really get the ideas flowing. Then the second day—Sunday—is all about designing, building, and making. The whole thing wraps up with demos.

Check out the previous Brighton Indie Web Camps from 2014, 2015, and 2016.

See you at 68 Middle Street on Saturday, October 19th for Indie Web Camp Brighton 2019!

Indie web events in Brighton

Homebrew Website Club is a regular gathering of people getting together to tinker on their own websites. It’s a play on the original Homebrew Computer Club from the ’70s. It shares a similar spirit of sharing and collaboration.

Homebrew Website Clubs happen at various locations: London, San Francisco, Portland, Nuremberg, and more. They usually meet every second Wednesday.

I started running Homebrew Website Club Brighton a while back. I tried the “every second Wednesday” thing, but it was tricky to make that work. People found it hard to keep track of which Wednesdays were Homebrew days and which weren’t. And if you missed one, then it would potentially be weeks between attending.

So I’ve made it a weekly gathering. On Thursdays. That’s mostly because Thursdays work for me: that’s one of the evenings when Jessica has her ballet class, so it’s the perfect time for me to spend a while in the company of fellow website owners.

If you’re in Brighton and you have your own website (or you want to have your own website), you should come along. It’s every Thursday from 6pm to 7:30pm ‘round at the Clearleft studio on 68 Middle Street. Add it to your calendar.

There might be a Thursday when I’m not around, but it’s highly likely that Homebrew Website Club Brighton will happen anyway because either Trys, Benjamin or Cassie will be here.

(I’m at Homebrew Website Club Brighton right now, writing this. Remy is here too, working on some very cool webmention stuff.)

There’s something else you should add to your calendar. We’re going to have an Indie Web Camp in Brighton on October 19th and 20th. I realise that’s quite a way off, but I’m giving you plenty of advance warning so you can block out that weekend (and plan travel if you’re coming from outside Brighton).

If you’ve never been to an Indie Web Camp before, you should definitely come! It’s indescribably fun and inspiring. The first day—Saturday—is a BarCamp-style day of discussions to really get the ideas flowing. Then the second day—Sunday—is all about designing, building, and making. The whole thing wraps up with demos.

It’s been a while since we’ve had an Indie Web Camp in Brighton. You can catch up on the Brighton Indie Web Camps we had in 2014, 2015, and 2016. Since then I’ve been to Indie Web Camps in Berlin, Nuremberg, and Düsseldorf, but it’s going to be really nice to bring it back home.

The event will be free to attend, but I’ll set up an official ticket page to keep track of who’s coming. I’ll let you know when that’s up and ready. In the meantime, you can register your interest in attending on the 2019 Indie Web Camp Brighton page on the Indie Web wiki.


Last week was a bit of an event whirlwind. In the space of seven days I was at Indie Web Camp, Beyond Tellerrand, and Accessibility Club in Düsseldorf, followed by a train ride to Utrecht for Frontend United. Phew!

Indie Web Camp Düsseldorf was—as always—excellent. Once again, Sipgate generously gave us the use of their lovely, lovely space for the weekend. We had one day of really thought-provoking discussions, followed by a day of heads-down hacking and making.

I decided it was time for me to finally own my replies. For a while now, I’ve been posting notes on my own site and syndicating to Twitter. But whenever I replied to someone else’s tweet, I did it from Twitter. I wanted to change that.

From a coding point of view, it wasn’t all that tricky. The real challenges were to do with the interface. I needed to add another field for the URL I’m replying to …but I didn’t want my nice and minimal posting interface to get too cluttered. I ended up putting the new form field inside a details element with a summary of “Reply to” so that the form field would be hidden by default, and toggled open by hitting that “Reply to” text:

    <label for="replyto">Reply to</label>
    <input type="url" id="replyto" name="replyto">
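
For illustration, the wrapped-up markup would look something like this (a sketch of the pattern just described, not necessarily the exact markup I ended up with):

    <details>
        <summary>
            <label for="replyto">Reply to</label>
        </summary>
        <input type="url" id="replyto" name="replyto">
    </details>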

I sent my first test reply to a post on Aaron’s website. Aaron was sitting next to me at the time.

Once that was all working, I sent my first reply to a tweet. It was a response to a tweet from Tantek. Tantek was also sitting next to me at the time.

I spent most of the day getting that Twitter syndication to work. I had something to demo, but I foolishly decided to risk it all by attempting to create a bookmarklet so that I could post directly from a tweet page (instead of hopping back to my own site in a different tab). By cannibalising the existing bookmarklet I use for posting links, I just about managed to get it working in time for the end of day demos.

So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.

Then again, I kind of like how wonderfully random and out-of-context they look. You can browse through all my replies so far.

I’m glad I got this set up. Now when Andy posts stuff on Twitter, I’m custodian of my responses:

@AndyBudd: Who are your current “Design Heroes”? I would say Falcor from Neverending Story, the big flying dog.

Push without notifications

On the first day of Indie Web Camp Berlin, I led a session on going offline with service workers. This covered all the usual use-cases: pre-caching; custom offline pages; saving pages for offline reading.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

The Push API is what makes push notifications possible on the web. There are a lot of moving parts—browser, server, service worker—and, frankly, it’s way over my head. But I’m familiar with the general gist of how it works. Here’s a typical flow:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker creates a notification linking to the URL, interrupting the user, and generally adding to the weight of information overload.

Here’s what Sebastiaan wanted to investigate: what if that last step weren’t so intrusive? Here’s the alternate flow he wanted to test:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker fetches the contents of the URL provided in the push message and caches the page. Silently.

It worked.
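
In service worker terms, that final step boils down to something like the following sketch, which assumes the push payload is JSON with a url property (the payload format is entirely up to the server):

    self.addEventListener('push', event => {
        // Parse the push payload (assumed here to be JSON like {"url": "/new-page"}).
        const data = event.data.json();
        // Fetch the page and add it to a cache. No notification. No interruption.
        event.waitUntil(
            caches.open('pushes').then(cache => cache.add(data.url))
        );
    });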

I think this could be a real game-changer. I don’t know about you, but I’m very, very wary of granting websites the ability to send me push notifications. In fact, I don’t think I’ve ever given a website permission to interrupt me with push notifications.

You’ve seen the annoying permission dialogues, right?

In Firefox, it looks like this:

Will you allow name-of-website to send notifications?

[Not Now] [Allow Notifications]

In Chrome, it’s:

name-of-website wants to

Show notifications

[Block] [Allow]

But in actual fact, these dialogues are asking for permission to do two things:

  1. Receive messages pushed from the server.
  2. Display notifications based on those messages.

There’s no way to ask for permission just to do the first part. That’s a shame. While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.
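
The coupling is baked right into the Push API. When a page subscribes to push messages, there’s a userVisibleOnly flag, and Chrome, for one, will refuse the subscription unless that flag is set to true, effectively mandating notifications. Here’s a sketch (vapidPublicKey is a stand-in for your server’s public key):

    // Inside an async function, once the service worker is registered:
    const registration = await navigator.serviceWorker.ready;
    const subscription = await registration.pushManager.subscribe({
        userVisibleOnly: true, // Chrome rejects the subscription if this is false
        applicationServerKey: vapidPublicKey // stand-in for your server's public key
    });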

Think of the use cases:

  • I grant push permission to a magazine. When the magazine publishes a new article, it’s cached on my device.
  • I grant push permission to a podcast. Whenever a new episode is published, it’s cached on my device.
  • I grant push permission to a blog. When there’s a new blog post, it’s cached on my device.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

There’s plenty of opportunity for abuse—the cache could get filled with content you never asked for. But websites can already do that, and they don’t need to be granted any permissions to do so; just by visiting a website, it can add multiple files to a cache.

So it seems that the reason for the permissions dialogue is all about displaying notifications …not so much about receiving push messages from the server.

I wish there were a way to implement this background-caching pattern without requiring the user to grant permission to a dialogue that contains the word “notification.”

I wonder if the act of adding a site to the home screen could implicitly grant permission to allow use of the Push API without notifications?

In the meantime, the proposal for periodic synchronisation (using background sync) could achieve similar results, but in a less elegant way; periodically polling for new content instead of receiving a push message when new content is published. Also, it requires permission. But at least in this case, the permission dialogue should be more specific, and wouldn’t include the word “notification” anywhere.
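
For illustration, here’s roughly what that would look like, based on the proposal as it currently stands (so the API shape may well change; the tag and URL here are placeholders):

    // In the page, inside an async function: register a periodic sync.
    const registration = await navigator.serviceWorker.ready;
    await registration.periodicSync.register('get-latest', {
        minInterval: 24 * 60 * 60 * 1000 // at most once a day
    });

    // In the service worker: poll for new content when the sync fires.
    self.addEventListener('periodicsync', event => {
        if (event.tag === 'get-latest') {
            event.waitUntil(
                caches.open('pushes').then(cache => cache.add('/recent-content'))
            );
        }
    });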

Webmentions at Indie Web Camp Berlin

I was in Berlin for most of last week, and every day was packed with activity.

By the time I got back to Brighton, my brain was full …just in time for FF Conf.

All of the events were very different, but equally enjoyable. It was also quite nice to just attend events without speaking at them.

Indie Web Camp Berlin was terrific. There was an excellent turnout, and once again, I found that the format was just right: a day of discussions (BarCamp style) followed by a day of doing (coding, designing, hacking). I got very inspired on the first day, so I was raring to go on the second.

What I like to do on the second day is try to complete two tasks; one that’s fairly straightforward, and one that’s a bit tougher. That way, when it comes time to demo at the end of the day, even if I haven’t managed to complete the tougher one, I’ll still be able to demo the simpler one.

In this case, the tougher one was also tricky to demo. It involved a lot of invisible behind-the-scenes plumbing. I was tweaking my webmention endpoint (stop sniggering—tweaking your endpoint is no laughing matter).

Up until now, I could handle straightforward webmentions, and I could handle updates (if I receive more than one webmention from the same link, I check it each time). But I needed to also handle deletions.

The spec is quite clear on this. A 404 isn’t enough to trigger a deletion—that might be a temporary state. But a status of 410 Gone indicates that a resource was once here but has since been deliberately removed. In that situation, any stored webmentions for that link should also be removed.
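
Translated into code, that verification step looks something like the following sketch, in which storeWebmention and removeWebmention are hypothetical stand-ins for whatever an endpoint actually does with its data:

    // Inside an async function: re-check the source URL of an
    // incoming (or repeated) webmention.
    const response = await fetch(source);
    if (response.status === 410) {
        // Gone for good: remove any stored webmentions from this URL.
        await removeWebmention(source, target);
    } else if (response.ok) {
        const html = await response.text();
        if (html.includes(target)) {
            await storeWebmention(source, target, html);
        } else {
            // The page still exists, but no longer links here.
            await removeWebmention(source, target);
        }
    }
    // A plain 404 might be temporary, so it triggers no deletion.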

Anyway, I think I got it working, but it’s tricky to test and even trickier to demo. “Not to worry”, I thought, “I’ve always got my simpler task.”

For that, I chose to add a little map to my homepage showing the last location I published something from. I’ve been geotagging all my content for years (journal entries, notes, links, articles), but not really doing anything with that data. This is a first step to doing something interesting with many years of location data.

I’ve got it working now, but the demo gods really weren’t with me at Indie Web Camp. Both of my demos failed. The webmention demo failed quite embarrassingly.

As well as handling deletions, I also wanted to handle updates where a URL that once linked to a post of mine no longer does. Just to be clear, the URL still exists—it’s not 404 or 410—but it has been updated to remove the original link back to one of my posts. I know this sounds like another very theoretical situation, but I’ve actually got an example of it on my very first webmention test post from five years ago. Believe it or not, there’s an escort agency in Nottingham that’s using webmention as a vector for spam. They post something that does link to my test post, send a webmention, and then remove the link to my test post. I almost admire their dedication.

Still, I wanted to foil this particular situation so I thought I had updated my code to handle it. Alas, when it came time to demo this, I was using someone else’s computer, and in my attempt to right-click and copy the URL of the spam link …I accidentally triggered it. In front of a room full of people. It was mildly NSFW, but more worryingly, a potential Code Of Conduct violation. I’m very sorry about that.

Apart from the humiliating demo, I thoroughly enjoyed Indie Web Camp, and I’m going to keep adjusting my webmention endpoint. There was a terrific discussion around the ethical implications of storing webmentions, led by Sebastian, based on his epic post from earlier this year.

We established early in the discussion that we weren’t going to try to solve legal questions—like GDPR “compliance”, which varies depending on which lawyer you talk to—but rather try to figure out what the right thing to do is.

Earlier that day, during the introductions, I quite happily showed webmentions in action on my site. I pointed out that my last blog post had received a response from another site, and because that response was marked up as an h-entry, I displayed it in full on my site. I thought this was all hunky-dory, but now this discussion around privacy made me question some inferences I was making:

  1. By receiving a webmention in the first place, I was inferring a willingness for the link to be made public. That’s not necessarily true, as someone pointed out: a CMS could be automatically sending webmentions, which the author might be unaware of.
  2. If the linking post is marked up in h-entry, I was inferring a willingness for the content to be republished. Again, not necessarily true.

That second inference of mine—that publishing in a particular format somehow grants permissions—actually has an interesting precedent: Google AMP. Simply by including the Google AMP script on a web page, you are implicitly giving Google permission to store a complete copy of that page and serve it from their servers instead of sending people to your site. No terms and conditions. No checkbox ticked. No “I agree” button pressed.

Just sayin’.

Anyway, when it comes to my own processing of webmentions, I’m going to take some of the suggestions from the discussion on board. There are certain signals I could be looking for in the linking post:

  • Does it include a link to a licence?
  • Is there a restrictive robots.txt file?
  • Are there meta declarations that say noindex?

Each one of these could help to infer whether or not I should be publishing a webmention. I quickly realised that what we’re talking about here is an algorithm.
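
That algorithm might look something like this purely hypothetical sketch, in which every function name is a placeholder:

    // Decide whether a received webmention should be republished,
    // based on signals found in the linking post.
    function shouldRepublish(linkingPost) {
        if (hasRestrictiveLicence(linkingPost)) return false;
        if (disallowedByRobotsTxt(linkingPost.url)) return false;
        if (hasNoindexMeta(linkingPost)) return false;
        // Only republish content that's deliberately marked up as h-entry.
        return isMarkedUpAsHEntry(linkingPost);
    }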

Despite its current usage to mean “magic”, an algorithm is a recipe. It’s a series of steps that contribute to a decision point. The problem is that, in the case of silos like Facebook or Instagram, the algorithms are secret (which probably contributes to their aura of magical thinking). If I’m going to write an algorithm that handles other people’s information, I don’t want to make that mistake. Whatever steps I end up codifying in my webmention endpoint, I’ll be sure to document them publicly.


I’ve come to believe that the goal of any good framework should be to make itself unnecessary.

Brian said it explicitly of his PhoneGap project:

The ultimate purpose of PhoneGap is to cease to exist.

That makes total sense, especially if your code is a polyfill—those solutions are temporary by design. Autoprefixer is another good example of a piece of code that becomes less and less necessary over time.

But I think it’s equally true of any successful framework or library. If the framework becomes popular enough, it will inevitably end up influencing the standards process, thereby becoming dispensable.

jQuery is the classic example of this. There’s very little reason to use jQuery these days because you can accomplish so much with browser-native JavaScript. But the reason why you can accomplish so much without jQuery is because of jQuery. I don’t think we would have querySelector without jQuery. The library proved the need for the feature. The same is true for a whole load of DOM scripting features.

The same process is almost certain to occur with React—it’s a good bet there will be a standardised equivalent to the virtual DOM at some point.

When Google first unveiled AMP, its intentions weren’t clear to me. I hoped that it existed purely to make itself redundant:

As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.

Alas, as time has passed, that hope shows no signs of being fulfilled. If anything, I’ve noticed publishers using the existence of their AMP pages as a justification for just letting their “regular” pages put on weight.

Worse yet, the messaging from Google around AMP has shifted. Instead of pitching it as a format for creating parallel versions of your web pages, they’re now also extolling the virtues of having your AMP pages be the only version you publish:

In fact, AMP’s evolution has made it a viable solution to build entire websites.

On an episode of the Dev Mode podcast a while back, AMP was a hotly-debated topic. But even those defending AMP were doing so on the understanding that it was more a proof-of-concept than a long-term solution (and also that AMP is just for news stories—something else that Google are keen to change).

But now it’s clear that the Google AMP Project is being marketed more like a framework for the future: a collection of web components that prioritise performance …which is kind of odd, because that’s also what Google’s Polymer project is. The difference being that pages made with Polymer don’t get preferential treatment in Google’s search results. I can’t help but wonder how the Polymer team feels about AMP’s gradual pivot onto their territory.

If the AMP project existed in order to create a web where AMP was no longer needed, I think I could get behind it. But the more it’s positioned as the only viable solution to solving performance, the more uncomfortable I am with it.

Which, by the way, brings me to one of the most pernicious ideas around Google AMP—positioning anyone opposed to it as not caring about web performance. Nothing could be further from the truth. It’s precisely because performance on the web is so important that it deserves a long-term solution, co-created by all of us: not some commandments delivered to us from on high by one organisation, enforced by preferential treatment by that organisation’s monopoly in search.

It’s the classic logical fallacy:

  1. Performance! Something must be done!
  2. AMP is something.
  3. Now something has been done.

By marketing itself as the only viable solution to the web performance problem, I think the AMP project is doing itself a great disservice. If it positioned itself as an example to be emulated, I would welcome it.

I wish that AMP were being marketed more like a temporary polyfill. And as with any polyfill, I look forward to the day when AMP is no longer necessary.

I want AMP to become extinct. I genuinely think that the Google AMP team should share that wish.


There’s a veritable smörgåsbord of great workshops on the horizon…

Clearleft presents a workshop with Jan Chipchase on field research in London on May 29th, and again on May 30th. The first day is sold out, but there are still tickets available for the second workshop (tickets are £654). If you’ve read Jan’s beautiful Field Study Handbook, then you’ll know what a great opportunity it is to spend a day in his company. But don’t dilly-dally—that second day is likely to sell out too.

This event is for product teams, designers, researchers, insights teams, in agencies, in-house, local and central government. People who are curious about human interaction, and their place in the world.

I’m really excited that Sarah and Val are finally bringing their web animation workshop to Brighton (I’ve been not-so-subtly suggesting that they do this for a while now). It’s a two day workshop on July 9th and 10th. There are still some tickets available, but probably not for much longer (tickets are £639). The workshop is happening at 68 Middle Street, the home of Clearleft.

This workshop will get you up and running with web animation in less time than it would take to read all the tutorials you have bookmarked. Over two days, you’ll go from beginner or novice web animator to having expert level knowledge of the current web animation landscape. You’ll get an in-depth look at animating with CSS, JavaScript, and SVG through hands-on exercises and learn the most efficient workflows for each.

A bit before that, though, there’s a one-off workshop on responsive web typography from Rich on Thursday, June 29th, also at 68 Middle Street. You can expect the same kind of brilliance that he demonstrated in his insta-classic Web Typography book, but delivered by the man himself.

You will learn how to combine centuries-old craft with cutting edge technology, including variable fonts, to design and develop for screens of all shapes and sizes, and provide the best reading experiences for your modern readers.

Whether you’re a designer or a developer, just starting out or seasoned pro, there will be plenty in this workshop to get your teeth stuck into.

Tickets are just £435, and best of all, that includes a ticket to the Ampersand conference the next day (standalone conference tickets are £235 so the workshop/conference combo is a real bargain). This year’s Ampersand is shaping up to be an unmissable event (isn’t it always?), so the workshop is like an added bonus.

See you there!