If you’re in the habit of visiting the Recently Updated Blogs page, and leaving it open, the times when each blog was updated will now keep up with the relentless passing of time.
Does that make sense? “3 minutes ago” will change to “4 minutes ago” and so on and on and on, until you refresh the page.
I thought that was a nice little addition, and I immediately thought of The Session. There are time elements all over the site with relative times as the text content: 2 minutes ago, 7 hours ago, 1 year ago, and so on. Those strings of text are generated on the server. But I figured it would be a nice enhancement to periodically update them in the browser after the page has loaded.
I viewed source to see how Phil was doing it. The code is nice and short, using a library called Day.js with a plug-in for relative time.
“Hang on”, I thought, “isn’t there some web standard for doing this kind of thing?” I had a vague memory of some JavaScript API for formatting dates and times.
I’ve got a function that loops through all the time elements with datetime attributes. It compares the current timestamp to that value to get the elapsed time. Then that’s formatted using the format() method of Intl.RelativeTimeFormat and output as innerText.
You need to tell the format() method which units you want to use: seconds, minutes, hours, days, etc. So there’s a little bit of looping to figure out which unit is most appropriate. If the elapsed time is less than a minute, use seconds. If the elapsed time is less than an hour, use minutes. If the elapsed time is less than a day, use hours. You get the idea.
It’s a pity there isn’t some kind of magic unit like “auto” to do this, but it’s not much extra code to figure it out.
Anyway, that function runs periodically using setInterval(). I’ve set it to run every 30 seconds in my gist. On The Session I’ve set it to one minute.
You’ll notice that I’m grabbing all the relevant time elements—using document.querySelectorAll('time[datetime]')—every time the function is run. That may seem inefficient. Couldn’t I just grab them once and then keep them stored as an array? But I want this to work even if the page contents have been updated with Ajax. (Do people even say “Ajax” any more? Get off my lawn, you pesky kids!)
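Here’s a sketch of the kind of function I mean. It’s not the exact code from my gist, and the unit thresholds are just there to illustrate the approach:

const formatter = new Intl.RelativeTimeFormat('en', {numeric: 'auto'});

const units = [
  {name: 'second', limit: 60, inMilliseconds: 1000},
  {name: 'minute', limit: 60 * 60, inMilliseconds: 1000 * 60},
  {name: 'hour', limit: 60 * 60 * 24, inMilliseconds: 1000 * 60 * 60},
  {name: 'day', limit: 60 * 60 * 24 * 30, inMilliseconds: 1000 * 60 * 60 * 24},
  {name: 'month', limit: 60 * 60 * 24 * 365, inMilliseconds: 1000 * 60 * 60 * 24 * 30},
  {name: 'year', limit: Infinity, inMilliseconds: 1000 * 60 * 60 * 24 * 365}
];

function updateRelativeTimes() {
  // Grab every time element with a datetime attribute, every time the function runs
  document.querySelectorAll('time[datetime]').forEach(timeElement => {
    const elapsed = Date.now() - new Date(timeElement.getAttribute('datetime')).getTime();
    const seconds = elapsed / 1000;
    // Find the most appropriate unit for the amount of elapsed time
    const unit = units.find(u => seconds < u.limit);
    // A negative value means the past: “3 minutes ago” rather than “in 3 minutes”
    timeElement.innerText = formatter.format(-Math.round(elapsed / unit.inMilliseconds), unit.name);
  });
}

updateRelativeTimes();
setInterval(updateRelativeTimes, 30000);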
I think I’ve written this code in an abstract way so that you should be able to drop it into any web page. For the calculations to work, you’ll need to make sure that your datetime attributes include timezone information. If there’s no timezone info, UTC is assumed.
This was a fun little piece of functionality to play around with. Now I know a little more about this Intl.RelativeTimeFormat object. The way I’m using it is a classic example of progressive enhancement. If a browser doesn’t support it, or if my code breaks, it’s no big deal. The functionality is a little bonus that almost nobody will notice anyway. Just a small delighter …if you’re the kind of person who finds it delightful when relative time strings automatically update.
In-person events are like buses. You go two years without one and then three come along at once.
My buffer is overflowing from experiencing three back-to-back events. Best of all, my participation was different each time.
First of all, there was Leading Design New York, where I was the host. The event was superb, although it’s a bit of a shame I didn’t have any time to properly experience Manhattan. I wasn’t able to do any touristy things or meet up with my friends who live in the city. Still the trip was well worth it.
Right after I got back from New York, I took the train to Edinburgh for the Design It Build It conference where I was a speaker. It was a good event. I particularly enjoyed Rafaela Ferro’s talk on accessibility. The last time I spoke at DIBI was 2011(!) so it was great to make a return visit. I liked that the audience was seated cabaret style. That felt safer than classroom-style seating, allowing more space between people. At the same time, it felt more social, encouraging more interaction between attendees. I met some really interesting people.
I got back from Edinburgh just in time for UX Camp Brighton on the weekend, where I was an attendee. I felt like a bit of a moocher not giving a presentation, but I really, really enjoyed every session I attended. It’s been a long time since I’ve been at a Barcamp-style event—probably the last Indie Web Camp I attended, whenever that was. I’d forgotten how well the format works.
But even with all these in-person events, online events aren’t going anywhere anytime soon. Yesterday I started hosting the online portion of Leading Design New York and I’ll be doing it again today. The post-talk discussions with Julia and Lisa are lots of fun!
So in the space of just a couple of weeks I’ve been a host, a speaker, and an attendee. Now it’s time for me to get my head back into one other event role: conference curator. No more buses/events are on the way for the next while, so I’m going to be fully devoted to organising the line-up for UX London 2022. Exciting!
Chrome Dev Summit kicked off yesterday. The opening keynote had its usual share of announcements.
There was quite a bit of talk about privacy, which sounds good in theory, but then we were told that Google would be partnering with “industry stakeholders.” That’s probably code for the kind of ad-tech sharks that have been making a concerted effort to infest W3C groups. Beware.
But once Una was on-screen, the topics shifted to the kind of design and development updates that don’t have sinister overtones.
We’re also partnering with Jeremy Keith of Clearleft to launch Learn Responsive Design on web.dev. This is a free online course with everything you need to know about designing for the new responsive web of today.
This is what’s been keeping me busy for the past few months (and for the next month or so too). I’ve been writing fifteen pieces—or “modules”—on modern responsive web design. One third of them are available now at web.dev/learn/design:
The rest are on their way: typography, responsive images, theming, UI patterns, and more.
I’ve been enjoying this process. It’s hard work that requires me to dive deep into the nitty-gritty details of lots of different techniques and technologies, but that can be quite rewarding. As is often said, if you truly want to understand something, teach it.
Oh, and I made one more appearance at the Chrome Dev Summit. During the “Ask Me Anything” section, quizmaster Una asked the panelists a question from me:
Given the court proceedings against AMP, why should anyone trust FLOC or any other Google initiatives ostensibly focused on privacy?
(Thanks to Jake for helping craft the question into a form that could make it past the legal department but still retain its spiciness.)
The question got a response. I wouldn’t say it got an answer. My verdict remains:
I’m not sure that Google Chrome can be considered a user agent.
The fundamental issue is that you’ve got a single company that’s the market leader in web search, the market leader in web advertising, and the market leader in web browsers. I honestly believe all three would function better—and more honestly—if they were separate entities.
Monopolies aren’t just damaging for customers. They’re damaging for the monopoly too. I’d love to see Google Chrome compete on being a great web browser without having to also balance the needs of surveillance-based advertising.
I’ve spent the time since then participating in good faith, but I can’t do that any longer. Here’s what I wrote in my resignation email:
Hi all,
As mentioned at the end of the last call, I’m stepping down from the AMP advisory committee.
I can’t in good faith continue to advise on the AMP project for the OpenJS Foundation when it has become clear to me that AMP remains a Google product, with only a subset of pieces that could even be considered open source.
If I were to remain on the advisory committee, my feelings of resentment about this situation would inevitably affect my behaviour. So it’s best for everyone if I step away now instead of descending into outright sabotage. It’s not you, it’s me.
I’d like to thank the OpenJS Foundation for allowing me to participate. It’s been an honour to watch Tobie and Jory in action.
I wish everyone well and I hope that the advisory committee can successfully guide the AMP project towards a happy place where it can live out its final days in peace.
I don’t have a replacement candidate to nominate but I’ll ask around amongst other independent sceptical folks to see if there’s any interest.
This is an interesting time for AMP …whatever AMP is.
See, that’s been a problem with Google AMP from the start. There are multiple definitions of what AMP is.
There’s the collection of web components. If that were all AMP is, it would be a very straightforward project, similar to other collections of web components (like Polymer). But then there’s the concept of validation. The validation comes from a set of rules, defined by Google. And there’s the AMP cache, or more accurately, Google hosting.
Only one piece of that trinity—the collection of web components—is eligible for the label of being open source, and even that’s a stretch considering that most of the contributions come from full-time Google employees. The other two parts are firmly under Google’s control.
I was hoping it was a marketing problem. We spent a lot of time on the advisory committee trying to figure out ways of making it clearer what AMP actually is. But it was a losing battle. The phrase “the AMP project” is used to cover up the deeply intertwingled nature of its constituent parts. Bits of it are open source, but most of it is proprietary. The OpenJS Foundation doesn’t seem like a good home for a mostly-proprietary project.
Whenever a representative from Google showed up at an advisory committee meeting, it was clear that they viewed AMP as a Google product. I never got the impression that they planned to hand over control of the project to the OpenJS Foundation. Instead, they wanted to hear what people thought of their project. I’m not comfortable doing that kind of unpaid labour for a large profitable organisation.
Even worse, Google representatives reminded us that AMP was being used as a foundational technology for other Google products: stories, email, ads, and even some weird payment thing in native Android apps. That’s extremely worrying.
While I was serving on the AMP advisory committee, a coalition of attorneys general filed a suit against Google for anti-competitive conduct:
Google designed AMP so that users loading AMP pages would make direct communication with Google servers, rather than publishers’ servers. This enabled Google’s access to publishers’ inside and non-public user data.
We were immediately told that we could not discuss an ongoing court case in the AMP advisory committee. That’s fair enough. But will it go both ways? Or will lawyers acting on Google’s behalf be allowed to point to the AMP advisory committee and say, “But AMP is an open source project! Look, it even resides under the banner of the OpenJS Foundation.”
If there’s even a chance of the AMP advisory committee being used as a Potemkin village, I want no part of it.
But even as I’m noping out of any involvement with Google AMP, my parting words have to be about how impressed I am with the OpenJS Foundation. Jory and Tobie have been nothing less than magnificent in their diplomacy, cat-herding, schedule-wrangling, timekeeping, and other organisational superpowers that I’m crap at.
I sincerely hope that Google isn’t taking advantage of the OpenJS Foundation’s kind-hearted trust.
I remember trying to convince people to use semantic markup because it’s good for accessibility. That tactic didn’t always work. When it didn’t, I would add “By the way, Google’s searchbot is indistinguishable from a screen-reader user so semantic markup is good for SEO.”
That usually worked. It always felt unsatisfying though. I don’t know why. It doesn’t matter if people do the right thing for the wrong reasons. The end result is what matters. But still. It never felt great.
It happened with responsive design and progressive enhancement too. If I couldn’t convince people based on user experience benefits, I’d pull up some official pronouncement from Google recommending those techniques.
Even AMP, a dangerously ill-conceived project, has one very handy ace in the hole. You can’t add third-party JavaScript cruft to AMP pages. That’s useful:
Beleaguered developers working for publishers of big bloated web pages have a hard time arguing with their boss when they’re told to add another crappy JavaScript tracking script or bloated library to their pages. But when they’re making AMP pages, they can easily refuse, pointing out that the AMP rules don’t allow it. Google plays the bad cop for us, and it’s a very valuable role.
AMP is currently dying, which is good news. Google have announced that core web vitals will be used to boost ranking instead of requiring you to publish in their proprietary AMP format. The really good news is that the political advantage that came with AMP has also been ported over to core web vitals.
Take user-hostile obtrusive overlays. Perhaps, as a conscientious developer, you’ve been arguing for years that they should be removed from the site you work on because they’re so bad for the user experience. Perhaps you have been met with the same indifference that I used to get regarding semantic markup.
Well, now you can point out how those annoying overlays are affecting, for example, the cumulative layout shift for the site. And that number is directly related to SEO. It’s one thing for a department to over-ride UX concerns, but I bet they’d think twice about jeopardising the site’s ranking with Google.
I know it doesn’t feel great. It’s like dealing with a bully by getting an even bigger bully to threaten them. Still. Needs must.
Core web vitals from Google are the ingredients for an alphabet soup of exclusionary initialisms. But once you get past the unnecessary jargon, there’s a sensible approach underpinning the measurements.
From May—no, June—these measurements will be a ranking signal for Google search so performance will become more of an SEO issue. This is good news. This is what Google should’ve done years ago instead of pissing up the wall with their dreadful and damaging AMP project that blackmailed publishers into using a proprietary format in exchange for preferential search treatment. It was all done supposedly in the name of performance, but in reality all it did was antagonise users and publishers alike.
Core web vitals are an attempt to put numbers on user experience. This is always a tricky balancing act. You’ve got to watch out for the McNamara fallacy. Harry has already started noticing this:
A new and unusual phenomenon: clients reluctant (even refusing) to fix performance issues unless they directly improve Vitals.
Once you put a measurement on something, there’s a danger of focusing too much on the measurement. Chris is worried that we’re going to see tips’n’tricks for gaming core web vitals:
This feels like the start of a weird new era of web performance where the metrics of web performance have shifted to user-centric measurements, but people are implementing tricky strategies to game those numbers with methods that, if anything, slightly harm user experience.
The map is not the territory. The numbers are a proxy for user experience, but it’s notoriously difficult to measure intangible ideas like pain and frustration. As Laurie says:
This is 100% the downside of automatic tools that give you a “score”. It’s like gameification. It’s about hitting that perfect score instead of the holistic experience.
Google used its dominant position in the marketplace to force widespread adoption of a largely proprietary technology for creating websites. By switching to Core Web Vitals, those power dynamics haven’t materially changed.
(If you prefer using initialisms, remember that CFP is Certified Financial Planner, CLS is Community Legal Services, and FID is Flame Ionization Detector. Together they form CWV, Catholic War Veterans.)
Like Terence, I’m not a fan of Google AMP—my initially positive reaction to it soured over time as it became clear that Google were blackmailing publishers by privileging AMP pages in Google Search. But all I ever did was bitch and moan about it on my website. Terence actually did something.
So this year I put myself forward as a candidate for the AMP advisory committee. I have no idea how the election process works (or who does the voting) but thanks to whoever voted for me. I’m now a member of the AMP advisory committee. If you look at that blog post announcing the election results, you’ll see the brief blurb from everyone who was voted in. Most of them are positively bullish on AMP. Mine is not:
Jeremy Keith is a writer and web developer dedicated to an open web. He is concerned that AMP is being unfairly privileged by Google’s search engine instead of competing on its own merits.
The good news is that my main beef with AMP is already being dealt with. I wanted exactly what Terence said:
My recommendation is that Google stop requiring that organisations use Google’s proprietary mark-up in order to benefit from Google’s promotion.
That’s happening as of May of this year. Just as well—the AMP advisory committee have absolutely zero influence on Google search. I’m not sure how much influence we have at all really.
This is an interesting time for AMP …whatever AMP is.
See, that’s been a problem with Google AMP from the start. There are multiple definitions of what AMP is. At the outset, it seemed pretty straightforward. AMP is a format. It has a doctype and rules that you have to meet in order to be “valid” AMP. Part of that ruleset involved eschewing HTML elements like img and video in favour of web components like amp-img and amp-video.
That messaging changed over time. We were told that AMP is the collection of web components. If that’s the case, then I have no problem at all with AMP. People are free to use the components or not. And if the project produces performant accessible web components, then that’s great!
But right now it’s not at all clear which AMP people are talking about, even in the advisory committee. When we discuss improving AMP, do we mean the individual components or the set of rules that qualify an AMP page as being “valid”?
The use-case for AMP-the-format (as opposed to AMP-the-library-of-components) was pretty clear. If you were a publisher and you wanted to appear in the top stories carousel in Google search, you had to publish using AMP. Just using the components wasn’t enough. Your pages had to be validated as AMP-the-format.
That’s no longer the case. From May, pages that are fast enough will qualify for the top stories carousel. What will publishers do then? Will they still maintain separate AMP-the-format pages? Time will tell.
I suspect publishers will ditch AMP-the-format, although it probably won’t happen overnight. I don’t think anyone likes being blackmailed by a search engine:
An engineer at a major news publication who asked not to be named because the publisher had not authorized an interview said Google’s size is what led publishers to use AMP.
The pre-rendering (along with the lightning bolt) that happens for AMP pages in Google search might be a reason for publishers to maintain their separate AMP-the-format pages. But I suspect publishers don’t actually think the benefits of pre-rendering outweigh the costs: pre-rendered AMP-the-format pages are served from Google’s servers with a Google URL. If anything, I think that publishers will look forward to having the best of both worlds—having their pages appear in the top stories carousel, but not having their pages hijacked by Google’s so-called-cache.
Does AMP-the-format even have a future without Google search propping it up? I hope not. I think it would make everything much clearer if AMP-the-format went away, leaving AMP-the-collection-of-components. We’d finally see these components being evaluated on their own merits—usefulness, performance, accessibility—without unfair interference.
So my role on the advisory committee so far has been to push for clarification on what we’re supposed to be advising on.
I think it’s good that I’m on the advisory committee, although I imagine my opinions could easily be dismissed given my public record of dissent. I may well be fooling myself though, like those people who go to work at Facebook and try to justify it by saying they can accomplish more from inside than outside (or whatever else they tell themselves to sleep at night).
The topic I’ve volunteered to help with is somewhat existential in nature: what even is AMP? I’m happy to spend some time on that. I think it’ll be good for everyone to try to get that sorted, regardless about how you feel about the AMP project.
I have no intention of giving any of my unpaid labour towards the actual components themselves. I know AMP is theoretically open source now, but let’s face it, it’ll always be perceived as a Google-led project so Google can pay people to work on it.
That said, I’ve also recently joined a web components community group that Lea instigated. Remember she wrote that great blog post recently about the failed promise of web components? I’m not sure how much I can contribute to the group (maybe some meta-advice on the nature of good design principles?) but at the very least I can serve as a bridge between the community group and the AMP advisory committee.
After all, AMP is a collection of web components. Maybe.
Extreme, yes, but perhaps there’s a nugget of truth to it. And it seemed to resonate:
I’ve never actually seen anybody justify SPA transitions with actual business data. They generally don’t seem to increase sales, conversion, or retention.
For some reason, for SPAs, managers are all of a sudden allowed to make purely emotional arguments: “it feels snappier”
If businesses were run rationally, when somebody asks for an order of magnitude increase in project complexity, the onus would be on them to prove that it proportionally improves business results.
But I’ve never actually seen that happen in a software business.
A single page app architecture makes a lot of sense for interaction-heavy sites with lots of state to maintain, like twitter.com. But I’ve seen plenty of sites built as single page apps even though there’s little to no interactivity or state management. For some people, it’s the default way of building anything on the web, even a brochureware site.
It seems like there’s a consensus that single page apps may have long initial loading times, but then they have quick transitions between “pages” …just like a carousel really. But I don’t know if that consensus is based on reality. Whether you’re loading a page of HTML or loading a chunk of JSON, you’re still making a network request that will take time to resolve.
The argument for loading a chunk of JSON is that you don’t have to make any requests for the associated CSS and JavaScript—they’re already loaded. Whereas if you request a page of HTML, that HTML will also request CSS and JavaScript.
Leaving aside the fact that this is literally what the browser cache takes care of, I’ve seen some circular reasoning around this:
We need to create a single page app because our assets, like our JavaScript dependencies, are so large.
Why are the JavaScript dependencies so large?
We need all that JavaScript to create the single page app functionality.
To be fair, in the past, the experience of going from page to page used to feel a little herky-jerky, even if the response times were quick. You’d get a flash of a white blank page between navigations. But that’s no longer the case. Browsers now perform something called “paint holding” which eliminates the herky-jerkiness.
So now if your pages are a reasonable size, there’s no practical difference in user experience between full page refreshes and single page app updates. Navigate around The Session if you want to see paint holding in action. Switching to a single page app architecture wouldn’t improve the user experience one jot.
Except…
If I were controlling everything with JavaScript, then I’d also have control over how to transition between the “pages” (or carousel items, if you prefer). There’s currently no way to do that with full page changes.
Having to reimplement navigation for a simple transition is a bit much, often leading developers to use large frameworks where they could otherwise be avoided. This proposal provides a low-level way to create transitions while maintaining regular browser navigation.
I love this proposal. It focuses on user needs. It also asks why people reach for JavaScript frameworks instead of using what browsers provide. People reach for JavaScript frameworks because browsers don’t yet provide some functionality: components like tabs or accordions; DOM diffing; control over styling complex form elements; navigation transitions. The problems that JavaScript frameworks are solving today should be seen as the R&D departments for web standards of tomorrow. (And conversely, I strongly believe that the aim of any good JavaScript framework should be to make itself redundant.)
I linked to Jake’s excellent proposal in my shitpost saying:
bucketloads of JavaScript wouldn’t be needed if navigation transitions were available in browsers
Portals are a proposal from Google that would help their AMP use case (it would allow a web page to be pre-rendered, kind of like an iframe).
That was based on my reading of the proposal:
…show another page as an inset, and then activate it to perform a seamless transition to a new state, where the formerly-inset page becomes the top-level document.
It sounded like Google’s top stories carousel. And the proposal goes into a lot of detail around managing cross-origin requests. Again, that strikes me as something that would be more useful for a search engine than a single page app.
But Jake was not happy with my description. I didn’t intend to besmirch portals by mentioning Google AMP in the same sentence, but I can see how the transitive property of ickiness would apply. Because Google AMP is a nasty monopolistic project that harms the web and is an embarrassment to many open web advocates within Google, drawing any kind of comparison to AMP is kind of like Godwin’s Law for web stuff. I know that makes it sounds like I’m comparing Google AMP to Hitler, and just to be clear, I’m not (though I have myself been called a fascist by one of the lead engineers on AMP).
Clearly, emotions run high when Google AMP is involved. I regret summoning its demonic presence.
After chatting with Jake some more, I tried to find a better use case to describe portals. Reading the proposal, portals sound a lot like “spicy iframes”. So here’s a different use case that I ran past Jake: say you’re on a website that has an iframe embedded in it—like a YouTube video, for example. With portals, you’d have the ability to transition the iframe to a fully-fledged page smoothly.
But Jake told me that even though the proposal talks a lot about iframes and cross-origin security, portals are conceptually more like using rel="prerender" …but then having scripting control over how the pre-rendered page becomes the current page.
Put like that, portals sound more like Jake’s original navigation transitions proposal. But I have to say, I never would’ve understood that use case just from reading the portals proposal. I get that the proposal is aimed more at implementers than authors, but in its current form, it doesn’t seem to address the use case of single page apps.
we haven’t seen interest from SPA folks in portals so far.
I’m not surprised! He goes on:
Maybe, they are happy / benefits aren’t clear yet.
From my own reading of the portals proposal, I think the benefits are definitely not clear. It’s almost like the opposite of Jake’s original proposal for navigation transitions. Whereas that was grounded in user needs and real-world examples, the portals proposal seems to have jumped to the intricacies of implementation without covering the user needs.
Don’t get me wrong: if portals somehow end up leading to a solution more like Jake’s navigation transitions proposals, then I’m all for that. That’s the end result I care about. I’d love it if people had a lightweight option for getting the perceived benefits of single page apps without the costly overhead in performance that comes with JavaScripting all the things.
The latest episode of the Clearleft podcast is zipping through the RSS tubes towards your podcast-playing software of choice. This is episode five, the penultimate episode of this first season.
This time the topic is design maturity. Like the episode on design ops, this feels like a hefty topic where the word “scale” will inevitably come up.
I talked to my fellow Clearlefties Maite and Andy about their work on last year’s design effectiveness report. But to get the big-scale picture, I called up Aarron over at Invision.
What a great guest! I already had plans to get Aarron on the podcast to talk about his book, Designing For Emotion—possibly a topic for next season. But for the current episode, we didn’t even mention it. It was design maturity all the way.
I had a lot of fun editing the episode together. I decided to intersperse some samples. If you’re familiar with Bladerunner and Thunderbirds, you’ll recognise the audio.
The whole thing comes out at a nice 24 minutes in length.
I tied the colour scheme to the operating system level. If you choose a dark mode in your OS, my website will adjust automatically thanks to the prefers-color-scheme: dark media query.
But I’ve seen notes from a few friends, not about my site specifically, but about how they like having an explicit toggle for dark mode (as well as the media query). Whenever I read those remarks, I’d think “I’m really not sure I’ve got time to deal with adding that kind of toggle to my site.”
But then I realised, “Jeremy, you absolute muffin! You’ve had a theme switcher on your website for almost two decades now!”
Doh! I had forgotten about that theme switcher. It dates back to the early days of CSS. I wanted my site to be a demonstration of how you could apply different styles to the same underlying markup (this was before the CSS Zen Garden came along). Those themes are very dated now, but if you like you can view my site with a Zeldman theme or a sci-fi theme.
To offer a dark-mode theme for my site, all I had to do was take the default stylesheet, pull out the custom properties from the prefers-color-scheme: dark media query, and done. It took less than five minutes.
So if you want to view my site in dark mode, it’s one of the options in the “Customise” dropdown on every page of the website.
CSS got some pretty nifty features recently. There’s the min() and max() functions. If you use them for, say, width you can use one rule where previously you would’ve needed to use two (a width declaration followed by either min-width or max-width). But they can also be applied to font-size! That’s very nifty—we’ve never had min-font-size or max-font-size properties.
There’s also the clamp() function. That allows you to set a minimum size, a default size, and a maximum size. Again, it can be used for lengths, like width, or for font-size.
Over on thesession.org, I’ve had some media queries in place for a while now that would increase the font-size for larger screens. It’s nothing crucial, just a nice-to-have so that on wide screens, the font is bumped up accordingly. I realised I could replace all those media queries with one clamp() statement, thanks to the vw (viewport width) unit:
font-size: clamp(1rem, 1.333vw, 1.5rem);
By default, the font-size is 1.333vw (1.333% of the viewport width), but it will never get smaller than 1rem and it will never get larger than 1.5rem.
That works, but there’s a bit of an issue with using raw vw units like that. If someone is on a wide screen and they try to adjust the font size, nothing will happen. The viewport width doesn’t change when you bump the font size up or down.
The solution is to mix in some kind of unit that does respond to the font size being bumped up or down (like, say, the rem unit). Handily, clamp() allows you to combine units, just like calc(). So I can do this:
font-size: clamp(1rem, 0.5rem + 0.666vw, 1.5rem);
The result is much the same as my previous rule, but now—thanks to the presence of that 0.5rem value—the font size responds to being adjusted by the user.
You could use a full 1rem in that default value:
font-size: clamp(1rem, 1rem + 0.333vw, 1.5rem);
…but if you do that, the minimum size (1rem) will never be reached—the default value will always be larger. So in effect it’s no different than saying:
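font-size: min(1rem + 0.333vw, 1.5rem);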
Anyway, I got the result I wanted. I wanted the font size to stay at the browser default size (usually 16 pixels) until the screen was larger than around 1200 pixels. From there, the font size gets gradually bigger, until it hits one and a half times the browser default (which would be 24 pixels if the default size started at 16). I decided to apply it to the :root element (which is html) using percentages:
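:root {
  font-size: clamp(100%, 50% + 0.666vw, 150%);
}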
(My thinking goes like this: if we take a screen width of 1200 pixels, then 1vw would be 12 pixels: 1200 divided by 100. So for a font size of 16 pixels, that would be 1.333vw. But because I’m combining it with half of the default font size—50% of 16 pixels = 8 pixels—I need to cut the vw value in half as well: 50% of 1.333vw = 0.666vw.)
So I’ve got the CSS rule I want. I dropped it in to the top of my file and…
I got an error.
There was nothing wrong with my CSS. The problem was that I was dropping it into a Sass file (.scss).
Perhaps I am showing my age. Do people even use Sass any more? I hear that post-processors usurped Sass’s dominance (although no-one’s ever been able to explain to me why they’re different to pre-processors like Sass; they both process something you’ve written into something else). Or maybe everyone’s just writing their CSS in JS now. I hear that’s a thing.
The Session is a looooong-term project so I’m very hesitant to use any technology that won’t stand the test of time. When I added Sass into the mix, back in—I think—2012 or so, I wasn’t sure whether it was the right thing to do, from a long-term perspective. But it did offer some useful functionality so I went ahead and used it.
Now, eight years later, it was having a hard time dealing with the new clamp() function. Specifically, it didn’t like the values being calculated through the addition of multiple units. I think it was clashing with Sass’s in-built ability to add units together.
I started to ask myself whether I should still be using Sass. I looked at which features I was using…
Variables. Well, now we’ve got CSS custom properties, which are even more powerful than Sass variables because they can be updated in real time. Sass variables are like const. CSS custom properties are like let.
Mixins. These can be very useful, but now there’s a lot that you can do just in CSS with calc(). The built-in darken() and lighten() functions are handy though when it comes to colours.
Nesting. I’ve never been a fan. I know it can make the source files look tidier but I find it can sometimes obfuscate what your final selectors are going to look like. So this wasn’t something I was using much anyway.
Multiple files. Ah! This is the thing I would miss most. Having separate .scss files for separate interface elements is very handy!
But globbing a bunch of separate .scss files into one .css file isn’t really a Sass task. That’s what build tools are for. In fact, that’s what I was already doing with my JavaScript files; I write them as individual .js files that then get concatenated into one .js file using Grunt.
(Yes, this project uses Grunt. I told you I was showing my age. But, you know what? It works. Though seeing as I’m mostly using it for concatenation, I could probably replace it with a makefile. If I’m going to use old technology, I might as well go all the way.)
I swapped out Sass variables for CSS custom properties, mixins for calc(), and removed what little nesting I was doing. Then I stripped the Sass parts out of my Grunt file and replaced them with some concatenation and minification tasks. All of this makes no difference to the actual website, but it means I’ve got one less dependency …and I can use clamp()!
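In case you’re wondering what those tasks look like, here’s a rough sketch of the kind of Grunt configuration I mean, using grunt-contrib-concat and grunt-contrib-cssmin. The file paths are illustrative rather than my actual setup:

module.exports = function (grunt) {
  grunt.initConfig({
    // Glob the individual CSS files into one file
    concat: {
      css: {
        src: ['css/parts/*.css'],
        dest: 'css/global.css'
      }
    },
    // Then minify the result
    cssmin: {
      css: {
        src: 'css/global.css',
        dest: 'css/global.min.css'
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-cssmin');
  grunt.registerTask('default', ['concat', 'cssmin']);
};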
Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop.
It feels like something similar has happened with tools like Sass. Sass was the hare. CSS is the tortoise. Sass blazed the trail, but now native CSS can achieve much the same result.
It’s like when we used to need something like jQuery to do DOM Scripting succinctly using CSS selectors. Then we got things like querySelector() in JavaScript so we no longer needed the trailblazer.
I’ve said it before and I’ll say it again, the goal of any good library should be to get so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.
You could argue that this is what happened with Flash. It certainly happened with jQuery and Sass. I’m pretty sure we’ll see the same cycle play out with frameworks like React.
Do you have plans for the weekend of March 14th and 15th?
If you live anywhere near London, might I suggest that you sign up for Indie Web Camp.
Cheuk and Ana are putting it together with assistance from Calum. As always, there will be one day of Barcamp-style discussions, followed by a fun hands-on day of making.
If you’re wondering whether this is for you, ask yourself if any of these situations apply:
You don’t have your own website yet, but you want one.
You have your own website, but you need some help with it.
You have some ideas about the independent web.
You have your own website but you never seem to find the time to update it.
You’d like to help other people with their websites.
If you recognise yourself in any one of those scenarios, then you should definitely come along to Indie Web Camp London 2020!
I’m in San Francisco to speak at An Event Apart, which kicks off tomorrow. But I arrived a few days early so that I could attend Indie Web Camp SF.
Yesterday was the discussion day. Most of the attendees were seasoned indie web campers, so quite a few of the discussions went deep on some of the building blocks. It was a good opportunity to step back and reappraise technology decisions.
Today is the day for making, tinkering, fiddling, and hacking. I had a few different ideas of what to do, mostly around showing additional context on my blog posts. I could, for instance, show related posts—other blog posts (or links) that have similar tags attached to them.
But I decided that a nice straightforward addition would be to show a kind of “on this day” context. After all, I’ve been writing blog posts here for eighteen years now; chances are that if I write a blog post on any given day, there will be something in the archives from that same day in previous years.
So that’s what I’ve done. I’ll be demoing it shortly here at Indie Web Camp, but you can see it in action now. If you look at the page for this blog post, you should see a section at the end with the heading “Previously on this day”. There you’ll see links to other posts I’ve written on December 8th in years gone by.
I don’t know if anyone other than me will find this feature interesting (but as it’s my website, I don’t really care). Personally, I find it fascinating to see how my writing has changed, both in terms of subject matter and tone.
Needless to say, the further back in time you go, the more chance there is that the links in my blog posts will no longer work. That’s a real shame. But then it’s a pleasant surprise when I find something that I linked to that is still online after all this time. And I can take comfort from the fact that if anyone has ever linked to anything I’ve written on my website, then those links still work.
Remember when I wrote about adding travel maps to my site at the recent Indie Web Camp Brighton? I must confess that the last line I wrote was an attempt to catch a fish from the river of the lazy web:
It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.
In the spirit of Cunningham’s Law, I was hoping that somebody was going to respond with “It’s totally possible to use Stamen’s watercolour tiles for static maps, dumbass—look!” (to which my response would have been “thank you very much!”).
Alas, no such response was forthcoming. The hoped-for schooling never forthcame.
Still, I couldn’t quite let go of the idea of using those lovely watercolour maps somewhere on my site. But I had decided that dynamic maps would have been overkill for my archive pages:
Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles.
Then I had a thought. What if I keep the static maps on my archive pages, but make them clickable? Then, on the other end of that link, I can have the dynamic version. In other words, what if I had a separate URL just for the dynamic maps?
This seemed like a good plan to me, so while I was travelling by Eurostar—the only way to travel—back from the lovely city of Antwerp where I had been speaking at Full Stack Europe, I started hacking away on making the dynamic maps even more dynamic. After all, now that they were going to have their own pages, I could go all out with any fancy features I wanted.
I kept coming back to my original goal:
I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.
I found a plug-in for Leaflet.js that animates polylines—thanks, Iván! With a bit of wrangling, I was able to get it to animate between the lat/lon points of whichever archive section the map was in. Rather than have it play out automatically, I also added a control so that you can start and stop the animation. While I was at it, I decided to make that “play/pause” button do something else too. Ahem.
If you’d like to see the maps in action, click the “play” button on any of these maps:
You get the idea. It’s all very silly really. It’s right up there with the time I made my sparklines playable. But that’s kind of the point. It’s my website so I can do whatever I want with it, no matter how silly.
First of all, the research department for adactio.com (that’s me) came up with the idea. Then that had to be sold in to upper management (that’s me too). A team was spun up to handle design and development (consisting of me and me). Finally, the finished result went live thanks to the tireless efforts of the adactio.com ops group (that would be me). Any feedback should be directed at the marketing department (no idea who that is).
It was Indie Web Camp Brighton on the weekend. After a day of thought-provoking discussions, I thoroughly enjoyed spending the second day tinkering on my website.
For a while now, I’ve wanted to add maps to my monthly archive pages (to accompany the calendar heatmaps I added at a previous Indie Web Camp). Whenever I post anything to my site—a blog post, a note, a link—it’s timestamped and geotagged. I thought it would be fun to expose that in a glanceable way. A map seems like the right medium for that, but I wanted to avoid the obvious route of dropping a load of pins on a map. Instead I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.
I talked to Aaron about this and his advice was that a client-side JavaScript embedded map would be the easiest option. But that seemed like overkill to me. This map didn’t need to be pannable or zoomable; just glanceable. So I decided to see how far I could get with a static map. I timeboxed two hours for it.
After two hours, I admitted defeat.
I was able to find the kind of static maps I wanted from Mapbox—I’m already using them for my check-ins. I could even add a polyline, which is exactly what I wanted. But instead of passing latitude and longitude co-ordinates for the points on the polyline, the docs explain that I needed to provide …cue ominous thunder and lightning… The Encoded Polyline Algorithm Format.
Did you read through the eleven steps of instructions? Did you also think it was a piss take?
Take the initial signed value.
Multiply it by 1e5.
Convert that decimal value to binary.
Left-shift the binary value one bit.
If the original decimal value is negative, invert this encoding.
Break the binary value out into 5-bit chunks.
Place the 5-bit chunks into reverse order.
OR each value with 0x20 if another bit chunk follows.
Convert each value to decimal.
Add 63 to each value.
Convert each value to its ASCII equivalent.
This was way beyond my brain’s pay grade. But surely someone else had written the code I needed? I did some Duck Duck Going and found a piece of PHP code to do the encoding. It didn’t work. I Ducked Ducked and Went some more. I found a different piece of PHP code. That didn’t work either.
At this point, my allotted time was up. If I wanted to have something to demo by the end of the day, I needed to switch gears. So I did.
It waits until the page has finished loading, then it searches for any instances of the h-geo microformat (a way of encoding latitude and longitude coordinates in HTML). If there are three or more, it generates a script element to pull in the Leaflet library, and a corresponding style element. Then it draws the map with the polyline on it. I ended up using Stamen’s beautiful watercolour map tiles.
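The gist of that JavaScript looks something like this. It’s a sketch rather than my exact code: the class names come from the h-geo microformat, but the tile URL and the way the map container gets added to the page are illustrative:

window.addEventListener('load', () => {
  // Gather up the latitude and longitude values from any h-geo microformats on the page
  const points = Array.from(document.querySelectorAll('.h-geo')).map(geo => [
    parseFloat(geo.querySelector('.p-latitude').textContent),
    parseFloat(geo.querySelector('.p-longitude').textContent)
  ]);
  if (points.length < 3) return;
  // Generate a style element and a script element to pull in Leaflet
  const stylesheet = document.createElement('link');
  stylesheet.rel = 'stylesheet';
  stylesheet.href = 'https://unpkg.com/leaflet/dist/leaflet.css';
  document.head.appendChild(stylesheet);
  const script = document.createElement('script');
  script.src = 'https://unpkg.com/leaflet/dist/leaflet.js';
  script.onload = () => {
    // Make somewhere for the map to live
    const container = document.createElement('div');
    container.id = 'map';
    container.style.height = '20em';
    document.body.appendChild(container);
    // Draw the map with the watercolour tiles and the polyline
    const map = L.map('map').fitBounds(points);
    L.tileLayer('https://stamen-tiles.a.ssl.fastly.net/watercolor/{z}/{x}/{y}.jpg', {
      attribution: 'Map tiles by Stamen Design'
    }).addTo(map);
    L.polyline(points, {color: 'red'}).addTo(map);
  };
  document.head.appendChild(script);
});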
That’s what I demoed at the end of the day.
But I wasn’t happy with it.
Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles. I made sure that it didn’t hold up the loading of the rest of the page, but it still felt wasteful.
So after Indie Web Camp, I went back to investigate static maps again. This time I did finally manage to find some PHP code for encoding lat/lon coordinates into a polyline that worked. Finally I was able to construct URLs for a static map image that displays a line connecting multiple points.
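For the record, those eleven steps boil down to something like this. Here it is as a JavaScript sketch for illustration; what I actually ended up using on the server was PHP:

function encodeSignedNumber(num) {
  // Left-shift the value one bit, inverting it if the original value was negative
  let shifted = num < 0 ? ~(num << 1) : (num << 1);
  let output = '';
  // Work through the 5-bit chunks, lowest bits first,
  // OR-ing each chunk with 0x20 while more chunks follow, then adding 63
  while (shifted >= 0x20) {
    output += String.fromCharCode((0x20 | (shifted & 0x1f)) + 63);
    shifted >>= 5;
  }
  return output + String.fromCharCode(shifted + 63);
}

function encodePolyline(points) {
  let previousLat = 0;
  let previousLon = 0;
  let result = '';
  for (const [lat, lon] of points) {
    // Multiply by 1e5 and round; each point is stored as the difference from the previous point
    const latE5 = Math.round(lat * 1e5);
    const lonE5 = Math.round(lon * 1e5);
    result += encodeSignedNumber(latE5 - previousLat) + encodeSignedNumber(lonE5 - previousLon);
    previousLat = latE5;
    previousLon = lonE5;
  }
  return result;
}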
I’ve put these maps on all the archive pages that also have calendar heatmaps. Some examples:
If you go back much further than that, the maps start to trail off. That’s because I wasn’t geotagging everything from the start.
I’m pretty happy with the final results. It’s certainly far more responsible from a performance point of view. Oh, and I’ve also got the maps inside a picture element so that I can swap out the tiles if you switch to dark mode.
It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.
Your weekends are valuable. Spend them wisely. I have some suggestions on how you might spend next weekend, October 19th and 20th, depending on where you are in the world.
If you’re in the bay area, or anywhere near San Francisco, I highly recommend that you go to Science Hack Day—two days of science, hacking, and fun. This will be the last one in San Francisco so don’t miss your chance.
If you’re in the south of England, or anywhere near Brighton, come along to Indie Web Camp. Saturday will feature discussions on owning your data. Sunday will be a day of doing. I’ve written about previous Indie Web Camps before, and I really can’t recommend it highly enough!
Do me a favour and register for a spot—it’s free—so I’ve got some idea of numbers. Looking forward to seeing you there!
I had a very productive time at Indie Web Camp Amsterdam. The format really lends itself to getting the most out of a weekend—one day of discussions followed by one day of hands-on making and doing. You should definitely come along to Indie Web Camp Brighton on October 19th and 20th to experience it for yourself.
By the end of the “doing” day, I had something fun to demo—a dark mode for my website.
Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.
Applying the dark mode styles is pretty straightforward in theory. You put the styles inside this media query:
@media (prefers-color-scheme: dark) {
...
}
Rather than over-riding every instance of a colour in my style sheet, I decided I’d do a little bit of refactoring first and switch to using CSS custom properties (or variables, if you will).
All in all, I have about a dozen custom properties for colours—variations for text, backgrounds, and interface elements like links and buttons.
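The pattern looks something like this (the property names other than --background-color, and all the colour values, are just for illustration):

:root {
  --background-color: #ffffff;
  --text-color: #333333;
  --link-color: #b52c2c;
}
@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #111111;
    --text-color: #eeeeee;
    --link-color: #f08c5a;
  }
}
body {
  background-color: var(--background-color);
  color: var(--text-color);
}
a {
  color: var(--link-color);
}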
By using custom properties and the prefers-color-scheme media query, I was 90% of the way there. But the devil is in the details.
I have SVGs of sparklines on my homepage. The SVG has a hard-coded colour value in the stroke attribute of the path element that draws the sparkline. Fortunately, this can be over-ridden in the style sheet:
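/* something along these lines; the exact selector and custom property are a detail of my style sheet */
svg path {
  stroke: var(--text-color);
}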
The real challenge came with the images I use in the headers of my pages. They’re JPEGs with white corners on one side and white gradients on the other.
I could make them PNGs to get transparency, but the file size would shoot up—they’re photographic images (with a little bit of scan-line treatment) so JPEGs (or WEBPs) are the better format. Then I realised I could use CSS to recreate the two effects:
For the cut-out triangle in the top corner, there’s clip-path.
For the gradient, there’s …gradients!
background-image: linear-gradient(
to right,
transparent 50%,
var(--background-color) 100%
);
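And the clip-path for the corner is something along these lines (the exact co-ordinates are specific to my images, so treat this as a sketch):

clip-path: polygon(0 0, calc(100% - 2em) 0, 100% 2em, 100% 100%, 0 100%);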
Oh, and I noticed that when I applied the clip-path for the corners, it had no effect in Safari. It turns out that after half a decade of support, it still only exists with a -webkit- prefix. That’s just ridiculous. At this point we should be burning vendor prefixes with fire. I can’t believe that Apple still ships standardised CSS properties that only work with a prefix.
In order to apply the CSS clip-path and gradient, I needed to save out the images again, this time without the effects baked in. I found the original Photoshop file I used to export the images. But I don’t have a copy of Photoshop any more. I haven’t had a copy of Photoshop since Adobe switched to their Mafia model of pricing. A quick bit of searching turned up Photopea, which is pretty much an entire recreation of Photoshop in the browser. I was able to open my old PSD file and re-export my images.
Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop. I could probably do those raster scan lines with CSS if I were smart enough.
This is what I demo’d at the end of Indie Web Camp Amsterdam, and I was pleased with the results. But fate had an extra bit of good timing in store for me.
The very next day at the View Source conference, Melanie Richards gave a fantastic talk called The Tailored Web: Effectively Honoring Visual Preferences (seriously, conference organisers, you want this talk on your line-up). It was packed with great insights and advice on implementing dark mode, like this little gem for adjusting images:
Melanie also pointed out that you can indicate the presence of dark mode styles to browsers, although the mechanism is yet to shake out. You can do it in CSS:
:root {
color-scheme: light dark;
}
But you can also do it in HTML:
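<!-- the exact name of this meta element was one of the things still shaking out -->
<meta name="color-scheme" content="light dark">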
That allows browsers to swap out replaced content: interface elements like form fields and dropdowns.
Oh, and one other addition I added after the fact was swapping out map imagery by using the picture element to point to darker map tiles:
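<picture>
  <!-- the file names and alt text here are illustrative; the darker tiles get served when dark mode is preferred -->
  <source media="(prefers-color-scheme: dark)" srcset="/images/map-dark.png">
  <img src="/images/map-light.png" alt="A map of where this was posted.">
</picture>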
So now I’ve got a dark mode for my website. Admittedly, it’s for just one of the eight style sheets. I’ve decided that, while I’ll update my default styles at every opportunity, I’m going to preserve the other skins as the historical museum pieces they are.
If you’re on the latest version of iOS, go ahead and toggle the light and dark options in your system preferences to flip between this site’s colour schemes.
The Edinburgh-Madrid-London whirlwind wasn’t ideal. I gave the opening talk at Finch Conf, then immediately jumped in a taxi to get to the airport to fly to Madrid, so I missed all the excellent talks. I had FOMO for a conference I actually spoke at.
I did get to spend some time at Code Motion in Madrid, but that was a waste of time. It was one of those multi-track events where the trade show floor is prioritised over the talks (and the speakers don’t get paid). I gave my talk to a mostly empty room—the classic multi-track experience. On the plus side, I had a wonderful time with Jessica exploring Madrid’s many tapas delights. The food and drink made up for the sub-par conference.
I flew back from Madrid to the UK, and immediately went straight to London to deliver the closing talk of Generate CSS. So once again, I didn’t get to see any of the other talks. That’s a real shame—it sounds like they were all excellent.
The day after Generate though, I took the Eurostar to Amsterdam. That’s where I’ve been ever since. There were just as many events as in the previous week, but because they were all in Amsterdam, I could savour them properly, instead of spending half my time travelling.
Indie Web Camp Amsterdam was excellent, although I missed out on the afternoon discussions on the first day because I popped over to the Mozilla Tech Speakers event happening at the same time. I was there to offer feedback on lightning talks. I really, really enjoyed it.
I’d really like to do more of this kind of thing. There aren’t many activities I feel qualified to give advice on, but public speaking is an exception. I’ve got plenty of experience that I’m eager to share with up-and-coming speakers. Also, I got to see some really great lightning talks!
Then it was time for View Source. There was a mix of talks, panels, and breakout conversation corners. I saw some fantastic talks by people I hadn’t seen speak before: Melanie Richards, Ali Spittel, Sharell Bryant, and Tejas Kumar. I gave the closing keynote, which was warmly received—that’s always very gratifying.
Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.
I’m happy to say that it went off without a hitch. Remy definitely had the tougher task—he did a live demo. Needless to say, he did it flawlessly. It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.
I’ve got some more speaking engagements ahead of me. Most of them are in Europe so I’m going to do my utmost to travel to them by train. Flying is usually more convenient but it’s terrible for my carbon footprint. I’m feeling pretty guilty about that Madrid trip; I need to make amends.
I’ll be travelling to France next week for Paris Web. Taking the Eurostar is a no-brainer for that one. Straight after that Jessica and I will be going to Frankfurt for the book fair. Taking the train from Paris to Frankfurt will be nice and straightforward.
I’ll be back in Brighton for Indie Web Camp on the weekend of October 19th and 20th—you should come!—and then I’ll be heading off to Antwerp for Full Stack Fest. Anywhere in Belgium is easily reachable by train so that’ll be another Eurostar journey.
After that, it gets a little trickier. I’ll be going to Berlin for Beyond Tellerrand but I’m not sure I can make it work by train. Same goes for Web Clerks in Vienna. Cities that far east are tough to get to by train in a reasonable amount of time (although I realise that, compared to many others, I have the luxury of spending time travelling by train).
And then there’s Ireland. I make trips back there to see my mother, but there’s no alternative to flying or taking a ferry—neither are ideal for the environment. At least I can offset the carbon from my flights; the travel equivalent to putting coins in the swear jar.
Don’t get me wrong—I’m not moaning about the amount of travel involved in going to conferences and workshops. It’s fantastic that I get to go to new and interesting places. That’s something I hope I never take for granted. But I can’t ignore the environmental damage I’m doing. I’ll be making more of an effort to travel by train to Europe’s many excellent web events. While I’m at it, I can ask Paul for his trainspotter expertise.
I have a proposal that I think might alleviate some of the animosity around Google AMP. You can jump straight to the proposal or get some of the back story first…
The AMP format
Google AMP is exactly the kind of framework I’d like to get behind. Unlike most front-end frameworks, its components take a declarative approach—no knowledge of JavaScript required. I think Lea’s excellent Mavo is the only other major framework that takes this inclusive approach. All the configuration happens in markup, and all the styling happens in CSS. Excellent!
But I cannot get behind AMP.
Instead of competing on its own merits, AMP is unfairly propped up by the search engine of its parent company, Google. That makes it very hard to evaluate whether AMP is being used on its own merits. Instead, the evidence suggests that most publishers of AMP pages are doing so because they feel they have to, rather than because they want to. That’s a real shame, because as a library of web components, AMP seems pretty good. But there’s just no way to evaluate AMP-the-format without taking into account AMP-the-ecosystem.
The AMP ecosystem
Google AMP ostensibly exists to make the web faster. Initially the focus was specifically on mobile performance, but that distinction has since fallen by the wayside. The idea is that by using AMP’s web components, your pages will be speedy. Though, as Andy Davies points out, this isn’t always the case:
This is where I get confused… https://independent.co.uk only have an AMP site yet it’s performance is awful from a user perspective - isn’t AMP supposed to prevent this?
According to Google’s own Page Speed Insights audit (which Google recommends to check your performance), the AMP version of articles got an average performance score of 87. The non-AMP versions? 95.
We heard, several times, that publishers don’t like AMP. They feel forced to use it because otherwise they don’t get into Google’s news carousel — right at the top of the search results.
Some people felt aggrieved that all the hard work they’d done to speed up their sites was for nothing.
The Google AMP team are at pains to point out that AMP is not a ranking factor in search. That’s true. But it is unfairly privileged in other ways. Only AMP pages can appear in the Top Stories carousel …which appears above any other search results. As I’ve said before:
Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.
Content that “opts in” to AMP and the associated hosting within Google’s domain is granted preferential search promotion, including (for news articles) a position above all other results.
That’s not the only way that AMP pages get preferential treatment. It turns out that the secret to the speed of AMP pages isn’t the web components. It’s the prerendering.
The AMP cache
If you’ve ever seen an AMP page in a list of search results, you’ll have noticed the little lightning icon. If you’ve ever tapped on that search result, you’ll have noticed that the page loads blazingly fast!
That’s not down to AMP-the-format, alas. That’s down to the fact that the page has been prerendered by Google before you even went to it. If any page were prerendered that way, it would load blazingly fast. But currently, this privilege is reserved for AMP pages only.
If, after tapping through to that AMP page, you looked at the address bar of your browser, you might have noticed something odd. Even though you might have thought you were visiting The Washington Post, or The New York Times, the URL of the (blazingly fast) page you’re looking at is still under Google’s domain. That’s because Google hosts any AMP pages that it prerenders.
Google calls this “the AMP cache”, but it would be better described as “AMP hosting”. The web page sent down the wire is hosted on Google’s domain.
When a user navigates from Google to a piece of content Google has recommended, they are, unwittingly, remaining within Google’s ecosystem.
Through gritted teeth, I will refer to this as “the AMP cache”, because that’s what everyone else calls it. But make no mistake, Google is hosting—not caching—these pages.
But why host the pages on a Google domain? Why not prerender the original URLs?
The pitch I think site owners are hearing is: let us host your pages on our domain and we’ll promote them in search results AND preload them so they feel “instant.” To opt-in, build pages using this component syntax.
But perhaps we could de-couple the AMP format from the AMP cache.
Instead of granting premium placement in search results only to AMP, provide the same perks to all pages that meet an objective, neutral performance criterion such as Speed Index.
It’s been said before but it would be so good for the web if pages with a Lighthouse score over say, 90 could get into that top search result area, even if they’re not built using Google’s AMP framework. Feels wrong to have to rebuild/reproduce an already-fast site just for SEO.
This was also what I was calling for. But then Malte pointed out something that stumped me. Privacy.
Here’s the problem…
Let’s say Google do indeed prerender already-fast pages when they’re listed in search results. You, a search user, type something into Google. A list of results comes back. Google begins prerendering some of them. But you don’t end up clicking through to those pages. Nonetheless, the servers those pages are hosted on have received a GET request coming from a Google search. Those publishers now know that a particular (cookied?) user could have clicked through to their site. That’s very different from knowing when someone has actually arrived at a particular site.
And that’s why Google host all the AMP pages that they prerender. Given the privacy implications of prerendering non-Google URLs, I must admit that I see their point.
Prerendering AMP documents leads to substantial improvements in page load times. Page load time can be measured in different ways, but they consistently show that prerendering lets users see the content they want faster. For now, only AMP can provide the privacy preserving prerendering needed for this speed benefit.
A modest proposal
Why is Google’s AMP cache just for AMP pages? (Y’know, apart from the obvious answer that it’s in the name.)
What if Google were allowed to host non-AMP pages? Google search could then prerender those pages just like it currently does for AMP pages. There would be no privacy leaks; everything would happen on the same domain—google.com or ampproject.org or whatever—just as currently happens with AMP pages.
Don’t get me wrong: I’m not suggesting that Google should make a 1:1 model of the web just to prerender search results. I think that the implementation would need to have two important requirements:
Hosting needs to be opt-in.
Only fast pages should be prerendered.
Opting in
Currently, by publishing a page using the AMP format, publishers give implicit approval to Google to host that page on Google’s servers and serve up this Google-hosted version from search results. This has always struck me as being legally iffy. I’ve looked in the AMP documentation to try to find any explicit granting of hosting permission (e.g. “By linking to this JavaScript file, you hereby give Google the right to serve up our copies of your content.”), but no luck. So even with the current situation, I think a clear opt-in for hosting would be beneficial.
This could be a meta element. Maybe something like:
<meta name="caches-allowed" content="google">
This would have the nice benefit of allowing comma-separated values:
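Something along these lines, say (the additional value is purely illustrative; it could be any other service that wanted to host and prerender pages):
<meta name="caches-allowed" content="google, cloudflare">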
(The name is just a strawman, by the way—I’m not suggesting that this is what the final implementation would actually look like.)
If not a meta element, then perhaps this could be part of robots.txt? Although my feeling is that this needs to happen on a document-by-document basis rather than site-wide.
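For comparison, a robots.txt version might look something like this. To be clear, the Cache-allowed directive is entirely invented—it isn’t part of the real robots.txt vocabulary—and it’s only here to illustrate the granularity problem: anything declared in robots.txt applies to the whole site rather than to individual documents.
User-agent: *
Cache-allowed: google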
Many people will, quite rightly, never want Google—or anyone else—to host and serve up their content. That’s why it’s so important that this behaviour is opt-in. It’s kind of appalling that the current hosting of AMP pages is opt-in-by-proxy-sort-of.
Criteria for prerendering
Which pages should be blessed with hosting and prerendering? The fast ones. That’s sorta the whole point of AMP. But right now, there’s a lot of resentment from people with already-fast websites who quite rightly feel they shouldn’t have to use the AMP format to benefit from the AMP ecosystem.
Page speed is already a ranking factor. It doesn’t seem like too much of a stretch to extend its benefits to hosting and prerendering. As mentioned above, there are already a few possible metrics to use:
Page Speed Index
Lighthouse
Web Page Test
Ah, but what if a page has a good score when it’s indexed, but then gets worse afterwards? Not a problem! The version of the page that’s measured is the same version of the page that gets hosted and prerendered. Google can confidently say “This page is fast!” After all, they’re the ones serving up the page.
Each time a user accesses AMP content from the cache, the content is automatically updated, and the updated version is served to the next user once the content has been cached.
Issues
This proposal does not solve the problem with the address bar. You’d still find yourself looking at a page from The Washington Post or The New York Times (or adactio.com) but seeing a completely different URL in your browser. That’s not good, for all the reasons outlined in the AMP letter.
In fact, this proposal could potentially make the situation worse. It would allow even more sites to be impersonated by Google’s URLs. Where currently only AMP pages are bad actors in terms of URL confusion, opening up the AMP cache would allow equal opportunity URL confusion.
What I’m suggesting is definitely not a long-term solution. The long-term solutions currently being investigated are technically tricky and will take quite a while to come to fruition—web packages and signed exchanges. In the meantime, what I’m proposing is a stopgap solution that’s technically a lot simpler. But it won’t solve all the problems with AMP.
This proposal solves one problem—AMP pages being unfairly privileged in search results—but does nothing to solve the other, perhaps more serious problem: the erosion of site identity.
Measuring
Currently, Google can assess whether a page should be hosted and prerendered by checking to see if it’s a valid AMP page. That test would need to be widened to include a different measurement of performance, but those measurements already exist.
I can see how this assessment might not be as quick as checking for AMP validity. That might affect whether non-AMP pages could be measured quickly enough to end up in the Top Stories carousel, which is, by its nature, time-sensitive. But search results are not necessarily as time-sensitive. Let’s start there.
Assets
Currently, AMP pages can be prerendered without fetching anything other than the markup of the AMP page itself. All the CSS is inline. There are no initial requests for other kinds of content like images. That’s because there are no img elements on the page: authors must use amp-img instead. The image itself isn’t loaded until the user is on the page.
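To give a rough sense of what that looks like in markup, here’s an ordinary image alongside its AMP equivalent (the file name and dimensions are arbitrary placeholders):
<img src="/images/photo.jpg" width="400" height="300" alt="A photo">
<amp-img src="/images/photo.jpg" width="400" height="300" layout="responsive" alt="A photo"></amp-img>
Because amp-img is a custom element, the AMP runtime decides when the actual image file gets requested, which is why prerendering the document doesn’t trigger any image downloads.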
If the AMP cache were to be opened up to non-AMP pages, then any content required for prerendering would also need to be hosted on that same domain. Otherwise, there’s privacy leakage.
This definitely introduces an extra level of complexity. Paths to assets within the markup might need to be re-written to point to the Google-hosted equivalents. There would almost certainly need to be a limit on the number of assets allowed. Though, for performance, that’s no bad thing.
Make no mistake, figuring out what to do about assets—style sheets, scripts, and images—is very challenging indeed. Luckily, there are very smart people on the Google AMP team. If that brainpower were to focus on this problem, I am confident they could solve it.
Summary
Prerendering of non-Google URLs is problematic for privacy reasons, so Google needs to be able to host pages in order to prerender them.
Currently, that’s only done for pages using the AMP format.
The AMP cache—and with it, prerendering—should be decoupled from the AMP format, and opened up to other fast web pages.
There will be technical challenges, but hopefully nothing insurmountable.
I honestly can’t see what Google have to lose here. If their goal is genuinely to reward fast pages, then opening up their AMP cache to fast non-AMP pages will actively encourage people to make fast web pages (without having to switch over to the AMP format).
I’ve deliberately kept the details vague—what the opt-in should look like; what the speed measurement should be; how to handle assets—I’m sure smarter folks than me can figure that stuff out.
I would really like to know what other people think about this proposal. Obviously, I’d love to hear from members of the Google AMP team. But I’d also love to hear from publishers. And I’d very much like to know what people in the web performance community think about this. (Write a blog post and send me a webmention.)
What am I missing here? What haven’t I thought of? What are the potential pitfalls (and are they any worse than the current acrimonious situation with Google AMP)?
I would really love it if someone with a fast website were in a position to say, “Hey Google, I’m giving you permission to host this page so that it can be prerendered.”
I would really love it if someone with a slow website could say, “Oh, shit! We’d better make our existing website faster or Google won’t host our pages for prerendering.”
And I would dearly love to finally be able to embrace AMP-the-format with a clear conscience. But as long as prerendering is joined at the hip to the AMP format, the injustice of the situation only harms the AMP project.
We took the surprisingly busy train from Brighton to Southampton, with our plentiful luggage in tow. As well as the clothes we’d need for three weeks of hot summer locations in the United States, Jessica and I were also carrying our glad rags for the shipboard frou-frou evenings.
Once the train arrived in Southampton, we transferred our many bags into the back of a taxi and made our way to the terminal. It looked like all the docks were occupied with cargo ships, cruise ships, or—in the case of the Queen Mary 2—the world’s last ocean liner to be built.
Check in. Security. Then it was time to bid farewell to dry land as we boarded the ship. We settled into our room—excuse me, stateroom—on the eighth deck. That’s the deck that also has the lifeboats, but our balcony is handily positioned between two boats, giving us a nice clear view.
We’d be sailing in a few hours, so that gave us plenty of time to explore the ship. We grabbed a surprisingly tasty bite to eat in the buffet restaurant, and then went out on deck (the promenade deck is deck seven, just one deck below our room).
It was a blustery day. All weekend, the UK newspaper headlines had been full of dramatic stories of high winds. Not exactly sailing weather. But the Queen Mary 2 is solid, sturdy, and just downright big, so once we were underway, the wind was hardly noticeable …indoors. Out on the deck, it could get pretty breezy.
By pure coincidence, we happened to be sailing on a fortuitous day: the meeting of the queens. The Queen Elizabeth, the Queen Victoria, and the Queen Mary 2 were all departing Southampton at the same time. It was a veritable Cunard convoy. With the yacht race on as well, it was a very busy afternoon in the Solent.
We stayed out on the deck as our ship powered out of Southampton, and around the Isle of Wight, passing a refurbished Palmerston sea fort on the way.
Alas, Jessica had a migraine brewing all day, so we weren’t in the mood to dive into any social activities. We had a low-key dinner from the buffet—again, surprisingly tasty—and retired for the evening.
Passenger’s log, day two: Monday, August 12, 2019
Jessica’s migraine passed like a fog bank in the night, and we woke to a bright, blustery day. The Queen Mary 2 was just passing the Scilly Isles, marking the traditional start of an Atlantic crossing.
Breakfast was blissfully quiet and chilled out—we elected to try the somewhat less-trafficked Carinthia lounge; the location of a decent espresso-based coffee (for a price). Then it was time to feed our minds.
We watched a talk on the Bolshoi Ballet, filled with shocking tales of scandal. Here I am on holiday, and I’m sitting watching a presentation as though I were at a conference. The presenter in me approved of some of the stylistic choices: tasteful transitions in Keynote, and suitably legible typography for on-screen quotes.
Soon after that, there was a question-and-answer session with a dance teacher from the English National Ballet. We balanced out the arts with some science by taking a trip to the planetarium, where the dulcet voice of Neil deGrasse Tyson told the tale of dark matter. A malfunctioning projector somewhat tainted the experience, leaving a segment of the dome unilluminated.
It was a full morning of activities, but after lunch, there was just one time and place that mattered: sign-ups for the week’s ballet workshops would take place at 3pm on deck two. We wandered by at 2pm, and there was already a line! Jessica quickly took her place in the queue, hoping that she’d make it into the workshops, which have a capacity of just 30 people. The line continued to grow. The Cunard staff were clearly not prepared for the level of interest in these ballet workshops. They quickly introduced some emergency measures: this line would only be for the next two days’ workshops, rather than the whole week. So there’d be more queueing later in the week for anyone looking to take more than one workshop.
Anyway, the most important outcome was that Jessica did manage to sign up for a workshop. After all that standing in line, Jessica was ready for a nice sit down so we headed to the area designated for crafters and knitters. As Jessica worked on the knitting project she had brought along, we had our first proper social interactions of the voyage, getting to know the other makers. There was much bonding over the shared love of the excellent Ravelry website.
Next up: a pub quiz at sea in a pub at sea. I ordered the flight of craft beers and we put our heads together for twenty quickfire trivia questions. We came third.
After that, we rested up for a while in our room, before donning our glad rags for the evening’s gala dinner. I bought a tuxedo just for this trip, and now it was time to put it into action. Jessica donned a ballgown. We both looked the part for the black-and-white themed evening.
We headed out for pre-dinner drinks in the ballroom, complete with big band. At one entrance, there was a receiving line to meet the captain. Having had enough of queueing for one day, we went in the other entrance. With glasses of sparkling wine in hand, we surveyed our fellow dressed-up guests who were looking in equal measure dashingly cool and slightly uncomfortable.
After some amusing words from the captain, it was time for dinner. Having missed the proper sit-down dinner the evening before, we were only now finding out which table we’d been assigned. We were bracing ourselves for an evening of being sociable, chit-chatting with whoever we’d been seated with. Your table assignment was the same for the whole week, so you’d better get on well with your tablemates. If you’re stuck with a bunch of obnoxious Brexiteers, tough luck; you just have to suck it up. Much like Brexit.
We were shown to our table, which was …a table for two! Oh, the relief! Even better, we were sitting quite close to the table of ballet dancers. From our table, Jessica could creepily stalk them, and observe them behaving just like mere mortals.
We settled in for a thoroughly enjoyable meal. I opted for an array of pale-coloured foods: Cullen skink, followed by seared scallops, accompanied by a Chablis Premier Cru. All this while wearing a bow tie, to the sounds of a string quartet. It felt like peak Titanic.
After dinner, we had a nightcap in the elegant Chart Room bar before calling it a night.
Passenger’s log, day three: Tuesday, August 13, 2019
We were woken early by the ship’s horn. This wasn’t the seven-short-and-one-long blast that would signal an emergency. This was more like the sustained booming of a foghorn. In fact, it effectively was a foghorn, because we were in fog.
Below us was the undersea mountain range of the Maxwell Fracture Zone. Outside was a thick Atlantic fog. And inside, we were nursing some slightly sore heads from the previous evening’s intake of wine.
But as a nice bonus, we had an extra hour of sleep. As long as the ship is sailing west, the clocks get put back by an hour every night. Slowly but surely, we’ll get on New York time. Sure beats jetlag.
After a slow start, we sauntered downstairs for some breakfast and a decent coffee. Then, to blow out the cobwebs, we walked a circuit of the promenade deck, thereby swapping out bed head for deck head.
It was then time for Jessica and me to briefly part ways. She went to watch the ballet dancers in their morning practice. I went to a lecture by Charlie Barclay from the Royal Astronomical Society, and most edifying it was too (I wonder if I can convince him to come down to give a talk at Brighton Astro sometime?).
After the lecture was done, I tracked down Jessica in the theatre, where she was enraptured by the dancers doing their company class. We stayed there as it segued into the dancers doing a dress rehearsal for their upcoming performance. It was fascinating, not least because it was clear that the dancers were having to cope with being on a slightly swaying moving vessel. That got me wondering: has ballet ever been performed on a ship before? For all I know, it might have been a common entertainment back in the golden age of ocean liners.
We slipped out of the dress rehearsal when hunger got the better of us, and we managed to grab a late lunch right before the buffet closed. After that, we decided it was time to check out the dog kennels up on the twelfth deck. There are 24 dogs travelling on the ship. They are all good dogs. We met Dillinger, a good dog on his way to a new life in Vancouver. Poor Dillinger was struggling with the circumstances of the voyage. But it’s better than being in the cargo hold of an airplane.
While we were up there on the top of the ship, we took a walk around the observation deck right above the bridge. The wind made that quite a tricky perambulation.
The rest of our day was quite relaxed. We did the pub quiz again. We got exactly the same score as we did the day before. We had a nice dinner, although this time a tuxedo was not required (but a jacket still was). Lamb for me; beef for Jessica; a bottle of Gigondas for both of us.
After dinner, we retired to our room, putting our clocks and watches back an hour before climbing into bed.
Passenger’s log, day four: Wednesday, August 14, 2019
After a good night’s sleep, we were sauntering towards breakfast when a ship’s announcement was made. This is unusual. Ship’s announcements usually happen at noon, when the captain gives us an update on the journey and our position.
This announcement was dance-related. Contradicting the listed 5pm time, sign-ups for the next ballet workshops would be happening at 9am …which was in 10 minutes’ time. Registration was on deck two. There we were, examining the breakfast options on deck seven. Cue a frantic rush down the stairwells and across the ship, not helped by me getting confused about our position relative to fore and aft. But we made it. Jessica got in line, and she was able to register for the workshop she wanted. Crisis averted.
We made our way back up to breakfast, and our daily dose of decent coffee. Then it was time for a lecture that was equally fascinating for me and Jessica. It was Physics En Pointe by Dr. Merritt Moore, ballet dancer and quantum physicist. This was a scene-setting talk, with her describing her life’s journey so far. She’ll be giving more talks throughout the voyage, so I’m hoping for some juicy tales of quantum entanglement (she works in quantum optics, generating entangled photons).
After that, it was time for Jessica’s first workshop. It was a general ballet technique workshop, and they weren’t messing around. I sat off to the side, with a view out on the middle of the Atlantic ocean, tinkering with some code for The Session, while Jessica and the other students were put through their paces.
Then it was time to briefly part ways again. While Jessica went to watch the ballet dancers doing their company class, I was once again attending a lecture by Charles Barclay of the Royal Astronomical Society. This time it was archaeoastronomy …or maybe it was astroarcheology. Either way, it was about how astronomical knowledge was passed on in pre-writing cultures, with a particular emphasis on neolithic sites like Avebury.
When the lecture was done, I rejoined Jessica and we watched the dancers finish their company class. Then it was time for lunch. We ate from the buffet, but deliberately avoided the heavier items, opting for a relatively light salad and sushi combo. This good deed would later be completely undone with a late afternoon cake snack.
We went to one more lecture. Three in one day! It really is like being at a conference. This one, by John Cooper, was on the Elizabethan settlers of Roanoke Island. So in one day, I managed to get a dose of history, science, and culture.
With the day’s workshops and lectures done, it was once again time to put on our best garb for the evening’s gala dinner. All tux’d up, I escorted Jessica downstairs. Tonight was the premiere of the ballet performance. But before that, we wandered around drinking champagne and looking fabulous. I even sat at an otherwise empty blackjack table and promptly lost some money. I was a rubbish gambler, but—and this is important—I was a rubbish gambler wearing a tuxedo.
We got good seats for the ballet and settled in for an hour’s entertainment. There were six pieces, mostly classical. Some Swan Lake, some Nutcracker, and some Le Corsaire. But there was also something more modern in there—a magnificent performance from Akram Khan’s Dust. We had been to see Dust at Sadler’s Wells, but I had forgotten quite how powerful it is.
After the performance, we had a quick cocktail, and then dinner. The sommelier is getting chattier and chattier with us each evening. I think he approves of our wine choices. This time, we left the vineyards of France, opting for a Pinot Noir from Central Otago.
After one or two nightcaps, we went back to our cabin and before crashing out, we set our clocks back an hour.
Passenger’s log, day five: Thursday, August 15, 2019
We woke to another foggy morning. The Queen Mary 2 was now sailing through the shallower waters of the Grand Banks of Newfoundland. Closer and closer to North America.
This would be my fifth day with virtually no internet access. I could buy WiFi internet access at exorbitant satellite prices, but I hadn’t felt any need to do that. I could also get a maritime mobile phone signal—very slow and very expensive.
I’ve been keeping my phone in airplane mode. Once a day, I connect to the mobile network and check just one website—thesession.org—to make sure nothing’s on fire there. Fortunately, because I made the site, I know that the data transfer will be minimal. Each page of HTML is between 30K and 90K. There are no images to speak of. And because I’ve got the site’s service worker installed on my phone, I know that the CSS and JavaScript are coming straight from a cache.
I’m not missing Twitter. I’m certainly not missing email. The only thing that took some getting used to was not being able to look things up. On the first few days of the crossing, both Jessica and I found ourselves reaching for our phones to look up something about ships or ballet or history …only to remember that we were enveloped in a fog of analogue ignorance, with no sign of terra firma digitalis.
It makes the daily quiz quite challenging. Every morning, twenty questions are listed on sheets of paper that appear at the entrance to the library. This library, by the way, is the largest at sea. As Jessica noted, you can tell a lot about the on-board priorities when the ship’s library is larger than the ship’s casino.
Answers to the quiz are to be handed in by 4pm. In the event of a tie, the team who hands in their answers earliest wins. You’re not supposed to use the internet, but you are positively encouraged to look up answers in the library. Jessica and I have been enjoying this old-fashioned investigative challenge.
With breakfast done before 9am, we had a good hour to spend in the library researching answers to the day’s quiz before Jessica needed to be at her 10am ballet workshop. Jessica got started with the research, but I quickly nipped downstairs to grab a couple of tickets for the planetarium show later that day.
Tickets for the planetarium shows are released every morning at 9am. I sauntered downstairs and arrived at the designated ticket-release location a few minutes before nine, where I waited for someone to put the tickets out. When no tickets appeared five minutes after nine, I wasn’t too worried. But when there were still no tickets at ten past nine, I grew concerned. By quarter past nine, I was getting a bit miffed. Had someone forgotten their planetarium ticket duties?
I found a crewmember at a nearby desk and asked if anyone was going to put out planetarium tickets. No, I was told. The tickets all went shortly after 9am. But I’ve been here since before 9am, I said! Then it dawned on me. The ship’s clocks didn’t go back last night after all. We just assumed they did, and dutifully changed our watches and phones accordingly.
Oh, crap—Jessica’s workshop! I raced back up five decks to the library where Jessica was perusing reference books at her leisure. I told her the bad news. We dashed down to the workshop ballroom anyway, but of course the class was now well underway. After all the frantic dashing and patient queueing that Jessica did yesterday to secure her place in the workshop! Our plans for the day were undone by our being too habitual with our timepieces. No ballet workshop. No planetarium show. I felt like such an idiot.
Well, we still had a full day of activities. There was a talk with ballet dancer James Streeter (during which we found out that the captain had deployed all the ship’s stabilisers during the previous evening’s performance). We once again watched the ballet dancers doing their company class for an hour and a half. We went for afternoon tea, complete with string quartet and beautiful view out on the ocean, now mercifully free of fog.
We attended another astronomy lecture, this time on eclipses. But right before the lecture was about to begin, there was a ship-wide announcement. It wasn’t midday, so this had to be something unusual. The captain informed us that a passenger was seriously ill, and the Canadian coastguard was going to attempt a rescue. The ship was diverting closer to Newfoundland to get in helicopter range. The helicopter wouldn’t be landing, but instead attempting a tricky airlift in about twenty minutes’ time. And so we were told to literally clear the decks. I assume the rescue was successful, and I hope the patient recovers.
After that exciting interlude, things returned to normal. The lecture on eclipses was great, focusing in particular on the magnificent 2017 solar eclipse across America.
It’s funny—Jessica and I are on this crossing because it was a fortunate convergence of ballet and being on a ship. And in 2017 we were in Sun Valley, Idaho because of a fortunate convergence of ballet and experiencing a total eclipse of the sun.
I’m starting to sense a theme here.
Anyway, after all the day’s dancing and talks were done, we sat down to dinner, where Jessica could once again surreptitiously spy on the dancers at a nearby table. We cemented our bond with the sommelier by ordering a bottle of the excellent Lebanese Château Musar.
When we got back to our room, there was a note waiting for us. It was an invitation for Jessica to take part in the next day’s ballet workshop! And, looking at the schedule for the next day, there were going to be repeats of the planetarium shows we missed today. All’s well that ends well.
Before going to bed, we did not set our clocks back.
Passenger’s log, day six: Friday, August 16, 2019
We once again watched the ballet dancers doing their company class, and sat in on a rehearsal of the ballet performance. And this was the day of Jessica’s ballet workshop, the one she’d been invited to join.
The workshop was quite something. Jennie Harrington—who retired from dancing with Dust—took the 30 or so attendees through some of the moves from Akram Khan’s masterpiece. It looked great!
While all this was happening inside the ship, the weather outside was warming up. As we travel further south, the atmosphere is getting balmier. I spent an hour out on a deckchair, dozing and reading.
At one point, a large aircraft buzzed us—the Canadian coastguard perhaps? We can’t be that far from land. I think we’re still in international waters, but these waters have a Canadian accent.
After soaking up the salty sea air out on the bright deck, I entered the darkness of the planetarium, having successfully obtained tickets that morning by not having my watch on a different time to the rest of the ship.
That evening, there was a gala dinner with a 1920s theme. Jessica really looked the part—like a real flapper. I didn’t really make an effort. I just wore my tuxedo again. It was really fun wandering the ship and seeing all the ornate outfits, especially during the big band dance after dinner. I felt like I was in a photo on the wall of the Overlook Hotel.
Passenger’s log, day seven: Saturday, August 17, 2019
Today was the last full day of the voyage. Tomorrow we disembark.
We had a relaxed day, with the usual activities: a lecture or two; sitting in on the ballet company class.
Instead of getting a buffet lunch, we decided to do a sit-down lunch in the restaurant. That meant sitting at a table with other people, which could’ve been awkward, but turned out to be fine. But now that we’ve done the small talk, that’s probably all our social capital used up.
The main event today was always going to be the reprise and final performance from the English National Ballet. It was an afternoon performance this time. It was as good, if not better, the second time around. Bravo!
Best of all, after the performance, Jessica got to meet James Streeter and Erina Takahashi. Their performance from Dust was amazing, and we gushed with praise. They were very gracious and generous with their time. Needless to say, Jessica was very, very happy.
Shortly before the ballet performance, the captain made another unscheduled announcement. This time it was about a mechanical issue. There was a potential fault that needed to be investigated, which required stopping the ship for a while. Good news for the ballet dancers!
Jessica and I spent some time out on the deck while the ship was stopped. It was a lot warmer out there compared to just a day or two before. It was quite humid too—that’ll help us start to acclimatise for New York.
We could tell that we were getting closer to land. There were more ships on the horizon. From the number of tankers we saw today, the ship must have passed close to a shipping lane.
We’re going to have a very early start tomorrow—although luckily the clocks will go back an hour again. So we did as much of our re-packing as we could this evening.
With the packing done, we still had some time to kill before dinner. We wandered over to the swanky Commodore Club cocktail bar at the fore of the ship. Our timing was perfect. There were two free seats positioned right by a window looking out onto the beautiful sunset we were sailing towards. The combination of ocean waves, gorgeous sunset, and very nice drinks ensured we were very relaxed when we made our way down to dinner.
At the entrance of the dining hall—and at the entrance of any food-bearing establishment on board—there are automatic hand sanitiser dispensers. And just in case the automated solution isn’t enough, there’s also a person standing there with a bottle of hand sanitiser, catching your eye and just daring you to refuse an anti-bacterial benediction. As the line of smartly dressed guests enters the restaurant, this dutiful dispenser of cleanliness anoints the hands of each one; a priest of hygiene delivering a slightly sticky sacrament.
The paranoia is justified. A ship is a potential petri dish at sea. In my hometown of Cobh in Ireland, the old cemetery is filled with the bodies of foreign sailors whose ships were quarantined in the harbour at the first sign of cholera or smallpox. While those diseases aren’t likely to show up on the Queen Mary 2, if norovirus were to break out on the ship, it could potentially spread quickly. Hence the war on hand-based microbes.
Maybe it’s because I’ve just finished reading Ed Yong’s excellent book I Contain Multitudes, but I can’t help but wonder about our microbiomes on board this ship. Given enough time, would the microbiomes of the passengers begin to sync up? Maybe on a longer voyage, but this crossing almost certainly doesn’t afford enough time for gut synchronisation. This crossing is almost done.
Passenger’s log, day eight: Sunday, August 18, 2019
Jessica and I got up at 4:15am. This is an extremely unusual occurrence for us. But we were about to experience something very out of the ordinary.
We dressed, looked unsuccessfully for coffee, and made our way on to the observation deck at the top of the ship. Land ho! The lights of New Jersey were shining off the port side of the ship. The lights of Long Island were shining off the starboard side. And dead ahead was the string of lights marking the Verrazano-Narrows Bridge.
The Queen Mary 2 was deliberately designed to pass under this bridge …just. The bridge has a clearance of 228 feet. The Queen Mary 2 is 236.2 feet, keel to funnel. On paper that’s 8.2 feet too tall, but the keel-to-funnel figure includes the part of the ship that sits below the waterline, so the funnel does squeeze under the bridge with precious little room to spare. Believe me, it doesn’t look like much when you’re on the top deck of the ship, standing right by the tallest mast.
The distant glow of New York was matched by the more localised glow of mobile phone screens on the deck. Passengers took photos constantly. Sometimes they took photos with flash, demonstrating a fundamental misunderstanding of how you photograph distant objects.
The distant object that everyone was taking pictures of was getting less and less distant. The Statue of Liberty was coming up on our port side.
I probably should’ve felt more of a stirring at the sight of this iconic harbour sculpture. The familiarity of its image might have dulled my appreciation. But not far from the statue was a dark area, one of the few pieces of land without lights. This was Ellis Island. If the Statue of Liberty was a symbol of welcome for your tired, your poor, your huddled masses yearning to breathe free, then Ellis Island was where the immigration rubber met the administrative road. This was where countless Irish migrants first entered the United States of America, bringing with them their songs, their stories, and their unhealthy appreciation for potatoes.
Before long, the sun was rising and the Queen Mary 2 was parallel parking at the Red Hook terminal in Brooklyn. We went back belowdecks and gathered our bags from our room. Rather than avail of baggage assistance—which would require us to wait a few hours before disembarking—we opted for “self help” disembarkation. Shortly after 7am, our time on board the Queen Mary 2 was at an end. We were in the first group of passengers off the ship, and we sailed through customs and immigration.
Within moments of being back on dry land, we were in a cab heading for our hotel in Tribeca. The cab driver took us over the Brooklyn Bridge, explaining along the way how a cash payment would really be better for everyone in this arrangement. I didn’t have many American dollars, but after a bit of currency haggling, we agreed that I could give him the last of the Canadian dollars I had in my wallet from my recent trip to Vancouver. He’s got family in Canada, so this is a win-win situation.
It being a Sunday morning, there was no traffic to speak of. We were at our hotel in no time. I assumed we wouldn’t be able to check in for hours, but at least we’d be able to leave our bags there. I was pleasantly surprised when I was told that they had a room available! We checked in, dropped our bags, and promptly went in search of coffee and breakfast. We were tired, sure, but we had no jetlag. That felt good.
I connected to the hotel’s WiFi and went online for the first time in eight days. I had a lot of spam to delete, mostly about cryptocurrencies. I was back in the 21st century.
After a week at sea, where the empty horizon was visible in all directions, I was now in a teeming mass of human habitation where distant horizons are rare indeed. After New York, I’ll be heading to Saint Augustine in Florida, then Chicago, and finally Boston. My arrival into Manhattan marks the beginning of this two week American odyssey. But this also marks the end of my voyage from Southampton to New York, and with it, this passenger’s log.