
Revisiting @wordridden’s old dorm at Mount Holyoke (where I was a stowaway for three months 23 years ago).
This is why we need an nth-letter selector in CSS.
Automatically generates icons and splash screens based on Web App Manifest specs and Apple Human Interface Guidelines. Updates manifest.json and index.html files with the generated images.
A handy command line tool. Though be aware that it will generate the shit-ton of link elements for splash screens that Apple demands you provide for a multitude of different screen sizes.
If you treat data as a constraint in your design and development process, you’ll likely be able to brainstorm a large number of different ways to keep data usage to a minimum while still providing an excellent experience. Doing less doesn’t mean it has to feel broken.
Getting ready to go on a road trip with @wordridden, @drinkerthinker and @beep.
I’ll take “Horse and music” for 200 please, Alex.
Disallowing is one option, but another could be to rewrite script elements to be inert during preload and activated on actual load.
Going to Boston. brb
A way for you to comment (anonymously, if you wish) on any post that accepts webmentions. So you can use this to respond to posts on adactio.com if you want.
Back in the Green Mill. Last time I was here, @adrianholovaty was playing.
Beach reading.
Checked in at The Art Institute of Chicago. with Jessica
Checked in at Mrs Murphy and Sons Irish Bistro. Session — with Jessica
I did some liveblogging of the second day of An Event Apart Chicago #aeachi:
It’s Patty Toland’s first time at An Event Apart! She’s from the fantabulous Filament Group. They’re dedicated to making the web work for everyone.
A few years ago, a good friend of Patty’s had a medical diagnosis that required everyone to pull together. Another friend shared an article about how not to say the wrong thing. This is ring theory. In a moment of crisis, the person involved is in the centre. You need to understand where you are in this ring structure, and only ever help and comfort inwards and dump concerns and problems outwards.
At the same time, Patty spent time with her family at the beach. Everyone reads the same books together. There was a book about a platoon leader in Vietnam. 80% of the story was literally a litany of stuff—what everyone was carrying. This was peppered with the psychic and emotional loads that they were carrying.
A month later there was a lot of coverage of Syrian refugees arriving in Europe. People were outraged to see refugees carrying smartphones as though that somehow showed they weren’t in a desperate situation. But smartphones are absolutely a necessity in that situation, and most of the phones were less expensive, lower-end devices. Refugeeinfo.eu was a useful site for people in crisis, but the navigation was designed to require JavaScript.
When people think about mobile, they think about freedom and mobility. But with that JavaScript decision, the developers piled baggage on to the users.
There was a common assertion that slow networks were a third-world challenge. Remember Facebook’s network challenges? They always talked about new markets in India and Africa. The implication is that this isn’t our problem in, say, Omaha or New York.
Pew Research provided a lot of data back then that showed that this thinking was wrong. Use of cell phones, especially smartphones and tablets, escalated dramatically in the United States. There was a trend towards mobile-only usage. This was in low-income households—about one third of the population. Among 5,400 panelists, 15% did not have a JavaScript-enabled device.
Pew Research provided updated data this year. The research shows an increase in those trends. Half of the population access the web primarily on mobile. The cost of a broadband subscription is too expensive for many people. Sometimes broadband access simply isn’t available.
There’s a term called “the homework gap.” Two thirds of teachers assign broadband-dependent homework, while one third of students have no access to broadband.
At most 37% of people have unlimited data. Most people run out of data on a frequent basis.
Speed also varies wildly. 4G doesn’t really mean anything. The data is all over the place.
This shows that network issues are definitely not just a third world challenge.
On the 25th anniversary of the web, Tim Berners-Lee said the web’s potential was only just beginning to be glimpsed. Everyone has a role to play to ensure that the web serves all of humanity. In his contract for the web, Tim outlined what governments, companies, and users need to do. This reminded Patty of ring theory. The user is at the centre. Designers and developers are in the next circle out. Then there’s the circle of companies. Then there are platforms, browsers, and frameworks. Finally there’s the outer circle of governments.
Are we helping in or dumping in? If you look at the data for the average web page size (2 megabytes), we are definitely dumping in. The size of third-party JavaScript has octupled.
There’s no way for a user to know before clicking a link how big and bloated the page is going to be. Even if they abandon the page load, they’ve still used (and wasted) a lot of data.
Third party scripts—like ads—are really bad at dumping in (to use the ring theory model). The best practices for ads suggest that up to 100 additional HTTP requests is totally acceptable. Unbelievable! It doesn’t matter how performant you’ve made a site when this crap gets piled on top of it.
In 2018, the internet’s data centres alone may already have had the same carbon footprint as all global air travel. This will probably triple in the next seven years. The amount of carbon it takes to train a single AI algorithm is more than the entire life cycle of a car. Then there’s fucking Bitcoin. A single Bitcoin transaction could power 21 US households. It is designed to use—specifically, waste—more and more energy over time.
What should we be doing?
Accessibility should be at the heart of what we build. Plan, test, educate, and advocate. If advocacy doesn’t work, fear can be a motivator. There’s an increase in accessibility lawsuits.
Our websites should be as light as possible. Ask, measure, monitor, and optimise. RequestMap is a great tool for visualising requests. You can see the size and scale of third-party requests. You can also see when images are far, far bigger than they need to be.
Take a critical eye to everything and pare everything down. Set performance budgets—file size budgets, for example. Optimise images, subset custom fonts, lazyload images and videos, get third-party tools out of the critical path (or out completely), and seek out lighter frameworks.
Test on real devices that real people are using. See Alex Russell’s data on the differences between the kind of devices we use and typical low-end devices. We literally need to stop drowning people in JavaScript.
Push the boundaries. See the amazing work that Adrian Holovaty did with Soundslice. He had to make on-the-fly sheet music generation work on old iPads that musicians like to use. He recommends keeping old devices around to see how poorly your product works on them.
If you have some power, then your job is to empower somebody else.
—Toni Morrison
Jason is on stage at An Event Apart Chicago in a tuxedo. He wants to talk about how we can make web forms magical. Oh, I see. That explains the get-up.
We’re always being told to make web forms shorter. Luke Wroblewski has highlighted the work of companies that have reduced form fields and increased conversion.
But what if we could get rid of forms altogether? Wouldn’t that be magical!
Jason will reveal the secrets to this magic. But first—a volunteer from the audience, please! Please welcome Joe to the stage.
Joe will now log in on a phone. He types in the username. Then the password. The password is a hodge-podge of special characters, numbers and upper and lowercase letters. Joe starts typing. Jason takes the phone and logs in without typing anything!
The secret: Jason was holding an NFC security key in his hand. That works with a new web standard called WebAuthn.
Passwords are terrible. People share them across sites, but who can blame them? It’s hard to remember lots of passwords. The only people who love usernames and passwords are hackers. So sites are developing other methods to try to keep people secure. Two factor authentication helps, although it doesn’t help us with phishing attacks. The hacker gets the password from the phished user …and then gets the one-time code from the phished user too.
But a physical device like a security key solves this problem. So why aren’t we all using security keys (apart from the fear of losing the key)? Well, until WebAuthn, there wasn’t a way for websites to use the keys.
A web server generates a challenge—a long string—that gets sent to a website and passed along to the user. The user’s device generates a credential ID and public and private keys for that domain. The web site stores the public key and credential ID. From then on, the credential ID is used by the website in challenges to users logging in.
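To make that concrete, here’s a rough sketch of what the registration step can look like in the browser—my own illustration, not code from Jason’s talk, and the function names and values are made up:

// A minimal sketch of WebAuthn registration. The challenge and the user
// handle are assumed to have arrived from the server as byte arrays.
async function registerSecurityKey(challenge: Uint8Array, userId: Uint8Array) {
  if (!window.PublicKeyCredential) {
    return null; // no WebAuthn support—fall back to passwords
  }
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge, // the long string generated by the server
      rp: { name: 'Example Site' }, // the relying party (this website)
      user: { id: userId, name: 'joe', displayName: 'Joe' },
      pubKeyCredParams: [{ type: 'public-key', alg: -7 }] // ES256
    }
  });
  // The credential ID and public key get sent back to the server for storage;
  // the private key never leaves the user’s device or security key.
  return credential;
}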
There were three common ways that we historically proved who we claimed to be: something you know (like a password), something you have (like a key), and something you are (like a fingerprint).
These are factors of identification. So two-factor identification is the combination of any of those two. If you use a security key combined with a fingerprint scanner, there’s no need for passwords.
The browser support for the web authentication API (WebAuthn) is a bit patchy right now but you can start playing around with it.
There are a few other options for making logging in faster. There’s the Credential Management API. It allows someone to access passwords stored in their browser’s password manager. But even though it’s newer, there’s actually better browser support for WebAuthn than Credential Management.
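As a rough illustration (my sketch, not anything Jason showed), asking the browser’s password manager for a stored credential can be as simple as:

// A sketch of the Credential Management API: ask the browser for a stored
// password credential for this origin, then sign the user in with it.
async function autoSignIn() {
  if (!('credentials' in navigator)) {
    return; // no support—show the normal login form
  }
  const cred = await navigator.credentials.get({
    password: true,
    mediation: 'optional' // don’t prompt the user unless it’s needed
  } as CredentialRequestOptions);
  if (cred && cred.type === 'password') {
    // POST the credential’s id and password to your own login endpoint here.
  }
}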
Then there’s federated login, or social login. Jason has concerns about handing over log-in to a company like Facebook, Twitter, or Google, but then again, it means fewer passwords. As a site owner, there’s actually a lot of value in not storing log-in information—you won’t be accountable for data breaches. The problem is that you’ve got to decide which providers you’re going to support.
Also keep third-party password managers in mind. These tools—like 1Password—are great. In iOS they’re now nicely integrated at the operating system level, meaning Safari can use them. Finally it’s possible to log in to websites easily on a phone …until you encounter a website that prevents you logging in this way. Some websites get far too clever about detecting autofilled passwords.
Time for another volunteer from the audience. This is Tyler. Tyler will help Jason with a simple checkout form. Shipping information, credit card information, and so on. Jason will fill out this form blindfolded. Tyler will first verify that the dark goggles that Jason will be wearing don’t allow him to see the phone screen. Jason will put the goggles on and Tyler will hand him the phone with the checkout screen open.
Jason dons the goggles. Tyler hands him the phone. Jason does something. The form is filled in and submitted!
What was the secret? The goggles prevented Jason from seeing the phone …but they didn’t prevent the screen from seeing Jason. The goggles block everything but infrared. The iPhone uses infrared for Face ID. So to the iPhone, it just looked like Jason was wearing funky sunglasses. Face ID then triggered the Payment Request API.
The Payment Request API allows us to use various payment methods that are built in to the operating system, but without having to make separate implementations for each payment method. The site calls the Payment Request API if it’s supported (use feature detection and progressive enhancement), which then triggers the payment UI in the browser. The browser—not the website!—then makes a call to the payment processing provider e.g. Stripe.
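Here’s roughly what that flow looks like—a simplified sketch of my own with made-up amounts and method identifiers, wrapped in feature detection so it stays a progressive enhancement:

// A simplified sketch of the Payment Request flow.
async function checkout() {
  if (!window.PaymentRequest) {
    return; // fall back to the regular checkout form
  }
  const request = new PaymentRequest(
    // Identifiers for Apple Pay, Google Pay, etc. would go here.
    [{ supportedMethods: 'https://example.com/pay' }],
    { total: { label: 'Total', amount: { currency: 'USD', value: '19.99' } } }
  );
  const response = await request.show(); // the browser shows its own payment UI
  // Hand the result off to your payment processor (e.g. Stripe), then:
  await response.complete('success');
}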
E-commerce sites using the Payment Request API have seen a big drop in abandonment and a big increase in completed payments. The browser support is pretty good, especially on mobile. And remember, you can use it as a progressive enhancement. It’s kind of weird that we don’t encounter it more often—it’s been around for a few years now.
Jason read the fine print for Apple Pay, Google Pay, Microsoft Pay, and Samsung Pay. It doesn’t look like there’s anything onerous in there that would stop you using them.
On some phones, you can now scan credit cards using the camera. This is built in to the operating system so as a site owner, you’ve just got to make sure not to break it. It’s really an extension of autofill. You should know what values the autocomplete attribute can take. There are 48 different values; it’s not just for checkouts. When users use autofill, they fill out forms 30% faster. So make sure you don’t put obstacles in the way of autofill in your forms.
Jason proceeds to relate a long and involved story about buying burritos online from Chipotle. The upshot is: use the autocomplete, type, maxlength, and pattern attributes correctly on input elements. Test autofill with your forms. Make it part of your QA process.
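For example, a checkout form that plays nicely with autofill might look something like this (a sketch using standard autocomplete values—not the actual form from Jason’s story):

<!-- Standard autocomplete values give the browser the hints it needs. -->
<form method="post" action="/checkout">
  <input type="text" name="name" autocomplete="name">
  <input type="tel" name="phone" autocomplete="tel">
  <input type="text" name="address" autocomplete="street-address">
  <input type="text" name="card" autocomplete="cc-number" inputmode="numeric" pattern="[0-9 ]*" maxlength="23">
  <input type="text" name="expiry" autocomplete="cc-exp" placeholder="MM/YY">
  <button>Pay</button>
</form>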
So, to summarise, here’s how you make your forms disappear:
Any sufficiently advanced technology is indistinguishable from magic.
—Arthur C. Clarke’s Third Law
Don’t our users deserve magical experiences?
Thanks for doing that—much appreciated!
Cheryl Platz is speaking at An Event Apart Chicago. Her inaugural An Event Apart presentation is all about voice interfaces, and I’m going to attempt to liveblog it…
Successful voice interfaces aren’t necessarily solving new problems. They’re used to solve problems that other devices have already solved. Think about kitchen timers. There are lots of ways to set a timer. Your oven might have one. Your phone has one. Why use a $200 device to solve this mundane problem? Same goes for listening to music, news, and weather.
People are using voice interfaces for solving ordinary problems. Why? Context matters. If you’re carrying a toddler, then setting a kitchen timer can be tricky so a voice-activated timer is quite appealing. But why is voice happening now?
Humans have been developing the art of conversation for thousands of years. It’s one of the first skills we learn. It’s deeply instinctual. Most humans use speech instinctively every day. You can’t necessarily say that about using a keyboard or a mouse.
Voice-based user interfaces are not new. Not just the idea—which we’ve seen in Star Trek—but the actual implementation. Bell Labs had Audrey back in 1952. It recognised ten words—the digits zero through nine. Why did it take so long to get to Alexa?
In the late 70s, DARPA issued a challenge to create a voice-activated system. Carnegie Mellon came up with Harpy (with a thousand-word grammar). But none of the solutions could respond in real time. In conversation, we expect a break of no more than 200 or 300 milliseconds.
In the 1980s, computing power couldn’t keep up with voice technology, so progress kind of stopped. Time passed. Things finally started to catch up in the 90s with things like Dragon Naturally Speaking. But that was still about vocabulary, not grammar. By the 2000s, small grammars were starting to show up—starting an X-Box or pausing Netflix. In 2008, Google Voice Search arrived on the iPhone and natural language interaction began to arrive.
What makes natural language interactions so special? It requires minimal training because it uses the conversational muscles we’ve been working for a lifetime. It unlocks the ability to have more forgiving, less robotic conversations with devices. There might be ten different ways to set a timer.
Natural language interactions can also free us from “screen magnetism”—that tendency to stay on a device even when our original task is complete. Voice also enables fast and forgiving searches of huge catalogues without time spent typing or browsing. You can pick a needle straight out of a haystack.
Natural language interactions are excellent for older customers. These interfaces don’t intimidate people without dexterity, vision, or digital experience. Voice input often leads to more inclusive experiences. Many customers with visual or physical disabilities can’t use traditional graphical interfaces. Voice experiences throw open the door of opportunity for some people. However, voice experience can exclude people with speech difficulties.
There’s a misconception that you need to work at Amazon, Google, or Apple to work on a voice interface, or at least that you need to have a big product team. But Cheryl was able to make her first Alexa “skill” in a week. If you’re a web developer, you’re good to go. Your voice “interaction model” is just JSON.
How do you get your product team on board? Find the customers (and situations) you might have excluded with traditional input. Tell the stories of people whose hands are full, or who are vision impaired. You can also point to the adoption rate numbers for smart speakers.
You’ll need to show your scenario in context. Otherwise people will ask, “why can’t we just build an app for this?” Conduct research to demonstrate the appeal of a voice interface. Storyboarding is very useful for visualising the context of use and highlighting existing pain points.
You’ve got to understand how the technology works in order to adapt to how it fails. Here are a few basic concepts.
Utterance. A word, phrase, or sentence spoken by a customer. This is the true form of what the customer provides.
Intent. This is the meaning behind a customer’s request. This is an important distinction because one intent could have thousands of different utterances.
Prompt. The text of a system response that will be provided to a customer. The audio version of a prompt, if needed, is generated separately using text to speech.
Grammar. A finite set of expected utterances. It’s a list. Usually, each entry in a grammar is paired with an intent. Many interfaces start out as being simple grammars before moving on to a machine-learning model later once the concept has been proven.
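As a toy illustration (my example, not Cheryl’s), a grammar really can be as simple as a list of utterances mapped to intents:

// Each expected utterance is paired with an intent (all names invented here).
const grammar: Record<string, string> = {
  'set a timer for fifteen minutes': 'SetTimer',
  'start a fifteen minute timer': 'SetTimer',
  'how long is left on the timer': 'CheckTimer',
  'cancel the timer': 'CancelTimer'
};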
Here’s the general idea with “artificial intelligence”…
There’s a human with a core intent to do something in the real world, like knowing when the cookies in the oven are done. This gets expressed as an utterance like, “set a 15 minute timer.” That utterance is translated into a string, but it hasn’t yet been parsed as language. The string is passed into a natural language understanding system. What comes out is a data structure that represents the customer’s goal e.g. intent=timer; duration=15 minutes. That’s sent to the business logic where a timer is actually set. For a good voice interface, you also want to send back a response e.g. “setting timer for 15 minutes starting now.”
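In code, the hand-off from the natural language understanding system to the business logic might look something like this rough sketch (the names and shapes are invented for illustration):

// The parsed result of an utterance: an intent plus any extracted slots.
interface ParsedIntent {
  intent: string;                    // e.g. "SetTimer"
  slots: { [name: string]: string }; // e.g. { minutes: "15" }
}

// Business logic plus the response prompt for one intent; anything else
// falls through to a re-prompt—which is where most of the design work goes.
function handleIntent(request: ParsedIntent): string {
  if (request.intent === 'SetTimer' && request.slots.minutes) {
    const minutes = parseInt(request.slots.minutes, 10);
    // …actually start the timer here…
    return `Setting timer for ${minutes} minutes, starting now.`;
  }
  return 'Sorry, I didn’t catch that. You can say “set a 15 minute timer”.';
}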
That seems simple enough, right? What’s so hard about designing for voice?
Natural language interfaces are a form of artificial intelligence, so they’re not deterministic. There’s a lot of ruling out false positives. Unlike graphical interfaces, voice interfaces are driven by probability.
How do you turn a sound wave into an understandable instruction? It’s a lot like teaching a child. You feed a lot of data into a statistical model. That’s how machine learning works. It’s a probability game. That’s where it gets interesting for design—given a bunch of possible options, we need to use context to zero in on the most correct choice. This is where confidence ratings come in: the system will return the probability that a response is correct. Effectively, the system is telling you how sure or not it is about possible results. If the customer makes a request in an unusual or unexpected way, our system is likely to guess incorrectly. That’s because the system is being given something new.
Designing a conversation is relatively straightforward. But 80% of your voice design time will be spent designing for what happens when things go wrong. In voice recognition, edge cases are front and centre.
Here’s another challenge. Interaction with most voice interfaces is part conversation, part performance. Most interactions are not private.
Humans don’t distinguish digital speech from human speech. That means these devices are intrinsically social. Our brains are wired to try to extract social information, even from digital speech. See, for example, why it’s such a big question as to what gender a voice interface has.
Storyboards help depict the context of use. Sample dialogues are your new wireframes. These are little scripts that not only cover the happy path, but also your edge cases. Then you reverse engineer from there.
Flow diagrams communicate customer states, but don’t use the actual text in them.
Prompt lists are your final deliverable.
Functional prototypes are really important for voice interfaces. You’ll learn the real way that customers will ask for things.
If you build a working prototype, you’ll be building two things: a natural language interaction model (often a JSON file) and custom business logic (in a programming language).
Eventually voice design will become a core competency, much like mobile, which was once separate.
Ask yourself what tasks your customers complete on your site that feel clunky. Remember that voice design is almost never about new scenarios. Start your journey into voice interfaces by tackling old problems in new, more inclusive ways.
May the voice be with you!
I would love to! …But it might be very tricky to make it work as I’m speaking at An Event Apart in San Francisco right before that: https://aneventapart.com/event/san-francisco-2019
Drop me a line—jeremy at adactio dot com.
The brilliant Cyd Harrell is opening up day two of An Event Apart in Chicago. I’m going to attempt to liveblog her talk on making research count…
Research gets done …and then sits in a report, gathering dust.
Research matters. But how do we make it count? We need allies. Maybe we need more money. Perhaps we need more participation from people not on the product team.
If you’re doing real research on a schedule, sharing it on a regular basis, making people’s eyes light up …then you’ve won!
Research counts when it answers questions that people care about. But you probably don’t want to directly ask “Hey, what questions do you want answered?”
Research can explain oddities in analytics, weird feedback from customers, unexpected uses of products, and strange hunches (not just your own).
Curious people with power are the most useful ones to influence. Not just hierarchical power. Engineers often have a lot of power. So ask, “Who is the most curious engineer, and how can I drag them out on a research session with me?”
At 18F, Cyd found that a lot of the nodes of power were people in the mid level of the organisation who had been there a while—they know a lot of people up and down the chain. If you can get one of those people excited about research, they can spread it.
Open up your practice. Demystify it. Put as much effort into communicating as into practicing. Create opportunities for people to ask questions and learn.
You can think about communities of practice in the obvious way: people who do similar things to us, and other people who make design decisions. But really, everyone in the organisation is affected by design decisions.
Cyd likes to do office hours. People can come by and ask questions. You could open a Slack channel. You can run brown bag lunches to train people in basic user research techniques. In more conventional organisations, a newsletter is a surprisingly effective tool for sharing the latest findings from research. And use your walls to show work in progress.
Research counts when people can see it for themselves—not just when it’s reported from afar. Ask yourself: who in your organisation is disconnected from their user? It’s difficult for people to maintain their motivation in that position.
When someone has been in the field with you, the data doesn’t have to be explained.
Whoever’s curious. Whoever’s disconnected. Invite them along. Show them what you’re doing.
Think about the qualities of a good invitation (for a party, say). Make the rules clear. Make sure they want to come back. Design the experience of observing research. Make sure everyone has tools. Give everyone a responsibility. Be like Willy Wonka—he gave clear rules to the invited guests. And sure, things didn’t go great when people broke the rules, but at the end, everyone still went home with the truckload of chocolate they were promised.
People who get to ask a question buy in to the results. Those people feel a sense of ownership for the research.
Research counts when methods fit the question. Think about what the right question is and how you might go about answering it.
You can mix your methods. Interviews. Diary studies. Card sorting. Shadowing. You can ground the user research in competitor analysis.
Back in 2008, Cyd was contacted by a company who wanted to know: how do people really use phones in their cars? Cyd’s team would ride along with people, interviewing them, observing them, taking pictures and video.
Later at the federal government, Cyd was asked: what are the best practices for government digital transformation? How to answer that? It’s so broad! Interviews? Who knows what?
They refined the question: what makes modern digital practices stick within a government entity? They looked at what worked when companies were going online, to see if there was anything that government could learn from. Then they created a set of really focused interview questions. What does digital transformation mean? How do you know when you’re done? What are the biggest obstacles to this work? How do you make changes last?
They used a technique called cluster recruiting to figure out who else to talk to (by asking participants who else they should be talking to).
There is no one research method that will always work for you. Cutting the right corners at the right time lets you be fast and cheap. Cyd’s bare-bones research kit costs about $20: a notebook, a pen, a consent form, and the price of a cup of coffee. She also created a quick score sheet for when she’s not in a position to have research transcribed.
Always label your assumptions before beginning your research. Maybe you’re assuming that something is a frustrating experience that needs fixing, but it might emerge that it doesn’t need fixing—great! You’ve just saved a whole lotta money.
Research counts when researchers tell the story well. Synthesis works best as a conversational practice. It’s hard to do by yourself. You start telling stories when you come back from the field (sometimes it starts when you’re still out in the field, talking about the most interesting observations).
Miller’s Law is a great conceptual framework:
To understand what another person is saying, you must assume that it is true and try to imagine what it could be true of.
You’re probably familiar with the “five whys”. What about the “five ways”? If people talk about something five different ways, it’s virtually certain that one of them will be an apt metaphor. So ask “Can you say that in a different way?” five times.
Spend as much time on communicating outcomes as you did on executing the work.
After research, play back how many people you spoke to, the most valuable insight you gained, the themes that are emerging. Describe the question you wanted to answer, what answers you got, and what you’re going to do next. If you’re in an organisation that values memos, write a memo. Or you could make a video. Or you could write directly into backlog tickets. And don’t forget the wall work! GDS have wonderfully full walls in their research department.
In the end, the best tool for research is an illuminating story.
Cyd was doing research at the Bakersfield courthouse. The hypothesis was that a lot of people weren’t engaging with technology in the court system. She approached a man named Manuel who was positively quaking. He was going through a custody battle. He said, “I don’t know technology but it doesn’t scare me. I’m shaking because this paperwork just gets to me—it’s terrifying.” He said he would gladly pay for someone to help him with the paperwork. Cyd wrote a report on this story. Months later, they heard people in the organisation asking questions like “How would this help Manuel?”
Sometimes you do have to fight (nicely).
People will push back on the time spent on research—they’ll say it doesn’t fit the sprint plan. You can have a three day research plan. Day 1: write scripts. Day 2: go to the users and talk to them. Day 3: play it back. People on a project spend more time than that in Slack.
People will say you can’t talk to the customers. In that situation, you could talk to people who are in the same sector as your company’s customers.
People will question the return on investment for research. Do it cheaply and show the very low costs. Then people stop talking about the money and start talking about the results.
People will claim that qualitative user research is not statistically significant. That’s true. But research is something else. It answers different questions.
People will question whether a senior person needs to be involved. It is not fair to ask the intern to do all the work involved in research.
People will say you can’t always do research. But Cyd firmly believes that there’s always room for some research.
One of the best things you can do is be there, non-judgementally, making friends. It takes time, but it works. Research is like a dandelion in flight. The more it gets out and about, taking root, the more that research counts.
Maciej goes marching.
The protests are intentionally decentralized, using a jury-rigged combination of a popular message board, the group chat app Telegram, and in-person huddles at the protests.
This sounds like it shouldn’t possibly work, but the protesters are too young to know that it can’t work, so it works.
A nice standalone tool for picking colours out of photos, and generating a colour palette from the same photo.
Is there a URL for where the discussion around “speed assessment via metrics” is happening?
Is there a timeline on when we can expect to see non-AMP pages (with web packaging) getting the same preferential treatment as AMP pages in search?
Oh, yes, I wasn’t suggesting that any pages are ready to be hosted and pre-rendered as they are today—there would certainly need to be some rejigging done, either by the author or the host.
Yes, I linked to that in my post.
But this wouldn’t be “arbitrary” content—there would be strict criteria for admission.
It feels like there’s a big spectrum between “arbitrary” and “only AMP”.
AMP is burning the village in order to save it. It’s far too extreme a step to rebuild everything in a new format.
@cramforce Here’s my proposal for what you requested—https://twitter.com/cramforce/status/1165084023359602688—for tackling privacy and performance for prerendering non-AMP pages:
I have a proposal that I think might alleviate some of the animosity around Google AMP. You can jump straight to the proposal or get some of the back story first…
Google AMP is exactly the kind of framework I’d like to get behind. Unlike most front-end frameworks, its components take a declarative approach—no knowledge of JavaScript required. I think Lea’s excellent Mavo is the only other major framework that takes this inclusive approach. All the configuration happens in markup, and all the styling happens in CSS. Excellent!
But I cannot get behind AMP.
Instead of competing on its own merits, AMP is unfairly propped up by the search engine of its parent company, Google. That makes it very hard to evaluate whether AMP is being used on its own merits. Instead, the evidence suggests that most publishers of AMP pages are doing so because they feel they have to, rather than because they want to. That’s a real shame, because as a library of web components, AMP seems pretty good. But there’s just no way to evaluate AMP-the-format without taking into account AMP-the-ecosystem.
Google AMP ostensibly exists to make the web faster. Initially the focus was specifically on mobile performance, but that distinction has since fallen by the wayside. The idea is that by using AMP’s web components, your pages will be speedy. Though, as Andy Davies points out, this isn’t always the case:
This is where I get confused… https://independent.co.uk only have an AMP site yet it’s performance is awful from a user perspective - isn’t AMP supposed to prevent this?
See also: Google AMP lowered our page speed, and there’s no choice but to use it:
According to Google’s own Page Speed Insights audit (which Google recommends to check your performance), the AMP version of articles got an average performance score of 87. The non-AMP versions? 95.
Publishers who already have fast web pages—like The Guardian—are still compelled to make AMP versions of their stories because of the search benefits reserved for AMP. As Terence Eden reported from a meeting of the AMP advisory committee:
We heard, several times, that publishers don’t like AMP. They feel forced to use it because otherwise they don’t get into Google’s news carousel — right at the top of the search results.
Some people felt aggrieved that all the hard work they’d done to speed up their sites was for nothing.
The Google AMP team are at pains to point out that AMP is not a ranking factor in search. That’s true. But it is unfairly privileged in other ways. Only AMP pages can appear in the Top Stories carousel …which appears above any other search results. As I’ve said before:
Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.
From A letter about Google AMP:
Content that “opts in” to AMP and the associated hosting within Google’s domain is granted preferential search promotion, including (for news articles) a position above all other results.
That’s not the only way that AMP pages get preferential treatment. It turns out that the secret to the speed of AMP pages isn’t the web components. It’s the prerendering.
If you’ve ever seen an AMP page in a list of search results, you’ll have noticed the little lightning icon. If you’ve ever tapped on that search result, you’ll have noticed that the page loads blazingly fast!
That’s not down to AMP-the-format, alas. That’s down to the fact that the page has been prerendered by Google before you even went to it. If any page were prerendered that way, it would load blazingly fast. But currently, this privilege is reserved for AMP pages only.
If, after tapping through to that AMP page, you looked at the address bar of your browser, you might have noticed something odd. Even though you might have thought you were visiting The Washington Post, or The New York Times, the URL of the (blazingly fast) page you’re looking at is still under Google’s domain. That’s because Google hosts any AMP pages that it prerenders.
Google calls this “the AMP cache”, but it would be better described as “AMP hosting”. The web page sent down the wire is hosted on Google’s domain.
Here’s that AMP letter again:
When a user navigates from Google to a piece of content Google has recommended, they are, unwittingly, remaining within Google’s ecosystem.
Through gritted teeth, I will refer to this as “the AMP cache”, because that’s what everyone else calls it. But make no mistake, Google is hosting—not caching—these pages.
But why host the pages on a Google domain? Why not prerender the original URLs?
Scott summed up the situation with AMP nicely:
The pitch I think site owners are hearing is: let us host your pages on our domain and we’ll promote them in search results AND preload them so they feel “instant.” To opt-in, build pages using this component syntax.
But perhaps we could de-couple the AMP format from the AMP cache.
That’s what Terence suggests:
My recommendation is that Google stop requiring that organisations use Google’s proprietary mark-up in order to benefit from Google’s promotion.
Instead of granting premium placement in search results only to AMP, provide the same perks to all pages that meet an objective, neutral performance criterion such as Speed Index.
It’s been said before but it would be so good for the web if pages with a Lighthouse score over say, 90 could get into that top search result area, even if they’re not built using Google’s AMP framework. Feels wrong to have to rebuild/reproduce an already-fast site just for SEO.
This was also what I was calling for. But then Malte pointed out something that stumped me. Privacy.
Here’s the problem…
Let’s say Google do indeed prerender already-fast pages when they’re listed in search results. You, a search user, type something into Google. A list of results comes back. Google begins pre-rendering some of them. But you don’t end up clicking through to those pages. Nonetheless, the servers those pages are hosted on have received a GET request coming from a Google search. Those publishers now know that a particular (cookied?) user could have clicked through to their site. That’s very different from knowing when someone has actually arrived at a particular site.
And that’s why Google host all the AMP pages that they prerender. Given the privacy implications of prerendering non-Google URLs, I must admit that I see their point.
Still, it’s a real shame to miss out on the speed benefit of prerendering:
Prerendering AMP documents leads to substantial improvements in page load times. Page load time can be measured in different ways, but they consistently show that prerendering lets users see the content they want faster. For now, only AMP can provide the privacy preserving prerendering needed for this speed benefit.
Why is Google’s AMP cache just for AMP pages? (Y’know, apart from the obvious answer that it’s in the name.)
What if Google were allowed to host non-AMP pages? Google search could then prerender those pages just like it currently does for AMP pages. There would be no privacy leaks; everything would happen on the same domain—google.com or ampproject.org or whatever—just as currently happens with AMP pages.
Don’t get me wrong: I’m not suggesting that Google should make a 1:1 model of the web just to prerender search results. I think that the implementation would need to have two important requirements:
Currently, by publishing a page using the AMP format, publishers give implicit approval to Google to host that page on Google’s servers and serve up this Google-hosted version from search results. This has always struck me as being legally iffy. I’ve looked in the AMP documentation to try to find any explicit granting of hosting permission (e.g. “By linking to this JavaScript file, you hereby give Google the right to serve up our copies of your content.”), but no luck. So even with the current situation, I think a clear opt-in for hosting would be beneficial.
This could be a meta element. Maybe something like:
<meta name="caches-allowed" content="google">
This would have the nice benefit of allowing comma-separated values:
<meta name="caches-allowed" content="google, yandex">
(The name is just a strawman, by the way—I’m not suggesting that this is what the final implementation would actually look like.)
If not a meta element, then perhaps this could be part of robots.txt? Although my feeling is that this needs to happen on a document-by-document basis rather than site-wide.
Many people will, quite rightly, never want Google—or anyone else—to host and serve up their content. That’s why it’s so important that this behaviour needs to be opt-in. It’s kind of appalling that the current hosting of AMP pages is opt-in-by-proxy-sort-of.
Which pages should be blessed with hosting and prerendering? The fast ones. That’s sorta the whole point of AMP. But right now, there’s a lot of resentment by people with already-fast websites who quite rightly feel they shouldn’t have to use the AMP format to benefit from the AMP ecosystem.
Page speed is already a ranking factor. It doesn’t seem like too much of a stretch to extend its benefits to hosting and prerendering. As mentioned above, there are already a few possible metrics to use:
Ah, but what if a page has good score when it’s indexed, but then gets worse afterwards? Not a problem! The version of the page that’s measured is the same version of the page that gets hosted and prerendered. Google can confidently say “This page is fast!” After all, they’re the ones serving up the page.
That does raise the question of how often Google should check back with the original URL to see if it has changed/worsened/improved. The answer to that question is however long it currently takes to check back in on AMP pages:
Each time a user accesses AMP content from the cache, the content is automatically updated, and the updated version is served to the next user once the content has been cached.
This proposal does not solve the problem with the address bar. You’d still find yourself looking at a page from The Washington Post or The New York Times (or adactio.com) but seeing a completely different URL in your browser. That’s not good, for all the reasons outlined in the AMP letter.
In fact, this proposal could potentially make the situation worse. It would allow even more sites to be impersonated by Google’s URLs. Where currently only AMP pages are bad actors in terms of URL confusion, opening up the AMP cache would allow equal opportunity URL confusion.
What I’m suggesting is definitely not a long-term solution. The long-term solutions currently being investigated are technically tricky and will take quite a while to come to fruition—web packages and signed exchanges. In the meantime, what I’m proposing is a stopgap solution that’s technically a lot simpler. But it won’t solve all the problems with AMP.
This proposal solves one problem—AMP pages being unfairly privileged in search results—but does nothing to solve the other, perhaps more serious problem: the erosion of site identity.
Currently, Google can assess whether a page should be hosted and prerendered by checking to see if it’s a valid AMP page. That test would need to be widened to include a different measurement of performance, but those measurements already exist.
I can see how this assessment might not be as quick as checking for AMP validity. That might affect whether non-AMP pages could be measured quickly enough to end up in the Top Stories carousel, which is, by its nature, time-sensitive. But search results are not necessarily as time-sensitive. Let’s start there.
Currently, AMP pages can be prerendered without fetching anything other than the markup of the AMP page itself. All the CSS is inline. There are no initial requests for other kinds of content like images. That’s because there are no img elements on the page: authors must use amp-img instead. The image itself isn’t loaded until the user is on the page.
If the AMP cache were to be opened up to non-AMP pages, then any content required for prerendering would also need to be hosted on that same domain. Otherwise, there’s privacy leakage.
This definitely introduces an extra level of complexity. Paths to assets within the markup might need to be re-written to point to the Google-hosted equivalents. There would almost certainly need to be a limit on the number of assets allowed. Though, for performance, that’s no bad thing.
Make no mistake, figuring out what to do about assets—style sheets, scripts, and images—is very challenging indeed. Luckily, there are very smart people on the Google AMP team. If that brainpower were to focus on this problem, I am confident they could solve it.
There will be technical challenges, but hopefully nothing insurmountable.
I honestly can’t see what Google have to lose here. If their goal is genuinely to reward fast pages, then opening up their AMP cache to fast non-AMP pages will actively encourage people to make fast web pages (without having to switch over to the AMP format).
I’ve deliberately kept the details vague—what the opt-in should look like; what the speed measurement should be; how to handle assets—I’m sure smarter folks than me can figure that stuff out.
I would really like to know what other people think about this proposal. Obviously, I’d love to hear from members of the Google AMP team. But I’d also love to hear from publishers. And I’d very much like to know what people in the web performance community think about this. (Write a blog post and send me a webmention.)
What am I missing here? What haven’t I thought of? What are the potential pitfalls (and are they any worse than the current acrimonious situation with Google AMP)?
I would really love it if someone with a fast website were in a position to say, “Hey Google, I’m giving you permission to host this page so that it can be prerendered.”
I would really love it if someone with a slow website could say, “Oh, shit! We’d better make our existing website faster or Google won’t host our pages for prerendering.”
And I would dearly love to finally be able to embrace AMP-the-format with a clear conscience. But as long as prerendering is joined at the hip to the AMP format, the injustice of the situation only harms the AMP project.
Google, open up the AMP cache.
Down by the river.
I think the bouzouki brings out the best in Gary Numan:
Checked in at Chicago Brewhouse. Chicago dog! — with Jessica
Hello, Chicago!
Going to Chicago. brb
I would very much like this to become a reality.
Never-Slow Mode (“NSM”) is a mode that sites can opt-into via HTTP header. For these sites, the browser imposes per-interaction resource limits, giving users a better user experience, potentially at the cost of extra developer work. We believe users are happier and more engaged on fast sites, and NSM attempts to make it easier for sites to guarantee speed to users. In addition to user experience benefits, sites might want to opt in because browsers could provide UI to users to indicate they are in “fast mode” (a TLS lock icon but for speed).
I’m dubious of this proposal. I just don’t think a nice bag can offer the kind of companionship you get from an animal.
The web embodies principles of openness and portability and access that best align with the needs, and frankly the purpose, of the cultural heritage sector.
Aaron’s talk from the 2019 Museums and the Web conference.
In 2019 the web is not “sexy” anymore and compared to native platforms it can sometimes seems lacking, but I think that speaks as much to people’s desire for something “new” as it does to any apples to apples comparison. On measure – and that’s the important part: on measure – the web affords a better and more sustainable framework for the cultural heritage to work in than any of the shifting agendas of the various platform vendors.
❤️
Editing is hard because you realize how bad you are. But editing is easy because we’re all better at criticizing than we are at creating.
Relatable:
My essay was garbage. But it was my garbage.
This essay is most definitely not garbage. I like it very much.
Reinventing the web the long way around, in a way that gives Google even more control of it. No thanks.
Reading Skyfaring by Mark Vanhoenacker.
If only Google Search had a similar model.
As it stands, the AMP team gets to offload the unfairness of the format’s privileged position onto the search team, while maintaining the appearance of open source and good governance for AMP.
I’m getting some very mixed signals here:
You won’t recommend me for a Product Manager job at Google?
My dreams lie in tatters before me.
How about a position on the AMP advisory committee? I’m hoping fascists like me aren’t barred.
Never once did I claim that using AMP would result in fewer clicks for publishers.
It’s exactly because AMP is guaranteed to work this way—while nothing else will—that makes the situation unfair.
Publishers have no choice but to use AMP.
But only for AMP pages.
No other framework gets this treatment.
No other web pages—no matter how fast—get this treatment.
Any wonder that developers are resentful of AMP?
You are not alone:
https://baymard.com/blog/deemphasize-install-app-ads
During our testing “Install App” banners were the direct and sole cause of several abandonments of some of the world’s largest e-commerce websites.
You’ll notice that I never once said that AMP is a ranking factor.
I said AMP is unfairly privileged in Google search results …because AMP is unfairly privileged in Google search results.
That’s a real shame for AMP-the-format.
AMP as often cited is not a ranking factor
This is the kind of semantic chicanery I’m talking about.
AMP pages are pre-rendered by Google. Regular (fast) web pages are not.
The Top Stories carousel appears above other results.
Publisher value enabled by AMP?
Publishers are using AMP because they have to, not because they want to.
If AMP weren’t treated differently to other frameworks, we could assess its true value.
Also: holding this position apparently makes me a fascist.
(still shaking my head over that one)
To understand my position, start with the position “one framework shouldn’t be unfairly privileged in search results” and then extrapolate from there. That is the fundamental disagreement.
(I think I’ve made that pretty clear, right?)
Beach scenes.
Florid.
According to Godwin’s Law, the discussion stopped here:
Hint away! …But also, please charge your phone’s battery.
Use a toggle switch if you are:
- Applying a system state, not a contextual one
- Presenting binary options, not opposing ones
- Activating a state, not performing an action
Brendan describes the software he’s using to get away from Adobe’s mafia business model.
Ooh! Colour me intrigued!
Gonna blog about it?
I want to go to there.
By the way, Charles, the one comfort I can take about AMP is that every passing day brings us closer to its 1,459 day probability point.
https://www.theguardian.com/technology/2013/mar/22/google-keep-services-closed
Here’s what I just got on web.dev …but no fireworks.
One more point and there’ll be fireworks: https://adactio.com/notes/15620
Related: here’s what I’ve been reading, documented on my website:
So, just to be clear, while you might be capable of the mental gymnastics required to think “Well, leaving aside the unfairness of the SEO situation with AMP…”, I cannot do that.
I wish AMP would compete on its own merits.
Do it.
Please.
Me too! I would love to get behind AMP—a declarative framework where configuration happens in HTML rather than JavaScript: great!
But I cannot in good conscience support it while it is being unfairly prioritised and propped up in search.
It is not orthogonal as long as AMP is being privileged in search. This isn’t something you can just handwave away. The unfairness of it actively harms AMP-as-framework.
Also YES
This is a really interesting distinction:
An intentional design system. The flavour and framework may vary, but the approach generally consists of: design system first → design/build solutions.
An emergent design system. This approach is much closer to the user needs end of the scale by beginning with creative solutions before deriving patterns and systems (i.e the system emerges from real, coded scenarios).
It’s certainly true that intentional design systems will invariably bake in a number of (unproven?) assumptions.
I have no problems with AMP, the open source format (accessibility issues notwithstanding).
I have no problems with AMP’s governance model.
I have serious problems with AMP’s privileged position in Google Search. It’s an abuse of power.
Agreed! Maintaining one site is nicer than two.
And yet publishers with already-fast sites (like The Guardian) are compelled to make AMP versions for the search benefits.
That’s not a side point—it is THE point!
I like it!
How Robin really feels about Google AMP:
Here’s my hot take on this: fuck the algorithm, fuck the impressions, and fuck the king. I would rather trade those benefits and burn my website to the ground than be under the boot and heel of some giant, uncaring corporation.
Harry enumerates the reasons why client-side A/B testing is terrible:
- It typically blocks rendering.
- Providers are almost always off-site.
- It happens on every page load.
- No user-benefitting reuse.
- They likely skip any governance process.
While your engineers are subject to linting, code-reviews, tests, auditors, and more, your marketing team have free rein of the front-end.
Note that the problem here is not A/B testing per se, it’s client-side A/B testing. For some reason, we seem to have collectively decided that A/B testing—like analytics—is something we should offload to the JavaScript parser in the user’s browser.
- Obey the Law of Locality
- ABD: Anything But Dropdowns
- Pass the Squint Test
- Teach by example
Yup! See you soon!
Here you go: https://adactio.com/about/speaking/
If, in the final 7,000 years of their reign, dinosaurs became hyperintelligent, built a civilization, started asteroid mining, and did so for centuries before forgetting to carry the one on an orbital calculation, thereby sending that famous valedictory six-mile space rock hurtling senselessly toward the Earth themselves—it would be virtually impossible to tell.
A nice steaming cup of perspective.
If there were a nuclear holocaust in the Triassic, among warring prosauropods, we wouldn’t know about it.
…but if you use any framework other than AMP, you don’t get any of the Google Search benefits that are only bestowed on sites “choosing” to use AMP.
Hardly seems fair.
Yowza!
Far from questioning AMP’s right to exist, I want it to exist and compete on a level playing field—without being propped up by an unfair advantage in search results.
Classic fascism, that.
Walking on the beach at sunset.
You don’t get rewarded by Google Search for building pages with Bootstrap or React.
In.
This.
Context.
Regular web pages don’t require loading a specific JavaScript file.
When I say “Google is now strongly encouraging publishers to only publish AMP”, I mean AMP pages without a corresponding regular web page.
Cue semantic chicanery about AMP being regular web pages (spoiler: they are not).
Paired AMP was never meant to be the end state. (That’s why we’re now calling it Transitional mode in AMP for WordPress).
The test results are in:
During our testing “Install App” banners were the direct and sole cause of several abandonments of some of the world’s largest e-commerce websites.
Read on for details…
If you want to use Brad’s Atomic Design naming convention—atoms, molecules, etc.—and you like using Fractal for making your components, this starter kit is just for you:
Keep what you need, delete what you don’t and add whatever you like on top of whats already there.
I had a chat with Vitaly for half an hour about all things webby. It was fun!
I would absolutely love it if AMP were competing on a level playing field with the likes of Facebook Instant Articles and Apple News …but with Google’s SEO strong-arming tactics, the playing field is far from level.
Also, neither Apple or Facebook have a monopoly in search—Google does. Google is abusing that monopoly to get publishers to publish AMP …or suffer the SEO consequences.
Except that Apple never suggested that publishers should switch their main site over to the Apple News Format. Google is now strongly encouraging publishers to only publish AMP.
This is an excellent UX improvement in Chrome. For sites like The Session, where page loads are blazingly fast, this really makes them feel like single page apps.
Our goal with this work was for navigations in Chrome between two pages that are of the same origin to be seamless and thus deliver a fast default navigation experience with no flashes of white/solid-color background between old and new content.
This is exactly the kind of area where browsers can innovate and compete on the UX of the browser itself, rather than trying to compete on proprietary additions to what’s being rendered.
The best time to make a personal website is 20 years ago. The second best time to make a personal website is now.
Chris offers some illustrated advice:
- Define the purpose of your site
- Organize your content
- Look for inspiration
- Own your own domain name
- Build your website
Wetlands.
I would say Falcor from Neverending Story, the big flying dog.
Relaxing on the beach.
I’ll miss beaches when they’re gone.
Materials: https://resilientwebdesign.com/chapter2/
Material 2020: https://material.is/2020/
Performance and accessibility aren’t features that can linger at the bottom of a Jira board to be considered later when it’s convenient.
Instead we must start to see inaccessible and slow websites for what they are: a form of cruelty. And if we want to build a web that is truly a World Wide Web, a place for all and everyone, a web that is accessible and fast for as many people as possible, and one that will outlive us all, then first we must make our websites something else altogether; we must make them kind.
I wrote five megawords while crossing the Atlantic ocean:
Going to Saint Augustine, Florida. brb
I worry that the list of experience values might be contradictory e.g. “considerate” and “empowering” in this context:
Checked in at American Museum of Natural History. Getting a behind-the-scenes tour — with Jessica
We took the surprisingly busy train from Brighton to Southampton, with our plentiful luggage in tow. As well as the clothes we’d need for three weeks of hot summer locations in the United States, Jessica and I were also carrying our glad rags for the shipboard frou-frou evenings.
Once the train arrived in Southampton, we transferred our many bags into the back of a taxi and made our way to the terminal. It looked like all the docks were occupied, either with cargo ships, cruise ships, or—in the case of the Queen Mary 2—the world’s last ocean liner to be built.
Check in. Security. Then it was time to bid farewell to dry land as we boarded the ship. We settled into our room—excuse me, stateroom—on the eighth deck. That’s the deck that also has the lifeboats, but our balcony is handily positioned between two boats, giving us a nice clear view.
We’d be sailing in a few hours, so that gave us plenty of time to explore the ship. We grabbed a surprisingly tasty bite to eat in the buffet restaurant, and then went out on deck (the promenade deck is deck seven, just one deck below our room).
It was a blustery day. All weekend, the UK newspaper headlines had been full of dramatic stories of high winds. Not exactly sailing weather. But the Queen Mary 2 is solid, sturdy, and just downright big, so once we were underway, the wind was hardly noticeable …indoors. Out on the deck, it could get pretty breezy.
By pure coincidence, we happened to be sailing on a fortuitous day: the meeting of the queens. The Queen Elizabeth, the Queen Victoria, and the Queen Mary 2 were all departing Southampton at the same time. It was a veritable Cunard convoy. With the yacht race on as well, it was a very busy afternoon in the Solent.
We stayed out on the deck as our ship powered out of Southampton, and around the Isle of Wight, passing a refurbished Palmerston sea fort on the way.
Alas, Jessica had a migraine brewing all day, so we weren’t in the mood to dive into any social activities. We had a low-key dinner from the buffet—again, surprisingly tasty—and retired for the evening.
Jessica’s migraine passed like a fog bank in the night, and we woke to a bright, blustery day. The Queen Mary 2 was just passing the Scilly Isles, marking the traditional start of an Atlantic crossing.
Breakfast was blissfully quiet and chilled out—we elected to try the somewhat less-trafficked Carinthia lounge, the location of a decent espresso-based coffee (for a price). Then it was time to feed our minds.
We watched a talk on the Bolshoi Ballet, filled with shocking tales of scandal. Here I am on holiday, and I’m sitting watching a presentation as though I were at a conference. The presenter in me approved of some of the stylistic choices: tasteful transitions in Keynote, and suitably legible typography for on-screen quotes.
Soon after that, there was a question-and-answer session with a dance teacher from the English National Ballet. We balanced out the arts with some science by taking a trip to the planetarium, where the dulcet voice of Neil deGrasse Tyson told the tale of dark matter. A malfunctioning projector somewhat tainted the experience, leaving a segment of the dome unilluminated.
It was a full morning of activities, but after lunch, there was just one time and place that mattered: sign-ups for the week’s ballet workshops would take place at 3pm on deck two. We wandered by at 2pm, and there was already a line! Jessica quickly took her place in the queue, hoping that she’d make it into the workshops, which had a capacity of just 30 people. The line continued to grow. The Cunard staff were clearly not prepared for the level of interest in these ballet workshops. They quickly introduced some emergency measures: this line would only be for the next two days’ workshops, rather than the whole week. So there’d be more queueing later in the week for anyone looking to take more than one workshop.
Anyway, the most important outcome was that Jessica did manage to sign up for a workshop. After all that standing in line, Jessica was ready for a nice sit down so we headed to the area designated for crafters and knitters. As Jessica worked on the knitting project she had brought along, we had our first proper social interactions of the voyage, getting to know the other makers. There was much bonding over the shared love of the excellent Ravelry website.
Next up: a pub quiz at sea in a pub at sea. I ordered the flight of craft beers and we put our heads together for twenty quickfire trivia questions. We came third.
After that, we rested up for a while in our room, before donning our glad rags for the evening’s gala dinner. I bought a tuxedo just for this trip, and now it was time to put it into action. Jessica donned a ballgown. We both looked the part for the black-and-white themed evening.
We headed out for pre-dinner drinks in the ballroom, complete with big band. At one entrance, there was a receiving line to meet the captain. Having had enough of queueing for one day, we went in the other entrance. With glasses of sparkling wine in hand, we surveyed our fellow dressed-up guests who were looking in equal measure dashingly cool and slightly uncomfortable.
After some amusing words from the captain, it was time for dinner. Having missed the proper sit-down dinner the evening before, this was our first time finding out what table we had. We were bracing ourselves for an evening of being sociable, chit-chatting with whoever we’d been seated with. Your table assignment was the same for the whole week, so you’d better get on well with your tablemates. If you’re stuck with a bunch of obnoxious Brexiteers, tough luck; you just have to suck it up. Much like Brexit.
We were shown to our table, which was …a table for two! Oh, the relief! Even better, we were sitting quite close to the table of ballet dancers. From our table, Jessica could creepily stalk them, and observe them behaving just like mere mortals.
We settled in for a thoroughly enjoyable meal. I opted for an array of pale-coloured foods: Cullen skink, followed by seared scallops, accompanied by a Chablis Premier Cru. All this while wearing a bow tie, to the sounds of a string quartet. It felt like peak Titanic.
After dinner, we had a nightcap in the elegant Chart Room bar before calling it a night.
We were woken early by the ship’s horn. This wasn’t the seven-short-and-one-long blast that would signal an emergency. This was more like the sustained booming of a foghorn. In fact, it effectively was a foghorn, because we were in fog.
Below us was the undersea mountain range of the Maxwell Fracture Zone. Outside was a thick Atlantic fog. And inside, we were nursing some slightly sore heads from the previous evening’s intake of wine.
But as a nice bonus, we had an extra hour of sleep. As long as the ship is sailing west, the clocks get put back by an hour every night. Slowly but surely, we’ll get on New York time. Sure beats jetlag.
After a slow start, we sauntered downstairs for some breakfast and a decent coffee. Then, to blow out the cobwebs, we walked a circuit of the promenade deck, thereby swapping out bed head for deck head.
It was then time for Jessica and me to briefly part ways. She went to watch the ballet dancers in their morning practice. I went to a lecture by Charlie Barclay from the Royal Astronomical Society, and most edifying it was too (I wonder if I can convince him to come down to give a talk at Brighton Astro sometime?).
After the lecture was done, I tracked down Jessica in the theatre, where she was enraptured by the dancers doing their company class. We stayed there as it segued into the dancers doing a dress rehearsal for their upcoming performance. It was fascinating, not least because it was clear that the dancers were having to cope with being on a slightly swaying moving vessel. That got me wondering: has ballet ever been performed on a ship before? For all I know, it might have been a common entertainment back in the golden age of ocean liners.
We slipped out of the dress rehearsal when hunger got the better of us, and we managed to grab a late lunch right before the buffet closed. After that, we decided it was time to check out the dog kennels up on the twelfth deck. There are 24 dogs travelling on the ship. They are all good dogs. We met Dillinger, a good dog on his way to a new life in Vancouver. Poor Dillinger was struggling with the circumstances of the voyage. But it’s better than being in the cargo hold of an airplane.
While we were up there on the top of the ship, we took a walk around the observation deck right above the bridge. The wind made that quite a tricky perambulation.
The rest of our day was quite relaxed. We did the pub quiz again. We got exactly the same score as we did the day before. We had a nice dinner, although this time a tuxedo was not required (but a jacket still was). Lamb for me; beef for Jessica; a bottle of Gigondas for both of us.
After dinner, we retired to our room, putting our clocks and watches back an hour before climbing into bed.
After a good night’s sleep, we were sauntering towards breakfast when a ship’s announcement was made. This is unusual. Ship’s announcements usually happen at noon, when the captain gives us an update on the journey and our position.
This announcement was dance-related. Contradicting the listed 5pm time, sign-ups for the next ballet workshops would be happening at 9am …which was in 10 minutes’ time. Registration was on deck two. There we were, examining the breakfast options on deck seven. Cue a frantic rush down the stairwells and across the ship, not helped by me confusing our relative position to fore and aft. But we made it. Jessica got in line, and she was able to register for the workshop she wanted. Crisis averted.
We made our way back up to breakfast, and our daily dose of decent coffee. Then it was time for a lecture that was equally fascinating for me and Jessica. It was Physics En Pointe by Dr. Merritt Moore, ballet dancer and quantum physicist. This was a scene-setting talk, with her describing her life’s journey so far. She’ll be giving more talks throughout the voyage, so I’m hoping for some juicy tales of quantum entanglement (she works in quantum optics, generating entangled photons).
After that, it was time for Jessica’s first workshop. It was a general ballet technique workshop, and they weren’t messing around. I sat off to the side, with a view out on the middle of the Atlantic ocean, tinkering with some code for The Session, while Jessica and the other students were put through their paces.
Then it was time to briefly part ways again. While Jessica went to watch the ballet dancers doing their company class, I was once again attending a lecture by Charles Barclay of the Royal Astronomical Society. This time it was archaeoastronomy …or maybe it was astroarcheology. Either way, it was about how astronomical knowledge was passed on in pre-writing cultures, with a particular emphasis on neolithic sites like Avebury.
When the lecture was done, I rejoined Jessica and we watched the dancers finish their company class. Then it was time for lunch. We ate from the buffet, but deliberately avoided the heavier items, opting for a relatively light salad and sushi combo. This good deed would later be completely undone with a late afternoon cake snack.
We went to one more lecture. Three in one day! It really is like being at a conference. This one, by John Cooper, was on the Elizabethan settlers of Roanoke Island. So in one day, I managed to get a dose of history, science, and culture.
With the day’s workshops and lectures done, it was once again time to put on our best garb for the evening’s gala dinner. All tux’d up, I escorted Jessica downstairs. Tonight was the premiere of the ballet performance. But before that, we wandered around drinking champagne and looking fabulous. I even sat at an otherwise empty blackjack table and promptly lost some money. I was a rubbish gambler, but—and this is important—I was a rubbish gambler wearing a tuxedo.
We got good seats for the ballet and settled in for an hour’s entertainment. There were six pieces, mostly classical. Some Swan Lake, some Nutcracker, and some Le Corsaire. But there was also something more modern in there—a magnificent performance from Akram Khan’s Dust. We had been to see Dust at Sadlers Wells, but I had forgotten quite how powerful it is.
After the performance, we had a quick cocktail, and then dinner. The sommelier is getting chattier and chattier with us each evening. I think he approves of our wine choices. This time, we left the vineyards of France, opting for a Pinot Noir from Central Otago.
After one or two nightcaps, we went back to our cabin and before crashing out, we set our clocks back an hour.
We woke to another foggy morning. The Queen Mary 2 was now sailing through the shallower waters of the Grand Banks of Newfoundland. Closer and closer to North America.
This would be my fifth day with virtually no internet access. I could buy WiFi internet access at exorbitant satellite prices, but I hadn’t felt any need to do that. I could also get a maritime mobile phone signal—very slow and very expensive.
I’ve been keeping my phone in airplane mode. Once a day, I connect to the mobile network and check just one website— thesession.org—just to make sure nothing’s on fire there. Fortunately, because I made the site, I know that the data transfer will be minimal. Each page of HTML is between 30K and 90K. There are no images to speak of. And because I’ve got the site’s service worker installed on my phone, I know that CSS and JavaScript is coming straight from a cache.
I’m not missing Twitter. I’m certainly not missing email. The only thing that took some getting used to was not being able to look things up. On the first few days of the crossing, both Jessica and I found ourselves reaching for our phones to look up something about ships or ballet or history …only to remember that we were enveloped in a fog of analogue ignorance, with no sign of terra firma digitalis.
It makes the daily quiz quite challenging. Every morning, twenty questions are listed on sheets of paper that appear at the entrance to the library. This library, by the way, is the largest at sea. As Jessica noted, you can tell a lot about the on-board priorities when the ship’s library is larger than the ship’s casino.
Answers to the quiz are to be handed in by 4pm. In the event of a tie, the team who hands in their answers earliest wins. You’re not supposed to use the internet, but you are positively encouraged to look up answers in the library. Jessica and I have been enjoying this old-fashioned investigative challenge.
With breakfast done before 9am, we had a good hour to spend in the library researching answers to the day’s quiz before Jessica needed to be at her 10am ballet workshop. Jessica got started with the research, but I quickly nipped downstairs to grab a couple of tickets for the planetarium show later that day.
Tickets for the planetarium shows are released every morning at 9am. I sauntered downstairs and arrived at the designated ticket-release location a few minutes before nine, where I waited for someone to put the tickets out. When no tickets appeared five minutes after nine, I wasn’t too worried. But when there were still no tickets at ten past nine, I grew concerned. By quarter past nine, I was getting a bit miffed. Had someone forgotten their planetarium ticket duties?
I found a crewmember at a nearby desk and asked if anyone was going to put out planetarium tickets. No, I was told. The tickets all went shortly after 9am. But I’ve been here since before 9am, I said! Then it dawned on me. The ship’s clocks didn’t go back last night after all. We just assumed they did, and dutifully changed our watches and phones accordingly.
Oh, crap—Jessica’s workshop! I raced back up five decks to the library where Jessica was perusing reference books at her leisure. I told her the bad news. We dashed down to the workshop ballroom anyway, but of course the class was now well underway. After all the frantic dashing and patient queueing that Jessica did yesterday to secure her place in the workshop! Our plans for the day were undone by our being too habitual with our timepieces. No ballet workshop. No planetarium show. I felt like such an idiot.
Well, we still had a full day of activities. There was a talk with ballet dancer James Streeter (during which we found out that the captain had deployed all the ship’s stabilisers during the previous evening’s performance). We once again watched the ballet dancers doing their company class for an hour and a half. We went for afternoon tea, complete with string quartet and beautiful view out on the ocean, now mercifully free of fog.
We attended another astronomy lecture, this time on eclipses. But right before the lecture was about to begin, there was a ship-wide announcement. It wasn’t midday, so this had to be something unusual. The captain informed us that a passenger was seriously ill, and the Canadian coastguard was going to attempt a rescue. The ship was diverting closer to Newfoundland to get in helicopter range. The helicopter wouldn’t be landing, but instead attempting a tricky airlift in about twenty minutes time. And so we were told to literally clear the decks. I assume the rescue was successful, and I hope the patient recovers.
After that exciting interlude, things returned to normal. The lecture on eclipses was great, focusing in particular on the magnificent 2017 solar eclipse across America.
It’s funny—Jessica and I are on this crossing because it was a fortunate convergence of ballet and being on a ship. And in 2017 we were in Sun Valley, Idaho because of a fortunate convergence of ballet and experiencing a total eclipse of the sun.
I’m starting to sense a theme here.
Anyway, after all the day’s dancing and talks were done, we sat down to dinner, where Jessica could once again surreptitiously spy on the dancers at a nearby table. We cemented our bond with the sommelier by ordering a bottle of the excellent Lebanese Château Musar.
When we got back to our room, there was a note waiting for us. It was an invitation for Jessica to take part in the next day’s ballet workshop! And, looking at the schedule for the next day, there were going to be repeats of the planetarium shows we missed today. All’s well that ends well.
Before going to bed, we did not set our clocks back.
This morning was balletastic:
The workshop was quite something. Jennie Harrington—who retired from dancing with Dust—took the 30 or so attendees through some of the moves from Akram Khan’s masterpiece. It looked great!
While all this was happening inside the ship, the weather outside was warming up. As we travel further south, the atmosphere is getting balmier. I spent an hour out on a deckchair, dozing and reading.
At one point, a large aircraft buzzed us—the Canadian coastguard perhaps? We can’t be that far from land. I think we’re still in international waters, but these waters have a Canadian accent.
After soaking up the salty sea air out on the bright deck, I entered the darkness of the planetarium, having successfully obtained tickets that morning by not having my watch on a different time to the rest of the ship.
That evening, there was a gala dinner with a 1920s theme. Jessica really looked the part—like a real flapper. I didn’t really make an effort. I just wore my tuxedo again. It was really fun wandering the ship and seeing all the ornate outfits, especially during the big band dance after dinner. I felt like I was in a photo on the wall of the Overlook Hotel.
Today was the last full day of the voyage. Tomorrow we disembark.
We had a relaxed day, with the usual activities: a lecture or two; sitting in on the ballet company class.
Instead of getting a buffet lunch, we decided to do a sit-down lunch in the restaurant. That meant sitting at a table with other people, which could’ve been awkward, but turned out to be fine. But now that we’ve done the small talk, that’s probably all our social capital used up.
The main event today was always going to be the reprise and final performance from the English National Ballet. It was an afternoon performance this time. It was as good, if not better, the second time around. Bravo!
Best of all, after the performance, Jessica got to meet James Streeter and Erina Takahashi. Their performance from Dust was amazing, and we gushed with praise. They were very gracious and generous with their time. Needless to say, Jessica was very, very happy.
Shortly before the ballet performance, the captain made another unscheduled announcement. This time it was about a mechanical issue. There was a potential fault that needed to be investigated, which required stopping the ship for a while. Good news for the ballet dancers!
Jessica and I spent some time out on the deck while the ship was stopped. It was a lot warmer out there compared to just a day or two before. It was quite humid too—that’ll help us start to acclimatise for New York.
We could tell that we were getting closer to land. There were more ships on the horizon. From the number of tankers we saw today, the ship must have passed close to a shipping lane.
We’re going to have a very early start tomorrow—although luckily the clocks will go back an hour again. So we did as much of our re-packing as we could this evening.
With the packing done, we still had some time to kill before dinner. We wandered over to the swanky Commodore Club cocktail bar at the fore of the ship. Our timing was perfect. There were two free seats positioned right by a window looking out onto the beautiful sunset we were sailing towards. The combination of ocean waves, gorgeous sunset, and very nice drinks ensured we were very relaxed when we made our way down to dinner.
At the entrance of the dining hall—and at the entrance of any food-bearing establishment on board—there are automatic hand sanitiser dispensers. And just in case the automated solution isn’t enough, there’s also a person standing there with a bottle of hand sanitiser, catching your eye and just daring you to refuse an anti-bacterial benediction. As the line of smartly dressed guests enters the restaurant, this dutiful dispenser of cleanliness anoints the hands of each one; a priest of hygiene delivering a slightly sticky sacrament.
The paranoia is justified. A ship is a potential petri dish at sea. In my hometown of Cobh in Ireland, the old cemetery is filled with the bodies of foreign sailors whose ships were quarantined in the harbour at the first sign of cholera or smallpox. While those diseases aren’t likely to show up on the Queen Mary 2, if norovirus were to break out on the ship, it could potentially spread quickly. Hence the war on hand-based microbes.
Maybe it’s because I’ve just finished reading Ed Yong’s excellent book I Contain Multitudes, but I can’t help but wonder about our microbiomes on board this ship. Given enough time, would the microbiomes of the passengers begin to sync up? Maybe on a longer voyage, but this crossing almost certainly doesn’t afford enough time for gut synchronisation. This crossing is almost done.
Jessica and I got up at 4:15am. This is an extremely unusual occurrence for us. But we were about to experience something very out of the ordinary.
We dressed, looked unsuccessfully for coffee, and made our way on to the observation deck at the top of the ship. Land ho! The lights of New Jersey were shining off the port side of the ship. The lights of Long Island were shining off the starboard side. And dead ahead was the string of lights marking the Verrazano-Narrows Bridge.
The Queen Mary 2 was deliberately designed to pass under this bridge …just. The bridge has a clearance of 228 feet. The Queen Mary 2 is 236.2 feet, keel to funnel. That’s a difference of just 8.2 feet. Believe me, that doesn’t look like much when you’re on the top deck of the ship, standing right by the tallest mast.
The distant glow of New York was matched by the more localised glow of mobile phone screens on the deck. Passengers took photos constantly. Sometimes they took photos with flash, demonstrating a fundamental misunderstanding of how you photograph distant objects.
The distant object that everyone was taking pictures of was getting less and less distant. The Statue of Liberty was coming up on our port side.
I probably should’ve felt more of a stirring at the sight of this iconic harbour sculpture. The familiarity of its image might have dulled my appreciation. But not far from the statue was a dark area, one of the few pieces of land without lights. This was Ellis Island. If the Statue of Liberty was a symbol of welcome for your tired, your poor, your huddled masses yearning to breathe free, then Ellis Island was where the immigration rubber met the administrative road. This was where countless Irish migrants first entered the United States of America, bringing with them their songs, their stories, and their unhealthy appreciation for potatoes.
Before long, the sun was rising and the Queen Mary 2 was parallel parking at the Red Hook terminal in Brooklyn. We went back belowdecks and gathered our bags from our room. Rather than avail of baggage assistance—which would require us to wait a few hours before disembarking—we opted for “self help” disembarkation. Shortly after 7am, our time on board the Queen Mary 2 was at an end. We were in the first group of passengers off the ship, and we sailed through customs and immigration.
Within moments of being back on dry land, we were in a cab heading for our hotel in Tribeca. The cab driver took us over the Brooklyn Bridge, explaining along the way how a cash payment would really be better for everyone in this arrangement. I didn’t have many American dollars, but after a bit of currency haggling, we agreed that I could give him the last of the Canadian dollars I had in my wallet from my recent trip to Vancouver. He’s got family in Canada, so this is a win-win situation.
It being a Sunday morning, there was no traffic to speak of. We were at our hotel in no time. I assumed we wouldn’t be able to check in for hours, but at least we’d be able to leave our bags there. I was pleasantly surprised when I was told that they had a room available! We checked in, dropped our bags, and promptly went in search of coffee and breakfast. We were tired, sure, but we had no jetlag. That felt good.
I connected to the hotel’s WiFi and went online for the first time in eight days. I had a lot of spam to delete, mostly about cryptocurrencies. I was back in the 21st century.
After a week at sea, where the empty horizon was visible in all directions, I was now in a teeming mass of human habitation where distant horizons are rare indeed. After New York, I’ll be heading to Saint Augustine in Florida, then Chicago, and finally Boston. My arrival into Manhattan marks the beginning of this two week American odyssey. But this also marks the end of my voyage from Southampton to New York, and with it, this passenger’s log.
Admiring @KelliAnderson’s graphic design work in @LoxPopuli.
Revisiting the carrier hotel at 32 Avenue Of The Americas.
https://adactio.com/journal/11989
Telephone Wires and Radio Unite to Make Neighbors of Nations
Checked in at Russ & Daughters Café. with Jessica
Picked herring, pastrami-cured salmon, and smoked sable.
Happy 28th anniversary of the release of the World Wide Web’s codebase, made public by @TimBerners_Lee on August 19th, 1991:
https://groups.google.com/forum/#!topic/comp.sys.next.announce/avWAjISncfw
Had a thoroughly lovely evening in @DeadRabbitNYC catching up with @RobWeychert and listening to tunes from @FidDylanFoley.
Checked in at The Roxy Hotel. with Jessica
Here’s @Wordridden arriving into New York this morning, as the Queen Mary 2 sailed under the Verrazano-Narrows Bridge.
(Photobombing by @JameStreeter and @ErinaTakahashi_.)
Sailing into the sunset.
Dressed for the 1920s.
Afternoon tea.
@Wordridden on the ocean.
Watching the waves.
Ship life.
@Wordridden on the Queen Mary 2.
Ship shadows.
Ship bits.
The ship and the sky.
Cunard red.
On the deck of the Queen Mary 2.
Jessica on deck.
Setting sail.
Checked in at The Roxy Hotel. Hello, New York! — with Jessica
Going to cross the Atlantic Ocean. brb
Reading Rosewater by Tade Thompson.
Any chance of providing a direct link to the audio file on that page?
Or a podcast link? (an RSS feed with audio enclosures.)
I hate to be “that guy”, but a Soundcloud embed isn’t a podcast.
I’m going to America. But this time it’s going to be a bit different.
Here’s the backstory: I need to get to Chicago for An Event Apart in a couple of weeks. Jessica and I were talking about maybe going to Florida first to hang out with her family on the beach for a bit. We just needed to figure out the travel logistics.
Here’s the next variable to add in to the mix: Jessica is really into ballet. Like, really into ballet. She also likes boats, ships, and all things nautical.
Those two things are normally unrelated, but then a while back, Jessica tweeted this:
OMG @ENBallet on a SHIP crossing the ATLANTIC.
I chuckled at that, and almost immediately dismissed it as being something from another world. But then I looked at the dates, and wouldn’t you know it, it would work out perfectly for our planned travel to Florida and Chicago.
Sooo… we’re crossing the Atlantic ocean on the Queen Mary 2. With ballet dancers.
It’s not a cruise. It’s a crossing:
The first rule about traveling between America and England aboard the Queen Mary 2, the flagship of the Cunard Line and the world’s largest ocean liner, is to never refer to your adventure as a cruise. You are, it is understood, making a crossing. The second rule is to refrain, when speaking to those who travel frequently on Cunard’s ships, from calling them regulars. The term of art — it is best pronounced while approximating Maggie Smith’s cut-glass accent on “Downton Abbey” — is Cunardists.
Because of the black-tie gala dinners taking place during the voyage, I am now the owner of a tuxedo. I think all this dressing up is kind of like cosplay for the class system. This should be …interesting.
By all accounts, internet connectivity is non-existent on the crossing, so I’m going to be incommunicado. Don’t bother sending me any email—I won’t see it.
We sail from Southampton tomorrow. We arrive in New York a week later.
See you on the other side!
Myself and Jessica joining in some reels and jigs.
Harry wrote a really good article all about the performance measurement Time To First Byte. Time To First Byte: What It Is and Why It Matters:
While a good TTFB doesn’t necessarily mean you will have a fast website, a bad TTFB almost certainly guarantees a slow one.
Time To First Byte has been the chink in my armour over at thesession.org, especially on the home page. Every time I ran Lighthouse, or some other performance testing tool, I’d get a high score …with some points deducted for taking too long to get that first byte from the server.
Harry’s proposed solution is to set up some Server Timing headers:
With a little bit of extra work spent implementing the Server Timing API, we can begin to measure and surface intricate timings to the front-end, allowing web developers to identify and debug potential bottlenecks previously obscured from view.
I remembered that Drew wrote an excellent article on Smashing Magazine last year called Measuring Performance With Server Timing:
The job of Server Timing is not to help you actually time activity on your server. You’ll need to do the timing yourself using whatever toolset your backend platform makes available to you. Rather, the purpose of Server Timing is to specify how those measurements can be communicated to the browser.
He even provides some PHP code, which I was able to take wholesale and drop into the codebase for thesession.org. Then I was able to put start/stop points in my code for measuring how long some operations were taking. Then I could output the results of these measurements into Server Timing headers that I could inspect in the “Network” tab of a browser’s dev tools (Chrome is particularly good for displaying Server Timing, so I used that while I was conducting this experiment).
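Just to make the mechanics concrete, here’s a rough sketch of what this looks like from the browser’s side. The metric names are made up for illustration; the real ones depend on whatever you choose to measure on your server. The Server-Timing header is a comma-separated list of metrics, and the browser exposes those same metrics to JavaScript through the Performance API:
// A response header might look something like this (hypothetical metric names):
// Server-Timing: db;dur=53.2;desc="Database queries", tmpl;dur=12.4;desc="Template rendering"
// In the browser, those metrics show up on the navigation timing entry,
// so you can log them or send them off to your analytics.
const [navigation] = performance.getEntriesByType('navigation');
if (navigation && navigation.serverTiming) {
    navigation.serverTiming.forEach( metric => {
        console.log(metric.name, metric.duration, metric.description);
    });
}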
I started with overall database requests. Sure enough, that was where most of the time in time-to-first-byte was being spent.
Then I got more granular. I put start/stop points around specific database calls. By doing this, I was able to zero in on which operations were particularly costly. Once I had done that, I had to figure out how to make the database calls go faster.
Spoiler: I did it by adding an extra index on one particular table. It’s almost always indexes, in my experience, that make the biggest difference to database performance.
I don’t know why it took me so long to get around to messing with Server Timing headers. It has paid off in spades. I wish I had done it sooner.
And now thesession.org is positively zipping along!
Feeling cute, might delete later.
There’s no sugar-coating it—AMP components are dreadfully inaccessible:
We’ve reached a point where AMP may “solve” the web’s performance issues by supercharging the web’s accessibility problem, excluding even more people from accessing the content they deserve.
Hidde gives an in-depth explanation of the Accessibility Object Model, coming soon to browsers near you:
In a way, that’s a bit like what Service Workers do for the network and Houdini for style: give developers control over something that was previously done only by the browser.
You, yes you, should come to Indie Web Camp Brighton on October 19th and 20th:
https://adactio.com/journal/15612
You can register now:
https://ti.to/adactio/indie-webcamp-brighton-2019
See you there!
This looks like a sensible approach to creating a modular architecture for a complex client-side JavaScript codebase.
I know a lot of people swear by ES6 imports, but this system worked really well for us. It gave us a simple, modular, extensible framework we can easily build on in the future.
The good folks at Sparkbox ran a survey on design systems. Here are the results, presented in a flagrantly anti-Tufte manner.
Remy has an excellent improvement on that article I linked to yesterday on using srcdoc with iframes. Rather than using srcdoc instead of src, you can use srcdoc as well as src. That way you can support older browsers too!
Harry takes a deep dive into the performance metric of “time to first byte”, or TTFB if you’re using initialisms that take as long to say as the thing they’re abbreviating.
This makes a great companion piece to Drew’s article on server timing headers.
Back at the end of May, I wrote:
We’re going to have an Indie Web Camp in Brighton on October 19th and 20th. I realise that’s quite a way off, but I’m giving you plenty of advance warning so you can block out that weekend (and plan travel if you’re coming from outside Brighton).
I hope you’ve got those dates marked in your calendar. Now it’s time for the next step: register for the event. Registration is free, but we need to know numbers in advance, so if you’re planning to come, please grab yourself a ticket there.
It’s going to be a lot of fun!
If you’ve never been to an Indie Web Camp before, you should definitely come! It’s indescribably fun and inspiring. The first day—Saturday—is a BarCamp-style day of discussions to really get the ideas flowing. Then the second day—Sunday—is all about designing, building, and making. The whole thing wraps up with demos.
Check out the previous Brighton Indie Web Camps:
See you at 68 Middle Street on Saturday, October 19th for Indie Web Camp Brighton 2019!
This is a clever use of the srcdoc attribute on iframes.
Earlier this year, at Indie Web Camp Düsseldorf, I got replies working on my own site. That is to say, I can host a reply on my site to something on another site.
The classic example is Twitter. In fact, if you look at all my replies, most of them are responding to tweets (I also syndicate these replies to Twitter so they show up there just like regular tweet replies).
I’m really, really glad I got replies working. I’ve been using this functionality quite a bit, and it feels really good to own my content this way.
At the time, I wrote:
So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.
I decided not to include them on my home page feed after all. You’ll still see them if you go to the notes section of my site, but I decided that they were overwhelming my home page a bit. They also don’t show up in my RSS feed.
I’m really happy that I’m hosting my replies, and that I’ve got URLs for all of them, but I don’t think I want to give them the same priority as blog posts, links, and regular notes.
The title is somewhat misleading—currently it’s about native lazy-loading for Chrome, which is not (yet) the web.
I’ve just been adding loading="lazy" to most of the iframes and many of the images on adactio.com, and it’s working a treat …in Chrome.
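Because the attribute is easy to feature-detect, you could choose to only load a JavaScript lazy-loading library in browsers that don’t support it natively. This is just a sketch; the library and file path here are purely illustrative:
// If the browser understands loading="lazy", the markup alone does the job.
// Otherwise, optionally pull in a lazy-loading library as a fallback.
if (!('loading' in HTMLImageElement.prototype)) {
    const script = document.createElement('script');
    script.src = '/js/lazysizes.min.js'; // hypothetical path to a library like lazysizes
    document.body.appendChild(script);
}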
At Homebrew Website Club Brighton, @orbific has sent me down a rabbit hole of nostalgia by pointing out that http://brightonbloggers.com/ is still online!
I recently wrote about a web-specific example of a general principle for choosing the right tool for the job:
JavaScript should only do what only JavaScript can do.
I was—yet again—talking about appropriateness. Use the right technology for the task at hand. Here’s the example I gave:
It feels “wrong” when a powerful client-side JavaScript framework is applied to something that could be accomplished using HTML. Making a blog that’s a single page app is over-engineering.
Surprisingly, I got some pushback on this. Šime Vidas wrote:
Based on my experience, this is not necessarily the case.
Going from server-side rendering and progressive enhancement via JS to a single-page app powered by a JS framework was an enormous reduction in complexity for me (so the opposite of over-engineering).
(Emphasis mine.) He goes on to say:
My main concerns are ease of use & maintainability. If you get those things right, you’re good and it’s not over-engineering.
There’s no doubt that maintainability is a desirable goal. And ease of use for the developer is also important …but I think they pale in comparison to ease of use for the end user.
To be fair, the specific use-case I mentioned was making a blog. And a blog is a personal thing. You can do whatever the heck you like on your own website and don’t let anyone tell you otherwise.
So I probably chose a poor example to illustrate my point. I was thinking more about when you’re making websites for a living. You’re being paid money to make something available on the web. In that situation, I strongly believe that user needs should win out over developer convenience.
As a user-centred developer, my priority is doing what’s best for end users. That’s not to say I don’t value developer convenience. I do. But I prioritise user needs over developer needs. And in any case, those two needs don’t even come into conflict most of the time.
That’s why I responded to Šime, saying:
Your main concern should be user needs—not your own.
When I talk about over-engineering, I’m speaking from the perspective of end users, not developers.
Before considering your ease of use, and maintainability, consider users first.
In fairness to Šime, he’s being very open and honest about his priorities. I admire that. I’ve seen too many developers try to provide user experience justifications for decisions made for developer convenience. Once again I recommend Alex’s excellent article, The “Developer Experience” Bait-and-Switch:
The swap is executed by implying that by making things better for developers, users will eventually benefit equivalently. The unstated agreement is that developers share all of the same goals with the same intensity as end users and even managers. This is not true.
Now I worry I wasn’t specific enough when I talked about choosing appropriate technology:
Appropriateness is something I keep coming back to when it comes to evaluating web technologies. I don’t think there are good tools and bad tools; just tools that are appropriate or inappropriate for the task at hand.
I should have made it clear that I was talking about what is appropriate or inappropriate for users. I think I made the mistake of assuming that this was obvious, and didn’t need saying. I’ll try not to make that mistake again.
There’s a whole group of tools where this point isn’t even relevant—build tools, task runners, version control …anything that never directly touches the end user; use whatever works for you. But if you’re making decisions around HTML, ARIA, CSS, or JavaScript, then appropriateness for the end user should take precedence.
If you’re in that situation—you are being paid money to make websites, and you are making technology decisions—I urge you to remember Charlie’s words: it isn’t about you.
👏
Checked in at Jolly Brewer. Wednesday night session — with Jessica
Boolean logic manifested in a Turing-complete game
Yeah, that’s something completely different that also uses the word “preload”, but it’s nothing to do with navigation preloads in service workers.
That’s not at all what a navigation preload is.
Could you explain what you mean? Using navigation preload shouldn’t use any more data (or time) than a regular navigation to a URL if there were no service worker present.
Listening to @rowanpiggott and @rosiehodgson123 sing about bees at the Brighton Folk Club.
Checked in at The Lord Nelson Inn. Brighton Folk Club — with Jessica
Here it is!
Yes, indeed! 6pm at the @Clearleft studio in @68MiddleSt.
Bayesian analysis vs. statistical significance, clearly explained.
Old technology seldom just goes away. Whiteboards and LED screens join chalk blackboards, but don’t eliminate them. Landline phones get scarce, but not phones. Film cameras become rarities, but not cameras. Typewriters disappear, but not typing. And the technologies that seem to be the most outclassed may come back as the cult objects of aficionados—the vinyl record, for example. All this is to say that no one can tell us what will be obsolete in fifty years, but probably a lot less will be obsolete than we think.
The lunar landing was not a scientific announcement or a political press conference; it was a performance, a literal space opera, a Wagnerian Gesamtkunstwerk that brought together the efforts of more than 400,000 people, performed before an audience of some 650 million. It was a victory, as Armstrong immediately recognized, not of Western democratic capitalism over Soviet tyranny, or of America over the rest of the world, but for humanity. It belongs to the United States no more than Michelangelo does to Italy or Machu Picchu to Peru.
This is about designing forms that everyone can use and complete as quickly as possible. Because nobody actually wants to use your form. They just want the outcome of having used it.
Can’t wait for this album to drop.
I was chatting with Rachel at work the other day about conversational forms, and I mentioned that I kicked that whole thing off with the mad libs style form on Huffduffer. Here’s the research that Luke later did on whether this style of form could increase conversion.
Books in the public domain, lovingly designed and typeset, available in multiple formats for free. Great works of fiction from Austen, Conrad, Stevenson, Wells, Hardy, Doyle, and Dickens, along with classics of non-fiction like Darwin’s The Origin of Species and Shackleton’s South!
Seamful design involves deliberately revealing seams to users, and taking advantage of features usually considered as negative or problematic.
There’s a feature in service workers called navigation preloads. It’s relatively recent, so it isn’t supported in every browser, but it’s still well worth using.
Here’s the problem it solves…
If someone makes a return visit to your site, and the service worker you installed on their machine isn’t active yet, the service worker boots up, and then executes its instructions. If those instructions say “fetch the page from the network”, then you’re basically telling the browser to do what it would’ve done anyway if there were no service worker installed. The only difference is that there’s been a slight delay because the service worker had to boot up first.
It’s not a massive performance hit, but it’s still a bit annoying. It would be better if the service worker could boot up and still be requesting the page at the same time, like it would do if no service worker were present. That’s where navigation preloads come in.
Navigation preloads—like the name suggests—are only initiated when someone navigates to a URL on your site, either by following a link, or a bookmark, or by typing a URL directly into a browser. Navigation preloads don’t apply to requests made by a web page for things like images, style sheets, and scripts. By the time a request is made for one of those, the service worker is already up and running.
To enable navigation preloads, call the enable() method on registration.navigationPreload during the activate event in your service worker script. But first do a little feature detection to make sure registration.navigationPreload exists in this browser:
if (registration.navigationPreload) {
    addEventListener('activate', activateEvent => {
        activateEvent.waitUntil(
            registration.navigationPreload.enable()
        );
    });
}
If you’ve already got event listeners on the activate event, that’s absolutely fine: addEventListener isn’t exclusive—you can use it to assign multiple tasks to the same event.
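As a minimal illustration (this second listener is hypothetical and has nothing to do with navigation preloads), another activate listener can happily sit alongside the one above, doing its own housekeeping:
// A separate activate listener, for example to delete out-of-date caches.
// The cache name is made up; use whatever naming your service worker already has.
addEventListener('activate', activateEvent => {
    activateEvent.waitUntil(
        caches.keys().then( keys => {
            return Promise.all(
                keys.filter( key => key !== 'currentCacheVersion' ).map( key => caches.delete(key) )
            );
        })
    );
});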
Now you need to make use of navigation preloads when you’re responding to fetch events. So if your strategy is to look in the cache first, there’s probably no point enabling navigation preloads. But if your default strategy is to fetch a page from the network, this will help.
Let’s say your current strategy for handling page requests looks like this:
addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                // maybe cache this response for later here.
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match(request)
                .then( responseFromCache => {
                    return responseFromCache || caches.match('/offline');
                });
            })
        );
    }
});
That’s a fairly standard strategy: try the network first; if that doesn’t work, try the cache; as a last resort, show an offline page.
It’s that first step (“try the network first”) that can benefit from navigation preloads. If a preload request is already in flight, you’ll want to use that instead of firing off a new fetch request. Otherwise you’re making two requests for the same file.
To find out if a preload request is underway, you can check for the existence of the preloadResponse promise, which will be made available as a property of the fetch event you’re handling:
fetchEvent.preloadResponse
If that exists, you’ll want to use it instead of fetch(request).
if (fetchEvent.preloadResponse) {
    // do something with fetchEvent.preloadResponse
} else {
    // do something with fetch(request)
}
You could structure your code like this:
addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        if (fetchEvent.preloadResponse) {
            fetchEvent.respondWith(
                fetchEvent.preloadResponse
                .then( responseFromPreload => {
                    // maybe cache this response for later here.
                    return responseFromPreload;
                })
                .catch( preloadError => {
                    return caches.match(request)
                    .then( responseFromCache => {
                        return responseFromCache || caches.match('/offline');
                    });
                })
            );
        } else {
            fetchEvent.respondWith(
                fetch(request)
                .then( responseFromFetch => {
                    // maybe cache this response for later here.
                    return responseFromFetch;
                })
                .catch( fetchError => {
                    return caches.match(request)
                    .then( responseFromCache => {
                        return responseFromCache || caches.match('/offline');
                    });
                })
            );
        }
    }
});
But that’s not very DRY. Your logic is identical, regardless of whether the response is coming from fetch(request) or from fetchEvent.preloadResponse. It would be better if you could minimise the amount of duplication.
One way of doing that is to abstract away the promise you’re going to use into a variable. Let’s call it retrieve. If a preload is underway, we’ll assign it to that variable:
let retrieve;
if (fetchEvent.preloadResponse) {
    retrieve = fetchEvent.preloadResponse;
}
If there is no preload happening (or this browser doesn’t support it), assign a regular fetch request to the retrieve variable:
let retrieve;
if (fetchEvent.preloadResponse) {
    retrieve = fetchEvent.preloadResponse;
} else {
    retrieve = fetch(request);
}
If you like, you can squash that into a ternary operator:
const retrieve = fetchEvent.preloadResponse ? fetchEvent.preloadResponse : fetch(request);
Use whichever syntax you find more readable.
Now you can apply the same logic, regardless of whether retrieve is a navigation preload or a regular fetch request:
addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        const retrieve = fetchEvent.preloadResponse ? fetchEvent.preloadResponse : fetch(request);
        fetchEvent.respondWith(
            retrieve
            .then( responseFromRetrieve => {
                // maybe cache this response for later here.
                return responseFromRetrieve;
            })
            .catch( fetchError => {
                return caches.match(request)
                .then( responseFromCache => {
                    return responseFromCache || caches.match('/offline');
                });
            })
        );
    }
});
I think that’s the least invasive way to update your existing service worker script to take advantage of navigation preloads.
Like I said, navigation preloads can give a bit of a performance boost if you’re using a network-first strategy. That’s what I’m doing here on adactio.com and on thesession.org, so I’ve updated their service workers to take advantage of navigation preloads. But on Resilient Web Design, which uses a cache-first strategy, there wouldn’t be much point enabling navigation preloads.
Jeff Posnick made this point in his write-up of bringing service workers to Google search:
Adding a service worker to your web app means inserting an additional piece of JavaScript that needs to be loaded and executed before your web app gets responses to its requests. If those responses end up coming from a local cache rather than from the network, then the overhead of running the service worker is usually negligible in comparison to the performance win from going cache-first. But if you know that your service worker always has to consult the network when handling navigation requests, using navigation preload is a crucial performance win.
Oh, and those browsers that don’t yet support navigation preloads? No problem. It’s a progressive enhancement. Everything still works just like it did before. And having a service worker on your site in the first place is itself a progressive enhancement. So enabling navigation preloads is like a progressive enhancement within a progressive enhancement. It’s progressive enhancements all the way down!
By the way, if all of this service worker stuff sounds like gibberish, but you wish you understood it, I think my book, Going Offline, will prove quite valuable.
Yes, please! I’d much rather link to your writing on your own website.
Also: writing on your own website is hella fun!
In elevating frontend to the land of Serious Code we have not just made things incredibly over-engineered but we have also set fire to all the ladders that we used to get up here in the first place.
I love React. I love how server side rendering React apps is trivial because it all compiles down to vanilla HTML rather than web components, effectively turning it into a kickass template engine that can come alive. I love the way you can very effectively still do progressive enhancement by using completely semantic markup and then letting hydration do more to it.
I also hate React. I hate React because these behaviours are not defaults. React is not gonna warn you if you make a form using divs and unlabelled textboxes and send the whole thing to a server. I hate React because CSS-in-JS approaches by default encourage you to write completely self contained one off components rather than trying to build a website UI up as a whole. I hate the way server side rendering and progressive enhancement are not defaults, but rather things you have to go out of your way to do.
An absolutely brilliant post by Laura on how the priorities baked into JavaScript tools like React are really out of whack. They’ll make sure your behind-the-scenes code is super clean, but not give a rat’s ass for the quality of the output that users have to interact with.
And if you want to adjust the front-end code, you’ve got to set up all this tooling just to change a div to a button. That’s quite a barrier to entry.
In elevating frontend to the land of Serious Code we have not just made things incredibly over-engineered but we have also set fire to all the ladders that we used to get up here in the first place.
AMEN!
I love React because it lets me do my best work faster and more easily. I hate React because the culture around it more than the library itself actively prevents other people from doing their best work.
This’ll be good—the inside story of the marvelous Zooniverse project as told by Chris Lintott. I’m looking forward to getting my hands on a copy of this book when it comes out in a couple of months.
Chris broke both his arms just to avoid speaking at the JAMstack conference in London. Seems a bit extreme to me.
Anyway, to make up for not being there, he made a website of his talk. It’s good stuff, tackling the split.
It’s cool to see the tech around our job evolve to the point that we can reach our arms around the whole thing. It’s worthy of some concern when the complication of web technology feels like it’s raising the barrier to entry.