
Tuesday, September 3rd, 2019

Mark Zuckerberg Is a Slumlord

An interesting comparison between Facebook and tenements. Cram everybody together into one social network and the online equivalents of cholera and typhoid soon emerge.

The airless, lightless confines of these networks have a worrying tendency to amplify the most extreme content that takes root, namely that of racists, xenophobes, and conspiracists (which, ironically, includes anti-vaxxers).

Tuesday, August 27th, 2019

Making Research Count by Cyd Harrell

The brilliant Cyd Harrell is opening up day two of An Event Apart in Chicago. I’m going to attempt to liveblog her talk on making research count…

Research gets done …and then sits in a report, gathering dust.

Research matters. But how do we make it count? We need allies. Maybe we need more money. Perhaps we need more participation from people not on the product team.

If you’re doing real research on a schedule, sharing it on a regular basis, making people’s eyes light up …then you’ve won!

Research counts when it answers questions that people care about. But you probably don’t want to directly ask “Hey, what questions do you want answered?”

Research can explain oddities in analytics, weird feedback from customers, unexpected uses of products, and strange hunches (not just your own).

Curious people with power are the most useful ones to influence. Not just hierarchical power. Engineers often have a lot of power. So ask, “Who is the most curious engineer, and how can I drag them out on a research session with me?”

At 18F, Cyd found that a lot of the nodes of power were people in the mid level of the organisation who had been there a while—they know a lot of people up and down the chain. If you can get one of those people excited about research, they can spread it.

Open up your practice. Demystify it. Put as much effort into communicating as into practicing. Create opportunities for people to ask questions and learn.

You can think about communities of practice in the obvious way: people who do similar things to us, and other people who make design decisions. But really, everyone in the organisation is affected by design decisions.

Cyd likes to do office hours. People can come by and ask questions. You could open a Slack channel. You can run brown bag lunches to train people in basic user research techniques. In more conventional organisations, a newsletter is a surprisingly effective tool for sharing the latest findings from research. And use your walls to show work in progress.

Research counts when people can see it for themselves—not just when it’s reported from afar. Ask yourself: who in your organisation is disconnected from their user? It’s difficult for people to maintain their motivation in that position.

When someone has been in the field with you, the data doesn’t have to be explained.

Whoever’s curious. Whoever’s disconnected. Invite them along. Show them what you’re doing.

Think about the qualities of a good invitation (for a party, say). Make the rules clear. Make sure they want to come back. Design the experience of observing research. Make sure everyone has tools. Give everyone a responsibility. Be like Willy Wonka—he gave clear rules to the invited guests. And sure, things didn’t go great when people broke the rules, but at the end, everyone still went home with the truckload of chocolate they were promised.

People who get to ask a question buy in to the results. Those people feel a sense of ownership for the research.

Research counts when methods fit the question. Think about what the right question is and how you might go about answering it.

You can mix your methods. Interviews. Diary studies. Card sorting. Shadowing. You can ground the user research in competitor analysis.

Back in 2008, Cyd was contacted by a company who wanted to know: how do people really use phones in their cars? Cyd’s team would ride along with people, interviewing them, observing them, taking pictures and video.

Later at the federal government, Cyd was asked: what are the best practices for government digital transformation? How to answer that? It’s so broad! Interviews? Who knows what?

They refined the question: what makes modern digital practices stick within a government entity? They looked at what worked when companies were going online, to see if there was anything that government could learn from. Then they created a set of really focused interview questions. What does digital transformation mean? How do you know when you’re done? What are the biggest obstacles to this work? How do you make changes last?

They used a technique called cluster recruiting to figure out who else to talk to (by asking participants who else they should be talking to).

There is no one research method that will always work for you. Cutting the right corners at the right time lets you be fast and cheap. Cyd’s bare-bones research kit costs about $20: a notebook, a pen, a consent form, and the price of a cup of coffee. She also created a quick score sheet for when she’s not in a position to have research transcribed.

Always label your assumptions before beginning your research. Maybe you’re assuming that something is a frustrating experience that needs fixing, but it might emerge that it doesn’t need fixing—great! You’ve just saved a whole lotta money.

Research counts when researchers tell the story well. Synthesis works best as a conversational practice. It’s hard to do by yourself. You start telling stories when you come back from the field (sometimes it starts when you’re still out in the field, talking about the most interesting observations).

Miller’s Law is a great conceptual framework:

To understand what another person is saying, you must assume that it is true and try to imagine what it could be true of.

You’re probably familiar with the “five whys”. What about the “five ways”? If people talk about something five different ways, it’s virtually certain that one of them will be an apt metaphor. So ask “Can you say that in a different way?” five times.

Spend as much time on communicating outcomes as you did on executing the work.

After research, play back how many people you spoke to, the most valuable insight you gained, the themes that are emerging. Describe the question you wanted to answer, what answers you got, and what you’re going to do next. If you’re in an organisation that values memos, write a memo. Or you could make a video. Or you could write directly into backlog tickets. And don’t forget the wall work! GDS have wonderfully full walls in their research department.

In the end, the best tool for research is an illuminating story.

Cyd was doing research at the Bakersfield courthouse. The hypothesis was that a lot of people weren’t engaging with technology in the court system. She approached a man named Manuel who was positively quaking. He was going through a custody battle. He said, “I don’t know technology but it doesn’t scare me. I’m shaking because this paperwork just gets to me—it’s terrifying.” He said he would gladly pay for someone to help him with the paperwork. Cyd wrote a report on this story. Months later, they heard people in the organisation asking questions like “How would this help Manuel?”

Sometimes you do have to fight (nicely).

People will push back on the time spent on research—they’ll say it doesn’t fit the sprint plan. You can have a three day research plan. Day 1: write scripts. Day 2: go to the users and talk to them. Day 3: play it back. People on a project spend more time than that in Slack.

People will say you can’t talk to the customers. In that situation, you could talk to people who are in the same sector as your company’s customers.

People will question the return on investment for research. Do it cheaply and show the very low costs. Then people stop talking about the money and start talking about the results.

People will claim that qualitative user research is not statistically significant. That’s true. But research is something else. It answers different questions.

People will question whether a senior person needs to be involved. It is not fair to ask the intern to do all the work involved in research.

People will say you can’t always do research. But Cyd firmly believes that there’s always room for some research.

  • Make allies in customer research.
  • Find the most curious engineer on the team, go to lunch with them, and feed them the most interesting research insights.
  • Record a pain point and send a video to executives.
  • If there’s really no budget, maybe you can get away with not paying incentives, but perhaps you can provide some other swag instead.

One of the best things you can do is be there, non-judgementally, making friends. It takes time, but it works. Research is like a dandelion in flight. The more it gets out and about and takes root, the more that research counts.

Monday, August 26th, 2019

Opening up the AMP cache

I have a proposal that I think might alleviate some of the animosity around Google AMP. You can jump straight to the proposal or get some of the back story first…

The AMP format

Google AMP is exactly the kind of framework I’d like to get behind. Unlike most front-end frameworks, its components take a declarative approach—no knowledge of JavaScript required. I think Lea’s excellent Mavo is the only other major framework that takes this inclusive approach. All the configuration happens in markup, and all the styling happens in CSS. Excellent!

But I cannot get behind AMP.

Instead of competing on its own merits, AMP is unfairly propped up by the search engine of its parent company, Google. That makes it very hard to evaluate whether AMP is being used on its own merits. Instead, the evidence suggests that most publishers of AMP pages are doing so because they feel they have to, rather than because they want to. That’s a real shame, because as a library of web components, AMP seems pretty good. But there’s just no way to evaluate AMP-the-format without taking into account AMP-the-ecosystem.

The AMP ecosystem

Google AMP ostensibly exists to make the web faster. Initially the focus was specifically on mobile performance, but that distinction has since fallen by the wayside. The idea is that by using AMP’s web components, your pages will be speedy. Though, as Andy Davies points out, this isn’t always the case:

This is where I get confused… https://independent.co.uk only have an AMP site yet its performance is awful from a user perspective - isn’t AMP supposed to prevent this?

See also: Google AMP lowered our page speed, and there’s no choice but to use it:

According to Google’s own Page Speed Insights audit (which Google recommends to check your performance), the AMP version of articles got an average performance score of 87. The non-AMP versions? 95.

Publishers who already have fast web pages—like The Guardian—are still compelled to make AMP versions of their stories because of the search benefits reserved for AMP. As Terence Eden reported from a meeting of the AMP advisory committee:

We heard, several times, that publishers don’t like AMP. They feel forced to use it because otherwise they don’t get into Google’s news carousel — right at the top of the search results.

Some people felt aggrieved that all the hard work they’d done to speed up their sites was for nothing.

The Google AMP team are at pains to point out that AMP is not a ranking factor in search. That’s true. But it is unfairly privileged in other ways. Only AMP pages can appear in the Top Stories carousel …which appears above any other search results. As I’ve said before:

Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.

From A letter about Google AMP:

Content that “opts in” to AMP and the associated hosting within Google’s domain is granted preferential search promotion, including (for news articles) a position above all other results.

That’s not the only way that AMP pages get preferential treatment. It turns out that the secret to the speed of AMP pages isn’t the web components. It’s the prerendering.

The AMP cache

If you’ve ever seen an AMP page in a list of search results, you’ll have noticed the little lightning icon. If you’ve ever tapped on that search result, you’ll have noticed that the page loads blazingly fast!

That’s not down to AMP-the-format, alas. That’s down to the fact that the page has been prerendered by Google before you even went to it. If any page were prerendered that way, it would load blazingly fast. But currently, this privilege is reserved for AMP pages only.

If, after tapping through to that AMP page, you looked at the address bar of your browser, you might have noticed something odd. Even though you might have thought you were visiting The Washington Post, or The New York Times, the URL of the (blazingly fast) page you’re looking at is still under Google’s domain. That’s because Google hosts any AMP pages that it prerenders.

Google calls this “the AMP cache”, but it would be better described as “AMP hosting”. The web page sent down the wire is hosted on Google’s domain.

Here’s that AMP letter again:

When a user navigates from Google to a piece of content Google has recommended, they are, unwittingly, remaining within Google’s ecosystem.

Through gritted teeth, I will refer to this as “the AMP cache”, because that’s what everyone else calls it. But make no mistake, Google is hosting—not caching—these pages.

But why host the pages on a Google domain? Why not prerender the original URLs?

Prerendering and privacy

Scott summed up the situation with AMP nicely:

The pitch I think site owners are hearing is: let us host your pages on our domain and we’ll promote them in search results AND preload them so they feel “instant.” To opt-in, build pages using this component syntax.

But perhaps we could de-couple the AMP format from the AMP cache.

That’s what Terence suggests:

My recommendation is that Google stop requiring that organisations use Google’s proprietary mark-up in order to benefit from Google’s promotion.

The AMP letter, too:

Instead of granting premium placement in search results only to AMP, provide the same perks to all pages that meet an objective, neutral performance criterion such as Speed Index.

Scott reiterates:

It’s been said before but it would be so good for the web if pages with a Lighthouse score over say, 90 could get into that top search result area, even if they’re not built using Google’s AMP framework. Feels wrong to have to rebuild/reproduce an already-fast site just for SEO.

This was also what I was calling for. But then Malte pointed out something that stumped me. Privacy.

Here’s the problem…

Let’s say Google do indeed prerender already-fast pages when they’re listed in search results. You, a search user, type something into Google. A list of results comes back. Google begins prerendering some of them. But you don’t end up clicking through to those pages. Nonetheless, the servers those pages are hosted on have received a GET request coming from a Google search. Those publishers now know that a particular (cookied?) user could have clicked through to their site. That’s very different from knowing when someone has actually arrived at a particular site.

And that’s why Google host all the AMP pages that they prerender. Given the privacy implications of prerendering non-Google URLs, I must admit that I see their point.

Still, it’s a real shame to miss out on the speed benefit of prerendering:

Prerendering AMP documents leads to substantial improvements in page load times. Page load time can be measured in different ways, but they consistently show that prerendering lets users see the content they want faster. For now, only AMP can provide the privacy preserving prerendering needed for this speed benefit.

A modest proposal

Why is Google’s AMP cache just for AMP pages? (Y’know, apart from the obvious answer that it’s in the name.)

What if Google were allowed to host non-AMP pages? Google search could then prerender those pages just like it currently does for AMP pages. There would be no privacy leaks; everything would happen on the same domain—google.com or ampproject.org or whatever—just as currently happens with AMP pages.

Don’t get me wrong: I’m not suggesting that Google should make a 1:1 model of the web just to prerender search results. I think that the implementation would need to have two important requirements:

  1. Hosting needs to be opt-in.
  2. Only fast pages should be prerendered.

Opting in

Currently, by publishing a page using the AMP format, publishers give implicit approval to Google to host that page on Google’s servers and serve up this Google-hosted version from search results. This has always struck me as being legally iffy. I’ve looked in the AMP documentation to try to find any explicit granting of hosting permission (e.g. “By linking to this JavaScript file, you hereby give Google the right to serve up our copies of your content.”), but no luck. So even with the current situation, I think a clear opt-in for hosting would be beneficial.

This could be a meta element. Maybe something like:

<meta name="caches-allowed" content="google">

This would have the nice benefit of allowing comma-separated values:

<meta name="caches-allowed" content="google, yandex">

(The name is just a strawman, by the way—I’m not suggesting that this is what the final implementation would actually look like.)

If not a meta element, then perhaps this could be part of robots.txt? Although my feeling is that this needs to happen on a document-by-document basis rather than site-wide.
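
Whatever the final mechanism looks like, the check itself could be simple. Here’s a rough sketch (not a spec, just an illustration) of the kind of test a cache might run before hosting a page, using the strawman meta element from above:

// A hypothetical check that a cache crawler might run before hosting
// a page. "caches-allowed" is the strawman name from above, and the
// function itself is made up for illustration.
function mayHost(doc, cacheName) {
  const meta = doc.querySelector('meta[name="caches-allowed"]');
  if (!meta) {
    return false; // no explicit opt-in means no hosting
  }
  return meta.content
    .split(',')
    .map(value => value.trim().toLowerCase())
    .includes(cacheName);
}

// e.g. mayHost(document, 'google') is true only if the page opts in

The important part is the default: no opt-in means no hosting.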

Many people will, quite rightly, never want Google—or anyone else—to host and serve up their content. That’s why it’s so important that this behaviour needs to be opt-in. It’s kind of appalling that the current hosting of AMP pages is opt-in-by-proxy-sort-of.

Criteria for prerendering

Which pages should be blessed with hosting and prerendering? The fast ones. That’s sorta the whole point of AMP. But right now, there’s a lot of resentment by people with already-fast websites who quite rightly feel they shouldn’t have to use the AMP format to benefit from the AMP ecosystem.

Page speed is already a ranking factor. It doesn’t seem like too much of a stretch to extend its benefits to hosting and prerendering. As mentioned above, there are already a few possible metrics to use:

  • Page Speed Index
  • Lighthouse
  • Web Page Test

Ah, but what if a page has a good score when it’s indexed, but then gets worse afterwards? Not a problem! The version of the page that’s measured is the same version of the page that gets hosted and prerendered. Google can confidently say “This page is fast!” After all, they’re the ones serving up the page.

That does raise the question of how often Google should check back with the original URL to see if it has changed/worsened/improved. The answer to that question is however long it currently takes to check back in on AMP pages:

Each time a user accesses AMP content from the cache, the content is automatically updated, and the updated version is served to the next user once the content has been cached.
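
That behaviour is essentially the stale-while-revalidate pattern: serve the copy you already have, then fetch a fresh copy in the background for next time. For comparison, here’s a minimal sketch of the same idea expressed as a service worker (the cache name is made up):

// Stale-while-revalidate in miniature: respond with the cached copy
// straight away, but update the cache in the background so the next
// request gets the fresher version.
addEventListener('fetch', event => {
  event.respondWith(
    caches.open('pages').then(cache =>
      cache.match(event.request).then(cachedResponse => {
        const networkFetch = fetch(event.request).then(networkResponse => {
          cache.put(event.request, networkResponse.clone());
          return networkResponse;
        });
        return cachedResponse || networkFetch;
      })
    )
  );
});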

Issues

This proposal does not solve the problem with the address bar. You’d still find yourself looking at a page from The Washington Post or The New York Times (or adactio.com) but seeing a completely different URL in your browser. That’s not good, for all the reasons outlined in the AMP letter.

In fact, this proposal could potentially make the situation worse. It would allow even more sites to be impersonated by Google’s URLs. Where currently only AMP pages are bad actors in terms of URL confusion, opening up the AMP cache would allow equal opportunity URL confusion.

What I’m suggesting is definitely not a long-term solution. The long-term solutions currently being investigated are technically tricky and will take quite a while to come to fruition—web packages and signed exchanges. In the meantime, what I’m proposing is a stopgap solution that’s technically a lot simpler. But it won’t solve all the problems with AMP.

This proposal solves one problem—AMP pages being unfairly privileged in search results—but does nothing to solve the other, perhaps more serious problem: the erosion of site identity.

Measuring

Currently, Google can assess whether a page should be hosted and prerendered by checking to see if it’s a valid AMP page. That test would need to be widened to include a different measurement of performance, but those measurements already exist.

I can see how this assessment might not be as quick as checking for AMP validity. That might affect whether non-AMP pages could be measured quickly enough to end up in the Top Stories carousel, which is, by its nature, time-sensitive. But search results are not necessarily as time-sensitive. Let’s start there.

Assets

Currently, AMP pages can be prerendered without fetching anything other than the markup of the AMP page itself. All the CSS is inline. There are no initial requests for other kinds of content like images. That’s because there are no img elements on the page: authors must use amp-img instead. The image itself isn’t loaded until the user is on the page.

If the AMP cache were to be opened up to non-AMP pages, then any content required for prerendering would also need to be hosted on that same domain. Otherwise, there’s privacy leakage.

This definitely introduces an extra level of complexity. Paths to assets within the markup might need to be re-written to point to the Google-hosted equivalents. There would almost certainly need to be a limit on the number of assets allowed. Though, for performance, that’s no bad thing.
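
To make that concrete, here’s a hedged sketch of the kind of rewriting that might be involved; the cache prefix is invented purely for illustration:

// A sketch of rewriting asset references to point at copies hosted on
// the cache's own domain. The prefix is not a real endpoint.
const cachePrefix = 'https://cache.example.com/asset/';

function rewriteAssetPaths(doc) {
  doc.querySelectorAll('img[src], link[rel="stylesheet"][href]').forEach(element => {
    const attribute = element.hasAttribute('src') ? 'src' : 'href';
    const originalUrl = new URL(element.getAttribute(attribute), doc.baseURI);
    element.setAttribute(attribute, cachePrefix + encodeURIComponent(originalUrl.href));
  });
}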

Make no mistake, figuring out what to do about assets—style sheets, scripts, and images—is very challenging indeed. Luckily, there are very smart people on the Google AMP team. If that brainpower were to focus on this problem, I am confident they could solve it.

Summary

  1. Prerendering of non-Google URLs is problematic for privacy reasons, so Google needs to be able to host pages in order to prerender them.
  2. Currently, that’s only done for pages using the AMP format.
  3. The AMP cache—and with it, prerendering—should be decoupled from the AMP format, and opened up to other fast web pages.

There will be technical challenges, but hopefully nothing insurmountable.

I honestly can’t see what Google have to lose here. If their goal is genuinely to reward fast pages, then opening up their AMP cache to fast non-AMP pages will actively encourage people to make fast web pages (without having to switch over to the AMP format).

I’ve deliberately kept the details vague—what the opt-in should look like; what the speed measurement should be; how to handle assets—I’m sure smarter folks than me can figure that stuff out.

I would really like to know what other people think about this proposal. Obviously, I’d love to hear from members of the Google AMP team. But I’d also love to hear from publishers. And I’d very much like to know what people in the web performance community think about this. (Write a blog post and send me a webmention.)

What am I missing here? What haven’t I thought of? What are the potential pitfalls (and are they any worse than the current acrimonious situation with Google AMP)?

I would really love it if someone with a fast website were in a position to say, “Hey Google, I’m giving you permission to host this page so that it can be prerendered.”

I would really love it if someone with a slow website could say, “Oh, shit! We’d better make our existing website faster or Google won’t host our pages for prerendering.”

And I would dearly love to finally be able to embrace AMP-the-format with a clear conscience. But as long as prerendering is joined at the hip to the AMP format, the injustice of the situation only harms the AMP project.

Google, open up the AMP cache.

Saturday, August 24th, 2019

Plaidophile: So about that AMP-script thing

Reinventing the web the long way around, in a way that gives Google even more control of it. No thanks.

Friday, August 23rd, 2019

Taking shortcuts ・ Robin Rendle

How Robin really feels about Google AMP:

Here’s my hot take on this: fuck the algorithm, fuck the impressions, and fuck the king. I would rather trade those benefits and burn my website to the ground than be under the boot and heel of some giant, uncaring corporation.

Friday, August 2nd, 2019

Seamful Design and Ubicomp Infrastructure (PDF)

Seams:

Seamful design involves deliberately revealing seams to users, and taking advantage of features usually considered as negative or problematic.

Thursday, August 1st, 2019

The Crowd and the Cosmos - Chris Lintott - Oxford University Press

This’ll be good—the inside story of the marvelous Zooniverse project as told by Chris Lintott. I’m looking forward to getting my hands on a copy of this book when it comes out in a couple of months.

Sunday, June 30th, 2019

Lights at sea

Lighthouses of the world, mapped.

Wednesday, May 29th, 2019

Cake or death: AMP and the worrying power dynamics of the web | Andrew Betts

Andrew looks at AMP from a technical, UX, and commercial perspective. It looks pretty bad in all three areas. And the common thread is the coercion being applied to publishers.

But casting the web aside and pushing a new proprietary content format (which is optional, but see coercion) seems like an extraordinarily heavy handed way to address it. It’s like saying I see you have a graze on your knee so let’s chop off and replace your whole leg. Instead, we could use the carrot of a premium search result position (as AMP has done) and make it only possible to be there if your site is fast.

He’s absolutely right about how it sounds when the AMP team proudly talk about how many publishers are adopting their framework, as if the framework were actually standing on its own merits instead of being used to blackmail publishers:

It is utterly bizarre to me, akin to a street robber that has convinced himself that people just randomly like giving him their money and has managed to forget the fact that he’s holding a gun to their head.

Thursday, May 16th, 2019

Lighthouse | Eric Bailey

What if accessibility were a ranking signal for Google search results?

Here’s a thought: what if Google put its thumb on the scale again, only this time for accessibility? What if it treated the Lighthouse accessibility score as a first-class ranking metric?

Wednesday, May 15th, 2019

A report from the AMP Advisory Committee Meeting – Terence Eden’s Blog

I completely agree with every single one of Terence’s recommendations here. The difference is that, in my case, they’re just hot takes, whereas he has actually joined the AMP Advisory Committee, joined their meetings, and listened to the concerns of actual publishers.

He finds:

  • AMP isn’t loved by publishers
  • AMP is not accessible
  • No user research
  • AMP spreads fake news
  • Signed Exchanges are not the answer

There’s also a very worrying anti-competitive move by Google Search in only showing AMP results to users of Google Chrome.

I’ve been emailing with Paul from the AMP team and I’ve told him that I honestly think that AMP’s goal should be to make itself redundant …the opposite of the direction it’s going in.

As I said in the meeting - if it were up to me, I’d go “Well, AMP was an interesting experiment. Now it is time to shut it down and take the lessons learned back through a proper standards process.”

I suspect that is unlikely to happen. Google shows no sign of dropping AMP. Mind you, I thought that about Google+ and Inbox, so who knows!

Good point!

Friday, April 12th, 2019

Google AMP lowered our page speed, and there’s no choice but to use it - unlike kinds

What happens when your AMP pages are slower than your regular pages …but you’re forced to use AMP anyway if you want to appear in the Top Stories carousel.

AMP isn’t about speed. It’s about control.

The elephant in the room here is pre-rendering: that’s why Google aren’t using page speed alone as a determining factor for what goes in the carousel.

Monday, April 8th, 2019

The Bureau of Suspended Objects

200 discarded objects from a dump in San Francisco, meticulously catalogued, researched, and documented by Jenny Odell. The result is something more revealing than most pre-planned time capsule projects …although this project may be somewhat short-lived as it’s hosted on Tumblr.

Methods - 18F Methods

A very handy collection of design exercises as used by 18F. There’s a lot of crossover here with the Clearleft toolbox.

A collection of tools to bring human-centered design into your project.

These methods are categorised by:

  1. Discover
  2. Decide
  3. Make
  4. Validate
  5. Fundamentals

Saturday, March 9th, 2019

Handing back control

An Event Apart Seattle was most excellent. This year, the AEA team are trying something different and making each event three days long. That’s a lot of mindblowing content!

What always fascinates me at events like these is the way that some themes seem to emerge, without any prior collusion between the speakers. This time, I felt that there was a strong thread of giving control directly to users:

Sarah and Margot both touched on this when talking about authenticity in brand messaging.

Margot described this in terms of vulnerability for the brand, but the kind of vulnerability that leads to trust.

Sarah talked about it in terms of respect—respecting the privacy of users, and respecting the way that they want to use your services. Call it compassion, call it empathy, or call it just good business sense, but providing these kinds of controls in an interface is an excellent long-term strategy.

In Val’s animation talk, she did a deep dive into prefers-reduced-motion, a media query that deliberately hands control back to the user.
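
The same preference can be honoured in JavaScript-driven animations too. A small sketch (not from Val’s talk, just an illustration) using matchMedia:

// Check the user's motion preference before running an animation.
const reducedMotion = window.matchMedia('(prefers-reduced-motion: reduce)');

function fadeIn(element) {
  if (reducedMotion.matches) {
    element.style.opacity = '1'; // respect the preference: skip the animation
    return;
  }
  element.animate(
    [{ opacity: 0 }, { opacity: 1 }],
    { duration: 300, easing: 'ease-out', fill: 'forwards' }
  );
}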

Even in a CSS-heavy talk like Jen’s, she took the time to explain why starting with meaningful markup is so important—it’s because you can’t control how the user will access your content. They may use tools like reader modes, or Pocket, or have web pages read aloud to them. The user has the final say, and rightly so.

In his CSS talk, Eric reminded us that a style sheet is a list of strong suggestions, not instructions.

Beth’s talk was probably the most explicit on the theme of returning control to users. She drew on examples from beyond the world of the web—from architecture, urban planning, and more—to show that the most successful systems are not imposed from the top down, but involve everyone, especially those most marginalised.

And even in my own talk on service workers, I raved about the design pattern of allowing users to save pages offline to read later. Instead of trying to guess what the user wants, give them the means to take control.
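
As a rough illustration of that pattern, a “save for offline” button needs only a few lines of script; the class name and cache name here are made up:

// The "save for offline" pattern: stash the current page in a cache
// so a service worker can serve it up later, even without a network.
document.querySelector('.save-for-offline').addEventListener('click', () => {
  caches.open('saved-pages')
    .then(cache => cache.add(window.location.href))
    .then(() => console.log('Saved for offline reading.'));
});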

I was really encouraged to see this theme emerge. Mind you, when I look at the reality of most web products, it’s easy to get discouraged. Far from providing their users with controls over their own content, Instagram won’t even let their customers have a chronological feed. And Matt recently wrote about how both Twitter and Quora are heading further and further away from giving control to their users in his piece called Optimizing for outrage.

Still, I came away from An Event Apart Seattle with a renewed determination to do my part in giving people more control over the products and services we design and develop.

I spent the first two days of the conference trying to liveblog as much as I could. I find it really focuses my attention, although it’s also quite knackering. I didn’t do too badly; I managed to cover eleven of the talks (out of the conference’s total of seventeen):

  1. Slow Design for an Anxious World by Jeffrey Zeldman
  2. Designing for Trust in an Uncertain World by Margot Bloomstein
  3. Designing for Personalities by Sarah Parmenter
  4. Generation Style by Eric Meyer
  5. Making Things Better: Redefining the Technical Possibilities of CSS by Rachel Andrew
  6. Designing Intrinsic Layouts by Jen Simmons
  7. How to Think Like a Front-End Developer by Chris Coyier
  8. From Ideation to Iteration: Design Thinking for Work and for Life by Una Kravets
  9. Move Fast and Don’t Break Things by Scott Jehl
  10. Mobile Planet by Luke Wroblewski
  11. Unsolved Problems by Beth Dean

Thursday, March 7th, 2019

Going Offline—the talk of the book

I gave a new talk at An Event Apart in Seattle yesterday morning. The talk was called Going Offline, which the eagle-eyed amongst you will recognise as the title of my most recent book, all about service workers.

I was quite nervous about this talk. It’s very different from my usual fare. Usually I have some big sweeping arc of history, and lots of pretentious ideas joined together into some kind of narrative arc. But this talk needed to be more straightforward and practical. I wasn’t sure how well I would manage that brief.

I knew from pretty early on that I was going to show—and explain—some code examples. Those were the parts I sweated over the most. I knew I’d be presenting to a mixed audience of designers, developers, and other web professionals. I couldn’t assume too much existing knowledge. At the same time, I didn’t want to teach anyone to suck eggs.

In the end, there was an overarching meta-theme to the talk, which was this: logic is more important than code. In other words, figuring out what you’re trying to accomplish (and describing it clearly) is more important than typing curly braces and semi-colons. Programming is an act of translation. Before you can translate something, you need to be able to articulate it clearly in your own language first. By emphasising that point, I hoped to make the code less overwhelming to people unfamiliar with it.

I had tested the talk with some of my Clearleft colleagues, and they gave me great feedback. But I never know until I’ve actually given a talk in front of a real conference audience whether the talk is any good or not. Now that I’ve given the talk, and received more feedback, I think I can confidently say that it’s pretty damn good.

My goal was to explain some fairly gnarly concepts—let’s face it: service workers are downright weird, and not the easiest thing to get your head around—and to leave the audience with two feelings:

  1. This is exciting, and
  2. This is something I can do today.

I deliberately left time for questions, bribing people with free copies of my book. I got some great questions, and I may incorporate some of them into future versions of this talk (conference organisers, if this sounds like the kind of talk you’d like at your event, please get in touch). Some of the points brought up in the questions were:

  • Is there some kind of wizard for creating a typical service worker script for any site? I didn’t have a direct answer to this, but I have attempted to make a minimal viable service worker that could be used for just about any site (there’s a sketch of one after this list). Mostly I encouraged the questioner to roll their sleeves up and try writing a bespoke script. I also mentioned the Workbox library, but I gave my opinion that if you’re going to spend the time to learn the library, you may as well spend the time to learn the underlying language.
  • What are some state-of-the-art progressive web apps for offline user experiences? Ooh, this one kind of stumped me. I mean, the obvious poster children for progressive web apps are things like Twitter, Instagram, and Pinterest. They’re all great but the offline experience is somewhat limited. To be honest, I think there’s more potential for great offline experiences by publishers. I especially love the pattern on personal sites like Una’s and Sara’s where people can choose to save articles offline to read later—like a bespoke Instapaper or Pocket. I’d love to see that pattern adopted by some big publications. I particularly like that it gives so much more control directly to the end user. Instead of trying to guess what kind of offline experience they want, we give them the tools to craft their own.
  • Do caches get cleaned up automatically? Great question! And the answer is mostly no—although browsers do have their own heuristics about how much space you get to play with. There’s a whole chapter in my book about being a good citizen and cleaning up your caches (the sketch after this list includes that housekeeping step), but I didn’t include that in the talk because it isn’t exactly exciting: “Hey everyone! Now we’re going to do some housekeeping—yay!”
  • Isn’t there potential for abuse here? This is related to the previous question, and it’s another great question to ask of any technology. In short, yes. Bad actors could use service workers to fill up caches unnecessarily. I’ve written about back door service workers too, although the real problem there is with iframes rather than service workers—iframes and cookies are technologies that are already being abused by bad actors, and we’re going to see more and more interventions by ethical browser makers (like Mozilla) to clamp down on those technologies …just as browsers had to clamp down on the abuse of pop-up windows in the early days of JavaScript. The cache API could become a tragedy of the commons. I liken the situation to regulation: we should self-regulate, but if we prove ourselves incapable of that, then outside regulation (by browsers) will be imposed upon us.
  • What kind of things are in the future for service workers? Excellent question! If you think about it, a service worker is kind of a conduit that gives you access to different APIs: the Cache API and the Fetch API being the main ones now. A service worker is like an airport and the APIs are like the airlines. There are other APIs that you can access through service workers. Notifications are available now on desktop and on Android, and they’ll be coming to iOS soon. Background Sync is another powerful API accessed through service workers that will get more and more browser support over time. The great thing is that you can start using these APIs today even if they aren’t universally supported. Then, over time, more and more of your users will benefit from those enhancements.
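
To make the first and third answers above a little more concrete, here’s a hedged sketch of a minimal service worker: network first, falling back to the cache, with an offline page for failed requests, plus the housekeeping step that cleans up old caches. The cache name and the offline page URL are placeholders, and this isn’t the exact script from the talk—just one way to express that logic.

// Cache an offline page on install.
const cacheName = 'v1';

addEventListener('install', event => {
  event.waitUntil(
    caches.open(cacheName).then(cache => cache.add('/offline.html'))
  );
});

// Caches don't clean themselves up, so delete any from old versions.
addEventListener('activate', event => {
  event.waitUntil(
    caches.keys().then(keys =>
      Promise.all(
        keys
          .filter(key => key !== cacheName)
          .map(key => caches.delete(key))
      )
    )
  );
});

// Try the network, fall back to the cache, fall back to the offline page.
addEventListener('fetch', event => {
  event.respondWith(
    fetch(event.request)
      .catch(() => caches.match(event.request))
      .then(response => response || caches.match('/offline.html'))
  );
});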

If you attended the talk and want to learn more about service workers, there’s my book (obvs), but I’ve also written lots of blog posts about service workers and I’ve linked to lots of resources too.

Finally, here’s a list of links to all the books, sites, and articles I referenced in my talk…

Books

Sites

Progressive Web Apps

Wednesday, March 6th, 2019

Unsolved Problems by Beth Dean

An Event Apart in Seattle continues. It’s the afternoon of day two and Beth Dean is here to give a talk called Unsolved Problems:

Technology products are being adopted faster than ever. We’ve spent a lot of time adopting new technology, but not as much time considering the social impact of doing so. This talk looks at large scale system design in the offline world, and takes lessons from them to our online work. You’ll learn how to expand your design approach from self-contained products, to considering the broader systems in which they exist.

Fun fact: An Event Apart was the first conference that Beth attended over ten years ago.

Who recognises this guy on screen? It’s Robert Stack, the creepy host of Unsolved Mysteries. It was kind of like the X-Files. The X-Files taught Beth to be a sceptic. Imagine Beth’s surprise when her job at Facebook led her to actual conspiracies. It’s been a hard year, what with Cambridge Analytica and all.

Beth’s team is focused on how people experience ads, while the whole rest of the company is focused on ads from the opposite end. She’s the Fox Mulder of the company.

Technology today has incredible reach. In recent years, we’ve seen 1:1 harm. That’s when a product negatively affects someone directly. In their book, Eric and Sara point out that Facebook is often the first company to solve these problems.

1:many harm is another use of technology. Designing in isolation isn’t new to tech. We’ve seen 1:many harm in urban planning. Brasilia is a beautiful city that nobody wants to live in. You need messy, mixed-use spaces, not a space designed for cars. Niemeyer planned for efficiency, not reality.

Eichler buildings were supposed to be egalitarian. But everything that makes these single-story homes great places to live also makes them great targets for criminals. Isolation by intentional design leads to a less safe place to live.

One of Frank Gehry’s buildings turned into a deathtrap when it was covered with snow. And in summer, the reflective material makes it impossible to sit on one side of it. His Facebook office building has some “interesting” restroom allocation, which was planned last.

Ohio had a deer overpopulation problem. So the solution they settled on was to introduce coyotes. Now there’s a coyote problem. When coyotes breed with stray dogs, they start to get aggressive and they hunt in packs. This is the cobra effect: when the solution to your problem makes the problem worse. The British government offered a bounty for cobras in India. So people bred snakes for the bounty. So they got rid of the bounty …and then all those snakes were released into the wild.

So-called “ride sharing” apps are about getting one person from point A to point B. They’re not about making getting around easier in general.

Google traffic directions don’t factor in the effect of Google giving everyone the same traffic directions.

AirBnB drives up rent …even though it started out as a way to help people who couldn’t make rent. Sounds like cobra farming.

Automating Inequality by Virginia Eubanks is an excellent book about being dropped by health insurance. An algorithm did it. By taking broken systems and automating them, we accelerate disenfranchisement.

Then there’s Facebook. Psychological warfare is not new. Radio and television have influenced elections long before the internet. Politicians changed their language to fit the medium of radio.

The internet has removed all friction that helps us behave cooperatively. Removing friction was once our goal, but it turns out that friction is sometimes useful. The internet has turned into an outrage machine.

Solving problems in the isolation of our own products ignores the broader context of society.

The Waze map reflects cities as they are, not the way someone wishes them to be.

—Noam Bardin, CEO of Waze

From bulletin boards to today’s web, the internet has always been toxic because human nature is toxic. Maybe that’s the bigger problem to solve.

We can look to other industries…

Ideo redesigned the hospital experience. People were introduced to their entire care staff on their first visit. Sloan Kettering took a similar approach. Artwork serves as wayfinding. Every room has its own bathroom. A Chicago hospital included gardens because they improve recovery.

These hospital examples all:

  • Designed for an intended outcome.
  • Met people where they were.
  • Strengthened existing support networks.

We’ve seen some bad examples from urban planning, but there are success stories too.

A person on a $30 bicycle is as important as someone in a $30,000 car, said Enrique Peñalosa.

Copenhagen once faced awful traffic congestion. Now people cycle everywhere. It’s the fastest way to get around. The city is designed for bicycles first. People rode more when it felt safer. It’s no coincidence that Copenhagen ranks as one of the most livable cities in the world.

Scandinavian prisons use a concept called restorative justice. The staff plays badminton with the inmates. They cook together. Treat people like dirt and they will act like dirt. Treat people like people and they will act like people. Recidivism rates in Norway are now way low.

  • Design for dignity and cooperation.
  • Solve for everyone in a system.
  • Policy should reflect intended outcomes.

The de Havilland Comet was made of metal. After a few blew apart at the seams, they switched away from riveted material. Airlines today develop a culture of crew resource management that encourages people to speak up.

  • Plan for every point of failure.
  • Empower everyone on a team to solve problems.
  • Adapt.

What can we do?

  • Policies affect design. We need to work more closely with policy makers.
  • Question access. Are all opinions equal? Where are computers making decisions that should involve people?
  • Forget neutrality. Technology is not neutral. Neutrality allows us to abdicate responsibility.
  • Stay a little bit paranoid. Think about what the worst case scenario might be.

Make people better curators. How might we allow people to assess the veracity of information for themselves? What if we gave people better tools to affect their overall experience, not just small customisations?

We can use what we know about people to bring out their best behaviours. We can empower people to take action instead of just outrage.

What if we designed for the good of the community instead of the success of individuals? Like the Vauban in Freiburg! It was squatted, and the city gave control to the squatters to create an eco neighbourhood with affordable housing.

We need to think about what kind of worlds we want to create. What if we made the web less like a mall and more like a public park?

These are hard problems. But we solve hard technology problems every day. We could be the first generation of builders to solve technology’s hard problems.

Tuesday, March 5th, 2019

Mobile Planet by Luke Wroblewski

It’s the afternoon of day two of An Event Apart in Seattle. The mighty green one, Luke Wroblewski, is here to deliver a talk called Mobile Planet:

With 3.5 billion active smartphones on Earth, we’re now faced with the challenges and opportunities of designing planet-scale software. Through a data-informed, big-picture walk-through of our mobile planet, Luke will dig into how people use computing devices today and how the design of our products needs to adapt to this reality. He’ll cover key issues like app on-boarding and performance in enough detail to give you clear ways to improve first time and subsequent use of your mobile apps and sites.

Luke has been working on figuring out hardware and software for years. He looks at a lot of data. The more we understand how people use technology in their daily lives, the better we can design for them.

Earth is the third planet from the sun, and the only place that we know of that harbours life. Our population is at about 7.7 billion people. There are about 5.6 billion people in our addressable market (people over 14 years old). There are already 5 billion mobile subscribers in there. That’s interesting, but which of those devices are modern smartphones? There are about 3.6 billion active smartphones. Compare that to about 1.3 billion active personal computers—the vast majority of them Windows devices (about 1.2 billion). Over the next four or five years, we’ll have about 5 billion smartphone users and a global population of 8 billion.

The point is that we can reach a significant proportion of the human species. The diversity of our species makes it challenging to design for everyone.

Let’s take a closer look at these 3.6 billion active smartphones. About 25% of them are iOS devices. 75% of them are Android. Bear in mind that these are active devices—what’s actually being used. That’s different to shipping devices. Apple ships 15% of smartphones, and Android ships 85%, but the iOS devices tend to have longer lifespans (around 2 years for Android; around 4 years for iOS).

The UK has 82% smartphone penetration. Compare that to India, where it’s 27%. There’s room to grow.

Everywhere you look, the growth of these devices has led to a shift of digital things overtaking analogue. Shopping, advertising, music, you name it. We’ve seen enough of these transitions happen, that we should be prepared for it.

So there are lots of smartphones, with basically two major operating systems. But how are people using these devices?

In the US, adults spend about 2.3 - 3.5 hours per day on their mobile devices. Let’s call it an even 3 hours. That’s a lot of time. Where does that time come from? Interestingly, as time spent on mobile devices has surged, time spent on other media has only slowly declined. So mobile is additive. It’s contributing to more time spent on the internet rather than taking it away from existing screen time.

Next question: what the hell are people doing during those 3 hours per day on smartphones? Native apps get about 169 minutes of time compared to only 11 minutes on the web. There are about 2 million native apps on Apple, and about 2 million native apps on Android. But although people have a lot of apps, people only use about half of them. Remember folks, downloads do not equal usage. Most apps don’t make it past the first opening. Only a third make it past being opened ten times.

Because people spend so much time and energy on these apps, and given the abysmal abandonment, people start freaking out about “engagement.” So what do they reach for? Push notifications. Either that or onboarding.

Push notifications. The worst. I mean, they do succeed in getting your attention: push notifications do increase the amount of time spent in your app …but there’s a human cost.

Let’s look at app onboarding. Take Flickr, for example. It walks through some of the features and benefits of the service. But it doesn’t actually help you much. It’s a list of marketing slogans. So why do people reach for onboarding?

If you just drop people into an interface and talk to them about it, they’ll say things like “I don’t know what to do. I’m lost.” The Intuit team heard this from people using their app. They reached for onboarding to solve the problem. They created guided tutorials and intro tours. Turns out that nobody would read these screens and everyone would try to skip them. What the hell, people!?

So they try in-context help, with a cute cartoon robot to explain the features. Or they scribble Einstein’s equations over the interface. When they tested this, people responded with “Please make it stop.”

They decided to try something simpler: one tip that calls out a good first step. That worked.

Vevo used to have an intro tour. Most people were swiping through without reading. They experimented with not running the tour. They got a 10% increase in log-ins and a 6% increase in sign-ups.

Vevo got rid of their tour, but left the sign-in/registration step. You can’t remove that, right?

Well, Hotel Tonight experimented with not doing registration. Signing up was confusing people—it’s Hotel Tonight, not Accounts Today. When they got rid of accounts, they saw a 15% increase in conversions.

Ruthlessly edit.

Google Photos used to have an in-depth on-boarding experience. First they got rid of the animation. Then the start-up screen. Then the animated tutorial. Each time they removed something, conversion went up. All that was left from the original onboarding was a half screen with one option to turn on auto-backup.

Get to your product value as fast as possible. Of course that requires you to know what your core value is. And that’s not easy to figure out.

Google Maps went through a similar reduction, removing intro screens and explanations. Now they just drop you into the map.

It’s not “get rid of everything”. It’s “get rid of everything that gets in the way of the core user action.”

Going back to the Intuit example, that’s exactly what they did in the end. That one initial tip was for the core action.

But it’s worth discussing how to present this kind of thing. If you have to overlay a tooltip for an important UI feature, maybe that UI feature should have a clearer affordance. People treat overlays as annoyances. People ignore or dismiss overlays when they’re focused on a task. It’s like an instinct to get rid of them. So if you put something useful or valuable there, it’s gone.

The core part of your application should feel like the core part of your application. It’s tough because stakeholders want to make things “pop.” We throw contrast, colour, and animation at things. But when something sticks out from the UI, people ignore it. Integrate the core action into the product UI. When elements feel foreign to a product UI, they are at best ignored, or at worst dismissed.

This is why cohesive design matters. It’s not about consistency. It’s about feeling integrated. In many cases, consistency can be counter-productive.

Some principles for successful onboarding:

  1. Get to the product value as fast as possible. Grubhub needs your address. Pinterest needs your interests.
  2. Get rid of everything that doesn’t lead to that product value. Ruthlessly edit. Remove all friction that distracts the user from experiencing product value.
  3. Don’t be afraid to educate contextually. But do so with integrated UI.

Luke talked a lot about what’s happening in mobile apps, and mentioned that the mobile web only gets 11 minutes to the native’s 169. But let’s dive into this, because people sometimes think that a “mobile strategy” comes down to picking between these two. 50% of those 169 minutes are spent in your most used app (Facebook). 78% of the time is spent in the top 5 apps. Now the mobile web doesn’t look so bad. It turns out you can get people to a mobile web experience much, much faster than to a native one. The audience size is much, much, much higher on the web (although people will do more in a dedicated native app). So strategically both are useful—the web can attract people to native.

Back to our planet, and those 3 hours of usage on smartphones every day. People unlock their phones around 80 times a day. The average time people sleep is about 8 hours. So for every 12 waking minutes, you’re unlocking your phone. Given this frequency, it’s unsurprising that most sessions are very short—most under 30 seconds.

Given that, if things are slow, you’re going to really, really, really hate it. Waiting for slow pages to load is what really pisses people off.

The cognitive load and stress of waiting for slow pages is worse than waiting in line in a store, or watching a horror movie. That’s an industry that’s all about stressing people out by design! But experiencing mobile delays is more stressful! Probably because people aren’t watching horror movies every 12 minutes.

Because mobile delays are such a big deal, many mobile apps reach for loading spinners. But Luke saw that adding a spinner to his product increased complaints of slow loading times. Of course! The spinner is explicitly telling people, “Hey, we’re slow.”

So they switched to skeleton screens. These should make it feel like something is always happening. Focus on the progress, not the progress indicator. Occupied time feels shorter than unoccupied time.

A lot of people have implemented skeleton screens, but without the progressive loading. Swapping out a skeleton screen for a completely different UI all at once doesn’t help. The skeleton screens should represent the real content.

This is a lot of work: figuring out how to prioritise what to load first. Luke isn’t talking about the technical side here, but the user’s experience. Investing in getting this right makes a lot of sense.

Let’s look a little closer at this number: people interacting with their phones 80 times a day. The average user touches the device 2,617 times a day. A power user touches the device over 5,000 times a day. Most touches are within one app.

90% of the touches are dealing with one thumb. Young people tend to operate with one hand. For older people, it’s more like 60%.

This is why your interface targets need to work for the thumb.

On phones, 90% of the time you’re dealing with portrait mode. Things at the top of the screen on larger devices are hard to reach. Core actions gravitate to the bottom of the screen.

Opera Touch is a new browser designed specifically for one-handed use. The Palm Pre’s WebOS was also about one-hand usage. Now that’s how iOS and Android work: swiping up from the bottom.

So mobile usage is:

  • One-handed/thumb.
  • In portrait mode on large screens.
  • Design accordingly.

What’s next? What do we need to be aware of so we don’t get caught with our pants down?

We can use the product lifecycle chart to figure this out. There’s an emergent phase, then a growth phase, then consolidation in a mature market, and then that gets disrupted and becomes a declining market.

  • Mobile devices—hand computers—are in a mature consolidated market.
  • Desktop and laptop computers are in a declining market.
  • Wrist computers and voice computers are in a growth market.

Small screens get used more frequently, but for shorter periods of time than large screens. Wrist and voice computers are figuring out what their core offerings are.

In the emergent category, it’s all about exploration. We have no idea how things will turn out. We just don’t know. But we do know that we are now designing for lots and lots of different devices.

For today, though, focusing on mobile is still a pretty good idea.

To summarise:

  • It’s a mobile planet.
  • Understanding real world usage helps you design.
  • Prep for what’s next.

Move Fast and Don’t Break Things by Scott Jehl

Scott Jehl is speaking at An Event Apart in Seattle—yay! His talk is called Move Fast and Don’t Break Things:

Performance is a high priority for any site of scale today, but it can be easier to make a site fast than to keep it that way. As a site’s features and design evolve, its performance is often threatened for a number of reasons, making it hard to ensure fast, resilient access to services. In this session, Scott will draw from real-world examples where business goals and other priorities have conflicted with page performance, and share some strategies and practices that have helped major sites overcome those challenges to defend their speed without compromises.

The title is a riff on the “move fast and break things” motto, which comes from a more naive time on the web. But Scott finds part of it relatable. Things break. We want to move fast without breaking things.

This is a performance talk, which is another kind of moving fast. Scott starts with a brief history of not breaking websites. He’s been chipping away at websites for 20 years now. Remember Positioning Is Everything? How about Quirksmode? That one's still around.

In the early days, building a website that was "not broken" was difficult, but it was difficult for different reasons. We were focused on consistency. We had to deal with differences between browsers. There were two ways of doing that: browser detection and feature detection.

The feature-based approach was more sustainable but harder. It fits nicely with the practice of progressive enhancement. It’s a good mindset for dealing with the explosion of devices that kicked off later. Touch screens made us rethink our mouse- and hover-centric patterns. That made us realise how much keyboard-driven access mattered all along.

Browsers exploded too. And our data networks changed. With this explosion of considerations, it was clear that our early ideas of “not broken” didn’t work. Our notion of what constituted “not broken” was itself broken. Consistency just doesn’t cut it.

But there was a comforting part to this too. It turned out that progressive enhancement was there to help …even though we didn’t know what new devices were going to appear. This is a recurring theme throughout Scott’s career. So given all these benefits of progressive enhancement, it shouldn’t be surprising that it turns out to be really good for performance too. If you practice progressive enhancement, you’re kind of a performance expert already.

People started talking about new performance metrics that we should care about. We’ve got new tools, like Page Speed Insights. It gives tangible advice on how to test things. Web Page Test is another great tool. Once you prove you’re a human, Web Page Test will give you loads of details on how a page loaded. And you get this great visual timeline.

This is where we can start to discuss the metrics we want to focus on. Traditionally, we focused on file size, which still matters. But for goal-setting, we want to focus on user-perceived metrics.

First Meaningful Content. It’s about how soon the page appears to be useful to a user. Progressive enhancement is a perfect match for this! When you first make a request to a website, it’s usually for a web page. But to render that page, the browser might need to request more files like CSS or JavaScript. All of this adds up. From a user’s perspective, if the HTML is downloaded but the browser can’t render it, that’s broken.

The average time for this on the web right now is around six seconds. That’s broken. The render blockers are the problem here.

Consider assets like scripts. Can you get the browser to load them without holding up the rendering of the page? If you can add async or defer to a script element in the head, you should do that. Sometimes that’s not an option though.
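For example, a sketch with placeholder file names: defer downloads the script in parallel and runs it after the document has parsed, while async runs it as soon as it arrives.

<script src="/js/site.js" defer></script>
<script src="/js/analytics.js" async></script>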

For CSS, it’s tricky. We’ve delivered the HTML that we need but we’ve got to wait for the CSS before rendering it. So what can you bundle into that initial payload?

You can use server push. This is a new technology that comes with HTTP/2. H2, as it’s called, is very performance-focused. Just turning on H2 will probably make your site faster. Server push allows the server to send files to the browser before the browser has even asked for them. You can do this with directives in Apache, for example. You could push CSS whenever an HTML file is requested. But we need to be careful not to go too far: you don’t want to send too much.
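As a rough sketch of the Apache side (the CSS path here is a placeholder): with mod_http2 and mod_headers enabled, adding a preload Link header to HTML responses is enough to trigger a push.

<FilesMatch "\.html$">
  Header add Link "</css/site.css>; rel=preload; as=style"
</FilesMatch>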

Server push is great in moderation. But it is new, and it may not even be supported by your server.

Another option is to inline CSS (well, actually Scott, this is technically embedding CSS). It’s great for first render, but isn’t it wasteful for caching? Scott has a clever pattern that uses the Cache API to grab the contents of the inlined CSS and put a copy into the cache. Then it’s ready to be served up by a service worker.
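Here’s a minimal sketch of that pattern, not Scott’s actual code; the cache name and the /css/site.css URL are assumptions:

// In the page: copy the inlined CSS into a cache for a service worker to use later.
if ('caches' in window) {
  var inlinedCSS = document.querySelector('style').textContent;
  caches.open('css-v1').then(function (cache) {
    // Store it under the URL where the full stylesheet would normally live.
    cache.put('/css/site.css', new Response(inlinedCSS, {
      headers: { 'Content-Type': 'text/css' }
    }));
  });
}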

By the way, this isn’t just for CSS. You could grab the contents of inlined SVGs and create cached versions for later use.

So inlining CSS is good, but again, in moderation. You don’t want to embed anything bigger than 15 or 20 kilobytes. You might want to separate out the critical CSS and only embed that on first render. You don’t need to go through your CSS by hand to figure out what’s critical: there are tools to do this that integrate with your build process. Embed that critical CSS into the head of your document, and also start preloading the full CSS. Here’s a clever technique that turns a preload link into a stylesheet link:

<link rel="preload" href="site.css" as="style" onload="this.rel='stylesheet'">

Also include this:

<noscript><link rel="stylesheet" href="site.css"></noscript>

You can also optimise for return visits. It’s all about the cache.

In the past, we might’ve used a cookie to distinguish a returning visitor from a first-time visitor. But cookies kind of suck. Here’s something that Scott has been thinking about: service workers can intercept outgoing requests. A service worker could send a header that matches the current build of CSS. On the server, we can check for this header. If it’s not the latest CSS, we can server push the latest version, or inline it.
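A sketch of what the service worker side of that might look like; the header name and version value are made-up examples:

// In the service worker: tag page navigations with the CSS build we have cached.
// (In the approach Scott describes, this only starts happening once the
// install step below has finished caching the assets.)
self.addEventListener('fetch', function (event) {
  if (event.request.mode === 'navigate') {
    event.respondWith(fetch(event.request.url, {
      headers: { 'x-css-version': 'v1' },
      credentials: 'same-origin'
    }));
  }
});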

The neat thing about service workers is that they have to install before they take over. Scott makes use of this install event to put important assets into a cache. Only once that is done do we start adding that extra header to requests.
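That install step might look something like this; the cache name and file list are placeholders:

// In the service worker: cache the important assets before taking over.
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('assets-v1').then(function (cache) {
      // Installation fails if any of these can't be fetched and cached.
      return cache.addAll(['/css/site.css', '/js/site.js']);
    })
  );
});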

Watch out for an article on the Filament Group blog on this technique!

With performance, more weight doesn’t have to mean more wait. You can have a heavy page that still appears to load quickly by altering the prioritisation of what loads first.

Web pages are very heavy now. There’s a real cost to every byte. Tim’s WhatDoesMySiteCost.com shows that the CNN home page costs almost fifty cents to load for someone in America!

Time to interactive. This is the time before a user can actually use what’s on the screen. The issue is almost always JavaScript. The page looks usable, but you can’t use it yet.

Addy Osmani suggests we should get to interactive in under five seconds on a 3G network on a median mobile device. Your iPhone is not a median mobile device. A typical phone takes six seconds to process a megabyte of JavaScript after it has downloaded. So even if the network is fast, the time to interactive can still be very long.

This all comes down to our industry’s increasing reliance on JavaScript just to render content. There seem to be pendulum shifts between client-side and server-side rendering. It’s been great to see libraries like Vue and Ember embrace server-side rendering.

But even with server-side rendering, there’s still usually a rehydration step where all the JavaScript gets parsed, and that really affects time to interactive.

Code splitting can help. Webpack can do this. That helps with first-party JavaScript, but what about third-party JavaScript?
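For example, a dynamic import is one way to split code; webpack turns it into a separate chunk that only downloads on demand (the button selector and carousel module here are hypothetical):

var button = document.querySelector('.show-carousel');
button.addEventListener('click', function () {
  // This module is fetched only when the user asks for it.
  import('./carousel.js').then(function (carousel) {
    carousel.init();
  });
});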

Scott believes it’s easier to make a fast website than to keep a website fast. And that’s down to all the third-party scripts that people throw in: analytics, ads, tracking. They can wreak havoc on all your hard work.

These scripts apparently contribute to the business model, so it can be hard for us to make the case for removing them. Tools like SpeedCurve can help people stay informed on the impact of these scripts. It allows you to set up performance budgets and it shows you when pages go over budget. When that happens, we have leverage to step in and push back.

Assuming you lose that battle, what else can we do?

These days, lots of A/B testing and personalisation happens on the client side. The tooling is easy to use, but it’s costly!

A typical problematic pattern is this: the server sends one version of the page, and once the page is loaded, the whole page gets replaced with a different layout targeted at the user. This leads to a terrifying new metric that Scott calls Second Meaningful Content.

Assuming we can’t remove the madness, what can we do? We could at least not do this for first-time visits. We could load the scripts asynchronously. We could preload the scripts at the top of the page (see the sketch below). But ideally we want to move these things to the server. Server-side A/B testing and personalisation have existed for a while now.
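A preload hint for one of those third-party scripts might look like this (the file name is made up):

<link rel="preload" href="/js/ab-test.js" as="script">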

Scott has been experimenting with a middleware solution: the server workers that Cloudflare is offering (Cloudflare Workers). You can manipulate the page that gets sent from the server to the browser, doing all the things you would do for an A/B test. Scott does this by using comments in the HTML to demarcate which portions of the page should be filtered for testing. The worker then deletes one block for some users, and a different block for other users. Scott has written about this approach.
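This isn’t Scott’s actual code, but a rough sketch of the idea using the Cloudflare Workers API; the comment markers and the 50/50 split are made up:

addEventListener('fetch', function (event) {
  event.respondWith(handle(event.request));
});

async function handle(request) {
  var response = await fetch(request);
  // A real version would check that this response is HTML, and would
  // bucket users consistently (e.g. with a cookie) rather than at random.
  var html = await response.text();
  if (Math.random() < 0.5) {
    html = html.replace(/<!--variant-b-->[\s\S]*?<!--\/variant-b-->/, '');
  } else {
    html = html.replace(/<!--variant-a-->[\s\S]*?<!--\/variant-a-->/, '');
  }
  return new Response(html, response);
}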

The point here isn’t about using Cloudflare specifically. The broader point is that it’s much faster to do these things on the server. We need to defend our users’ time.

Another issue, other than third-party scripts, is the page weight on home pages and landing pages. Marketing teams love to fill these things with enticing rich imagery and carousels. They’re really difficult to keep performant because they change all the time. Sometimes we’re not even in control of the source code of these pages.

We can advocate for new best practices like responsive images. The srcset attribute on the img element; the picture element for when you need more control. These are great tools. What’s not so great is writing the markup. It’s confusing! Ideally we’d have a CMS drive this, but a lot of the time, landing pages fall outside of the purview of the CMS.
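For reference, the kind of markup involved (the file names and breakpoints here are invented):

<img src="/img/hero-small.jpg"
     srcset="/img/hero-small.jpg 480w, /img/hero-large.jpg 1200w"
     sizes="(min-width: 40em) 50vw, 100vw"
     alt="...">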

Scott has been using Vue.js to make a responsive image builder—a form that people can paste their URLs into, which spits out the markup to use. Anything we can do by creating tools like these really helps to defend the performance of a site.

Another thing we can do is lazy loading. Focus on the assets. The BBC homepage uses some lazy loading for images: they blink into view as you scroll down the page. They use LazySizes, which you can find on GitHub. You use data- attributes to list your image sources. Scott realises that LazySizes is not progressive enhancement, so he wouldn’t recommend using it on all images, just some images further down the page.
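The LazySizes pattern looks roughly like this (the image paths are placeholders): the lazyload class marks the image, and the data- attributes hold the real sources.

<img class="lazyload"
     data-src="/img/photo.jpg"
     data-srcset="/img/photo-480.jpg 480w, /img/photo-960.jpg 960w"
     alt="...">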

Thankfully, we won’t need these workarounds for much longer. Soon we’ll have lazy loading in browsers. There’s a lazyload attribute that we’ll be able to set on img and iframe elements:

<img src=".." alt="..." lazyload="on">

It’s not implemented yet, but it’s coming in Chrome. It might be that this behaviour even becomes the default way of loading images in browsers.

If you dig under the hood of the implementation coming in Chrome, it actually requests all the images, but the ones being lazyloaded are initially only sent partially, with a 206 (Partial Content) response. That gives the browser enough information to lay out the page without loading each whole image up front.

To wrap up, Scott takes comfort from the fact that there are resilient patterns out there to help us. And remember, it is our job to defend the user’s experience.