Tags: speed

Monday, April 10th, 2017

Web Performance Optimization Stats

If you ever need to pull up some case studies to demonstrate the business benefits of performance, Tammy and Tim have you covered.

Saturday, April 1st, 2017

AMP: breaking news | Andrew Betts

A wide-ranging post from Andrew on the downsides of Google’s AMP solution.

I don’t agree with all the issues he has with the format itself (in my opinion, the fact that AMP pages can’t have script elements is a feature, not a bug), but I wholeheartedly concur with his concerns about the AMP cache:

It recklessly devalues the URL

Spot on! And as Andrew points out, in this age of fake news, devaluing the URL is a recipe for disaster.

It’s hard to avoid the idea that the primary objective of AMP is really about hosting publisher content inside the Google ecosystem (as is more obviously the objective of Facebook Instant Articles and Apple News).

Thursday, March 23rd, 2017

Need to Catch Up on the AMP Debate? | CSS-Tricks

Funnily enough, I led a brown bag lunch discussion about AMP at work just the other day. A lot of it mirrored Chris’s thoughts here. It’s a complicated situation that has lots of people worried.

Monday, March 13th, 2017

In AMP we trust

AMP Conf was one of those deep dive events, with two days dedicated to one single technology: AMP.

Except AMP isn’t really one technology, is it? And therein lies the confusion. This was at the heart of the panel I was on. When we talk about AMP, we could be talking about one of three things:

  1. The AMP format. A bunch of web components. For instance, instead of using an img element on an AMP page, you use an amp-img element.
  2. The AMP rules. There’s one JavaScript file, hosted on Google’s servers, that turns those web components from spans into working elements. No other JavaScript is allowed. All your styles must be in a style element instead of an external file, and there’s a limit on what you can do with those styles.
  3. The AMP cache. The source of most confusion—and even downright enmity—this is what’s behind the fact that when you launch an AMP result from Google search, you don’t go to another website. You see Google’s cached copy of the page instead of the original.

The first piece of AMP—the format—is kind of like a collection of marginal gains. Where the img element might have some performance issues, the amp-img element optimises for perceived performance. But if you just used the AMP web components, it wouldn’t be enough to make your site blazingly fast.

The second part of AMP—the rules—is where the speed gains start to really show. You can’t have an external style sheet, and crucially, you can’t have any third-party scripts other than the AMP script itself. This is key to making AMP pages super fast. It’s not so much about what AMP does; it’s more about what it doesn’t allow. If you never used a single AMP component, but stuck to AMP’s rules disallowing external styles and scripts, you could easily make a page that’s even faster than what AMP can do.
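
To make that concrete, here's a rough sketch of an AMP document. It's illustrative rather than copy-and-paste ready (I've elided the required amp-boilerplate style, and the URLs are made up):

```html
<!doctype html>
<html ⚡>
<head>
  <meta charset="utf-8">
  <link rel="canonical" href="https://example.com/article.html">
  <meta name="viewport" content="width=device-width,minimum-scale=1">
  <!-- The one and only script allowed: the AMP runtime -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <!-- All styles inline, in a single style element, within a size budget -->
  <style amp-custom>
    body { font-family: sans-serif; }
  </style>
  <!-- (the required amp-boilerplate style is omitted here for brevity) -->
</head>
<body>
  <!-- amp-img instead of img: the runtime decides when and how it loads -->
  <amp-img src="photo.jpg" width="600" height="400" layout="responsive"></amp-img>
</body>
</html>
```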

At AMP Conf, Natalia pointed out that The Guardian’s non-AMP pages beat out the AMP pages for performance. So why even have AMP pages? Well, that’s down to the third, most contentious, part of the AMP puzzle.

The AMP cache turns the user experience of visiting an AMP page from fast to instant. While you’re still on the search results page, Google will pre-render an AMP page in the background. Not pre-fetch, pre-render. That’s why it opens so damn fast. It’s also what causes the most confusion for end users.
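
Google's results page does its pre-rendering with its own viewer machinery, but the distinction between pre-fetching and pre-rendering is easy to illustrate with resource hints (the URL here is made up):

```html
<!-- Prefetch: download the document into the cache, but don't render it -->
<link rel="prefetch" href="https://example.com/next-article/">

<!-- Prerender: fetch the document and its subresources, and render the
     whole page invisibly in the background, ready to appear instantly -->
<link rel="prerender" href="https://example.com/next-article/">
```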

From my unscientific polling, the behaviour of AMP results confuses the hell out of people. The fact that the page opens instantly isn’t the problem—far from it. It’s the fact that you don’t actually go to another page. Technically, you’re still on Google. An analogous mental model would be an RSS reader, or an email client: you don’t go to an item or an email; you view it in situ.

Well, that mental model would be fine if it were consistent. But in Google search, only some results will behave that way (the AMP pages) and others will behave just like regular links to other websites. No wonder people are confused! Some search results take them away and some search results keep them on Google …even though the page looks like a different website.

The price that we pay for the instantly-opening AMP pages from the Google cache is the URL. Because we’re looking at Google’s pre-rendered copy instead of the original URL, the address bar is not pointing to the site the browser claims to be showing. Everything in the body of the browser looks like an article from The Guardian, but if I look at the URL (which is what security people have been telling us for years is important to avoid being phished), then I’ll see a domain that is not The Guardian’s.
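
Every AMP page is required to declare its canonical URL, so the markup knows where the page really lives even when the address bar doesn't show it. Something like this (the article URL is invented for illustration):

```html
<!-- In the cached copy's markup, the page still points home… -->
<link rel="canonical" href="https://www.theguardian.com/example-article">
<!-- …but the address bar shows Google's copy, along the lines of:
     https://www.google.com/amp/s/www.theguardian.com/example-article -->
```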

But wait! Couldn’t Google pre-render the page at its original URL?

Yes, they could. But they won’t.

This was a point that Paul kept coming back to: trust. There’s no way that Google can trust that someone else’s URL will play by the AMP rules (no external scripts, only loading embedded content via web components, limited styles, etc.). They can only trust the copies that they themselves are serving up from their cache.

By the way, there was a joint AMP/search panel at AMP Conf with representatives from both teams. As you can imagine, there were many questions for the search team, most of which were Glomar’d. But one thing that the search people said time and again was that Google was not hosting our AMP pages. Now I don’t know if they were trying to make some fine-grained semantic distinction there, but that’s an outright falsehood. If I click on a link, and the URL I get taken to is a Google property, then I am looking at a page hosted by Google. Yes, it might be a copy of a document that started life somewhere else, but if Google are serving something from their cache, they are hosting it.

This is one of the reasons why AMP feels like such a bait’n’switch to me. When it first came along, it felt like a direct competitor to Facebook’s Instant Articles and Apple News. But the big difference, we were told, was that you get to host your own content. That appealed to me much more than having Facebook or Apple host the articles. But now it turns out that Google do host the articles.

This will be the point at which Googlers will say no, no, no, you can totally host your own AMP pages …but you won’t get the benefits of pre-rendering. But without the pre-rendering, what’s the point of even having AMP pages?

Well, there is one non-cache reason to use AMP and it’s a political reason. Beleaguered developers working for publishers of big bloated web pages have a hard time arguing with their boss when they’re told to add another crappy JavaScript tracking script or bloated library to their pages. But when they’re making AMP pages, they can easily refuse, pointing out that the AMP rules don’t allow it. Google plays the bad cop for us, and it’s a very valuable role. Sarah pointed this out on the panel we were on, and she was spot on.

Alright, but what about The Guardian? They’ve already got fast pages, but they still have to create separate AMP pages if they want to get the pre-rendering benefits when they show up in Google search results. Sorry, says Google, but it’s the only way we can trust that the pre-rendered page will be truly fast.

So here’s the impasse we’re at. Google have provided a list of best practices for making fast web pages, but the only way they can truly verify that a page is sticking to those best practices is by hosting their own copy, URLs be damned.

This was the crux of Paul’s argument when he was on the Shop Talk Show podcast (it’s a really good episode—I was genuinely reassured to hear that Paul is not gung-ho about drinking the AMP Kool Aid; he has genuine concerns about the potential downsides for the web).

Initially, I accepted this argument that Google just can’t trust the rest of the web. But the more I talked to people at AMP Conf—and I had some really, really good discussions with people away from the stage—the more I began to question it.

Here’s the thing: the regular Google search can’t guarantee that any web page is actually 100% the right result to return for a search. Instead there’s a lot of fuzziness involved: based on the content, the markup, and the number of trusted sources linking to this, it looks like it should be a good result. In other words, Google search trusts websites to—by and large—do the right thing. Sometimes websites abuse that trust and try to game the system with sneaky tricks. Google responds with penalties when that happens.

Why can’t it be the same for AMP pages? Let me host my own AMP pages (maybe even host my own AMP script) and then when the Googlebot crawls those pages—the same as it crawls any other pages—that’s when it can verify that the AMP page is abiding by the rules. If I do something sneaky and trick Google into flagging a page as fast when it actually isn’t, then take my pre-rendering reward away from me.
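
The verification machinery already exists, for what it's worth: the AMP runtime has a development mode that validates a page right in the browser. A hedged sketch of how that works, as I understand it:

```html
<!-- Load an AMP page with #development=1 appended to its URL, e.g.
     https://example.com/article.html#development=1
     and the AMP runtime (the one script every AMP page includes)
     validates the document, logging any rule violations (disallowed
     scripts, oversized styles, and so on) to the browser console.
     A crawler could run those same checks at index time. -->
<script async src="https://cdn.ampproject.org/v0.js"></script>
```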

To be fair, Google has very, very strict rules about what and how to pre-render the AMP results it’s caching. I can see how allowing even the potential for a false positive would have a negative impact on the user experience of Google search. But c’mon, there are already false positives in regular search results—fake news, spam blogs. Googlers are smart people. They can solve—or at least mitigate—these problems.

Google says it can’t trust our self-hosted AMP pages enough to pre-render them. But they ask for a lot of trust from us. We’re supposed to trust Google to cache and host copies of our pages. We’re supposed to trust Google to provide some mechanism to users to get at the original canonical URL. I’d like to see trust work both ways.

Wednesday, March 8th, 2017

AMP and the Web - TimKadlec.com

Tim watched the panel discussion at AMP Conf. He has opinions.

Optimistically, AMP may be a stepping stone to a better performant web. But I still can’t shake the feeling that, in its current form, it’s more of a detour.

AMP Conf: Day 1 Live Stream - YouTube

Here’s the panel I was on at the AMP conference. It was an honour and a pleasure to share the stage with Nicole, Sarah, Gina, and Mike.

Saturday, February 18th, 2017

Base64 Encoding & Performance, Part 1: What’s Up with Base64?

Harry clearly outlines the performance problems of Base64 encoding images in stylesheets. He’s got a follow-up post with sample data.

Thursday, February 9th, 2017

Performance Under Pressure - performance, responsive web design - Bocoup

The transcript of a really great—and entertaining—talk on performance by Wilto. I may have laughed out loud at points.

Most of the web really sucks if you have a slow connection

Just like many people develop with an average connection speed in mind, many people have a fixed view of who a user is. Maybe they think there are customers with a lot of money with fast connections and customers who won’t spend money on slow connections. That is, very roughly speaking, perhaps true on average, but sites don’t operate on average, they operate in particular domains.

Sunday, January 15th, 2017

Modernizing our Progressive Enhancement Delivery | Filament Group, Inc., Boston, MA

Scott runs through the latest improvements to the Filament Group website. There’s a lot about HTTP/2, but also a dab of service workers (using a similar recipe to my site).

Saturday, December 24th, 2016

10 things I learned making the fastest site in the world

Behind the amusing banter there’s some really solid performance advice in here. Good stuff.

Client Side Rendering (CSR), or as I call it “setting money on fire and throwing it in a river” has its uses, but for this site would have been madness.

Wednesday, September 28th, 2016

Intervening against document.write() | Web Updates - Google Developers

Chrome is going to refuse to parse document.write for users on a slow connection. On the one hand, I feel that Google intervening in this way is a bit icky; on the other hand, I totally support this move.

This keeps happening. Google announce a change (usually related to search) where I think “Ooh, that could be interpreted as an abuse of a monopoly position …but it’s for a very good reason so I’ll keep quiet.”

Anyway, this should serve as a good kick in the pants for bad actors (that’s you, advertisers) to update their scripts to be asynchronous.
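
Here's the before and after, sketched with a made-up third-party URL:

```html
<!-- What Chrome intervenes against: a parser-blocking, cross-origin
     script injected with document.write -->
<script>
  document.write('<script src="https://ads.example.com/tracker.js"><\/script>');
</script>

<!-- The well-behaved alternative: the parser carries on while the
     script downloads in the background -->
<script async src="https://ads.example.com/tracker.js"></script>
```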

Wednesday, September 21st, 2016

SpeedCurve | PWA Performance

Steve describes a script you can use on WebPageTest to simulate going offline so you can test how your progressive web app performs.

Saturday, August 6th, 2016

Official Google Webmaster Central Blog: AMP your content - A Preview of AMP’ed results in Search

Google’s search results now include AMP pages in the regular list of results (not just in a carousel). They’re marked with a little grey lightning bolt next to the word AMP.

The experience of opening those results is certainly fast, but it feels a little weird—like you haven’t really gone to the website yet; instead you’re still tethered to the search results page.

Clicking on a link within an AMP page spawns a new window, which reinforces the idea of the web as something separate to AMP (much as they still like to claim that AMP is “a subset of HTML”—at this point, it really, really isn’t).

Tuesday, May 31st, 2016

turbolinks/turbolinks: Turbolinks makes navigating your web application faster

I really, really like the approach that this JavaScript library is taking in treating Ajax as a progressive enhancement:

Turbolinks intercepts all clicks on <a href> links to the same domain. When you click an eligible link, Turbolinks prevents the browser from following it. Instead, Turbolinks changes the browser’s URL using the History API, requests the new page using XMLHttpRequest, and then renders the HTML response.

During rendering, Turbolinks replaces the current <body> element outright and merges the contents of the <head> element. The JavaScript window and document objects, and the HTML <html> element, persist from one rendering to the next.

Here’s the mustard it’s cutting:

It depends on the HTML5 History API and Window.requestAnimationFrame. In unsupported browsers, Turbolinks gracefully degrades to standard navigation.
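
Here's a stripped-down sketch of that pattern: not Turbolinks' actual source code, just the general shape of it, with the head-merging reduced to the title for brevity.

```html
<script>
// Cut the mustard: older browsers fall back to standard navigation
if (window.history && window.requestAnimationFrame) {
  document.addEventListener('click', function (event) {
    var link = event.target.closest('a[href]');
    // Only intercept same-domain links
    if (!link || link.origin !== location.origin) return;
    event.preventDefault();
    // Change the URL with the History API…
    history.pushState({}, '', link.href);
    // …request the new page with XMLHttpRequest…
    var xhr = new XMLHttpRequest();
    xhr.open('GET', link.href);
    xhr.responseType = 'document';
    xhr.onload = function () {
      // …and replace the body outright (real Turbolinks also merges the head)
      document.body.replaceWith(xhr.response.body);
      document.title = xhr.response.title;
    };
    xhr.send();
  });
}
</script>
```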

This approach matches my own mental model for building on the web—I might try playing around with this on some of my projects.

Thursday, April 14th, 2016

A faster FT.com – Engine Room

A data-driven look at the impact of performance:

The data suggests, both in terms of user experience and financial impact, that there are clear and highly valued benefits in making the site even faster.

Tuesday, March 15th, 2016

Managing Mobile Performance Optimization – Smashing Magazine

Some solid sensible advice on optimising performance.

Friday, December 18th, 2015

Building the UX London website by Charlotte Jackson

Charlotte talks through some of the techniques she used when she was building the site for this year’s UX London, with a particular emphasis on improving perceived performance.

Wednesday, December 16th, 2015

More Responsive Tapping on iOS | WebKit

This solution to the mobile tap delay by the WebKit team sounds like what I was hoping for:

Putting touch-action: manipulation; on a clickable element makes WebKit consider touches that begin on the element only for the purposes of panning and pinching to zoom. This means WebKit does not consider double-tap gestures on the element, so single taps are dispatched immediately.
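
In practice that's a one-liner, applied to anything tappable (the selector here is just illustrative):

```html
<style>
  /* Opt tappable elements out of double-tap-to-zoom so that WebKit
     can dispatch single taps immediately instead of waiting */
  a, button, input, label {
    touch-action: manipulation;
  }
</style>
```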

It would be nice to know whether this has been discussed with other browser makers or if it’s another proprietary addition.

Sunday, December 13th, 2015

Smaller, Faster Websites - Bocoup

The transcript of a great talk by Wilto, focusing on responsive images, inlining critical CSS, and webfont loading.

When we present users with a slow website, a loading spinner, laggy webfonts—or tell them outright that they’re not using a website the right way—we’re breaking the fourth wall. We’ve gone so far as to invent an arbitrary line between “webapp” and “website” so we could justify these decisions to ourselves: “well, but, this is a web app. It… it has… JSON. The people that can’t use the thing I built? They don’t get a say.”

We, as an industry, have nearly decided that we’re doing a great job as long as we don’t count the cases where we’re doing a terrible job.