Tags: speed

Monday, May 21st, 2018

Identifying, Auditing, and Discussing Third Parties – CSS Wizardry

Harry describes the process he uses for auditing the effects of third-party scripts. He uses the excellent Request Map, which was mentioned multiple times at the Delta V conference.

The focus here is on performance, but these tools are equally useful for shining a light on just how bad the situation is with online surveillance and tracking.

Monday, April 23rd, 2018

The Fast and Slow of Design

Prompted by his recent talk at Smashing Conference, Mark explains why he’s all about the pace layers when it comes to design systems. It’s good stuff, and ties in nicely with my recent (pace layers obsessed) talk at An Event Apart.

Structure for pace. Move at the appropriate speed.

Wednesday, April 18th, 2018

The three Rs of web performance

  1. Reduce
  2. Reuse
  3. Restructure

The slides from Matthew’s talk on the performance overhauls he did on FixMyStreet.com and TrainTimes.org.uk.

Tuesday, March 20th, 2018

How Fast Is Amp Really? - TimKadlec.com

An excellent, thorough, even-handed analysis of AMP’s performance from Tim. The AMP format doesn’t make that much of a difference, the AMP cache does speed things up (as would any CDN), but it’s the pre-rendering that really delivers the performance boost …as long as you give up your URLs.

But right now, the incentives being placed on AMP content seem to be accomplishing exactly what you would think: they’re incentivizing AMP, not performance.

Saturday, March 10th, 2018

It’s Dangerous to Go Stallone. Take Glyphhanger | Filament Group

You’ll need to be comfortable with using the command line, but this is a very useful font subsetting tool from those clever folks at Filament Group.

Thursday, March 8th, 2018

Standardizing lessons learned from AMP – Accelerated Mobile Pages Project

This is very good news indeed—Google are going to allow non-AMP pages to get the same prioritised treatment as AMP pages …if they comply with the kind of performance criteria that Tim outlined.

It’ll take time to get there, but I’m so, so glad to see that Google aren’t going to try to force everyone to use their own proprietary format.

We are taking what we learned from AMP, and are working on web standards that will allow instant loading for non-AMP web content. We hope this work will also unlock AMP-like embeddability that powers Google Search features like the Top Stories carousel.

I just hope that this alternate route to the carousel won’t get lumped under the banner of “AMP”—that term has been pretty much poisoned at this point.

Monday, March 5th, 2018

Winning on Mobile

This looks like a handy tool for doing some quick’n’dirty competitor analysis when it comes to performance: create a scoreboard of sites to rank by speed (and calculate the potential revenue impact).

Many factors contribute to an engaging mobile experience. And speed is chief among them. Most people will abandon a mobile site visit if the page takes more than a few seconds to load. Use our Speed Scorecard to see how your site stacks up to the competition.

Tuesday, February 27th, 2018

Moving Slowly and Fixing Things / Paul Robert Lloyd

Although design gets conflated with creation, it’s the act of improving what already exists — organising a room, editing a text, refining an interface, refactoring a codebase — that I enjoy the most.

Wednesday, February 14th, 2018

AMP: the missing controversy – Ferdy Christant

AMP pages aren’t fast because of the AMP format. AMP pages are fast when you visit one via Google search …because of Google’s monopoly on preloading:

Technically, a clever trick. It’s hard to argue with that. Yet I consider it cheating and anti-competitive behavior.

Preloading is exclusive to AMP. Google does not preload non-AMP pages. If Google had a genuine interest in speeding up the whole web on mobile, it could simply preload the resources of non-AMP pages as well. Not doing this is a strong hint that another agenda is at work, to say the least.

Sunday, February 11th, 2018

Seva Zaikov - Single Page Application Is Not a Silver Bullet

Harsh (but fair) assessment of the performance costs of doing everything on the client side.

Friday, January 19th, 2018

Heisenberg

I wrote about Google Analytics yesterday. As usual, I syndicated the post to Ev’s blog, and I got an interesting response over there. Kelly Burgett set me straight on some of the finer details of how goals work, and finished with this thought:

You mention “delivering a performant, accessible, responsive, scalable website isn’t enough” as if it should be, and I have to disagree. It’s not enough for a business to simply have a great website if you are unable to understand the performance of channel marketing, track user demographics and behavior on-site, and optimize your site/brand based on that data. I’ve seen a lot of ugly sites that have done exceptionally well in terms of ROI, simply because they are getting the data they need from the site in order to make better business decisions. If your site cannot do that (i.e. through data collection, often third-party scripts), then your beautifully-designed site can only take you so far.

That makes an excellent case for having analytics. But that’s not necessarily the same as having Google Analytics, or even JavaScript-driven analytics at all.

By far the most useful information you get from analytics is about where people have come from, where they went next, and what kind of device they’re using. None of that information requires JavaScript. It’s all available from your server logs.
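
As a rough illustration, here’s a minimal sketch of pulling that information out of an nginx/Apache “combined” format access log with Node.js (the log path and the regex are assumptions to adjust for your own server):

```javascript
// Minimal sketch: tallying referrers (and grabbing user-agent strings)
// from a server access log in "combined" format.
const fs = require('fs');
const readline = require('readline');

const referrers = new Map();

const rl = readline.createInterface({
  input: fs.createReadStream('/var/log/nginx/access.log') // placeholder path
});

// combined format: ip - user [date] "request" status bytes "referer" "user-agent"
const pattern = /"[A-Z]+ (\S+)[^"]*" \d+ \S+ "([^"]*)" "([^"]*)"/;

rl.on('line', (entry) => {
  const match = entry.match(pattern);
  if (!match) return;
  const [, path, referer, userAgent] = match;
  referrers.set(referer, (referrers.get(referer) || 0) + 1);
  // `path` is where they went next; `userAgent` is what device they used
});

rl.on('close', () => console.log(referrers));
```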

I don’t want to come across all old-man-yell-at-cloud here, but I’m trying to remember at what point self-hosted software for analysing your log traffic became not good enough.

Here’s the thing: logging on the server has no effect on the user experience. It’s basically free, in terms of performance. Logging via JavaScript, by its very nature, has some cost. Even if it’s negligible, that’s one more request, and that’s one more bit of processing for the CPU.

All of the data that you can only get via JavaScript (in-page actions, heat maps, etc.) is, in my experience, better handled by dedicated software. To me, that kind of more precise data feels different to analytics in the sense of funnels, conversions, goals and all that stuff.

So in order to get more fine-grained data to analyse, our analytics software has now doubled down on a technology—JavaScript—that has an impact on the end user, where previously the act of observation could be done at a distance.

There are also blind spots that come with JavaScript-based tracking. According to Google Analytics, 0% of your customers don’t have JavaScript. That’s not necessarily true, but there’s literally no way for Google Analytics—which relies on JavaScript—to even do its job in the absence of JavaScript. That can lead to a dangerous situation where you might be led to think that 100% of your potential customers are getting by, when actually a proportion might be struggling, but you’ll never find out about it.

Related: according to Google Analytics, 0% of your customers are using ad-blockers that block requests to Google’s servers. Again, that’s not necessarily a true fact.

So I completely agree that analytics are a good thing to have for your business. But it does not follow that Google Analytics is a good thing for your business. Other options are available.

I feel like the assumption that “analytics = Google Analytics” is like the slippery slope in reverse. If we’re all agreed that analytics are important, then aren’t we also all agreed that JavaScript-based tracking is important?

In a word, no.

This reminds me of the arguments made in favour of intrusive, bloated advertising scripts. All of the arguments focus on the need for advertising—to stay in business, to pay the writers—which are all great reasons for advertising, but have nothing to do with JavaScript, which is at the root of the problem. Everyone I know who uses an ad-blocker—including me—doesn’t use it to stop seeing adverts, but to stop the performance of the page being degraded (and to avoid being tracked across domains).

So let’s not confuse the means with the ends. If you need to have advertising, that doesn’t mean you need to have horribly bloated JavaScript-based advertising. If you need analytics, that doesn’t mean you need an analytics script on your front end.

Thursday, January 18th, 2018

Analysing analytics

Hell is other people’s JavaScript.

There’s nothing quite so crushing as building a beautifully performant website only to have it infested with a plague of third-party scripts that add to the weight of each page and reduce the responsiveness, making a mockery of your well-considered performance budget.

Trent has been writing about this:

My latest realization is that delivering a performant, accessible, responsive, scalable website isn’t enough: I also need to consider the impact of third-party scripts.

He’s started the process by itemising third-party scripts. Frustratingly though, there’s rarely one single culprit that you can point to—it’s the cumulative effect of “just one more beacon” and “just one more analytics script” and “just one more A/B testing tool” that adds up to a crappy experience that warms your user’s hands by ensuring your site is constantly draining their battery.

Actually, having just said that there’s rarely one single culprit, Adobe Tag Manager is often at the root of third-party problems. That and adverts. It’s like opening the door of your beautifully curated dream home, and inviting a pack of diarrhetic elephants in: “Please, crap wherever you like.”

But even the more well-behaved third-party scripts can get out of hand. Google Analytics is so ubiquitous that it’s hardly even considered in the list of potentially harmful third-party scripts. On the whole, it’s a fairly well-behaved citizen of your site’s population of third-party scripts (y’know, leaving aside the whole surveillance capitalism business model that allows you to use such a useful tool for free in exchange for Google tracking your site’s visitors across the web and selling the insights from that data to advertisers).

The initial analytics script that you—asynchronously—load into your page isn’t very big. But depending on how you’ve configured your Google Analytics account, that might just be the start of a longer chain of downloads and event handlers.
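
For reference, that initial load is just the standard asynchronous snippet (this is the “alternative async” variant from Google’s documentation; the UA-XXXXX-Y tracking ID is a placeholder):

```html
<script>
window.ga = window.ga || function () { (ga.q = ga.q || []).push(arguments); };
ga.l = +new Date();
ga('create', 'UA-XXXXX-Y', 'auto');
ga('send', 'pageview');
</script>
<script async src="https://www.google-analytics.com/analytics.js"></script>
```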

Ed recently gave a lunchtime presentation at Clearleft on using Google Analytics—he professes modesty but he really knows his stuff. He was making sure that everyone knew how to set up goals’n’stuff.

As I understand it, there are two main categories of goals: events and destinations (there are also durations and pages, but they feel similar to destinations). You use events to answer questions like “Did the user click on this button?” or “Did the user click on that search field?”. You use destinations to answer questions like “Did the user arrive at this page?” or “Did the user come from that page?”

You can add as many goals to your site’s analytics as you want. That’s an intoxicating offer. The problem is that there is potentially a cost for each goal you create. It’s an invisible cost. It’s paid by the user in the currency of JavaScript sent down the wire (I wish that the Google Analytics admin interface were more like the old interface for Google Fonts, where each extra file you added literally pushed a needle higher on a dial).

It strikes me that the event-based goals would necessarily require more JavaScript in order to listen out for those clicks and fire off that information. The destination-based goals should be able to get all the information needed from regular page navigations.
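
To make that concrete, here’s a hedged sketch using the analytics.js API (the selector and the event category/action are made up for illustration):

```javascript
// Destination goal: no extra front-end code. The pageview hit that
// analytics.js already sends on every navigation is enough for GA to
// match a configured URL like /checkout/thanks.

// Event goal: extra JavaScript on the page, plus an extra hit sent
// down the wire for every interaction.
document.querySelector('#signup-button').addEventListener('click', () => {
  ga('send', 'event', 'signup', 'click'); // hypothetical category/action
});
```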

So I have a hypothesis. I think that destination-based goals are less harmful to performance than event-based goals. I might well be wrong about that, and if I am, please let me know.

With that hypothesis in mind, and until I learn otherwise, I’ve got two rules of thumb to offer when it comes to using Google Analytics:

  1. Try to keep the number of goals to a minimum.
  2. If you must create a goal, favour destinations over events.

Saturday, January 13th, 2018

The Human Computer’s Dreams Of The Future by Ida Rhodes (PDF)

From the proceedings of the Electronic Computer Symposium in 1952, the remarkable Ida Rhodes describes a vision of the future…

My crystal ball reveals Mrs. Mary Jones in the living room of her home, most of the walls doubling as screens for projected art or information. She has just dialed her visiophone. On the wall panel facing her, the full colored image of a rare orchid fades, to be replaced by the figure of Mr. Brown seated at his desk. Mrs. Jones states her business: she wishes her valuable collection of orchid plants insured. Mr. Brown consults a small code book and dials a string of figures. A green light appears on his wall. He asks Mrs. Jones a few pertinent questions and types out her replies. He then pushes the start button. Mr. Brown fades from view. Instead, Mrs. Jones has now in front of her a set of figures relating to the policy in which she is interested. The premium rate and benefits are acceptable and she agrees to take out the policy. Here is Brown again. From a pocket in his wall emerges a sealed, addressed, and postage-metered envelope which drops into the mailing chute. It contains, says Brown, an application form completely filled out by the automatic computer and ready for her signature.

Monday, November 27th, 2017

Network based image loading using the Network Information API in Service Worker | justmarkup

This is clever—you can use the navigator.connection API from a service worker (because it’s asynchronous), which means you can have a service worker script that serves differently sized images based on bandwidth.
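
Something along these lines, for instance (a sketch only; the -small suffix convention and the effectiveType check are my assumptions, not code from the article):

```javascript
// In a service worker, navigator.connection is available, so the fetch
// handler can swap in a smaller image when the connection looks slow.
self.addEventListener('fetch', (event) => {
  if (event.request.destination !== 'image') return;

  const connection = navigator.connection;
  const slow = connection &&
    ['slow-2g', '2g'].includes(connection.effectiveType);

  if (slow) {
    const smallVersion = event.request.url.replace(/\.jpg$/, '-small.jpg');
    event.respondWith(
      fetch(smallVersion).catch(() => fetch(event.request)) // fall back
    );
  }
});
```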

The Fallacies of Distributed Computing (Applied to Front-End Performance) – CSS Wizardry

Harry cautions against making assumptions about the network when it comes to front-end development:

Yet time and time again I see developers falling into the same old traps—making assumptions or overly-optimistic predictions about the conditions in which their apps will run.

Planning for the worst-case scenario is never a wasted effort:

If you build and structure applications such that they survive adverse conditions, then they will thrive in favourable ones.

Saturday, November 18th, 2017

Using SVG as placeholders — More Image Loading Techniques - JMPerez Blog

Here’s a clever technique to improve the perceived performance of image loading with a polygonal SVG placeholder.
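
The gist of it, as a minimal sketch (the file names are placeholders, and the tiny polygonal SVG would be generated ahead of time with a tool like Primitive):

```html
<img src="photo-placeholder.svg" data-src="photo.jpg" width="800" height="600" alt="A photo">
<script>
  // Swap the lightweight SVG stand-in for the real image once it loads.
  document.querySelectorAll('img[data-src]').forEach((img) => {
    const full = new Image();
    full.onload = () => { img.src = full.src; };
    full.src = img.dataset.src;
  });
</script>
```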

Sunday, October 29th, 2017

Can You Afford It?: Real-world Web Performance Budgets – Infrequently Noted

Alex looks at the mindset and approaches you need to adopt to make a performant site. There’s some great advice in here for setting performance budgets for JavaScript.

JavaScript is the single most expensive part of any page in ways that are a function of both network capacity and device speed. For developers and decision makers with fast phones on fast networks this is a double-whammy of hidden costs.

Monday, October 9th, 2017

“async” attribute on img, and corresponding “ready” event · Issue #1920 · whatwg/html

It looks like the async attribute is going to ship in Chrome for img elements:

This attribute would have two states:

  • “on”: This indicates that the developer prefers responsiveness and performance over atomic presentation of content.
  • “off”: This indicates that the developer prefers atomic presentation of content over responsiveness.
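
The proposed usage would look something like this (a sketch of the proposal only, not a shipped feature; the class name is hypothetical):

```html
<img src="large-photo.jpg" async>
<script>
  // The proposed "ready" event would fire once the image is decoded
  // and ready to be painted.
  document.querySelector('img[async]').addEventListener('ready', (event) => {
    event.target.classList.add('decoded'); // hypothetical styling hook
  });
</script>
```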

Monday, October 2nd, 2017

eBay’s Font Loading Strategy | eBay Tech Blog

Here’s the flow that eBay use for font loading. They’ve decided that on the very first page view, seeing a system font is an acceptable trade-off. I think that makes sense for their situation.

Interestingly, they set a flag for subsequent visits using localStorage rather than a cookie. I wonder why that is? For me, the ability to read cookies on the server as well as the client makes them quite handy for situations like this.
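
The pattern looks roughly like this (a sketch, not eBay’s actual code; the font name and flag key are assumptions):

```javascript
// First visit: render with system fonts, load the web font in the
// background, and set a localStorage flag once it's in the cache.
// Repeat visits: the flag lets us apply the web font straight away.
if (localStorage.getItem('fonts-loaded')) {
  document.documentElement.classList.add('fonts-loaded');
} else if ('fonts' in document) {
  document.fonts.load('1em WebFont').then(() => { // placeholder family name
    document.documentElement.classList.add('fonts-loaded');
    localStorage.setItem('fonts-loaded', 'true');
  });
}
```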

Monday, September 25th, 2017

Network Information API

It looks like this is landing in Chrome. The navigator.connection.type property will allow us to progressively enhance based on connection type:

A web application that makes use of a service worker to cache resources during installation might have different bundles of assets that it might cache: a list of crucial assets that are cached unconditionally, and a bundle of larger, optional assets that are only cached ahead of time when navigator.connection.type is 'ethernet' or 'wifi'.
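
A minimal sketch of what that install step could look like (the cache name and asset lists are placeholders):

```javascript
// Crucial assets are cached unconditionally at install time; bigger,
// optional assets only when the connection type suggests bandwidth
// is cheap.
const CRUCIAL = ['/', '/app.js', '/styles.css'];
const OPTIONAL = ['/videos/intro.mp4', '/images/hero-large.jpg'];

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('assets-v1').then((cache) => {
      const type = navigator.connection && navigator.connection.type;
      const assets = (type === 'ethernet' || type === 'wifi')
        ? CRUCIAL.concat(OPTIONAL)
        : CRUCIAL;
      return cache.addAll(assets);
    })
  );
});
```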

There are potential security issues around fingerprinting that are addressed in this document.