Tags: performance

Friday, January 11th, 2019

It’s What You Make, Not How You Make It.

How did I miss this great post from 2016 by one of my favourite people‽ It’s even more relevant today.

To me it doesn’t matter whether you write your HTML and CSS by hand or use JavaScript to generate it for you. What matters is the output, how it is structured, and how it is served to the client. When we allow our tools to take precedent over the quality of our output the entire web suffers. Sites are likely to be less accessible, less performant, and suffer from poor semantics.

Wednesday, January 9th, 2019

The Ethics of Performance - TimKadlec.com

When you stop to consider all the implications of poor performance, it’s hard not to come to the conclusion that poor performance is an ethical issue.

Wednesday, December 19th, 2018

Performance Calendar » All about prefetching

A good roundup of techniques for responsible prefetching from Katie Hempenius.

Friday, December 14th, 2018

GitHub - GoogleChromeLabs/quicklink: ⚡️Faster subsequent page-loads by prefetching in-viewport links during idle time

This looks like a very handy little performance-enhancing script: it attempts to prefetch some links, but in a responsible way. It won’t do any prefetching on slow connections or where data saving is enabled, and it only prefetches when the browser is idle.
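Roughly speaking, the checks involved look something like this. This is just a sketch of the idea, not quicklink’s actual code (quicklink also uses an intersection observer so that it only considers links that are in the viewport):

```javascript
// Sketch of "responsible" prefetching: skip it on slow connections or
// when data saving is turned on, and only do it when the browser is idle.
const connection = navigator.connection;
const saveData = connection && connection.saveData;
const slowConnection = connection && /2g/.test(connection.effectiveType);

function prefetch(url) {
  const link = document.createElement('link');
  link.rel = 'prefetch';
  link.href = url;
  document.head.appendChild(link);
}

if (!saveData && !slowConnection && 'requestIdleCallback' in window) {
  requestIdleCallback(() => {
    // Hypothetical convention: prefetch any link marked with data-prefetch.
    document.querySelectorAll('a[data-prefetch]').forEach((link) => {
      prefetch(link.href);
    });
  });
}
```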

Monday, November 26th, 2018

Prototypes and production

When we do front-end development at Clearleft, we’re usually delivering production code, often in the form of a component library. That means our priorities are performance, accessibility, robustness, and other markers of quality when it comes to web development.

But every so often, we use the materials of front-end development—HTML, CSS, and JavaScript—to produce something that isn’t intended for production. I’m talking about prototyping.

There are plenty of non-code prototyping tools out there, and our designers often reach for them to communicate subtleties like motion design. But when it comes to testing a prototype with real users, it’s hard to beat the flexibility of HTML, CSS, and JavaScript. Load it up in a browser and away you go.

We do a lot of design sprints, where time is of the essence. The prototype we produce on the penultimate day of the sprint definitely won’t be production quality, but it will be good enough to test.

What’s interesting is that—when it comes to prototyping—our usual front-end priorities can and should go out the window. The priority now is speed. If that means sacrificing semantics or performance, then so be it. If I’m building a prototype and I find myself thinking “now, what’s the right class name for this component?”, then I know I’m in the wrong mindset. That question might be valid for production code, but it’s a waste of time for prototypes.

So these two kinds of work require very different attitudes. For production work, quality is key. For prototyping, making something quickly is what matters.

Whereas I would think long and hard about the performance impacts of third-party libraries and frameworks on a public project, I won’t give it a second thought when it comes to a prototype. Throw all the JavaScript frameworks and CSS libraries you want at it (although I would argue that in-browser technologies like CSS Grid have made CSS libraries like Bootstrap less necessary, even for prototyping).

Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.

When a prototype is successful, works great, and tests well, there’s a real temptation to use the prototype code as the basis for the final product. Don’t do this! I’ve made that mistake in the past and it always ends badly. I ended up spending far more time trying to wrangle prototype code to a production level than if I had just started from a clean slate.

Build prototypes to test ideas, designs, interactions, and interfaces …and then throw the code away. The value of a prototype is in answering questions and testing hypotheses. Don’t fall for the sunk cost fallacy when it’s time to switch over into production mode.

Of course it should go without saying that you should never, ever release prototype code into production.

And yet…

More and more live sites seem to be built with a prototyping mindset. Weighty JavaScript frameworks are used regardless of appropriateness. Accessibility, if it’s even considered at all, is relegated to an afterthought. Fragile architectures are employed that rely on first loading and then executing JavaScript in order to render basic content. Developer experience is prioritised over user experience.

Heydon recently highlighted an article that offered this tip for aspiring web developers:

As for HTML, there’s not much to learn right away and you can kind of learn as you go, but before making your first templates, know the difference between in-line elements like span and how they differ from block ones like div.

That’s perfectly reasonable advice …if you’re building a prototype. But if you’re building something for public consumption, you have a duty of care to the end users.

Tuesday, November 13th, 2018

Optimise without a face

I’ve been playing around with the newly-released Squoosh, the spiritual successor to Jake’s SVGOMG. You can drag images into the browser window, and eyeball the changes that any optimisations might make.

On a project that Cassie is working on, it worked really well for optimising some JPEGs. But there were a few images that would require a bit more fine-grained control of the optimisations. Specifically, pictures with human faces in them.

I’ve written about this before. If there’s a human face in an image, I open that image in a graphics editing tool like Photoshop, select everything but the face, and add a bit of blur. Because humans are hard-wired to focus on faces, we’ll notice any jaggy artifacts on a face, but we’re far less likely to notice jagginess in background imagery: walls, materials, clothing, etc.

On the face of it (hah!), a browser-based tool like Squoosh wouldn’t be able to optimise for faces, but then Cassie pointed out something really interesting…

When we were both at FFConf on Friday, there was a great talk by Eleanor Haproff on machine learning with JavaScript. It turns out there are plenty of smart toolkits out there, and one of them is facial recognition. So I wonder if it’s possible to build an in-browser tool with this workflow:

  • Drag or upload an image into the browser window,
  • A facial recognition algorithm finds any faces in the image,
  • Those portions of the image remain crisp,
  • The rest of the image gets a slight blur,
  • Download the optimised image.

Maybe the selecting/blurring part would need canvas? I don’t know.
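Just to sketch out the canvas idea, assuming the facial recognition step has already handed back a bounding box for the face, the blurring part could look something like this (the blur radius and JPEG quality are plucked out of the air):

```javascript
// Sketch: draw a blurred copy of the whole image onto a canvas, then
// redraw the face region crisply on top. Assumes faceBox came from some
// face-detection step: {x, y, width, height} in image pixels.
function blurAroundFace(img, faceBox) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext('2d');

  ctx.filter = 'blur(2px)';   // a slight blur for everything…
  ctx.drawImage(img, 0, 0);

  ctx.filter = 'none';        // …but keep the face crisp
  const { x, y, width, height } = faceBox;
  ctx.drawImage(img, x, y, width, height, x, y, width, height);

  // Hand back a JPEG blob, ready for downloading.
  return new Promise((resolve) => canvas.toBlob(resolve, 'image/jpeg', 0.8));
}
```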

Anyway, I thought this was a brilliant bit of synthesis from Cassie, and now I’ve got two questions:

  1. Does this exist yet? And, if not,
  2. Does anyone want to try building it?

Inlining or Caching? Both Please! | Filament Group, Inc., Boston, MA

This just blew my mind! A fiendishly clever pattern that allows you to inline resources (like critical CSS) and cache that same content for later retrieval by a service worker.

Crazy clever!
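I don’t want to misrepresent the article’s implementation (go read it for the details), but the shape of the idea, as I understand it, is something like this: take the CSS you’ve inlined into the page and put that same text into a cache under the stylesheet’s URL, so that a later visit can reference the stylesheet and have it served straight from the cache. The element id and URL here are hypothetical:

```javascript
// Sketch of the idea, not the article's code: copy the inlined CSS into
// the cache under the stylesheet's URL for later visits.
const inlined = document.querySelector('style#critical-css'); // hypothetical id
if (inlined && 'caches' in window) {
  caches.open('css-cache').then((cache) => {
    cache.put(
      '/css/site.css', // hypothetical stylesheet URL
      new Response(inlined.textContent, {
        headers: { 'Content-Type': 'text/css' }
      })
    );
  });
}
```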

Monday, November 12th, 2018

Squoosh

A handy in-browser image compression tool. Drag, drop, tweak, and export.

Home  |  web.dev

I guess this domain name is why our local development environments stopped working.

Anyway, it’s a web interface onto Lighthouse (note that it has the same bugs as the version of Lighthouse in Chrome). Kind of like webhint.io.

Saturday, November 10th, 2018

CSS and Network Performance – CSS Wizardry

Harry takes a look at the performance implications of loading CSS. To be clear, this is not about the performance of CSS selectors or ordering (which really doesn’t make any difference at this point), but rather it’s about the different ways of getting rid of as much render-blocking CSS as possible.

…a good rule of thumb to remember is that your page will only render as quickly as your slowest stylesheet.
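One of the techniques in the article, if I’m remembering it right, is splitting your CSS up by media, so that only the styles relevant to the current context block rendering (the file names here are just examples):

```html
<!-- Blocks rendering everywhere: -->
<link rel="stylesheet" href="site.css">
<!-- Downloaded with low priority; doesn't block on-screen rendering: -->
<link rel="stylesheet" href="print.css" media="print">
<!-- Only blocks rendering when the media query matches: -->
<link rel="stylesheet" href="wide.css" media="(min-width: 60em)">
```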

Thursday, November 8th, 2018

Designing, laws, and attitudes. — Ethan Marcotte

Ethan ponders what the web might be like if the kind of legal sticks that exist for accessibility in some countries also existed for performance.

Sunday, October 28th, 2018

The Three Types of Performance Testing – CSS Wizardry

Harry divides his web performance work into three categories:

  1. Proactive
  2. Reactive
  3. Passive

I feel like a lot of businesses are still unsure where to even start when it comes to performance monitoring, and as such, they never do. By demystifying it and breaking it down into three clear categories, each with their own distinct time, place, and purpose, it immediately takes a lot of the effort away from them: rather than worrying what their strategy should be, they now simply need to ask ‘Do we have one?’

Tuesday, October 9th, 2018

AddyOsmani.com - Start Performance Budgeting

Great ideas from Addy on where to start with creating a performance budget that can act as a red line you don’t want to cross.

If it’s worth getting fast, it’s worth staying fast.

Monday, October 8th, 2018

The Hurricane Web | Max Böck - Frontend Web Developer

When a storm comes, some of the big news sites like CNN and NPR strip down to a zippy performant text-only version that delivers the content without the bells and whistles.

I’d argue though that in some aspects, they are actually better than the original.

The numbers:

The “full” NPR site in comparison takes ~114 requests and weighs close to 3MB on average. Time to first paint is around 20 seconds on slow connections. It includes ads, analytics, tracking scripts and social media widgets.

Meanwhile, the actual news content is roughly the same.

I quite like the idea of storm-driven development.

…websites built for a storm do not rely on Javascript. The benefit simply does not outweigh the cost. They rely on resilient HTML, because that’s all that is really necessary here.

Friday, September 28th, 2018

DasSur.ma – The 9am rush hour

Why JavaScript is a real performance bottleneck:

Rush hour. The worst time of day to travel. For many it’s not possible to travel at any other time of day because they need to get to work by 9am.

This is exactly what a lot of web code looks like today: everything runs on a single thread, the main thread, and the traffic is bad. In fact, it’s even more extreme than that: there’s one lane all from the city center to the outskirts, and quite literally everyone is on the road, even if they don’t need to be at the office by 9am.
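The usual remedy is to get as much work as possible off the main thread, for example into a web worker, leaving the main thread free for rendering and responding to input. A rough sketch (the file name, message shape, and functions are all placeholders):

```javascript
// main.js: hand the heavy lifting to a worker so the main thread stays
// responsive. "heavy-work.js", bigArray, render() and
// expensiveCalculation() are hypothetical.
const worker = new Worker('heavy-work.js');
worker.postMessage({ items: bigArray });
worker.onmessage = (event) => {
  render(event.data.result); // back on the main thread: cheap to apply
};

// heavy-work.js: runs on its own thread, off the rush-hour road.
onmessage = (event) => {
  const result = expensiveCalculation(event.data.items);
  postMessage({ result });
};
```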

Thoughts on Offline-first | Trys Mudford

Service Workers have such huge potential power, and I feel like we (developers on the web) have barely scratched the surface with what’s possible.

Needless to say, I couldn’t agree more!

Trys is thinking through some of the implications of service workers, like how we refresh stale content, and how we deal with slow networks—something that’s actually more of a challenge than dealing with no network connection at all.

There’s some good food for thought here.

I’m so excited to see how we can use Service Workers to improve the web.
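One pattern for dealing with the stale-content question is stale-while-revalidate: respond from the cache straight away if you can, and update the cached copy in the background for next time. A rough sketch of a fetch handler (the cache name is arbitrary):

```javascript
// Sketch of a stale-while-revalidate fetch handler in a service worker:
// serve from the cache immediately, refresh the cache in the background.
addEventListener('fetch', (event) => {
  event.respondWith(
    caches.open('pages').then((cache) =>
      cache.match(event.request).then((cached) => {
        const refresh = fetch(event.request).then((response) => {
          cache.put(event.request, response.clone());
          return response;
        });
        // Serve the cached copy if there is one; otherwise wait for the network.
        return cached || refresh;
      })
    )
  );
});
```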

Wednesday, September 26th, 2018

How to Build a Low-tech Website?

This is fascinating! A website that’s fast and nimble, not for performance reasons, but to reduce energy consumption. It’s using static files, system fonts and dithered images. And no third-party scripts.

Thanks to a low-tech web design, we managed to decrease the average page size of the blog by a factor of five compared to the old design – all while making the website visually more attractive (and mobile-friendly). Secondly, our new website runs 100% on solar power, not just in words, but in reality: it has its own energy storage and will go off-line during longer periods of cloudy weather.

Ping! That’s the sound of my brain going “service worker!”

I’ve sent them an email offering my help.

Thursday, September 20th, 2018

The costs and benefits of tracking scripts – business vs. user // Sebastian Greger

I am having a hard time seeing the business benefits weighing in more than the user cost (at least for those many organisations out there who rarely ever put that data to proper use). After all, keeping the costs low for the user should be in the core interest of the business as well.

A framework for web performance

Here at Clearleft, we’ve recently been doing some front-end consultancy. That prompted me to jot down thoughts on design principles and performance:

We continued with some more performance work this week. Having already covered some of the nitty-gritty performance tactics like font-loading, image optimisation, etc., we wanted to take a step back and formulate an ongoing strategy for performance.

When it comes to web performance, the eternal question is “What should we measure?” The answer to that question will determine where you then concentrate your efforts—whatever it is you’re measuring, that’s what you’ll be looking to improve.

I started by drawing a distinction between measurements of quantities and measurements of time. Quantities are quite easy to measure. You can measure these quantities using nothing more than browser dev tools:

  • overall file size (page weight + assets), and
  • number of requests.
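As a rough sketch, you can pull both of those numbers out of the Resource Timing API in the browser console (with the caveat that cross-origin resources report a transfer size of zero unless they send a Timing-Allow-Origin header):

```javascript
// Request count and total transfer size for the current page.
const resources = performance.getEntriesByType('resource');
const requests = resources.length + 1; // +1 for the HTML document itself
const bytes = resources.reduce((total, entry) => total + entry.transferSize, 0);
console.log(`${requests} requests, ${(bytes / 1024).toFixed(1)} KB transferred`);
```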

I think it’s good to measure these quantities, and I think it’s good to have a performance budget for them. But I also think they’re table stakes. They don’t actually tell you much about the impact that performance is having on the user experience. For that, we need to enumerate moments in time:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, and
  • time to first meaningful interaction.

There’s one more moment in time, which is the time until DOM content is loaded. But I’m not sure that has a direct effect on how performance is perceived, so it feels like it belongs more in the category of quantities than time.
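Some of these moments can be read straight out of the browser: time to first byte comes from the Navigation Timing API, and first paint and first contentful paint come from the Paint Timing API. First meaningful paint and first meaningful interaction don’t have standard APIs, which is one reason to lean on a tool like WebPageTest for them. A quick console sketch:

```javascript
// Time to first byte, from Navigation Timing (milliseconds since navigation start).
const [navigation] = performance.getEntriesByType('navigation');
console.log('time to first byte:', navigation.responseStart);

// First paint and first contentful paint, from Paint Timing.
for (const entry of performance.getEntriesByType('paint')) {
  console.log(entry.name, entry.startTime); // first-paint, first-contentful-paint
}
```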

Next, we listed out all the factors that could affect each of the moments in time. For example, the time to first byte depends on the speed of the network that the user is on. It also depends on how speedily your server (or Content Delivery Network) can return a response. Meanwhile, time to first render is affected by the speed of the user’s network, but it’s also affected by how many blocking elements are on the critical path.

By listing all the factors out, we can draw a distinction between the factors that are outside of our control, and the factors that we can do something about. So while we might not be able to do anything about the speed of the user’s network, we might well be able to optimise the speed at which our server returns a response, or we might be able to defer some assets that are currently blocking the critical path.

Factors
1st byte
  • server speed
  • network speed
1st render
  • network speed
  • critical path assets
1st meaningful paint
  • network speed
  • font-loading strategy
  • image optimisation
1st meaningful interaction
  • network speed
  • device processing power
  • JavaScript size

So far, everything in our list of performance-affecting factors is related to the first visit. It’s worth drawing up a second list to document all the factors for subsequent visits. This will look the same as the list for first visits, but with the crucial difference that caching now becomes a factor.

First visit and repeat visit factors
1st byte
  • First visit: server speed, network speed
  • Repeat visit: server speed, network speed, caching
1st render
  • First visit: network speed, critical path assets
  • Repeat visit: network speed, critical path assets, caching
1st meaningful paint
  • First visit: network speed, font-loading strategy, image optimisation
  • Repeat visit: network speed, font-loading strategy, image optimisation, caching
1st meaningful interaction
  • First visit: network speed, device processing power, JavaScript size
  • Repeat visit: network speed, device processing power, JavaScript size, caching

Alright. Now it’s time to get some numbers for each of the four moments in time. I use Web Page Test for this. Choose a realistic setting, like 3G on an Android from the East coast of the USA. Under advanced settings, be sure to select “First View and Repeat View” so that you can put those numbers in two different columns.

Here are some numbers for adactio.com:

1st byte: 1.476 seconds (first visit), 1.215 seconds (repeat visit)
1st render: 2.633 seconds (first visit), 1.930 seconds (repeat visit)
1st meaningful paint: 2.633 seconds (first visit), 1.930 seconds (repeat visit)
1st meaningful interaction: 2.868 seconds (first visit), 2.083 seconds (repeat visit)

I’m getting the same numbers for first render as first meaningful paint. That tells me that there’s no point in trying to optimise my font-loading, for example …which makes total sense, because adactio.com isn’t using any web fonts. But on a different site, you might see a big gap between those numbers.

I am seeing a gap between time to first byte and time to first render. That tells me that I might be able to get some blocking requests off the critical path. Sure enough, I’m currently referencing an external stylesheet in the head of adactio.com—if I were to inline critical styles and defer the loading of that stylesheet, I should be able to narrow that gap.
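For a single stylesheet, that deferred loading could look something like the preload pattern that Filament Group use in loadCSS (the file name is just an example):

```html
<style>
  /* critical styles inlined here */
</style>
<!-- Fetch the full stylesheet without blocking rendering, then apply it: -->
<link rel="preload" href="/css/styles.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'">
<!-- Fallback for browsers without JavaScript: -->
<noscript><link rel="stylesheet" href="/css/styles.css"></noscript>
```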

A straightforward site like adactio.com isn’t going to have much to worry about when it comes to the time to first meaningful interaction, but on other sites, this can be a significant bottleneck. If you’re sending UI elements in the initial HTML, but then waiting for JavaScript to “hydrate” those elements into working, the user can end up in an uncanny valley of tapping on page elements that look fine, but aren’t ready yet.

My point is, you’re going to see very different distributions of numbers depending on the kind of site you’re testing. There’s no one-size-fits-all metric to focus on.

Now that you’ve got numbers for how your site is currently performing, you can create two new columns: one of those is a list of first-visit targets, the other is a list of repeat-visit targets for each moment in time. Try to keep them realistic.

For example, if I could reduce the time to first render on adactio.com by 0.5 seconds, my goals would look like this:

1st byte: 1.476 seconds (first visit goal), 1.215 seconds (repeat visit goal)
1st render: 2.133 seconds (first visit goal), 1.430 seconds (repeat visit goal)
1st meaningful paint: 2.133 seconds (first visit goal), 1.430 seconds (repeat visit goal)
1st meaningful interaction: 2.368 seconds (first visit goal), 1.583 seconds (repeat visit goal)

See how the 0.5 seconds saving cascades down into the other numbers?

Alright! Now I’ve got something to aim for. It might also be worth having an extra column to record which of the moments in time are high priority, which are medium priority, and which are low priority.

1st byte: medium priority
1st render: high priority
1st meaningful paint: low priority
1st meaningful interaction: low priority

Your goals and priorities may be quite different.

I think this is a fairly useful framework for figuring out where to focus when it comes to web performance. If you’d like to give it a go, I’ve made a web performance chart for you to print out and fill in. Here’s a PDF version if that’s easier for printing. Or you can download the HTML version if you want to edit it.

I have to say, I’m really enjoying the front-end consultancy work we’ve been doing at Clearleft around performance and related technologies, like offline functionality. I’d like to do more of it. If you’d like some help in prioritising performance at your company, please get in touch. Let’s make the web faster together.

Tuesday, September 18th, 2018

Winston Hearn | What’s best for users

The incentives that Google technology created were very important in the evolution of this current stage of the web. I think we should be skeptical of AMP because once again a single company’s technology – the same single company – is creating the incentives for where we go next.

A thorough examination of the incentives that led to AMP, and the dangers of what could happen next:

I’m not sure I am yet willing to cede the web to a single monopolized company.