Link tags: performance

Let a website be a worry stone. — Ethan Marcotte

It was a few years before I realized that worry stones had a name, that they were borrowed from cultures other and older than mine. Heck, it’s been more than a few years since I’ve even held one. But in the last few weeks, before and after launching the redesign, I’ve kept working away at this website, much as I’d distractedly run my fingers over a smooth, flat stone.

BBC - Future Media Standards & Guidelines - Accessibility Guidelines v2.0

A timely reminder:

The minimum dependency for a web site should be an internet connection and the ability to parse HTML.

Maintaining Performance - daverupert.com

In my experience, 99% of the time Web Performance boils down to two problems:

  1. “You wrote too much JavaScript.”
  2. “You have unoptimized images.”

But as Dave points out, the real issue is this:

I find that Web Performance isn’t particularly difficult once you understand the levers you can pull: reduce kilobytes, reduce requests, unblock rendering, defer scripts. But maintaining performance, that’s a different story entirely…
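
Those levers translate pretty directly into markup. Here’s a rough sketch of what I mean (the file names are placeholders, and the CSS preload trick is just one well-known way of unblocking rendering):

```html
<!-- A sketch of the levers in markup form; file names are hypothetical. -->
<head>
  <!-- Unblock rendering: load the stylesheet without blocking first paint. -->
  <link rel="preload" href="/css/site.css" as="style"
        onload="this.onload=null; this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/site.css"></noscript>

  <!-- Defer scripts: let the HTML parse first, run JavaScript afterwards. -->
  <script src="/js/app.js" defer></script>
</head>
<body>
  <!-- Reduce kilobytes: right-sized images, fetched only when needed. -->
  <img src="/img/photo-800.jpg"
       srcset="/img/photo-400.jpg 400w, /img/photo-800.jpg 800w"
       sizes="(max-width: 600px) 400px, 800px"
       width="800" height="600"
       loading="lazy" alt="A description of the photo">
</body>
```

Reducing requests is the odd one out: that’s less about markup and more about shipping fewer files in the first place.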

Get Static – Eric’s Archived Thoughts

Performance matters …especially when the chips are down:

If you are in charge of a web site that provides even slightly important information, or important services, it’s time to get static. I’m thinking here of sites for places like health departments (and pretty much all government services), hospitals and clinics, utility services, food delivery and ordering, and I’m sure there are more that haven’t occurred to me. As much as you possibly can, get it down to static HTML and CSS and maybe a tiny bit of enhancing JS, and pare away every byte you can.

Visitors, Developers, or Machines

Garrett’s observation is spot-on here:

I’ve been trying to understand the appeal of these frameworks by giving them an objective chance. I’ve expanded my knowledge of JavaScript and tried to give them the benefit of the doubt. They do have their places, but the only explanation I can come up with is that developers are taking a similar approach as Ruby and focusing on developer convenience and productivity. Only, instead of Ruby’s performance being tied to the CPU level, JavaScript frameworks push the performance burden to the client.

In both cases, the tradeoff happens in the name of developer happiness and productivity, but the strategies have entirely different consequences. With Ruby, the CPU is still (mostly) the responsibility of the development team, and it can be upgraded. With JavaScript, the page weight becomes an externality pushed onto visitors.

Why 543 KB keep me up at night - Manuel Matuzović

How and when did I get to the point where I would consider a page weight of 4 MB on a large page and 500 KB on a small page normal?

This isn’t just a well-earned rant from Manuel. I mean, it *is* that, but it’s also packed with practical performance advice.

Reflections on software performance - Made of Bugs

I’ve really come to appreciate that performance isn’t just some property of a tool independent from its functionality or its feature set. Performance — in particular, being notably fast — is a feature in and of its own right, which fundamentally alters how a tool is used and perceived.

This is a fascinating look into how performance has knock-on effects beyond the obvious:

It’s probably fairly intuitive that users prefer faster software, and will have a better experience performing a given task if the tools are faster rather than slower.

What is perhaps less apparent is that having faster tools changes how users use a tool or perform a task.

This observation is particularly salient for web developers:

We have become accustomed to casually giving up factors of two or ten or more with our choices of tools and libraries, without asking if the benefits are worth it.

Web bloat

Pages are often designed so that they’re hard or impossible to read if some dependency fails to load. On a slow connection, it’s quite common for at least one dependency to fail.

Fire up Reader Mode and read this excellent article, informed by data gathered on a typically slow connection in rural USA today. Two of its findings:

  1. A large fraction of the web is unusable on a bad connection. Even on a good (0% packet loss, no ping spike) dialup connection, some sites won’t load.
  2. Some sites will use a lot of data!

CrUX.RUN

This is so useful! Get instant results from Google’s Chrome User Experience Report without having to wait (or pay) for BigQuery.

Here’s an example of my site’s metrics over the last few months, complete with nice charts.

Page Speed Benchmarks | SpeedCurve

This is going to be so useful for client work at Clearleft—instant snapshots of performance metrics across industries and regions!

For example, we’ve been working a lot with the travel sector, and now we can call up these benchmarks without having to generate a whole bunch of WebPageTest results ourselves.

See Tammy’s blog post for more details.

Innovation Can’t Keep the Web Fast | CSS-Tricks

I’ve come to accept that our current approach to remedy poor performance largely consists of engineering techniques that stem from the ill effects of our business, product management, and engineering practices. We’re good at applying tourniquets, but not so good at sewing up deep wounds.

It’s becoming increasingly clear that web performance isn’t solely an engineering problem, but a problem of people.

Google Maps Hacks, Performance and Installation, 2020 By Simon Weckert

I can’t decide if this is industrial sabotage or political protest. Either way, I like it.

99 second-hand smartphones are transported in a handcart to generate a virtual traffic jam in Google Maps. Through this activity, it is possible to turn a green street red, which has an impact in the physical world by navigating cars onto another route to avoid being stuck in traffic.

Lightning-Fast Web Performance: an online lecture series from Scott Jehl

You know that this online course from Scott is going to be excellent—get in there!

Software disenchantment @ tonsky.me

I want to deliver working, stable things. To do that, we need to understand what we are building, in and out, and that’s impossible to do in bloated, over-engineered systems.

This pairs nicely with Craig’s post on fast software.

Everyone is busy building stuff for right now, today, rarely for tomorrow. But it would be nice to also have stuff that lasts a little longer than that.

I just got a new laptop and I decided to go with fresh installs rather than a migration. This really resonates:

It just seems that nobody is interested in building quality, fast, efficient, lasting, foundational stuff anymore. Even when efficient solutions have been known for ages, we still struggle with the same problems: package management, build systems, compilers, language design, IDEs.

Smaller HTML Payloads with Service Workers — Philip Walton

This is a great progressive enhancement for performance that uses a service worker to combine reusable bits of a page with fresh content. The numbers are very convincing!

Alas, the code uses the Workbox library, but figuring out the vanilla equivalent shouldn’t be too tricky, seeing as Philip talks through his logic step by step.
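
For what it’s worth, here’s my rough sketch of what a vanilla version might look like. To be clear, this is not Philip’s code, just the general shape of the idea; the shell file names and the ?fragment convention are made up for illustration:

```js
// sw.js: a sketch of the idea, not Philip's implementation.
// Assumes the two halves of the page shell were cached at install
// time, and that the server can respond with just a page's content
// when asked via a (made-up) ?fragment=1 query string.
self.addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') return;
  event.respondWith((async () => {
    const cache = await caches.open('shell-v1');
    const start = await cache.match('/shell-start.html');
    const end = await cache.match('/shell-end.html');
    // No cached shell yet? Just go to the network as usual.
    if (!start || !end) return fetch(event.request);
    const url = new URL(event.request.url);
    const content = await fetch(url.pathname + '?fragment=1');
    const html =
      (await start.text()) + (await content.text()) + (await end.text());
    return new Response(html, { headers: { 'Content-Type': 'text/html' } });
  })());
});
```

Philip’s version streams the pieces together for an even faster first render; this buffered version just shows the stitching.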

Performance Budgets, Pragmatically – CSS Wizardry

Smart advice from Harry on setting performance budgets:

They shouldn’t be aspirational, they should be preventative … my suggestion for setting a budget for any trackable metric is to take the worst data point in the past two weeks and use that as your limit
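
Taken literally, that could be a short script in your build pipeline. A sketch, assuming a single bundle at a made-up path, with the limit set to whatever your worst recent data point happens to be:

```js
// check-budget.js: a sketch of a preventative budget check for CI.
// The 543 KB limit is an example; per Harry's advice it should be
// your worst data point from the past two weeks, not an aspiration.
// The bundle path is a placeholder for whatever your build emits.
const { statSync } = require('node:fs');

const LIMIT = 543 * 1024; // bytes
const size = statSync('dist/bundle.js').size;

if (size > LIMIT) {
  console.error(`Bundle is ${size} bytes; the budget is ${LIMIT} bytes.`);
  process.exit(1);
}
console.log(`Bundle is ${size} bytes: within budget.`);
```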

How creating a Progressive Web App has made our website better for people and planet

Creating a PWA has saved a lot of kilobytes after the initial load by storing files on the device to reuse on subsequent requests – this in turn lowers the load time and carbon footprint on subsequent page views, making the website better for both people and planet. We’ve also enabled offline access, which significantly improves user experience for people in areas with patchy connections, such as mobile users on their commute.
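
The caching bit of that can be remarkably small. Here’s a minimal sketch of the cache-first approach (the cache name and file list are placeholders):

```js
// sw.js: a minimal sketch of the cache-first approach.
// The cache name and file list are placeholders.
const CACHE = 'static-v1';
const ASSETS = ['/', '/css/site.css', '/js/app.js', '/offline.html'];

// Store the core files once, at install time.
self.addEventListener('install', (event) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

// Serve from the cache when possible; fall back to the network,
// and to a cached offline page if even that fails.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request)
      .then((cached) => cached || fetch(event.request))
      .catch(() => caches.match('/offline.html'))
  );
});
```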

Move Fast & Don’t Break Things | Filament Group, Inc.

This is the transcript of a brilliant presentation by Scott—read the whole thing! It starts with a much-needed history lesson that gets to where we are now with the dismal state of performance on the web, and then gives a whole truckload of handy tips and tricks for improving performance when it comes to styles, scripts, images, fonts, and just about everything on the front end.

Essential!

This Page is Designed to Last: A Manifesto for Preserving Content on the Web

Geocities, LiveJournal, what.cd, now Yahoo Groups. One day, Medium, Twitter, and even hosting services like GitHub Pages will be plundered then discarded when they can no longer grow or cannot find a working business model.

Considering the needs of someone who wants to make and maintain a website, without the ridiculous complexity of “modern” web tooling:

How do we make web content that can last and be maintained for at least 10 years? As someone studying human-computer interaction, I naturally think of the stakeholders we aren’t supporting. Right now putting up web content is optimized for either the professional web developer (who use the latest frameworks and workflows) or the non-tech savvy user (who use a platform).