It was a few years before I realized that worry stones had a name, that they were borrowed from cultures other and older than mine. Heck, it’s been more than a few years since I’ve even held one. But in the last few weeks, before and after launching the redesign, I’ve kept working away at this website, much as I’d distractedly run my fingers over a smooth, flat stone.
The minimum dependency for a web site should be an internet connection and the ability to parse HTML.
In my experience, 99% of the time Web Performance boils down to two problems:
- “You have unoptimized images.”
- “You have too much JavaScript.”
But as Dave points out, the real issue is this:
I find that Web Performance isn’t particularly difficult once you understand the levers you can pull: reduce kilobytes, reduce requests, unblock rendering, defer scripts. But maintaining performance, that’s a different story entirely…
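To make those levers concrete, here’s a minimal sketch in plain HTML. The file names are hypothetical, and the asynchronous-CSS trick is the well-known preload pattern rather than anything specific to Dave’s post:

```html
<!-- A hypothetical page pulling the levers: fewer requests, unblocked
     rendering, deferred scripts, fewer kilobytes. -->
<head>
  <!-- Unblock rendering: inline the critical CSS, load the rest without blocking. -->
  <style>/* critical above-the-fold rules here */</style>
  <link rel="preload" href="/css/site.css" as="style" onload="this.rel='stylesheet'">

  <!-- Defer scripts: let the HTML parse first, run the JavaScript afterwards. -->
  <script src="/js/app.js" defer></script>
</head>
<body>
  <!-- Reduce kilobytes: serve an appropriately sized, compressed image. -->
  <img src="/img/hero-800w.jpg" width="800" height="450" alt="Hero image" loading="lazy">
</body>
```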
Performance matters …especially when the chips are down:
If you are in charge of a web site that provides even slightly important information, or important services, it’s time to get static. I’m thinking here of sites for places like health departments (and pretty much all government services), hospitals and clinics, utility services, food delivery and ordering, and I’m sure there are more that haven’t occurred to me. As much as you possibly can, get it down to static HTML and CSS and maybe a tiny bit of enhancing JS, and pare away every byte you can.
Garrett’s observation is spot-on here:
How and when did I get to the point where I would consider a page weight of 4 MB on a large page and 500 KB on a small page normal?
This isn’t just a well-earned rant from Manuel. I mean, it *is* that, but it’s also packed with practical performance advice.
I’ve really come to appreciate that performance isn’t just some property of a tool independent from its functionality or its feature set. Performance — in particular, being notably fast — is a feature in and of its own right, which fundamentally alters how a tool is used and perceived.
This is a fascinating look into how performance has knock-on effects beyond the obvious:
It’s probably fairly intuitive that users prefer faster software, and will have a better experience performing a given task if the tools are faster rather than slower.
What is perhaps less apparent is that having faster tools changes how users use a tool or perform a task.
This observation is particularly salient for web developers:
We have become accustomed to casually giving up factors of two or ten or more with our choices of tools and libraries, without asking if the benefits are worth it.
Pages are often designed so that they’re hard or impossible to read if some dependency fails to load. On a slow connection, it’s quite common for at least one dependency to fail.
Fire up Reader Mode and read this excellent article informed by data from using a typically slow connection in rural USA today. Two of the findings:
- A large fraction of the web is unusable on a bad connection. Even on a good (0% packet loss, no ping spike) dialup connection, some sites won’t load.
- Some sites will use a lot of data!
This is going to be so useful for client work at Clearleft—instant snapshots of performance metrics across industries and regions!
See Tammy’s blog post for more details.
I’ve come to accept that our current approach to remedy poor performance largely consists of engineering techniques that stem from the ill effects of our business, product management, and engineering practices. We’re good at applying tourniquets, but not so good at sewing up deep wounds.
It’s becoming increasingly clear that web performance isn’t solely an engineering problem, but a problem of people.
I can’t decide if this is industrial sabotage or political protest. Either way, I like it.
99 second-hand smartphones are transported in a handcart to generate a virtual traffic jam in Google Maps. Through this activity, it is possible to turn a green street red, which has an impact in the physical world by navigating cars onto another route to avoid being stuck in traffic.
You know that this online course from Scott is going to be excellent—get in there!
I want to deliver working, stable things. To do that, we need to understand what we are building, in and out, and that’s impossible to do in bloated, over-engineered systems.
This pairs nicely with Craig’s post on fast software.
Everyone is busy building stuff for right now, today, rarely for tomorrow. But it would be nice to also have stuff that lasts a little longer than that.
I just got a new laptop and I decided to go with fresh installs rather than a migration. This really resonates:
It just seems that nobody is interested in building quality, fast, efficient, lasting, foundational stuff anymore. Even when efficient solutions have been known for ages, we still struggle with the same problems: package management, build systems, compilers, language design, IDEs.
This is a great progressive enhancement for performance that uses a service worker to combine reusable bits of a page with fresh content. The numbers are very convincing!
Alas, the code is using the Workbox library, but figuring out the vanilla code to write shouldn’t be too tricky seeing as Philip talks through his logic step by step.
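For a rough idea of what a vanilla version might look like, here’s a minimal sketch (not Philip’s actual code): it streams cached header and footer fragments around freshly fetched content. The shell file names and the `X-Content-Mode` header are assumptions; a real server would need to understand that header and return only the middle of the page.

```js
// sw.js: stitch cached shell fragments around fresh content (a sketch).
const SHELL_CACHE = 'shell-v1';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(SHELL_CACHE).then((cache) =>
      cache.addAll(['/shell-start.html', '/shell-end.html'])
    )
  );
});

self.addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') return; // full-page navigations only

  event.respondWith((async () => {
    const cache = await caches.open(SHELL_CACHE);
    const shellStart = await cache.match('/shell-start.html');
    const shellEnd = await cache.match('/shell-end.html');
    // Hypothetical header asking the server for just the page content.
    const middle = fetch(event.request.url, {
      headers: { 'X-Content-Mode': 'partial' },
    });

    const stream = new ReadableStream({
      async start(controller) {
        const pipe = async (response) => {
          const reader = response.body.getReader();
          for (;;) {
            const { done, value } = await reader.read();
            if (done) return;
            controller.enqueue(value);
          }
        };
        await pipe(shellStart);   // cached head and header, sent immediately
        await pipe(await middle); // fresh content from the network
        await pipe(shellEnd);     // cached footer
        controller.close();
      },
    });

    return new Response(stream, { headers: { 'Content-Type': 'text/html' } });
  })());
});
```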
Smart advice from Harry on setting performance budgets:
They shouldn’t be aspirational, they should be preventative … my suggestion for setting a budget for any trackable metric is to take the worst data point in the past two weeks and use that as your limit
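As a back-of-the-envelope illustration of that rule (mine, not Harry’s), here’s how you might derive a budget from two weeks of monitoring data; the numbers are invented:

```js
// Harry's rule as code: the budget is the worst data point from the past
// two weeks, so it prevents regressions rather than stating ambitions.
function budgetFromWorstCase(dailySamples, days = 14) {
  return Math.max(...dailySamples.slice(-days));
}

// Hypothetical daily Largest Contentful Paint measurements, in milliseconds.
const lcpSamples = [2400, 2510, 2390, 2620, 2480, 2550, 2600,
                    2450, 2530, 2490, 2580, 2470, 2520, 2440];
console.log(`LCP budget: ${budgetFromWorstCase(lcpSamples)}ms`); // 2620
```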
Creating a PWA has saved a lot of kilobytes after the initial load by storing files on the device to reuse on subsequent requests – this in turn lowers the load time and carbon footprint on subsequent page views, making the website better for both people and planet. We’ve also enabled offline access, which significantly improves user experience for people in areas with patchy connections, such as mobile users on their commute.
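The caching side of that can be a very small amount of service worker code. Here’s a minimal cache-first sketch, with made-up cache and file names: assets are stored on the first visit and served from the device afterwards, which is where the kilobyte savings and the offline access come from.

```js
// sw.js: cache-first with an offline fallback (a sketch, not the site's code).
const CACHE = 'static-v1';
const ASSETS = ['/', '/css/site.css', '/js/app.js', '/offline.html'];

self.addEventListener('install', (event) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then(
      (cached) =>
        cached || // reuse the stored copy: no network, no repeat download
        fetch(event.request).catch(() => caches.match('/offline.html'))
    )
  );
});
```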
This is the transcript of a brilliant presentation by Scott—read the whole thing! It starts with a much-needed history lesson that gets to where we are now with the dismal state of performance on the web, and then gives a whole truckload of handy tips and tricks for improving performance when it comes to styles, scripts, images, fonts, and just about everything on the front end.
Geocities, LiveJournal, what.cd, now Yahoo Groups. One day, Medium, Twitter, and even hosting services like GitHub Pages will be plundered then discarded when they can no longer grow or cannot find a working business model.
Considering the needs of someone who wants to make and maintain a website, without the ridiculous complexity of “modern” web tooling:
How do we make web content that can last and be maintained for at least 10 years? As someone studying human-computer interaction, I naturally think of the stakeholders we aren’t supporting. Right now, putting up web content is optimized for either the professional web developer (who uses the latest frameworks and workflows) or the non-tech-savvy user (who uses a platform).