In Search of Lost Time, by Tom Vanderbilt
Temporal standards bodies.
Ethan highlights a classic case of the McNamara Fallacy—measuring adoption of design system components.
This describes how I iterate on The Session:
It comes down to this annoying, upsetting, stupid fact: the only way to build a great product is to use it every day, to stare at it, to hold it in your hands to feel its lumps. The data and customers will lie to you but the product never will.
This whole post reminded me of the episode of the Clearleft podcast on measuring design.
The problem underlying all this is that when it comes to building a product, all data is garbage, a lie, or measuring the wrong thing. Folks will be obsessed with clicks and charts and NPS scores—the NFTs of product management—and in this sea of noise they believe they can see the product clearly. There are courses and books and talks all about measuring happiness and growth—surveys! surveys! surveys!—with everyone in the field believing that they’ve built a science when they’ve really built a cult.
Spoiler: the answer to the question in the title is a resounding “hell yeah!”
Scott brings receipts.
A fascinating interactive journey through biometrics using your face.
One of my favourite episodes of the Clearleft podcast is on measuring design. This post from Chris complements that episode in a sensible and practical style.
What gets measured gets done. You are what you measure. Measurement eliminates argument. If you work in an environment that puts store in these oft-quoted business adages then I urge you to take a moment to challenge your calculations. Let’s review our metrics to ensure they can stand up and be counted.
Eric’s response to Chris’s question—“What is one thing people can do to make their website better?”—dovetails nicely with my own answer:
The two real problems here are:
- Third-party assets, such as the very analytics and CRM packages you use to determine who is using your product and how they go about it. There’s no real control over the quality or amount of code they add to your site, and setting up the logic to block them loading their own third-party resources is difficult to do (there’s a rough sketch of one approach after this list).
- The people who tell you to add these third-party assets. These people typically aren’t aware of the performance issues caused by the ask, or don’t care because it’s not part of the results they’re judged by.
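On the blocking point: a Content-Security-Policy header is one blunt instrument for it. Here’s a rough sketch (TypeScript, using Node’s built-in http module) of a policy that allows scripts from your own origin plus one vetted analytics host, so that host at least can’t pull in further third parties. The analytics.example.com hostname is a placeholder, and a real policy would need more directives than this.

```typescript
// Minimal sketch: serve every page with a Content-Security-Policy that limits
// where scripts can be loaded from and where they can phone home to.
// "analytics.example.com" is a placeholder, not a real provider.
import { createServer } from "node:http";

const csp = [
  "default-src 'self'",
  "script-src 'self' https://analytics.example.com",
  "connect-src 'self' https://analytics.example.com",
].join("; ");

createServer((request, response) => {
  response.setHeader("Content-Security-Policy", csp);
  response.setHeader("Content-Type", "text/html");
  response.end("<!doctype html><title>CSP demo</title>");
}).listen(8080);
```

It’s still a game of whack-a-mole, though: a policy like this can restrict where scripts load from and where they send data, but it can’t stop a provider from changing what their own script does.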
We’ve got click rates, impressions, conversion rates, open rates, ROAS, pageviews, bounce rates, ROI, CPM, CPC, impression share, average position, sessions, channels, landing pages, KPI after never-ending KPI.
That’d be fine if all this shit meant something and we knew how to interpret it. But it doesn’t and we don’t.
The reality is much simpler, and therefore much more complex. Most of us don’t understand how data is collected, how these mechanisms work and most importantly where and how they don’t work.
A good post by Andy on “the language of business,” which in most cases turns out to be numbers, numbers, numbers.
While it seems reasonable and fair to expect a modicum of self-awareness of why you’re employed and what business value you drive in the context of the work you do, sometimes the incessant self-flagellation required to justify and explain this to those who hired you may be a clue to a much deeper and more troubling question at the heart of the organisation you work for.
This pairs nicely with the Clearleft podcast episode on measuring design.
I’ve noticed a trend in recent years—a trend that I’ve admittedly been part of myself—where performance-minded developers will rebuild a site and then post a screenshot of their Lighthouse score on social media to show off how fast it is.
Mea culpa! I should post my CrUX reports too.
But I’m going to respectfully decline Phil’s advice to use any of the RUM analytics providers he recommends that require me to put another script element on my site. One third-party script is one third-party script too many.
Developers, particularly in Silicon Valley firms, are definitionally wealthy and enfranchised by world-historical standards. Like upper classes of yore, comfort (“DX”) comes with courtiers happy to declare how important comfort must surely be. It’s bunk, or at least most of it is.
As frontenders, our task is to make services that work well for all, not just the wealthy. If improvements in our tools or our comfort actually deliver improvements in that direction, so much the better. But we must never forget that measurable improvement for users is the yardstick.
Cloudflare’s alternative to Google Analytics is now available—for free—regardless of whether you’re a Cloudflare customer or not:
Being privacy-first means we don’t track individual users for the purposes of serving analytics. We don’t use any client-side state (like cookies or localStorage) for analytics purposes. Cloudflare also doesn’t track users over time via their IP address, User Agent string, or any other immutable attributes for the purposes of displaying analytics — we consider “fingerprinting” even more intrusive than cookies, because users have no way to opt out.
Goodhart’s Law applied to Google’s core web vitals:
If developers start to focus solely on Core Web Vitals because it is important for SEO, then some folks will undoubtedly try to game the system.
Personally, my beef with core web vitals is that they introduce even more unnecessary initialisms (see, for example, Harry’s recent post where he uses CWV metrics like LCP, FID, and CLS—alongside TTFB and SI—to look at PLPs, PDPs, and SRPs. I mean, WTF?).
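For what it’s worth, the underlying browser APIs are simple enough. Here’s a rough sketch of observing two of these metrics with the standard PerformanceObserver API; the entry types are Chromium-only, and this is illustrative rather than a complete RUM setup:

```typescript
// Largest Contentful Paint: the last entry reported is the current candidate.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const latest = entries[entries.length - 1];
  console.log("LCP candidate (ms):", latest.startTime);
}).observe({ type: "largest-contentful-paint", buffered: true });

// Cumulative Layout Shift: a simplified running total of shifts not caused by
// user input (the official metric groups shifts into session windows).
let cumulativeLayoutShift = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const shift = entry as PerformanceEntry & { value: number; hadRecentInput: boolean };
    if (!shift.hadRecentInput) {
      cumulativeLayoutShift += shift.value;
    }
  }
  console.log("CLS so far:", cumulativeLayoutShift);
}).observe({ type: "layout-shift", buffered: true });
```

In practice most people reach for Google’s web-vitals library rather than hand-rolling this, since it handles edge cases like CLS’s session windows for you.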
This is an interesting project to try to rank web hosts by performance:
Real-world server response (Time to First Byte) latencies, as experienced by real-world users navigating the web.
Progressive disclosure interface patterns categorised and evaluated:
I really like the hypertext history invoked in this article.
The piece finishes with a great note on the McNamara fallacy:
Everyone thinks metrics let us measure results. But, actually, they don’t. They measure only what they are measuring. Engagement, for example, is not something that can be measured, so we use an analogue for it. Time on page. Or clicks.
We often end up measuring what is quick, cheap, and easy to measure. Therefore, few organizations regularly conduct usability testing or customer-satisfaction surveys, but lots use analytics.
Even today, organizations often use clicks as a measure of engagement. So, all too often, they design user interfaces to generate clicks, so the system can measure them.
Pages are often designed so that they’re hard or impossible to read if some dependency fails to load. On a slow connection, it’s quite common for at least one dependency to fail.
Fire up Reader Mode and read this excellent article informed by data from using a typically slow connection in rural USA today. Two findings are:
- A large fraction of the web is unusable on a bad connection. Even on a good (0% packet loss, no ping spike) dialup connection, some sites won’t load.
- Some sites will use a lot of data!
This is so useful! Get instant results from Google’s Chrome User Experience Report without having to wait (or pay) for BigQuery.
Here’s an example of my site’s metrics over the last few months, complete with nice charts.
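If you want to poke at the raw data yourself, the CrUX API can also be queried directly. Here’s a minimal sketch of a query; it assumes you’ve created a Google API key with the Chrome UX Report API enabled, and the key and origin below are placeholders:

```typescript
// Query the Chrome UX Report API for an origin's field data.
const CRUX_API_KEY = "YOUR_CRUX_API_KEY";
const endpoint = `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`;

async function cruxSummary(origin: string): Promise<void> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ origin }),
  });
  const { record } = await response.json();

  // Each metric comes back as a histogram plus a 75th-percentile value.
  for (const [name, metric] of Object.entries<any>(record.metrics)) {
    console.log(name, "p75:", metric.percentiles?.p75);
  }
}

cruxSummary("https://example.com").catch(console.error);
```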
Smart advice from Harry on setting performance budgets:
They shouldn’t be aspirational, they should be preventative … my suggestion for setting a budget for any trackable metric is to take the worst data point in the past two weeks and use that as your limit
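That advice translates into very little code. Here’s a tiny sketch of the rule of thumb, assuming you’ve already got a couple of weeks of data points for a given metric (the DataPoint shape is hypothetical; plug in whatever your monitoring already records):

```typescript
interface DataPoint {
  date: Date;
  value: number; // e.g. LCP or TTFB in milliseconds
}

// The budget is the worst (highest) value observed in the last fortnight,
// so it prevents regressions rather than describing an aspiration.
function budgetFromRecentWorst(points: DataPoint[], days = 14): number {
  const cutoff = Date.now() - days * 24 * 60 * 60 * 1000;
  const recent = points.filter((point) => point.date.getTime() >= cutoff);
  if (recent.length === 0) {
    throw new Error("No data points in the budget window");
  }
  // "Worst" means largest here, which holds for timing and size metrics.
  return Math.max(...recent.map((point) => point.value));
}
```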
The results are in for Daniel van Berzon’s most recent experiment into accurately measuring code readability. You can read the results and read about the methodology behind them.
What over a decade of number-crunching analytics has taught me is that spending an hour writing, sharing, or helping someone is infinitely more valuable than spending that hour swimming through numbers. Moreover, trying to juice the numbers almost invariably divorces you from thinking about customers and understanding people. On the surface, it seems like a convenient proxy, but it’s not. They’re just numbers. If you’re searching for business insights, talking to real people beats raw data any day. It’s not as convenient, but when is anything worth doing convenient?