The discussions around data policy still feel like they are framing data as oil - as a vast, passive resource that either needs to be exploited or protected. But this data isn’t dead fish from millions of years ago - it’s the thoughts, emotions and behaviours of over a third of the world’s population, the largest record of human thought and activity ever collected. It’s not oil, it’s history. It’s people. It’s us.
I hadn’t come across this before—run Lighthouse tests on your pages from six different locations around the world at once.
This is a great HTML boilerplate, with an explanation of every line.
You catch more flies with honey than Tailwind.
Cloudflare’s alternative to Google Analytics is now available—for free—whether you’re a Cloudflare customer or not:
Being privacy-first means we don’t track individual users for the purposes of serving analytics. We don’t use any client-side state (like cookies or localStorage) for analytics purposes. Cloudflare also doesn’t track users over time via their IP address, User Agent string, or any other immutable attributes for the purposes of displaying analytics — we consider “fingerprinting” even more intrusive than cookies, because users have no way to opt out.
My favorite aspect of websites is their duality: they’re both subject and object at once. In other words, a website creator becomes both author and architect simultaneously. There are endless possibilities as to what a website could be. What kind of room is a website? Or is a website more like a house? A boat? A cloud? A garden? A puddle? Whatever it is, there’s potential for a self-reflexive feedback loop: when you put energy into a website, in turn the website helps form your own identity.
Goodhart’s Law applied to Google’s core web vitals:
If developers start to focus solely on Core Web Vitals because it is important for SEO, then some folks will undoubtedly try to game the system.
Personally, my beef with core web vitals is that they introduce even more unnecessary initialisms (see, for example, Harry’s recent post where he uses CWV metrics like LCP, FID, and CLS—alongside TTFB and SI—to look at PLPs, PDPs, and SRPs. I mean, WTF?).
James made a radio programme about “the cloud”:
It’s the central metaphor of the internet - ethereal and benign, a fluffy icon on screens and smartphones, the digital cloud has become so naturalised in our everyday life we look right through it. But clouds can also obscure and conceal – what is it hiding? Author and technologist James Bridle navigates the history and politics of the cloud, explores the power of its metaphor and guides us back down to earth.
This is a handy tool if you’re messing around with Twitter cards and other metacrap.
A trashcan, a typeface, and a tactile keyboard. Marcin gets obsessive (as usual).
Good point. When we talk about perceived performance, the perception in question is almost always visual. We should think more inclusively than that.
I really, really like Andy’s approach here:
The focus of the methodology is utilising the power of CSS and the web platform as a whole, with some added controls and structures that help to keep things a bit more maintainable and predictable. The end-goal is shipping as little CSS as possible—leaning heavily into progressive enhancement and modern techniques.
If you use the cascade for everything, you’re going to run into trouble. But equally, micro-managing styles on every element will also get you into trouble. I think Andy’s found a really great sweet spot here that gets the balance just right.
CUBE CSS, in essence, is a progressive enhancement approach, as opposed to fighting against the grain of CSS or pixel-pushing your project to within an inch of its life.
Yes! It feels very “webby” to me.
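To give a flavour of what that looks like in practice, here’s a minimal sketch of the four CUBE layers. To be clear, the class names and values are my own illustration, not lifted from Andy’s documentation:

```css
/* Composition: high-level layout that leans on the cascade */
.flow > * + * {
  margin-top: var(--flow-space, 1em);
}

/* Utility: one small job, done well, reused everywhere */
.wrapper {
  max-width: 60rem;
  margin-inline: auto;
}

/* Block: a light skeleton of component-specific rules */
.card {
  padding: 1rem;
}

/* Exception: a state change, signalled with a data attribute */
.card[data-state="featured"] {
  border: 2px solid currentColor;
}
```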
Progressive disclosure interface patterns categorised and evaluated:
- mouseover popups (just say no!),
- new pages,
- scrolling sideways.
I really like the hypertext history invoked in this article.
The piece finishes with a great note on the McNamara fallacy:
Everyone thinks metrics let us measure results. But, actually, they don’t. They measure only what they are measuring. Engagement, for example, is not something that can be measured, so we use an analogue for it. Time on page. Or clicks.
We often end up measuring what is quick, cheap, and easy to measure. Therefore, few organizations regularly conduct usability testing or customer-satisfaction surveys, but lots use analytics.
Even today, organizations often use clicks as a measure of engagement. So, all too often, they design user interfaces to generate clicks, so the system can measure them.
Chromium browsers—Chrome, Edge, et al.—are getting a much-needed update to some interface elements like the
progress element, the
meter element, and the
color input type.
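If it’s been a while since you last reached for these elements, here’s a quick refresher (the values are just arbitrary examples):

```html
<label for="upload">Upload progress:</label>
<progress id="upload" max="100" value="70">70%</progress>

<label for="disk">Disk usage:</label>
<meter id="disk" min="0" max="100" low="20" high="80" value="65">65%</meter>

<label for="accent">Accent colour:</label>
<input type="color" id="accent" value="#ff6600">
```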
Pages are often designed so that they’re hard or impossible to read if some dependency fails to load. On a slow connection, it’s quite common for at least one dependency to fail.
Fire up Reader Mode and read this excellent article, informed by data from using a typically slow connection in rural USA today. Two of the findings:
- A large fraction of the web is unusable on a bad connection. Even on a good (0% packet loss, no ping spike) dialup connection, some sites won’t load.
- Some sites will use a lot of data!
This is going to be so useful for client work at Clearleft—instant snapshots of performance metrics across industries and regions!
See Tammy’s blog post for more details.
We’ve industrialized design and are relegated to squeezing efficiencies out of it through our design systems. All CSS changes must now have a business value and user story ticket attached to them.
Dave follows on from my post about design systems and automation.
At the same time, I have seen first hand how design systems can yield improvements in accessibility, performance, and shared knowledge across a willing team. I’ve seen them illuminate problems in design and code. I’ve seen them speed up design and development allowing teams to build, share, and validate prototypes or A/B tests before undergoing costly guesswork in production. There’s value in these tools, these processes.
Smart advice from Harry on setting performance budgets:
They shouldn’t be aspirational, they should be preventative … my suggestion for setting a budget for any trackable metric is to take the worst data point in the past two weeks and use that as your limit
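To make that concrete, here’s a hypothetical sketch in JavaScript. The numbers and names are invented for illustration; in practice they’d come from whatever monitoring tool you’re already using:

```javascript
// Hypothetical example: derive a preventative LCP budget from recent data.
// In reality, these readings would come from your RUM or monitoring tooling.
const recentLCPs = [2.1, 1.8, 2.4, 1.9, 2.2]; // seconds, past two weeks

// The worst data point becomes the limit: anything slower than the worst
// you've already shipped counts as a regression.
const budget = Math.max(...recentLCPs);

console.log(`LCP budget: ${budget}s`); // "LCP budget: 2.4s"
```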