I’d maybe simplify this people problem a bit: the codebase is easy to change, but the incentives within a company are not. And yet it’s the incentives that drive what kind of code gets written — what is acceptable, what needs to get fixed, how people work together. In short, we cannot be expected to fix the code without fixing the organization, too.
Sunday, October 18th, 2020
Thursday, September 24th, 2020
Performance and people
I was helping a client with a bit of a performance audit this week. I really, really enjoy this work. It’s such a nice opportunity to get my hands in the soil of a website, so to speak, and suggest changes that will have a measurable effect on the user’s experience.
Not only is web performance a user experience issue, it may well be the user experience issue. Page speed has a proven, demonstrable, direct effect on user experience (and revenue and customer satisfaction and whatever other metrics you’re using).
It struck me that there’s a continuum of performance challenges. On one end of the continuum, you’ve got technical issues. These can be solved with technical solutions. On the other end of the continuum, you’ve got human issues. These can be solved with discussions, agreement, empathy, and conversations (often dreaded or awkward).
I think that, as developers, we tend to gravitate towards the technical issues. That’s our safe space. But I suspect that bigger gains can be reaped by tackling the uncomfortable human issues.
This week, for example, I uncovered three performance issues. One was definitely technical. One was definitely human. One was halfway between.
The technical issue was with web fonts. It’s a lot of fun to dive into this aspect of web performance because quite often there’s some low-hanging fruit: a relatively simple technical fix that will boost the performance (or perceived performance) of a website. That might be through resource hints (using link rel="preload" in the HTML) or adjusting the font loading (using font-display in the CSS) or even nerdier stuff like subsetting.
In this case, the issue was with the file format of the font itself. By switching to woff2, there were significant file size savings. And the great thing is that @font-face rules allow you to specify multiple file formats so you can still support older browsers that can’t handle woff2. A win all ’round!
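To make that concrete, here’s a minimal sketch of the kind of fix involved: a preload resource hint plus an @font-face rule that offers woff2 first, with woff as the fallback. The font name and file paths here are hypothetical.

```html
<!-- Fetch the font early, before the CSS has even been parsed.
     Preloaded fonts need the crossorigin attribute, even when
     they're served from your own domain. -->
<link rel="preload" href="/fonts/example-sans.woff2" as="font" type="font/woff2" crossorigin>

<style>
  @font-face {
    font-family: "Example Sans";
    /* The browser downloads the first format it supports:
       woff2 (smaller) first, then woff for older browsers. */
    src: url("/fonts/example-sans.woff2") format("woff2"),
         url("/fonts/example-sans.woff") format("woff");
    /* Show fallback text immediately; swap in the web font when it arrives. */
    font-display: swap;
  }

  body {
    font-family: "Example Sans", sans-serif;
  }
</style>
```

Because the src list is tried in order, older browsers quietly fall back to woff without any extra work on your part.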
The performance issue that was right in the middle of the technical/human continuum was with images. At first glance it looked like a similar issue to the fonts. Some images were being served in the wrong formats. When I say “wrong”, I guess I mean inappropriate. A photographic image, for example, is probably going to be best served as a JPG rather than a PNG.
But unlike the fonts, the images weren’t in the direct control of the developers. These images were coming from a Content Management System. And while there’s a certain amount of processing you can do on the server, a human still makes the decision about what file format they’re uploading.
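If the server or CMS pipeline can generate alternative formats automatically, the picture element at least lets the browser choose the best one on offer, whatever got uploaded. A minimal sketch, with hypothetical file names:

```html
<!-- Offer a modern format where it's supported, with a JPG fallback.
     The browser picks the first source whose type it understands. -->
<picture>
  <source srcset="/uploads/photo.webp" type="image/webp">
  <img src="/uploads/photo.jpg" alt="A photograph from the CMS" width="1200" height="800">
</picture>
```

But automation only goes so far. If someone uploads a huge PNG of a photograph, the derived files will still be bloated, which is why the human conversation matters.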
I’ve seen this happen at Clearleft. We launched an event site with lean performant code, but then someone uploaded an image that’s megabytes in size. The solution in that case wasn’t technical. We realised there was a knowledge gap around image file formats—which, let’s face it, is kind of a techy topic that most normal people shouldn’t be expected to know.
But it was extremely gratifying to see that people were genuinely interested in knowing a bit more about choosing the right format for the right image. I was able to provide a few rules of thumb and point to free software for converting images. It empowered those people to feel more confident using the Content Management System.
It was a similar situation with the client site I was looking at this week. Nobody is uploading oversized images in order to deliberately make the site slower. They probably don’t realise the difference that image formats can make. By having a discussion and giving them some pointers, they’ll have more knowledge and the site will be faster. Another win all ’round!
At the other end of the continuum was an issue that wasn’t technical. From a technical point of view, there was just one teeny weeny little script. But that little script is Google Tag Manager, which then calls many, many other scripts that are not so teeny weeny. Third-party scripts …the bane of web performance!
Now, one technical solution would be to remove the Google Tag Manager script. But that’s probably not very practical—you’ll just piss off some other department. That said, if you can’t find out which department was responsible for adding the Google Tag Manager script in the first place, it might well be an option to remove it and then wait and see who complains. If no one notices it’s gone, job done!
More realistically, there’s someone who’s added that Google Tag Manager script for their own valid reasons. You’ll need to talk to them and understand their needs.
Again, as with images uploaded in a Content Management System, they may not be aware of the performance problems caused by third-party scripts. You could try throwing numbers at them, but I think you get better results by telling the story of performance.
Use tools like Request Map Generator to help them visualise the impact that third-party scripts are having. Talk to them. More importantly, listen to them. Find out why those scripts are being requested. What are the outcomes they’re working towards? Can you offer an alternative way of providing the data they need?
I think many of us developers are intimidated or apprehensive about approaching people to have those conversations. But it’s necessary. And in its own way, it can be as rewarding as tinkering with code. If the end result is a faster website, then the work is definitely worth doing—whether it’s technical work or people work.
Personally, I just really enjoy working on anything that will end up improving a website’s performance, and by extension, the user experience. If you fancy working with me on your site, you should get in touch with Clearleft.
Monday, September 14th, 2020
A short web book on the past, present and future of interfaces, written in a snappy, chatty style.
From oral communication and storytelling 500,000 years ago to virtual reality today, the purpose of information interfaces has always been to communicate more quickly, more deeply, to foster relationships, to explore, to measure, to learn, to build knowledge, to entertain, and to create.
We interface precisely because we are human. Because we are intelligent, because we are social, because we are inquisitive and creative.
We design our interfaces and they in turn redefine what it means to be human.
Monday, June 1st, 2020
A great explanation of the curse of knowledge …with science!
(This, by the way, is the first of 100 blog posts that Matthias is writing in 100 days.)
Wednesday, May 13th, 2020
Some good writing advice in here:
- Spell out your acronyms.
- Use active voice, not passive voice.
- Fewer commas. More periods.
Tuesday, May 5th, 2020
I think Simon is onto something here. While the word “performance” means something amongst devs, it’s too vague to be useful when communicating with other disciplines. I like the idea of using the more descriptive “page speed” or “site speed” in those situations.
Web Performance and Web Performance Optimization are still valid and descriptive terms for our industry, but we might benefit from a change to our language when working with others. The language we use could be critical to the success of making the web a faster and more accessible place.
Monday, April 20th, 2020
I think a lot about Danielle’s talk at Patterns Day last year.
Gaps are where hidden complexity lives. If we don’t have a category to cover it, in effect it becomes invisible. But that doesn’t mean it’s not there. Unidentified gaps cause inconsistency and confusion.
Overlaps occur when two separate categories encompass some of the same areas of responsibility. They cause conflict, duplication of effort, and unnecessary friction.
This is the bit I keep thinking about. It’s such an insightful lens to view things through. On just about any project, tensions are almost always due to either gaps (“I thought someone else was doing that”) or overlaps (“Oh, you’re doing that? I thought we were doing that”).
When I was talking to Gerry on his new podcast recently, we were trying to figure out why web performance is in such a woeful state. I mused that there may be a gap. Perhaps designers think it’s a technical problem and developers think it’s a design problem. I guess you could try to bridge this gap by having someone whose job is to focus entirely on performance. But I suspect the better—but harder—solution is to create a shared culture of performance, of the kind Lara wrote about in her book:
Performance is truly everyone’s responsibility. Anyone who affects the user experience of a site has a relationship to how it performs. While it’s possible for you to single-handedly build and maintain an incredibly fast experience, you’d be constantly fighting an uphill battle when other contributors touch the site and make changes, or as the Web continues to evolve.
I suspect there’s a similar ownership gap at play when it comes to the ubiquitous obtrusive overlays that are plastered on so many websites these days.
Kirill Grouchnikov recently published a gallery of screenshots showcasing the beauty of modern mobile websites:
There are two things common between the websites in these screenshots that I took yesterday.
- They are beautifully designed, with great typography, clear branding, all optimized for readability.
- I had to install Firefox, Adblock Plus and uBlock Origin, as well as manually select and remove additional elements such as subscription overlays.
The web can be beautiful. Except it’s not right now.
How is this dissonance possible? How can designers and developers who clearly care about the user experience be responsible for unleashing such user-hostile interfaces?
I get that. But surely the solution can’t be to shrug our shoulders, pass the buck, and say “not my job.” Somebody designed each one of those obtrusive overlays. Somebody coded up each one and pushed them into production.
It’s clear that this is a problem of communication and understanding, rather than a technical problem. As always. We like to talk about how hard and complex our technical work is, but frankly, it’s a lot easier to get a computer to do what you want than to convince a human. Not least because you also need to understand what that other human wants. As Danielle says:
Recognising the gaps and overlaps is only half the battle. If we apply tools to a people problem, we will only end up moving the problem somewhere else.
Some issues can be solved with better tools or better processes. In most of our workplaces, we tend to reach for tools and processes by default, because they feel easier to implement. But as often as not, it’s not a technology problem. It’s a people problem. And the solution actually involves communication skills, or effective dialogue.
So let’s say it is someone in the marketing department who is pushing to have an obtrusive newsletter sign-up form get shoved in the user’s face. Talk to them. Figure out what their goals are—what outcome are they hoping to get to. If they don’t seem to understand the user-experience implications, talk to them about that. But it needs to be a two-way conversation. You need to understand what they need before you start telling them what you want.
I realise that makes it sound patronisingly simple, and I know that in actuality it’s a Sisyphean task. It may be that genuine understanding between people is the wickedest of design problems. But even if this problem seems insurmountable, at least you’d be tackling the right problem.
Because the web can’t survive like this.
Monday, March 23rd, 2020
- Design systems haven’t “solved” inconsistency. Rather, they’ve shifted how and when it manifests.
- Many design systems have introduced another, deeper issue: a problem of visibility.
Ethan makes the case that it’s time we stopped taking a pattern-led approach to design systems and start taking a process-led approach. I agree. I think there’s often more emphasis on the “design” than the “system”.
Tuesday, March 3rd, 2020
Telling the story of performance
At Clearleft, we’ve worked with quite a few clients on site redesigns. It’s always a fascinating process, particularly in the discovery phase. There’s that excitement of figuring out what’s currently working, what’s not working, and what’s missing completely.
The bulk of this early research phase is spent diving into the current offering. But it’s also the perfect time to do some competitor analysis—especially if we want some answers to the “what’s missing?” question.
It’s not all about missing features though. Execution is equally important. Our clients want to know how their users’ experience shapes up compared to the competition. And when it comes to user experience, performance is a huge factor. As Andy says, performance is a UX problem.
There’s no shortage of great tools out there for measuring (and monitoring) performance metrics, but they’re mostly aimed at developers. Quite rightly. Developers are the ones who can solve most performance issues. But that does make the tools somewhat impenetrable if you don’t speak the language of “time to first byte” and “first contentful paint”.
When we’re trying to show our clients the performance of their site—or their competitors—we need to tell a story.
Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.
You can go to webpagetest.org/easy if you don’t need to tweak settings much beyond the typical site visit (slow 3G on mobile). Pop in your client’s URL and, when the test is done, you get a valuable but impenetrable waterfall chart. It’s not exactly the kind of thing I’d want to present to a client.
Fortunately there’s an attention-grabbing output from each test: video. Download the video of your client’s site loading. Then repeat the test with the URL of a competitor. Download that video too. Repeat for as many competitor URLs as you think appropriate.
Now take those videos and play them side by side. Presentation software like Keynote is perfect for showing multiple videos like this.
This is so much more effective than showing a table of numbers! Clients get to really feel the performance difference between their site and their competitors.
Running all those tests can take time though. But there are some other tools out there that can give a quick dose of performance information.
SpeedCurve recently unveiled Page Speed Benchmarks. You can compare the performance of sites within a particular sector like travel, retail, or finance. By default, you’ll get a filmstrip view of all the sites loading side by side. Click through on each one and you can get the video too. It might take a little while to gather all those videos, but it’s quicker than using Web Page Test directly. And it might be that the filmstrip view is impactful enough for telling your performance story.
If, during your discovery phase, you find that performance is being badly affected by third-party scripts, you’ll need some way to communicate that. Request Map Generator is fantastic for telling that story in a striking visual way. Pop the URL in there and then take a screenshot of the resulting visualisation.
The beginning of a redesign project is also the time to take stock of current performance metrics so that you can compare the numbers after your redesign launches. Crux.run is really great for tracking performance over time. You won’t get any videos but you will get some very appealing charts and graphs.
Web Page Test, Page Speed Benchmarks, and Request Map Generator are great for telling the story of what’s happening with performance right now—Crux.run balances that with the story of performance over time.
Measuring performance is important. Communicating the story of performance is equally important.
Monday, February 3rd, 2020
A global communications network now exists that’s cheap enough or in some cases even free to access, offering a pseudonymous way for people to feel safe enough to share a private experience with complete strangers? I give Facebook and Twitter a bunch of shit for their rhetoric about a global community (no, Facebook’s billions of users are no more a community than the television-watching global community) and creating authentic connection, but I will very happily admit that this, this particular example with people sharing what it is like to be me and learning what it is like to be you is the good.
This is the thing that makes free, open, networked communication brilliant. This is the thing that brings down silos and creates common understanding and humanizes us all, that creates empathy and the first steps towards compassion.
That someone can read about this insight and have a way to react to it and share their perspective and not even know who else might read it, but feel safe in doing so and maybe even with the expectation that this sharing is a net good? That is good. That is what we should strive for.
Wednesday, January 8th, 2020
Writing solidifies, chat dissolves. Substantial decisions start and end with an exchange of complete thoughts, not one-line-at-a-time jousts. If it’s important, critical, or fundamental, write it up, don’t chat it down.
This one feels like it should be Somebody’s Law:
If your words can be perceived in different ways, they’ll be understood in the way which does the most harm.
Monday, December 9th, 2019
Brad gets ranty …with good reason.
Friday, November 22nd, 2019
This is an interesting comparison: design systems as APIs. It makes sense. A design system—like an API—is a contract. Also, an API without documentation is of little use …much like a design system without documentation.
Thursday, November 21st, 2019
Frank is redesigning in the open. Watch this space:
By writing about it, it may help both of us. I can further develop my methods by navigating the friction of explaining them. I’ve been looking for a way to clarify and share my thoughts about typography and layout on screens, and this seems like a good chance to do so. And you? Well, perhaps the site can offer a clearly explained way of working that’s worth considering. That seems to be a rare thing on the web these days.
Monday, November 11th, 2019
I have such fondness for this film. It’s one of those films that I love to watch on a Sunday afternoon (though that’s true of so many Spielberg films—Jaws, Raiders Of The Lost Ark, E.T.). I remember seeing it in the cinema—this would’ve been the special edition re-release—and feeling the seat under me quake with the rumbling of the musical exchange during the film’s climax.
Ariel invited Rose Eveleth and Laura Welcher on to discuss the film. They spent a lot of time discussing the depiction of first contact communication—Arrival being the other landmark film on this topic.
If we send a message into space, will extraterrestrial beings receive it? Will they understand?
You can read an article by the author on The Guardian, where he mentions some of the wilder ideas about transmitting signals to aliens:
Minsky, widely regarded as the father of AI, suggested it would be best to send a cat as our extraterrestrial delegate.
Don’t worry. Marvin Minsky wasn’t talking about sending a real live cat. Rather, we transmit instructions for building a computer and then we can transmit information as software. Software about, say, cats.
It’s not that far removed from what happened with the Voyager golden record, although that relied on analogue technology—the phonograph—and sent the message pre-compiled on hardware; a much slower transmission rate than radio.
But it’s interesting to me that Minsky specifically mentioned cats. There’s another long-term communication puzzle that has a cat connection.
The Yucca Mountain nuclear waste repository is supposed to store nuclear waste for 10,000 years. How do we warn our descendants to stay away? We can’t use language. We probably can’t even use symbols; they’re too culturally specific. A think tank called the Human Interference Task Force was convened to agree on the message to be conveyed:
This place is a message… and part of a system of messages… pay attention to it! Sending this message was important to us. We considered ourselves to be a powerful culture.
This place is not a place of honor…no highly esteemed deed is commemorated here… nothing valued is here.
What is here is dangerous and repulsive to us. This message is a warning about danger.
A series of thorn-like threatening earthworks was deemed the most feasible solution. But there was another proposal that took a two-pronged approach with genetics and folklore:
- Breed cats that change colour in the presence of radioactive material.
- Teach children nursery rhymes about staying away from cats that change colour.
This is the raycat solution.
Tuesday, October 29th, 2019
If we want design to communicate, we need to communicate in the design process.
I might get that framed.
Wednesday, July 24th, 2019
Jon’s ranting about Agile here, but it could equally apply to design systems:
Agile and design is like looking at a picture through a keyhole. By slicing big things into smaller things, designers must work incrementally. It’s this incrementalism that can lead to what I call the ‘Frankensteining’ of a digital product or service.
Wednesday, July 3rd, 2019
Chris describes exactly why I wrote about toast:
But we should be extra watchful about stuff like this. If any browser goes rogue and just starts shipping stuff, web standards is over. Life for devs gets a lot harder and the web gets a lot worse. The stakes are high. And it’s not going to happen overnight, it’s going to happen with little tiny things like this. Keep that blue beanie on.
Wednesday, June 19th, 2019
Shockwaves rippled across the web standards community recently when it appeared that Google Chrome was unilaterally implementing a new element called toast. It turns out that’s not the case, but the confusion is understandable.
First off, this all kicked off with the announcement of “intent to implement”. That makes it sound like Google are intending to, well, …implement this. In fact “intent to implement” really means “intend to mess around with this behind a flag”. The language is definitely confusing and this is something that will hopefully be addressed.
Secondly, Chrome isn’t going to ship a toast element. Instead, this is a proposal for a custom element currently called std-toast. And even should the experiment prove successful, it’s not a foregone conclusion that the final element will be called toast (minus the sexually-transmitted-disease prefix). If this turns out to be a useful feature, there will surely be a discussion between implementers about the naming of the finished element.
This is the ideal candidate for a web component. It makes total sense to create a custom element along the lines of std-toast. At first I was confused about why this was happening inside of a browser instead of first being created as a standalone web component, but it turns out that there’s been a fair bit of research looking at existing implementations in libraries and web components. So this actually looks like a good example of paving an existing cowpath.
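As an illustration of the pattern, here’s what a bare-bones toast-style custom element could look like. To be clear, this is a hypothetical my-toast element of my own invention, not the API of the std-toast proposal:

```html
<script>
  // A hypothetical, minimal toast-style custom element. This is an
  // illustration of the web component pattern, not the std-toast proposal.
  class MyToast extends HTMLElement {
    connectedCallback() {
      // Announce the message politely to assistive technology.
      this.setAttribute('role', 'status');
      // Dismiss the notification after a few seconds.
      const duration = Number(this.getAttribute('duration')) || 3000;
      setTimeout(() => this.remove(), duration);
    }
  }
  customElements.define('my-toast', MyToast);
</script>

<my-toast duration="5000">Your settings have been saved.</my-toast>
```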
But it didn’t come across that way. The timing of announcements felt like this was something that was happening without prior discussion. Terence Eden writes:
It feels like a Google-designed, Google-approved, Google-benefiting idea which has been dumped onto the Web without any consideration for others.
I know that isn’t the case. And I know how many dedicated people have worked hard on this proposal.
To be clear, while I think there is value in minting a native HTML element to fill a defined gap, I am wary of the approach Google has taken. A repo from a new-to-the-industry Googler getting a lot of promotion from Googlers, with Googlers on social media doing damage control for the blowback, WHATWG Googlers handling questions on the repo, and Google AMP strongly supporting it (to reduce its own footprint), all add up to raise alarm bells with those who advocated for a community-driven, needs-based, accessible web.
But my concern wasn’t so much about the nature of the new elements, but of how we learned about them and what that says about how web standardization works.
So there’s a general feeling (outside of Google) that there’s something screwy here about the order of events. A lot of discussion and research seems to have happened in isolation before announcing the intent to implement:
It does not appear that any discussions happened with other browser vendors or standards bodies before the intent to implement.
Why is this a problem? Google is seeking feedback on a solution, not on how to solve the problem.
Going back to my early confusion about putting a web component directly into a browser, this question on Discourse echoes my initial reaction:
Why not release std-toast (and other elements in development) as libraries first?
The extensible web movement focused on exposing low-level APIs to developers: the fetch API, the cache API, custom elements, Houdini, and all of those other building blocks. Layered APIs, on the other hand, focuses on high-level features …like, say, an HTML element for displaying “toast” notifications.
Layered APIs is an interesting idea, but I’m worried that it could be used to circumvent discussion between implementers. It’s a route to unilaterally creating new browser features first and standardising after the fact. I know that’s how many features already end up in browsers, but I think that the sooner that authors, implementers, and standards bodies get a say, the better.
I certainly don’t think this is a good look for Google given the debacle of AMP’s “my way or the highway” rollout. I know that’s a completely different team, but the external perception of Google amongst developers has been damaged by the AMP project’s anti-competitive abuse of Google’s power in search.
Right now, a lot of people are jumpy about Microsoft’s move to Chromium for Edge. My friends at Microsoft have been reassuring me that while it’s always a shame to reduce browser engine diversity, this could actually be a good thing for the standards process: Microsoft could theoretically keep Google in check when it comes to what features are introduced to the Chromium engine.
But that only works if there is some kind of standards process. Layered APIs in general—and std-toast in particular—hint at a future where a single browser vendor can plough ahead on their own. I sincerely hope that’s a misreading of the situation and that this has all been an exercise in miscommunication and misunderstanding.
Like Dave Cramer says:
I hear a lot about how anyone can contribute to the web platform. We’ve all heard the preaching about incubation, the Extensible Web, working in public, paving the cowpaths, and so on. But to an outside observer this feels like Google making all the decisions, in private, and then asking for public comment after the feature has been designed.
Wednesday, May 15th, 2019
The slides from Carolyn’s talk at Beyond Tellerrand. The presentation is ostensibly about writing documentation, but I think it’s packed with good advice for writing in general.