Adoption. — Ethan Marcotte
Ethan highlights a classic case of the McNamara Fallacy—measuring adoption of design system components.
This describes how I iterate on The Session:
It comes down to this annoying, upsetting, stupid fact: the only way to build a great product is to use it every day, to stare at it, to hold it in your hands to feel its lumps. The data and customers will lie to you but the product never will.
This whole post reminded me of the episode of the Clearleft podcast on measuring design.
The problem underlying all this is that when it comes to building a product, all data is garbage, a lie, or measuring the wrong thing. Folks will be obsessed with clicks and charts and NPS scores—the NFTs of product management—and in this sea of noise they believe they can see the product clearly. There are courses and books and talks all about measuring happiness and growth—surveys! surveys! surveys!—with everyone in the field believing that they’ve built a science when they’ve really built a cult.
I like this approach to offering a design system. It seems less prescriptive than many:
Designed not as a rule set, but rather a toolbox, the Data Design Language includes a chart library, design guidelines, colour and typographic style specifications with usability guidance for internationalization (i18n) and accessibility (a11y), all reflecting our data design principles.
Here’s a really excellent, clearly-written case study that unfortunately includes this accurate observation:
In recent years the practice of information architecture has fallen out of fashion, which is a shame as you can’t design something successfully without it. If a user can’t find a feature, it’s game over - the feature may as well not exist as far as they’re concerned.
I also like this insight:
Burger menus are effective… at hiding things.
A handy little script from Aaron to improve the form validation experience.
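I haven’t reproduced Aaron’s script here, but the general shape of this kind of enhancement, sketched very roughly with the Constraint Validation API and made-up markup, looks something like this:

```
<form novalidate>
  <label for="email">Email</label>
  <input type="email" id="email" required>
  <p class="error" hidden></p>
  <button>Sign up</button>
</form>

<script>
  // Let the browser do the validating, but take over the reporting.
  const form = document.querySelector('form');
  const field = document.querySelector('#email');
  const error = document.querySelector('.error');

  form.addEventListener('submit', (event) => {
    if (!field.checkValidity()) {
      event.preventDefault();
      error.textContent = field.validationMessage;
      error.hidden = false;
      field.focus();
    }
  });
</script>
```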
My talk, Building, was about the metaphors we use to talk about the work we do on the web. So I’m interested in this analysis of the metaphors used to talk about markup:
- Data is documents, processing data is clerking
- Data is trees, processing data is forestry
- Data is buildings, processing data is construction
- Data is a place, processing data is a journey
- Data is a fluid, processing data is plumbing
- Data is a textile, processing data is weaving
- Data is music, processing data is performing
The design process in action in Victorian England:
Recognizing that few people actually read statistical tables, Nightingale and her team designed graphics to attract attention and engage readers in ways that other media could not. Their diagram designs evolved over two batches of publications, giving them opportunities to react to the efforts of other parties also jockeying for influence. These competitors buried stuffy graphic analysis inside thick books. In contrast, Nightingale packaged her charts in attractive slim folios, integrating diagrams with witty prose. Her charts were accessible and punchy. Instead of building complex arguments that required heavy work from the audience, she focused her narrative lens on specific claims. It was more than data visualization—it was data storytelling.
This is a story about pizza and geometry.
The interactive widget here really demonstrates the difference between showing and telling.
In two of my recent talks—In And Out Of Style and Design Principles For The Web—I finish by looking at three different components: a button, a dropdown, and a datepicker.
In each case you could use native HTML elements: button, select, and input type="date". Or you could use divs with a whole bunch of JavaScript and ARIA.
In the case of a datepicker, I totally understand why you’d go for writing your own JavaScript and ARIA. The native HTML element is quite restricted, especially when it comes to styling.
In the case of a dropdown, it’s less clear-cut. Personally, I’d use a select element. While it’s currently impossible to style the open state of a select element, you can style the closed state with relative ease. That’s good enough for me.
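By way of illustration, here’s a rough sketch of the kind of styling I mean (the class name and the arrow image are placeholders, nothing more):

```
<style>
  /* The closed state of a select can be styled much like any other control. */
  .fancy-select {
    appearance: none; /* remove the default arrow so we can draw our own */
    font: inherit;
    padding: 0.5em 2.5em 0.5em 0.75em;
    border: 2px solid currentColor;
    border-radius: 0.5em;
    background: white url("arrow.svg") no-repeat right 0.75em center / 1em;
  }
</style>

<select class="fancy-select">
  <option>First option</option>
  <option>Second option</option>
</select>
```

The open state (the list of options that pops up) is still rendered by the browser and operating system.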
Still, I can understand why that wouldn’t be good enough for some cases. If pixel-perfect consistency across platforms is a priority, then you’re going to have to break out the JavaScript and ARIA.
Personally, I think chasing pixel-perfect consistency across platforms isn’t even desirable, but I get it. I too would like to have more control over styling select elements. That’s one of the reasons why the work being done by the Open UI group is so important.
But there’s one more component: a button.
Again, you could use the native button element, or you could use a div or a span and add your own JavaScript and ARIA.
Now, in this case, I must admit that I just don’t get it. Why wouldn’t you just use the native button element? It has no styling issues and the browser gives you all the interactivity and accessibility out of the box.
I’ve been trying to understand the mindset of a developer who wouldn’t use a native button element. The easy answer would be to dismiss them as just bad people. But that would probably be lazy and inaccurate. Nobody sets out to make a website with poor performance or poor accessibility. And yet, by choosing not to use the native HTML element, that’s what’s likely to happen.
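To make the difference concrete, here’s a rough sketch (not anyone’s production code, and it still skips details like disabled states and focus styling) of what a div has to have bolted back on just to approximate what a button does for free:

```
<!-- The native element: focusable, keyboard-operable, and announced
     as a button by assistive technology, with no extra work. -->
<button type="button">Save</button>

<!-- The do-it-yourself version. -->
<div class="fake-button" role="button" tabindex="0">Save</div>

<script>
  const fakeButton = document.querySelector('.fake-button');

  fakeButton.addEventListener('click', save);

  // A native button already responds to Enter and Space; a div doesn't.
  fakeButton.addEventListener('keydown', (event) => {
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault();
      save();
    }
  });

  function save() {
    // Whatever the button is supposed to do.
  }
</script>
```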
I think I might have finally figured out what might be going on in the mind of such a developer. I think the issue is one of control.
When I hear that there’s a native HTML element—like button or select—that comes with built-in behaviours around interaction and accessibility, I think “Great! That’s less work for me. I can just let the browser deal with it.” In other words, I relinquish control to the browser (though not entirely—I still want the styling to be under my control as much as possible).
But I now understand that someone else might hear that there’s a native HTML element—like button or select—that comes with built-in behaviours around interaction and accessibility, and think “Uh-oh! What if there are unexpected side-effects of these built-in behaviours that might bite me on the ass?” In other words, they don’t trust the browsers enough to relinquish control.
I get it. I don’t agree. But I get it.
If your background is in computer science, then the ability to precisely predict how a program will behave is a virtue. Any potential side-effects that aren’t within your control are undesirable. The only way to ensure that an interface will behave exactly as you want is to write it entirely from scratch, even if that means using more JavaScript and ARIA than is necessary.
But I don’t think it’s a great mindset for the web. The web is filled with uncertainties—browsers, devices, networks. You can’t possibly account for all of the possible variations. On the web, you have to relinquish some control.
Still, I’m glad that I now have a bit more insight into why someone would choose to attempt to retain control by using divs, JavaScript, and ARIA. It’s not what I would do, but I think I understand the motivation a bit better now.
The result of adding more constraints means that the products have a broader appeal due to their simple interface. It reminds me of a Jeremy Keith talk I heard last month about programming languages like CSS which have a simple interface pattern: selector { property: value }. Simple enough anyone can learn. But simple doesn’t mean it’s simplistic, which gives me a lot to think about.
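That one pattern stretches a long way. A couple of made-up lines, purely by way of illustration:

```
/* The same selector { property: value } pattern covers the trivial… */
p { color: black; }

/* …and the rather more sophisticated. */
article:not(:first-of-type) > p::first-line {
  font-variant: small-caps;
}
```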
City of Women encourages Londoners to take a second glance at places we might once have taken for granted by reimagining the iconic Underground map.
I love everything about this …except that there’s no Rosalind Franklin station.
Following on from my recently-lost long bet, this is a timely bit of data spelunking from Brian analysing the linkrot of 1400 links over 18 years of time.
Goodreads lost my entire account last week. Nine years as a user, some 600 books and 250 carefully written reviews all deleted and unrecoverable. Their support has not been helpful. In 35 years of being online I’ve never encountered a company with such callous disregard for their users’ data.
Ouch! Lesson learned:
My plan now is to host my own blog-like collection of all my reading notes like Tom does.
A fascinating four-part series by Lisa Charlotte Muth on colour in data visualisations:
This is a great combination of rigorous research and great data visualisation.
Smart advice on future-proofing and backward-compatibility:
There isn’t a single, specific device, browser, and person we cater to when creating a web experience. Websites and web apps need to adapt to a near-infinite combination of these circumstances to be effective. This adaptability is a large part of what makes the web such a successful medium.
Consider doing the hard work to make it easy and never remove feature queries and @supports statements. This creates a robust approach that can gracefully adapt to the past, as well as the future.
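Something like this, sketched with a made-up gallery layout just to show the shape of it:

```
/* A simple fallback that works everywhere. */
.gallery {
  display: block;
}

/* The enhancement, applied only where the browser understands it.
   Leaving the feature query in place costs nothing once support is
   universal, but keeps the fallback intact for older browsers. */
@supports (display: grid) {
  .gallery {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(12em, 1fr));
    gap: 1em;
  }
}
```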
This is something I bump against over and over again: so-called evergreen browsers that can’t actually be updated because of operating system limits.
From what I could gather, the version of Chrome was tied to ChromeOS which couldn’t be updated because of the hardware. No new ChromeOS meant no new Chrome which meant stuck at version 76.
But what about the iPad? I discovered that my Mom’s iPad was a 1st generation iPad Air. Apple stopped supporting that device in iOS 12, which means it was stuck with whatever version of Safari last shipped with iOS 12.
So I had two older browsers that couldn’t be updated. It was device obsolescence because you couldn’t install the latest browser.
Websites stop working and the only solution is to buy a whole new device.
We’ve got click rates, impressions, conversion rates, open rates, ROAS, pageviews, bounce rates, ROI, CPM, CPC, impression share, average position, sessions, channels, landing pages, KPI after never ending KPI.
That’d be fine if all this shit meant something and we knew how to interpret it. But it doesn’t and we don’t.
The reality is much simpler, and therefore much more complex. Most of us don’t understand how data is collected, how these mechanisms work and most importantly where and how they don’t work.
Prompted by my talk, The State Of The Web, Brian zooms out to get some perspective on how browser power is consolidated.
The web is made of clients and servers. There’s a huge amount of diversity in the server space but there’s very little diversity when it comes to clients because making a browser has become so complex and expensive.
But Brian hopes that this complexity and expense could be distributed amongst a large amount of smaller players.
10 companies agreeing to invest $10k apiece to advance and maintain some area of shared interest is every bit as useful as 1 agreeing to invest $100k generally. In fact, maybe it’s more representative.
We believe that there is a very long tail of increasingly smaller companies who could do something, if only they coordinated to fund it together. The further we stretch this out, the more sources we enable, the more its potential adds up.
Even if you can somehow justify using tracking technologies (which don’t work reliably) to make general, statistical decisions (“fewer people open our emails when the subject contains the word ‘overdraft’!”), you can’t make individual decisions based on them. That’s just wrong.