Journal


Sunday, December 5th, 2021

Getting back

The three-part, almost nine-hour-long documentary Get Back is quite fascinating.

First of all, the fact that all this footage exists is remarkable. It’s as if Disney had announced that they’d found the footage for a film shot between Star Wars and The Empire Strikes Back.

Still, does this treasure trove really warrant the daunting length of this new Beatles documentary? As Terence puts it:

There are two problems with this Peter Jackson documentary. The first is that it is far too long - are casual fans really going to sit through 9 hours of a band bickering? The second problem is that it is far too short! Beatles obsessives (like me) could happily drink in a hundred hours of this stuff.

In some ways, watching Get Back is like watching one of those Andy Warhol art projects where he just pointed a camera at someone for 24 hours. It’s simultaneously boring and yet oddly mesmerising.

What struck myself and Jessica watching Get Back was how much it was like our experience of playing with Salter Cane. I’m not saying Salter Cane are like The Beatles. I’m saying that The Beatles are like Salter Cane and every other band on the planet when it comes to how the sausage gets made. The same kind of highs. The same kind of lows. And above all, the same kind of tedium. Spending hours and hours in a practice room or a recording studio is simultaneously exciting and dull. This documentary captures that perfectly.

I suppose Peter Jackson could’ve made a three-part fly-on-the-wall documentary series about any band and I would’ve found it equally interesting. But this is The Beatles and that means there’s a whole mythology that comes along for the ride. So, yes, it’s like watching paint dry, but on the other hand, it’s paint painted by The Beatles.

What I liked about Get Back is that it demystified the band. The revelation for me was really understanding that this was just four lads from Liverpool making music together. And I know I shouldn’t be surprised by that—the Beatles themselves spent years insisting they were just four lads from Liverpool making music together, but, y’know …it’s The Beatles!

There’s a scene in the Danny Boyle film Yesterday where the main character plays Let It Be for the first time in a world where The Beatles have never existed. It’s one of the few funny parts of the film. It’s funny because to everyone else it’s just some new song but we, the audience, know that it’s not just some new song…

Christ, this is Let It Be! You’re the first people on Earth to hear this song! This is like watching Da Vinci paint the Mona Lisa right in front of your bloody eyes!

But truth is even more amusing than fiction. In the first episode of Get Back, we get to see when Paul starts noodling on the piano playing Let It Be for the first time. It’s a momentous occasion and the reaction from everyone around him is …complete indifference. People are chatting, discussing a set design that will never get built, and generally ignoring the nascent song being played. I laughed out loud.

There’s another moment when George brings in the song he wrote the night before, I Me Mine. He plays it while John and Yoko waltz around. It’s in 3/4 time and in a minor key. I turned to Jessica and said “That’s the most Salter Cane sounding one.” Then, I swear, at that moment, after George has stopped playing that song, he plays a brief little riff on the guitar that sounds exactly like a Salter Cane song we’re working on right now. Myself and Jessica turned to each other and said, “What was that‽”

Funnily enough, when we told this to Chris, the singer in Salter Cane, he mentioned how that was the scene that had stood out to him as well, but not for that riff (he hadn’t noticed the similarity). For him, it was about how George had brought just a scrap of a song. Chris realised it was the kind of scrap that he would come up with, but then discard, thinking there’s not enough there. So maybe there’s a lesson here about sharing those scraps.

Watching Get Back, I was trying to figure out if it was so fascinating to me and Jessica (and Chris) because we’re in a band. Would it resonate with other people?

The answer, it turns out, is yes, very much so. Everyone’s been sharing that clip of Paul coming up with the beginnings of the song Get Back. The general reaction is one of breathless wonder. But as Chris said, “How did you think songs happened?” His reaction was more like “yup, accurate.”

Inevitably, there are people mining the documentary for lessons in creativity, design, and leadership. There are already Medium think-pieces and newsletters analysing the processes on display. I guarantee you that there will be multiple conference talks at UX events over the next few years that will include footage from Get Back.

I understand how you could watch this documentary and take away the lesson that these were musical geniuses forging remarkable works of cultural importance. But that’s not what I took from it. I came away from it thinking they’re just a band who wrote and recorded some songs. Weirdly, that made me appreciate The Beatles even more. And it made me appreciate all the other bands and all the other songs out there.

Wednesday, November 24th, 2021

Faulty logic

I’m a fan of logical properties in CSS. As I wrote in the responsive design course on web.dev, they’re crucial for internationalisation.
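As a minimal sketch of why that matters (the .byline selector here is just an illustrative example, not taken from anything in particular): a logical property adapts itself to the writing mode, so the same rule works for left-to-right and right-to-left languages alike.

/* In a left-to-right document this behaves like margin-left;
   in a right-to-left document (dir="rtl") it automatically
   behaves like margin-right instead—no extra rules needed. */
.byline {
  margin-inline-start: 1em;
  padding-inline-start: 0.5em;
  border-inline-start: 2px solid currentColor;
}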

Alaa Abd El-Rahim has written articles on CSS-Tricks about building multi-directional layouts and controlling layout in a multi-directional website. Not having to write separate stylesheets—or even separate rules—for different writing modes is great!

More than that though, I think understanding logical properties is the best way to truly understand CSS layout tools like grid and flexbox.

It’s like when you’re learning a new language. At some point your brain stops translating from your mother tongue into the other language and starts thinking in that other language instead. Likewise with CSS: at some point you want to stop translating “left” and “right” into “inline-start” and “inline-end” and instead start thinking in terms of inline and block dimensions.
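To illustrate that shift in thinking, here’s a rough sketch—the .card selector is hypothetical—of the same rule written with directional properties and then with logical ones. In a horizontal, left-to-right writing mode the two are equivalent; in any other writing mode, only the logical version keeps behaving as intended.

/* Thinking in physical directions… */
.card {
  width: 20em;
  margin-left: auto;
  margin-right: auto;
  padding-top: 1em;
  text-align: left;
}

/* …thinking in inline and block dimensions instead. */
.card {
  inline-size: 20em;
  margin-inline: auto;
  padding-block-start: 1em;
  text-align: start;
}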

As is so often the case with CSS, I think new features like these are easier to pick up if you’re new to the language. I had to unlearn using floats for layout and instead learn flexbox and grid. Someone learning layout from scratch can go straight to flexbox and grid without having to ditch the cognitive baggage of floats. Similarly, it’s going to take time for me to shed the baggage of directional properties and truly grok logical properties, but someone new to CSS can go straight to logical properties without passing through the directional stage.

Except we’re not quite there yet.

In order for logical properties to replace directional properties, they need to be implemented everywhere. Right now you can’t use logical properties inside a media query, for example:

@media (min-inline-size: 40em)

That won’t work. You have to use the old-fashioned syntax:

@media (min-width: 40em)

Now you could rightly argue that in this instance we’re talking about the physical dimensions of the viewport. So maybe width and height make more sense than inline and block.

But then take a look at how the syntax for container queries is going to work. First you declare the axis that you want to be contained using the syntax from logical properties:

main {
  container-type: inline-size;
}

But then when you go to declare the actual container query, you have to use the corresponding directional property:

@container (min-width: 40em)

This won’t work:

@container (min-inline-size: 40em)

I kind of get why it won’t work: the syntax for container queries should match the syntax for media queries. But now the theory behind disallowing logical properties in media queries doesn’t hold up. When it comes to container queries, the physical layout of the viewport isn’t what matters.

I hope that both media queries and container queries will allow logical properties sooner rather than later. Until they fall in line, it’s impossible to make the jump fully to logical properties.

There are some other spots where logical properties haven’t been fully implemented yet, but I’m assuming that’s a matter of time. For example, in Firefox I can make a wide data table responsive by making its container side-swipeable on narrow screens:

.table-container {
  max-inline-size: 100%;
  overflow-inline: auto;
}

But overflow-inline and overflow-block aren’t supported in any other browsers. So I have to do this:

.table-container {
  max-inline-size: 100%;
  overflow-x: auto;
}

Frankly, mixing and matching logical properties with directional properties feels worse than not using logical properties at all. The inconsistency is icky. This feels old-fashioned but consistent:

.table-container {
  max-width: 100%;
  overflow-x: auto;
}

I don’t think there are any particular technical reasons why browsers haven’t implemented logical properties consistently. I suspect it’s more a matter of priorities. Fully implementing logical properties in a browser may seem like a nice-to-have bit of syntactic sugar while there are other more important web standard fish to fry.

But from the perspective of someone trying to use logical properties, the patchy rollout is frustrating.

Wednesday, November 17th, 2021

Priority of design inputs

As you may already know, I’m a nerd for design principles. I collect them. I did a podcast episode on them. I even have a favourite design principle. It’s from the HTML design principles. The priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

It’s all about priorities, see?

Prioritisation isn’t easy, and it gets harder the more factors come into play: user needs, business needs, technical constraints. But it’s worth investing the time to get agreement on the priority of your constituencies. And then formulate that agreement into design principles.

Jason is also a fan of the priority of constituencies. He recently wrote about applying it to design systems and came up with this:

User needs come before the needs of component consumers, which come before the needs of component developers, which come before the needs of the design system team, which come before theoretical purity.

That got me thinking about how this framing could be applied to other areas, like design.

Designers are used to juggling different needs (or constituencies): user needs, business needs, and so on. But what I’m interested in is how designers weigh up different inputs into the design process.

The obvious inputs are the insights you get from research. But even that can be divided into different categories. There’s qualitative research (talking to people) and quantitative research (sifting through numbers). Which gets higher priority?

There are other inputs too. Take best practices. If there’s a tried and tested solution to a problem, should that take priority over something new and untested? Maybe another way of phrasing it is to call it experience (whether that’s the designer’s own experience or the collective experience of the industry).

And though we might not like to acknowledge it because it doesn’t sound very scientific, gut instinct is another input into the design process. Although maybe that’s also related to experience.

Finally, how do you prioritise stakeholder wishes? What do you do if the client or the boss wants something that conflicts with user needs?

I could imagine a priority of design inputs that looks like this:

Qualitative research over quantitative research over stakeholder wishes over best practices over gut instinct.

But that could change over time. Maybe an experienced designer can put their gut instinct higher in the list, overruling best practices and stakeholder wishes …and maybe even some research insights? I don’t know.

I’ve talked before about how design principles should be reversible in a different context. The original priority of constituencies, for example, applies to HTML. But if you were to invert it, it would work for XML. Different projects have different priorities.

I could certainly imagine company cultures where stakeholder wishes take top billing. There are definitely companies that value quantitative research (data and analytics) above qualitative research (user interviews), and vice-versa.

Is a priority of design inputs something that should change from project to project? If so, maybe it would be good to hammer it out in the discovery phase so everyone’s on the same page.

Anyway, I’m just thinking out loud here. This is something I should chat more about with my colleagues to get their take.

Tuesday, November 16th, 2021

Tracking

I’ve been reading the excellent Design For Safety by Eva PenzeyMoog. There was a line that really stood out to me:

The idea that it’s alright to do whatever unethical thing is currently the industry norm is widespread in tech, and dangerous.

It stood out to me because I had been thinking about certain practices that are widespread, accepted, and yet strike me as deeply problematic. These practices involve tracking users.

The first problem is that even the terminology I’m using would be rejected. When you track users on your website, it’s called analytics. Or maybe it’s stats. If you track users on a large enough scale, I guess you get to just call it data.

Those words—“analytics”, “stats”, and “data”—are often used when the more accurate word would be “tracking.”

Or to put it another way: analytics, stats, data, numbers …these are all outputs. But what produced these outputs? Tracking.

Here’s a concrete example: email newsletters.

Do you have numbers on how many people opened a particular newsletter? Do you have numbers on how many people clicked a particular link?

You can call it data, or stats, or analytics, but make no mistake, that’s tracking.

Follow-on question: do you honestly think that everyone who opens a newsletter or clicks on a link in a newsletter has given their informed consent to be tracked by you?

You may well answer that this is a widespread—nay, universal—practice. Well yes, but a) that’s not what I asked, and b) see the above quote from Design For Safety.

You could quite correctly point out that this tracking is out of your hands. Your newsletter provider—probably Mailchimp—does this by default. So if the tracking is happening anyway, why not take a look at those numbers?

But that’s like saying it’s okay to eat battery-farmed chicken as long as you’re not breeding the chickens yourself.

When I try to argue against this kind of tracking from an ethical standpoint, I get a frosty reception. I might have better luck battling numbers with numbers. Increasing numbers of users are taking steps to prevent tracking. I had a plug-in installed in my mail client—Apple Mail—to prevent tracking. Now I don’t even need the plug-in. Apple have built it into the app. That should tell you something. It reminds me of when browsers had to introduce pop-up blocking.

Besides, as more and more people block tracking, the outputs it generates become less and less accurate. And if the outputs generated by tracking turn out to be inaccurate, shouldn’t they lose their status?

But that line of reasoning shouldn’t even be necessary. We shouldn’t stop tracking users because it’s inaccurate. We should stop tracking users because it’s wrong.

Monday, November 15th, 2021

4 + 3

I work a four-day week now.

It started with the first lockdown. Actually, for a while there, I was working just two days a week while we took a “wait and see” attitude at Clearleft to see how The Situation was going to affect work. We weathered that storm, but rather than going back to a full five-day week I decided to try switching to four days instead.

This meant taking a pay cut. Time is literally money when it comes to work. But I decided it was worth it. That’s a privileged position to be in, I know. I managed to pay off the mortgage on our home last year so that reduced some financial pressure. But I also turned fifty, which made me think that I should really be padding some kind of theoretical nest egg. Still, the opportunity to reduce working hours looked good to me.

The ideal situation would be to have everyone switch to a four-day week without any reduction in pay. Some companies have done that but it wasn’t an option for Clearleft, alas.

I’m not the only one working a four-day week at Clearleft. A few people were doing it even before The Situation. We all take Friday as our non-work day, which makes for a nice long three-day weekend.

What’s really nice is that Friday has been declared a “no meeting” day for everyone at Clearleft. That means that those of us working a four-day week know we’re not missing out on anything and it’s pretty nice for people working a five-day week to have a day free of appointments. We have our end-of-week all-hands wind-down on Thursday afternoons.

I haven’t experienced any reduction in productivity. Quite the opposite. There may be a corollary to Parkinson’s Law: work contracts to fill the time available.

At one time, a six-day work week was the norm. It may well be that a four-day work week becomes the default over time. That could dovetail nicely with increasing automation.

I’ve got to say, I’m really, really liking this. It’s quite nice that when Wednesday rolls around, I can say “it’s almost the weekend!”

A three-day weekend feels normal to me now. I could imagine tilting the balance even more over time. Maybe every few years I could reduce the working week by a day or half a day. So instead of going from a full-on five-day working week straight into retirement, it would be more of a gentle ratcheting down over the years.

Monday, November 8th, 2021

Inertia

When I’ve spoken in the past about evaluating technology, I’ve mentioned two categories of tools for web development. I still don’t know quite what to call these categories. Internal and external? Developer-facing and user-facing?

The first category covers things like build tools, version control, transpilers, pre-processors, and linters. These are tools that live on your machine—or on the server—taking what you’ve written and transforming it into the raw materials of the web: HTML, CSS, and JavaScript.

The second category of tools are those that are made of the raw materials of the web: CSS frameworks and JavaScript libraries.

I think the criteria for evaluating these different kinds of tools should be very different.

For the first category, developer-facing tools, use whatever you want. Use whatever makes sense to you and your team. Use whatever’s effective for you.

But for the second category, user-facing tools, that attitude is harmful. If you make users download a CSS or JavaScript framework in order to benefit your workflow, then you’re making users pay a tax for your developer convenience. Instead, I firmly believe that user-facing tools should provide some direct benefit to end users.

When I’ve asked developers in the past why they’ve chosen to use a particular JavaScript framework, they’ve been able to give me plenty of good answers. But all of those answers involved the benefit to their developer workflow—efficiency, consistency, and so on. That would be absolutely fine if we were talking about the first category of tools, developer-facing tools. But those answers don’t hold up for the second category of tools, user-facing tools.

If a user-facing tool is only providing a developer benefit, is there any way to turn it into a developer-facing tool?

That’s very much the philosophy of Svelte. You can compare Svelte to other JavaScript frameworks like React and Vue but you’d be missing the most important aspect of Svelte: it is, by design, in that first category of tools—developer-facing tools:

Svelte takes a different approach from other frontend frameworks by doing as much as it can at the build step—when the code is initially compiled—rather than running client-side. In fact, if you want to get technical, Svelte isn’t really a JavaScript framework at all, as much as it is a compiler.

You install it on your machine, you write your code in Svelte, but what it spits out at the other end is HTML, CSS, and JavaScript. Unlike Vue or React, you don’t ship the library to end users.

In my opinion, this is an excellent design decision.

I know there are ways of getting React to behave more like a category one tool, but it is most definitely not the default behaviour. And default behaviour really, really matters. For React, the default behaviour is to assume all the code you write—and the tool you use to write it—will be sent over the wire to end users. For Svelte, the default behaviour is the exact opposite.

I’m sure you can find a way to get Svelte to send too much JavaScript to end users, but you’d be fighting against the grain of the tool. With React, you have to fight against the grain of the tool in order to not send too much JavaScript to end users.

But much as I love Svelte’s approach, I think it’s got its work cut out for it. It faces a formidable foe: inertia.

If you’re starting a greenfield project and you’re choosing a JavaScript framework, then Svelte is very appealing indeed. But how often do you get to start a greenfield project?

React has become so ubiquitous in the front-end development community that it’s often an unquestioned default choice for every project. It feels like enterprise software at this point. No one ever got fired for choosing React. Whether it’s appropriate or not becomes almost irrelevant. In much the same way that everyone is on Facebook because everyone is on Facebook, everyone uses React because everyone uses React.

That’s one of its biggest selling points to managers. If you’ve settled on React as your framework of choice, then hiring gets a lot easier: “If you want to work here, you need to know React.”

The same logic applies from the other side. If you’re starting out in web development, and you see that so many companies have settled on using React as their framework of choice, then it’s an absolute no-brainer: “if I want to work anywhere, I need to know React.”

This then creates a positive feedback loop. Everyone knows React because everyone is hiring React developers because everyone knows React because everyone is hiring React developers because…

At no point is there time to stop and consider if there’s a tool—like Svelte, for example—that would be less harmful for end users.

This is where I think Astro might have the edge over Svelte.

Astro has the same philosophy as Svelte. It’s a developer-facing tool by default. Have a listen to Drew’s interview with Matthew Phillips:

Astro does not add any JavaScript by default. You can add your own script tags obviously and you can do anything you can do in HTML, but by default, unlike a lot of the other component-based frameworks, we don’t actually add any JavaScript for you unless you specifically tell us to. And I think that’s one thing that we really got right early.

But crucially, unlike Svelte, Astro allows you to use the same syntax as the incumbent, React. So if you’ve learned React—because that’s what you needed to learn to get a job—you don’t have to learn a new syntax in order to use Astro.

I know you probably can’t take an existing React site and convert it to Astro with the flip of a switch, but at least there’s a clear upgrade path.

Astro reminds me of Sass. Specifically, it reminds me of the .scss syntax. You could take any CSS file, rename its file extension from .css to .scss and it was automatically a valid Sass file. You could start using Sass features incrementally. You didn’t have to rewrite all your style sheets.

Sass also has a .sass syntax. If you take a CSS file and rename it with a .sass file extension, it is not going to work. You need to rewrite all your CSS to use the .sass syntax. Some people used the .sass syntax, but the overwhelming majority of people used .scss.

I remember talking with Hampton about this and he confirmed the proportions. It was also the reason why one of his creations, Sass, was so popular, but another of his creations, Haml, was not, comparatively speaking—Sass is a superset of CSS but Haml is not a superset of HTML; it’s a completely different syntax.

I’m not saying that Svelte is like Haml and Astro is like Sass. But I do think that Astro has inertia on its side.

Ten episodes of the Web History podcast

For over a year now I’ve been recording the audio versions of Jay Hoffman’s excellent Web History series on CSS Tricks.

We’re up to ten chapters now. The audio version of each chapter is between 30 and 40 minutes long. That’s around 400 minutes of my voice: a good six and a half hours of me narrating the history of the web. That’s like an audio book!

The story so far covers:

  1. Birth
  2. Browsers
  3. The Website
  4. Search
  5. Publishing
  6. Web Design
  7. Standards
  8. CSS
  9. Community
  10. Browser Wars

…with more to come.

The audio is available as a podcast. You can subscribe to the RSS feed. It’s also available from Apple and Spotify.

That’ll give you plenty to listen to while you wait for the next season of the Clearleft podcast.

Friday, November 5th, 2021

Memories of Ajax

I just finished watching The Billion Dollar Code, a German-language miniseries on Netflix. It’s no Halt and Catch Fire, but it combines ’90s nostalgia, early web tech, and an opportunity for me to exercise my German comprehension.

It’s based on a true story, but with amalgamated characters. The plot, which centres around the preparation for a court case, inevitably invites comparison to The Social Network, although this time the viewpoint is from that of the underdogs trying to take on the incumbent. The incumbent is Google. The underdogs are ART+COM, artist-hackers who created the technology later used by Google Earth.

Early on, one of the characters says something about creating a one-to-one model of the whole world. That phrase struck me as familiar…

I remember being at the inaugural Future Of Web Apps conference in London back in 2006. Discussing the talks with friends afterwards, we all got a kick out of the speaker from Google, who happened to be German. His content and delivery were like those of a wonderfully Strangelovesque mad scientist. I’m sure I remember him saying something like “vee made a vun-to-vun model of the vurld.”

His name was Steffen Meschkat. I liveblogged the talk at the time. Turns out he was indeed on the team at ART+COM when they created Terravision, the technology later appropriated by Google (he ended up working at Google, which doesn’t make for as exciting a story as the TV show).

His 2006 talk was all about Ajax, something he was very familiar with, being on the Google Maps team. The Internet Archive even has a copy of the original audio!

It’s easy to forget now just how much hype there was around Ajax back then. It prompted me to write a book about combining Ajax and progressive enhancement.

These days, no one talks about Ajax. But that’s not because the technology went away. Quite the opposite. The technology became so ubiquitous that it no longer even needs a label.

A web developer today might ask “what’s Ajax?” in the same way that a fish might ask “what’s water?”

Thursday, November 4th, 2021

Writing on web.dev

Chrome Dev Summit kicked off yesterday. The opening keynote had its usual share of announcements.

There was quite a bit of talk about privacy, which sounds good in theory, but then we were told that Google would be partnering with “industry stakeholders.” That’s probably code for the kind of ad-tech sharks that have been making a concerted effort to infest W3C groups. Beware.

But once Una was on-screen, the topics shifted to the kind of design and development updates that don’t have sinister overtones.

My favourite moment was when Una said:

We’re also partnering with Jeremy Keith of Clearleft to launch Learn Responsive Design on web.dev. This is a free online course with everything you need to know about designing for the new responsive web of today.

This is what’s been keeping me busy for the past few months (and for the next month or so too). I’ve been writing fifteen pieces—or “modules”—on modern responsive web design. One third of them are available now at web.dev/learn/design:

  1. Introduction
  2. Media queries
  3. Internationalization
  4. Macro layouts
  5. Micro layouts

The rest are on their way: typography, responsive images, theming, UI patterns, and more.

I’ve been enjoying this process. It’s hard work that requires me to dive deep into the nitty-gritty details of lots of different techniques and technologies, but that can be quite rewarding. As is often said, if you truly want to understand something, teach it.

Oh, and I made one more appearance at the Chrome Dev Summit. During the “Ask Me Anything” section, quizmaster Una asked the panelists a question from me:

Given the court proceedings against AMP, why should anyone trust FLoC or any other Google initiatives ostensibly focused on privacy?

(Thanks to Jake for helping craft the question into a form that could make it past the legal department but still retain its spiciness.)

The question got a response. I wouldn’t say it got an answer. My verdict remains:

I’m not sure that Google Chrome can be considered a user agent.

The fundamental issue is that you’ve got a single company that’s the market leader in web search, the market leader in web advertising, and the market leader in web browsers. I honestly believe all three would function better—and more honestly—if they were separate entities.

Monopolies aren’t just damaging for customers. They’re damaging for the monopoly too. I’d love to see Google Chrome compete on being a great web browser without having to also balance the needs of surveillance-based advertising.

Wednesday, November 3rd, 2021

Publishing The State Of The Web

Back in April I gave a talk at An Event Apart Spring Summit. The talk was called The State Of The Web, and I’ve published the transcript. I’ve also published the video.

I put a lot of work into this talk and I think it paid off. I wrote about preparing the talk. I also wrote about recording it. I also published links related to the talk. It was an intense process, but a rewarding one.

I almost called the talk The Overview Effect. My main goal with the talk was to instil a sense of perspective. Hence the references to the famous Earthrise photograph.

On the one hand, I wanted the audience to grasp just how far the web has come. So the first half of the talk is a bit of a trip down memory lane, with a constant return to just how much we can accomplish in browsers today. It’s all very positive and upbeat.

Then I twist the knife. Having shown just how far we’ve progressed technically, I switch gears the moment I say:

The biggest challenges facing the World Wide Web today are not technical challenges.

Then I dive into those challenges, as I see them. It turns out that technical challenges would be preferable to the seemingly intractable problems of today’s web.

I almost wish I could’ve flipped the order: talk about the negative stuff first but then finish with the positive. I worry that the talk could be a bit of a downer. Still, I tried to finish on an optimistic note.

I won’t spoil it any more for you. Watch the video or have a read of The State Of The Web.