Journal tags: work

Re-evaluating technology

There’s a lot of emphasis put on decision-making: making sure you’re making the right decision; evaluating all the right factors before making a decision. But we rarely talk about revisiting decisions.

I think perhaps there’s a human tendency to treat past decisions as fixed. That’s certainly true when it comes to evaluating technology.

I’ve been guilty of this. I remember once chatting with Mark about something written in PHP—probably something I had written—and I made some remark to the effect of “I know PHP isn’t a great language…” Mark rightly called me on that. The language wasn’t great in the past but it has come on in leaps and bounds. My perception of the language, however, had not updated accordingly.

I try to keep that lesson in mind whenever I’m thinking about languages, tools and frameworks that I’ve investigated in the past but haven’t revisited in a while.

Andy talks about this as the tech tool carousel:

The carousel is like one of those on a game show that shows the prizes that can be won. The tool will sit on there until I think it’s gone through enough maturing to actually be a viable tool for me, the team I’m working with and the clients I’m working for.

Crucially a carousel is circular: tools and technologies come back around for re-evaluation. It’s all too easy to treat technologies as being on a one-way conveyor belt—once they’ve passed in front of your eyes and you’ve weighed them up, that’s it; you never return to re-evaluate your decision.

This doesn’t need to be a never-ending process. At some point it becomes clear that some technologies really aren’t worth returning to:

It’s a really useful strategy because some tools stay on the carousel and then I take them off because they did in fact, turn out to be useless after all.

See, for example, anything related to cryptobollocks. It’s been well over a decade and blockchains remain a solution in search of problems. As Molly White put it, it’s not still the early days:

How long can it possibly be “early days”? How long do we need to wait before someone comes up with an actual application of blockchain technologies that isn’t a transparent attempt to retroactively justify a technology that is inefficient in every sense of the word? How much pollution must we justify pumping into our atmosphere while we wait to get out of the “early days” of proof-of-work blockchains?

Back to the web (the actual un-numbered World Wide Web)…

Nolan Lawson wrote an insightful article recently about how he senses that the balance has shifted away from single page apps. I’ve been sensing the same shift in the zeitgeist. That said, both Nolan and I keep an eye on how browsers are evolving and getting better all the time. If you weren’t aware of changes over the past few years, it would be easy to still think that single page apps offer some unique advantages that in fact no longer hold true. As Nolan wrote in a follow-up post:

My main point was: if the only reason you’re using an SPA is because “it makes navigations faster,” then maybe it’s time to re-evaluate that.

For another example, see this recent XKCD cartoon:

“You look around one day and realize the things you assumed were immutable constants of the universe have changed. The foundations of our reality are shifting beneath our feet. We live in a house built on sand.”

The day I discovered that Apple Maps is kind of good now

Perhaps the best example of a technology that warrants regular re-evaluation is the World Wide Web itself. Over the course of its existence it has seemingly been bettered by other, more proprietary technologies.

Flash was better than the web. It had vector graphics, smooth animations, and streaming video when the web had nothing like it. But over time, the web caught up. Flash was the hare. The World Wide Web was the tortoise.

In more recent memory, the role of the hare has been played by native apps.

I remember talking to someone on the Twitter design team who was designing and building for multiple platforms. They were frustrated by the web. It just didn’t feel as fully-featured as iOS or Android. Their frustration was entirely justified …at the time. I wonder if they’ve revisited their judgement since then though.

In recent years in particular it feels like the web has come on in leaps and bounds: service workers, native JavaScript APIs, and an astonishing boost in what you can do with CSS. Most important of all, the interoperability between browsers is getting better and better. Universal support for new web standards arrives at a faster rate than ever before.

But developers remain suspicious, still preferring to trust third-party libraries over native browser features. They made a decision about those libraries in the past. They evaluated the state of browser support in the past. I wish they would re-evaluate those decisions.

Alas, inertia is a very powerful force. Sticking with a past decision—even if it’s no longer the best choice—is easier than putting in the effort to re-evaluate everything again.

What’s the phrase? “Strong opinions, weakly held.” We’re very good at the first part and pretty bad at the second.

Just the other day I was chatting with one of my colleagues about an online service that’s available on the web and also as a native app. He was showing me the native app on his phone and said it’s not a great app.

“Why don’t you add the website to your phone?” I asked.

“You know,” he said. “The website’s going to be slow.”

He hadn’t tested this. But years of dealing with crappy websites on his phone in the past had trained him to think of the web as being inherently worse than native apps (even though there was nothing this particular service was doing that required any native functionality).

It has become a truism now. Native apps are better than the web.

And you know what? Once upon a time, that would’ve been true. But it hasn’t been true for quite some time …at least from a technical perspective.

But even if the technologies in browsers have reached parity with native apps, that won’t matter unless we can convince people to revisit their previously-formed beliefs.

The technologies are the easy bit. Getting people to re-evaluate their opinions about technologies? That’s the hard part.

The complete line-up for UX London

The line-up for UX London is now complete!

Two thematically-linked talks have been added to day one. Emma Parnell will be talking about the work she did with NHS Digital on the booking service for Covid-19 vaccinations. Videha Sharma—an NHS surgeon!—will be talking about co-designing and prototyping in healthcare.

There’s a bunch of new additions to day three. Amir Ansari will be talking about design systems in an enterprise setting and there’ll be two different workshops on design systems from John Bevan and Julia Belling.

But don’t worry; if design systems aren’t your jam, you’ve got options. Also on day three, Alastair Somerville will be getting tactile in his workshop on sensory UX. And Trenton Moss will be sharing his mind-control tricks in his workshop, “How to sell in your work to anyone.”

You can peruse the full schedule at your leisure. But don’t wait too long before getting your tickets. Standard pricing ends in ten days on Friday, June 3rd.

And don’t forget, you get quite a discount when you buy five or more tickets at a time so bring the whole team. UX London should be your off-site.

UX London should be your off-site

Check out the line-up for this year’s UX London. I know I’m biased, but damn! That’s objectively an excellent roster of smart, interesting people.

When I was first putting that page together I had the name of each speaker followed by their job title and company. But when I stopped and thought about it—not to be too blunt—I realised “who cares?”. What matters is what they’ll be talking about.

And, wow, what they’ll be talking about sounds great! Designing for your international audiences, designing with the autistic community, how to win stakeholders and influence processes, the importance of clear writing in product development, designing good services, design systems for humans, and more. Not to mention workshops like designing your own research methods for a very diverse audience, writing for people who hate writing, and harnessing design systems.

You can peruse the schedule—which is almost complete now—to get a feel for how each day will flow.

But I’m not just excited about this year’s UX London because of the great talks and workshops. I’m also really, really excited at the prospect of gathering together—in person!—over the course of three days with my peers. That means meeting new and interesting people, but frankly, it’s going to be just as wonderful to hang out with my co-workers.

Clearleft has been a remote-only company for the past two years. We’ve still got our studio and people can go there if they like (but no pressure). It’s all gone better than I thought it would given how much of an in-person culture we had before the pandemic hit. But it does mean that it’s rare for us all to be together in the same place (if you don’t count Zoom as a place).

UX London is going to be like our off-site. Everyone from Clearleft is going to be there, regardless of whether “UX” or “design” appears in their job title. I know that the talks will resonate regardless. When I was putting the line-up together I made sure that all the talks would have general appeal, regardless of whether you were a researcher, a content designer, a product designer, a product manager, or anything else.

I’m guessing that the last two years have been, shall we say, interesting at your workplace too. And even if you’ve also been adapting well to remote work, I think you’ll agree that the value of having off-site gatherings has increased tenfold.

So do what we’re doing. Make UX London your off-site gathering. It’ll be a terrific three-day gathering in the sunshine in London from Tuesday, June 28th to Thursday, June 30th at the bright and airy Tobacco Dock.

If you need to convince your boss, I’ve supplied a list of reasons to attend. But you should get your tickets soon—standard pricing ends in just over two weeks on Friday, June 3rd. After that there’ll only be last-chance tickets available.

Even more UX London speaker updates

I’ve added five more faces to the UX London line-up.

Irina Rusakova will be giving a talk on day one, the day that focuses on research. Her talk on designing with the autistic community is one I’m really looking forward to.

Also on day one, my friend and former Clearleftie Cennydd Bowles will be giving a workshop called “What could go wrong?” He literally wrote the book on ethical design.

Day two is all about creation. My co-worker Chris How will be speaking. “Nepotism!” you cry! But no, Chris is speaking because I had the chance to see his talk—called “Unexpectedly obvious”—and I thought “that’s perfect for UX London!”:

Let him take you on a journey through time and across the globe sharing stories of designs that solve problems in elegant if unusual ways.

Also on day two, you’ve got two additional workshops. Lou Downe will be running a workshop on designing good services, and Giles Turnbull will be running a workshop called “Writing for people who hate writing.”

I love that title! Usually when I contact speakers I don’t necessarily have a specific talk or workshop in mind, but I knew that I wanted that particular workshop from Giles.

When I wrote to Giles to ask him to come and speak, I began by telling him how much I enjoy his blog—I’m a long-time subscriber to his RSS feed. He responded and said that he also reads my blog—we’re blog buddies! (That’s a terrible term but there should be a word for people who “know” each other only through reading each other’s websites.)

Anyway, that’s another little treasure trove of speakers added to the UX London roster.

That’s nineteen speakers already and we’re not done yet—expect further speaker announcements soon. But don’t wait on those announcements before getting your ticket. Get yours now!

Suspicion

I’ve already had some thoughtful responses to yesterday’s post about trust. I wrapped up my thoughts with a request:

I would love it if someone could explain why they avoid native browser features but use third-party code.

Chris obliged:

I can’t speak for the industry, but I have a guess. Third-party code (like the referenced Bootstrap and React) have a history of smoothing over significant cross-browser issues and providing better-than-browser ergonomic APIs. jQuery was created to smooth over cross-browser JavaScript problems. That’s trust.

Very true! jQuery is the canonical example of a library smoothing over the bumpy landscape of browser compatibilities. But jQuery is also the canonical example of a library we no longer need because the browsers have caught up …and those browsers support standards directly influenced by jQuery. That’s a library success story!
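
To give a simplified example of my own, here’s a pattern that once needed jQuery, alongside its standards-based equivalent that now works in every modern browser:

    // Then: jQuery smoothing over inconsistent DOM APIs.
    $('.nav a').addClass('active');

    // Now: built-in APIs that were directly influenced by jQuery.
    document.querySelectorAll('.nav a').forEach(
      (link) => link.classList.add('active')
    );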

Charles Harries takes on my question in his post Libraries over browser features:

I think this perspective of trust has been hammered into developers over the past maybe like 5 years of JavaScript development based almost exclusively on inequality of browser feature support. Things are looking good in 2022; but as recently as 2019, 4 of the 5 top web developer needs had to do with browser compatibility.

Browser compatibility is one of the underlying promises that libraries—especially the big ones that Jeremy references, like React and Bootstrap—make to developers.

So again, it’s browser incompatibilities that made libraries attractive.

Jim Nielsen responds with the same message in his post Trusting Browsers:

We distrust the browser because we’ve been trained to. Years of fighting browser deficiencies where libraries filled the gaps. Browser enemy; library friend.

For example: jQuery did wonders to normalize working across browsers. Write code once, run it in any browser — confidently.

Three for three. My question has been answered: people gravitated towards libraries because browsers had inconsistent implementations.

I’m deliberately using the past tense there. I think Jim is onto something when he says that we’ve been trained not to trust browsers to have parity when it comes to supporting standards. But that has changed.

Charles again:

This approach isn’t a sustainable practice, and I’m trying to do as little of it as I can. Jeremy is right to be suspicious of third-party code. Cross-browser compatibility has gotten a lot better, and campaigns like Interop 2022 are doing a lot to reduce the burden. It’s getting better, but the exasperated I-just-want-it-to-work mindset is tough to uninstall.

I agree. Inertia is a powerful force. No matter how good cross-browser compatibility gets, it’s going to take a long time for developers to shed their suspicion.

Jim is a glass-half-full kind of guy:

I’m optimistic that trust in browser-native features and APIs is being restored.

He also points to a very sensible mindset when it comes to third-party libraries and frameworks:

In this sense, third-party code and abstractions can be wonderful polyfills for the web platform. The idea being that the default posture should be: leverage as much of the web platform as possible, then where there are gaps to creating great user experiences, fill them in with exploratory library or framework features (features which, conceivably, could one day become native in browsers).

Yes! A kind of progressive enhancement approach to using third-party code makes a lot of sense. I’ve always maintained that you should treat libraries and frameworks like cattle, not pets. Don’t get too attached. If the library is solving a genuine need, it will be replaced by stable web standards in browsers (again, see jQuery).

I think that third-party libraries and frameworks work best as polyfills. But the whole point of polyfills is that you only use them when the browsers don’t supply features natively (and you also go back and remove the polyfill later when browsers do support the feature). But that’s not how people are using libraries and frameworks today. Developers are reaching for them by default instead of treating them as a last resort.
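
Here’s a sketch of what that last-resort approach looks like in practice (the polyfill URL is made up for illustration):

    // Feature detection first: only load third-party code if the
    // browser doesn't supply the feature natively.
    if (!('IntersectionObserver' in window)) {
      const script = document.createElement('script');
      script.src = '/js/intersection-observer-polyfill.js'; // hypothetical path
      document.head.append(script);
    }

In real code you’d wait for the polyfill to load before relying on the feature, but the shape of the logic is the point: detection first, third-party code only as a fallback.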

I like Jim’s proposed design principle:

Where available, default to browser-native features over third party code, abstractions, or idioms.

(P.S. It’s kind of lovely to see this kind of thoughtful blog-to-blog conversation happening. Right at a time when Twitter is about to go down the tubes, this is a demonstration of an actual public square with more nuanced discussion. Make your own website and join the conversation!)

Trust

I’ve noticed a strange mindset amongst front-end/full-stack developers. At least it seems strange to me. But maybe I’m the one with the strange mindset and everyone else knows something I don’t.

It’s to do with trust and suspicion.

I’ve made no secret of the fact that I’m suspicious of third-party code and dependencies in general. Every dependency you add to a project is one more potential single point of failure. You have to trust that the strangers who wrote that code knew what they were doing. I’m still somewhat flabbergasted that developers regularly add dependencies—via npm or yarn or whatever—that then pull in even more dependencies, all while assuming good faith and competence on the part of every person involved.

It’s a touching expression of faith in your fellow humans, but I’m not keen on the idea of faith-based development.

I’m much more trusting of native browser features—HTML elements, CSS features, and JavaScript APIs. They’re not always perfect, but a lot of thought goes into their development. By the time they land in browsers, a whole lot of smart people have kicked the tyres and considered many different angles. As a bonus, I don’t need to install them. Even better, end users don’t need to install them.

And yet, the mindset I’ve noticed is that many developers are suspicious of browser features but trusting of third-party libraries.

When I write and talk about using service workers, I often come across scepticism from developers about writing the service worker code. “Is there a library I can use?” they ask. “Well, yes” I reply, “but then you’ve got to understand the library, and the time it takes you to do that could be spent understanding the native code.” So even though a library might not offer any new functionality—just a different idiom—many developers are more likely to trust the third-party library than they are to trust the underlying code that the third-party library is abstracting!
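
For what it’s worth, the underlying code really isn’t much heavier than a library. Registering a service worker, for example, is just a few lines (the script name here is illustrative):

    // No library required: register a service worker script.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/serviceworker.js')
        .then((registration) => {
          console.log('Service worker registered:', registration.scope);
        })
        .catch((error) => {
          console.error('Service worker registration failed:', error);
        });
    }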

Developers are more likely to trust, say, Bootstrap than they are to trust CSS grid or custom properties. Developers are more likely to trust React than they are to trust web components.

On the one hand, I get it. Bootstrap and React are very popular. That popularity speaks volumes. If lots of people use a technology, it must be a safe bet, right?

But if we’re talking about popularity, every single browser today ships with support for features like grid, custom properties, service workers and web components. No third-party framework can even come close to that install base.

And the fact that these technologies have shipped in stable browsers means they’re vetted. They’ve been through a rigorous testing phase. They’ve effectively got a seal of approval from each individual browser maker. To me, that seems like a much bigger signal of trustworthiness than the popularity of a third-party library or framework.

So I’m kind of confused by this prevalent mindset of trusting third-party code more than built-in browser features.

Is it because of the job market? When recruiters are looking for developers, their laundry list is usually third-party technologies: React, Vue, Bootstrap, etc. It’s rare to find a job ad that lists native browser technologies: flexbox, grid, service workers, web components.

I would love it if someone could explain why they avoid native browser features but use third-party code.

Until then, I shall remain perplexed.

More UX London speaker updates

It wasn’t that long ago that I told you about some of the speakers that have been added to the line-up for UX London in June: Steph Troeth, Heldiney Pereira, Lauren Pope, Laura Yarrow, and Inayaili León. Well, now I’ve got another five speakers to tell you about!

Aleks Melnikova will be giving a workshop on day one, June 28th—that’s the day with a focus on research.

Stephanie Marsh—who literally wrote the book on user research—will also be giving a workshop that day.

Before those workshops though, you’ll get to hear a talk from the one and only Kat Zhou, the creator of Design Ethically. By the way, you can hear Kat talking about deceptive design in a BBC radio documentary.

Day two has a focus on content design so who better to deliver a workshop than Sarah Winters, author of the Content Design book.

Finally, on day three—with its focus on design systems—I’m thrilled to announce that Adekunle Oduye will be giving a talk. He too is an author. He co-wrote the Design Engineering Handbook. I also had the pleasure of talking to Adekunle for an episode of the Clearleft podcast on design engineering.

So that’s another five excellent speakers added to the line-up.

That’s a total of fifteen speakers so far with more on the way. And I’ll be updating the site with more in-depth descriptions of the talks and workshops soon.

If you haven’t yet got your ticket for UX London, grab one now. You can buy tickets for individual days, or to get the full experience and the most value, get a ticket for all three days.

UX London speaker updates

If you’ve signed up to the UX London newsletter then this won’t be news to you, but more speakers have been added to the line-up.

Steph Troeth will be giving a workshop on day one. That’s the day with a strong focus on research, and when it comes to design research, Steph is unbeatable. You can hear some of her words of wisdom in an episode of the Clearleft podcast all about design research.

Heldiney Pereira will be speaking on day two. That’s the day with a focus on content design. Heldiney previously spoke at our Content By Design event and it was terrific—his perspective on content design as a product designer is invaluable.

Lauren Pope will also be on day two. She’ll be giving a workshop. She recently launched a really useful content audit toolkit and she’ll be bringing that expertise to her UX London workshop.

Day three is going to have a focus on design systems (and associated disciplines like design engineering and design ops). Both Laura Yarrow and Inayaili León will be giving talks on that day. You can expect some exciting war stories from the design system trenches of HM Land Registry and GitHub.

I’ve got some more speakers confirmed but I’m going to be a tease and make you wait a little longer for those names. But check out the line-up so far! This is going to be such an excellent event (I know I’m biased, but really, look at that line-up!).

June 28th to 30th. Tobacco Dock, London. Get your ticket if you haven’t already.

Both plagues on your one house

February is a tough month at the best of times. A February during The Situation is particularly grim.

At least in December you get Christmas, whose vibes can even carry you through most of January. But by the time February rolls around, it’s all grim winteriness with no respite in sight.

In the middle of February, Jessica caught the ’rona. On the bright side, this wasn’t the worst timing: if this had happened in December, our Christmas travel plans to visit family would’ve been ruined. On the not-so-bright side, catching a novel coronavirus is no fun.

Still, the vaccines did their job. Jessica felt pretty crap for a couple of days but was on the road to recovery before too long.

Amazingly, I did not catch the ’rona. We slept in separate rooms, but still, we were spending most of our days together in the same small flat. Given the virulence of The Omicron Variant, I’m counting my blessings.

But just in case I got any ideas about having some kind of superhuman immune system, right after Jessica had COVID-19, I proceeded to get gastroenteritis. I’ll spare you the details, but let me just say it was not pretty.

Amazingly, Jessica did not catch it. I guess two years of practicing intense hand-washing pays off when a stomach bug comes a-calling.

So all in all, not a great February, even by February’s already low standards.

The one bright spot that I get to enjoy every February is my birthday, just as the month is finishing up. Last year I spent my birthday—the big five oh—in lockdown. But two years ago, right before the world shut down, I had a lovely birthday weekend in Galway. This year, as The Situation began to unwind and de-escalate, I thought it would be good to reprise that birthday trip.

We went to Galway. We ate wonderful food at Aniar. We listened to some great trad music. We drank some pints. It was good.

But it was hard to enjoy the trip knowing what was happening elsewhere in Europe. I’d blame February for being a bastard again, but in this case the bastard is clearly Vladimir Putin. Fucker.

Just as it’s hard to switch off for a birthday break, it’s equally challenging to go back to work and continue as usual. It feels very strange to be spending the days working on stuff that clearly, in the grand scheme of things, is utterly trivial.

I take some consolation in the fact that everyone else feels this way too, and everyone is united in solidarity with Ukraine. (There are some people in my social media timelines who also feel the need to point out that other countries have been invaded and bombed too. I know it’s not their intention but there’s a strong “all lives matter” vibe to that kind of whataboutism. Hush.)

Anyway. February’s gone. It’s March. Things still feel very grim indeed. But perhaps, just perhaps, there’s a hint of Spring in the air. Winter will not last forever.

2021

2020 was the year of the virus. 2021 was the year of the vaccine …and the virus, obviously, but still it felt like the year we fought back. With science!

Whenever someone writes and shares one of these year-end retrospectives the result is, by its very nature, personal. These last two years are different though. We all still have our own personal perspectives, but we also all share a collective experience of The Ongoing Situation.

Like, I can point to three pivotal events in 2021 and I bet you could point to the same three for you:

  1. getting my first vaccine shot in March,
  2. getting my second vaccine shot in June, and
  3. getting my booster shot in December.

So while on the one hand we’re entering 2022 in a depressingly similar way to how we entered 2021 with COVID-19 running rampant, on the other hand, the odds have changed. We can calculate risk. We’ve got more information. And most of all, we’ve got these fantastic vaccines.

I summed up last year in terms of all the things I didn’t do. I could do the same for 2021, but there’s only one important thing that didn’t happen—I didn’t catch a novel coronavirus.

It’s not like I didn’t take some risks. While I was mostly a homebody, I did make excursions to Zürich and Lisbon. One long weekend in London was particularly risky.

At the end of the year, right as The Omicron Variant was ramping up, Jessica and I travelled to Ireland to see my mother, and then travelled to the States to see her family. We managed to dodge the Covid bullets both times, for which I am extremely grateful. My greatest fear throughout The Situation hasn’t been so much about catching Covid myself, but passing it on to others. If I were to give it to a family member or someone more vulnerable than me, I don’t think I could forgive myself.

Now that we’ve seen our families (after a two-year break!), I’m feeling more sanguine about this next stage. I’ll be hunkering down for the next while to ride out this wave, but if I still end up getting infected, at least I won’t have any travel plans to cancel.

But this is meant to be a look back at the year that’s just finished, not a battle plan for 2022.

There were some milestones in 2021:

  1. I turned 50,
  2. TheSession.org turned 20, and
  3. Adactio.com also turned 20.

This means that my websites are 2/5ths of my own age. In ten years’ time, my websites will be 1/2 of my own age.

Most of my work activities were necessarily online, though I did manage the aforementioned trips to Switzerland and Portugal to speak at honest-to-goodness real live in-person events. The major projects were:

  1. Publishing season two of the Clearleft podcast in February,
  2. Speaking at An Event Apart Online in April,
  3. Hosting UX Fest in June,
  4. Publishing season three of the Clearleft podcast in September, and
  5. Writing a course on responsive design in November.

Outside of work, my highlights of 2021 mostly involved getting to play music with other people—something that didn’t happen much in 2020. Band practice with Salter Cane resumed in late 2021, as did some Irish music sessions. Both are now under an Omicron hiatus but this too shall pass.

Another 2021 highlight was a visit by Tantek in the summer. He was willing to undergo quarantine to get to Brighton, which I really appreciate. It was lovely hanging out with him, even if all our social activities were by necessity outdoors.

But, like I said, the main achievement in 2021 was not catching COVID-19, and more importantly, not passing it on to anyone else. Time will tell whether or not that winning streak will be sustainable in 2022. But at least I feel somewhat prepared for it now, thanks to those magnificent vaccines.

2021 was a shitty year for a lot of people. I feel fortunate that for me, it was merely uneventful. If my only complaint is that I didn’t get to travel and speak as much as I’d like, well boo-fucking-hoo, I’ll take it. I’ve got my health. My family members have their health. I don’t take that for granted.

Maybe 2022 will turn out to be similar—shitty for a lot of people, and mostly uneventful for me. Or perhaps 2022 will be a year filled with joyful in-person activities, like conferences and musical gatherings. Either way, I’m ready.

4 + 3

I work a four-day week now.

It started with the first lockdown. Actually, for a while there, I was working just two days a week while we took a “wait and see” attitude at Clearleft to see how The Situation was going to affect work. We weathered that storm, but rather than going back to a full five-day week I decided to try switching to four days instead.

This meant taking a pay cut. Time is literally money when it comes to work. But I decided it was worth it. That’s a privileged position to be in, I know. I managed to pay off the mortgage on our home last year so that reduced some financial pressure. But I also turned fifty, which made me think that I should really be padding some kind of theoretical nest egg. Still, the opportunity to reduce working hours looked good to me.

The ideal situation would be to have everyone switch to a four-day week without any reduction in pay. Some companies have done that but it wasn’t an option for Clearleft, alas.

I’m not the only one working a four-day week at Clearleft. A few people were doing it even before The Situation. We all take Friday as our non-work day, which makes for a nice long three-day weekend.

What’s really nice is that Friday has been declared a “no meeting” day for everyone at Clearleft. That means that those of us working a four-day week know we’re not missing out on anything and it’s pretty nice for people working a five-day week to have a day free of appointments. We have our end-of-week all-hands wind-down on Thursday afternoons.

I haven’t experienced any reduction in productivity. Quite the opposite. There may be a corollary to Parkinson’s Law: work contracts to fill the time available.

At one time, a six-day work week was the norm. It may well be that a four-day work week becomes the default over time. That could dovetail nicely with increasing automation.

I’ve got to say, I’m really, really liking this. It’s quite nice that when Wednesday rolls around, I can say “it’s almost the weekend!”

A three-day weekend feels normal to me now. I could imagine tilting the balance even more over time. Maybe every few years I could reduce the working week by a day or half a day. So instead of going from a full-on five-day working week straight into retirement, it would be more of a gentle ratcheting down over the years.

Inertia

When I’ve spoken in the past about evaluating technology, I’ve mentioned two categories of tools for web development. I still don’t know quite what to call these categories. Internal and external? Developer-facing and user-facing?

The first category covers things like build tools, version control, transpilers, pre-processors, and linters. These are tools that live on your machine—or on the server—taking what you’ve written and transforming it into the raw materials of the web: HTML, CSS, and JavaScript.

The second category of tools are those that are made of the raw materials of the web: CSS frameworks and JavaScript libraries.

I think the criteria for evaluating these different kinds of tools should be very different.

For the first category, developer-facing tools, use whatever you want. Use whatever makes sense to you and your team. Use whatever’s effective for you.

But for the second category, user-facing tools, that attitude is harmful. If you make users download a CSS or JavaScript framework in order to benefit your workflow, then you’re making users pay a tax for your developer convenience. Instead, I firmly believe that user-facing tools should provide some direct benefit to end users.

When I’ve asked developers in the past why they’ve chosen to use a particular JavaScript framework, they’ve been able to give me plenty of good answers. But all of those answers involved the benefit to their developer workflow—efficiency, consistency, and so on. That would be absolutely fine if we were talking about the first category of tools, developer-facing tools. But those answers don’t hold up for the second category of tools, user-facing tools.

If a user-facing tool is only providing a developer benefit, is there any way to turn it into a developer-facing tool?

That’s very much the philosophy of Svelte. You can compare Svelte to other JavaScript frameworks like React and Vue but you’d be missing the most important aspect of Svelte: it is, by design, in that first category of tools—developer-facing tools:

Svelte takes a different approach from other frontend frameworks by doing as much as it can at the build step—when the code is initially compiled—rather than running client-side. In fact, if you want to get technical, Svelte isn’t really a JavaScript framework at all, as much as it is a compiler.

You install it on your machine, you write your code in Svelte, but what it spits out at the other end is HTML, CSS, and JavaScript. Unlike Vue or React, you don’t ship the library to end users.

In my opinion, this is an excellent design decision.

I know there are ways of getting React to behave more like a category one tool, but it is most definitely not the default behaviour. And default behaviour really, really matters. For React, the default behaviour is to assume all the code you write—and the tool you use to write it—will be sent over the wire to end users. For Svelte, the default behaviour is the exact opposite.

I’m sure you can find a way to get Svelte to send too much JavaScript to end users, but you’d be fighting against the grain of the tool. With React, you have to fight against the grain of the tool in order to not send too much JavaScript to end users.

But much as I love Svelte’s approach, I think it’s got its work cut out for it. It faces a formidable foe: inertia.

If you’re starting a greenfield project and you’re choosing a JavaScript framework, then Svelte is very appealing indeed. But how often do you get to start a greenfield project?

React has become so ubiquitous in the front-end development community that it’s often an unquestioned default choice for every project. It feels like enterprise software at this point. No one ever got fired for choosing React. Whether it’s appropriate or not becomes almost irrelevant. In much the same way that everyone is on Facebook because everyone is on Facebook, everyone uses React because everyone uses React.

That’s one of its biggest selling points to managers. If you’ve settled on React as your framework of choice, then hiring gets a lot easier: “If you want to work here, you need to know React.”

The same logic applies from the other side. If you’re starting out in web development, and you see that so many companies have settled on using React as their framework of choice, then it’s an absolute no-brainer: “if I want to work anywhere, I need to know React.”

This then creates a positive feedback loop. Everyone knows React because everyone is hiring React developers because everyone knows React because everyone is hiring React developers because…

At no point is there time to stop and consider if there’s a tool—like Svelte, for example—that would be less harmful for end users.

This is where I think Astro might have the edge over Svelte.

Astro has the same philosophy as Svelte. It’s a developer-facing tool by default. Have a listen to Drew’s interview with Matthew Phillips:

Astro does not add any JavaScript by default. You can add your own script tags obviously and you can do anything you can do in HTML, but by default, unlike a lot of the other component-based frameworks, we don’t actually add any JavaScript for you unless you specifically tell us to. And I think that’s one thing that we really got right early.

But crucially, unlike Svelte, Astro allows you to use the same syntax as the incumbent, React. So if you’ve learned React—because that’s what you needed to learn to get a job—you don’t have to learn a new syntax in order to use Astro.

I know you probably can’t take an existing React site and convert it to Astro with the flip of a switch, but at least there’s a clear upgrade path.

Astro reminds me of Sass. Specifically, it reminds me of the .scss syntax. You could take any CSS file, rename its file extension from .css to .scss and it was automatically a valid Sass file. You could start using Sass features incrementally. You didn’t have to rewrite all your style sheets.

Sass also has a .sass syntax. If you take a CSS file and rename it with a .sass file extension, it is not going to work. You need to rewrite all your CSS to use the .sass syntax. Some people used the .sass syntax but the overwhelming majority of people used .scss.
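
Here’s a quick illustration of my own:

    /* Valid CSS—and therefore automatically valid .scss: */
    .button {
      border: 1px solid currentColor;
    }

    /* Sass features like nesting can then be adopted incrementally: */
    .button {
      &:hover {
        background-color: lightgrey;
      }
    }

Whereas the same nested rule in the indented .sass syntax has no braces or semicolons at all, so existing CSS won’t parse:

    .button
      &:hover
        background-color: lightgrey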

I remember talking with Hampton about this and he confirmed the proportions. It was also the reason why one of his creations, Sass, was so popular, but another of his creations, Haml, was not, comparatively speaking—Sass is a superset of CSS but Haml is not a superset of HTML; it’s a completely different syntax.

I’m not saying that Svelte is like Haml and Astro is like Sass. But I do think that Astro has inertia on its side.

Accessibility testing

I was doing some accessibility work with a client a little while back. It was mostly giving their site the once-over, highlighting any issues that we could then discuss. It was an audit of sorts.

While I was doing this I started to realise that not all accessibility issues are created equal. I don’t just mean in their severity. I mean that some issues can—and should—be caught early on, while other issues can only be found later.

Take colour contrast. This is something that should be checked before a line of code is written. When designs are being sketched out and then refined in a graphical editor like Figma, that’s the time to check the ratio between background and foreground colours to make sure there’s enough contrast between them. You can catch this kind of thing later on, but by then it’s likely to come with a higher cost—you might have to literally go back to the drawing board. It’s better to find the issue when you’re at the drawing board the first time.
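
The calculation itself is easy to automate at the design stage. Here’s a rough sketch of the WCAG contrast-ratio formula, assuming colours as red, green, and blue values from 0 to 255:

    // Relative luminance of one sRGB channel (0–255), per WCAG 2.
    function channelLuminance(value) {
      const c = value / 255;
      return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    // Relative luminance of an [r, g, b] colour.
    function luminance([r, g, b]) {
      return 0.2126 * channelLuminance(r) +
             0.7152 * channelLuminance(g) +
             0.0722 * channelLuminance(b);
    }

    // Contrast ratio: (lighter + 0.05) / (darker + 0.05).
    function contrastRatio(colour1, colour2) {
      const [darker, lighter] = [luminance(colour1), luminance(colour2)]
        .sort((a, b) => a - b);
      return (lighter + 0.05) / (darker + 0.05);
    }

    // WCAG AA requires at least 4.5:1 for normal-sized text.
    contrastRatio([118, 118, 118], [255, 255, 255]); // ≈ 4.54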

Then there’s the HTML. Most accessibility issues here can be caught before the site goes live. Usually they’re issues of omission: form fields that don’t have an explicitly associated label element (using the for and id attributes); images that don’t have alt text; pages that don’t have sensible heading levels or landmark regions like main and nav. None of these are particularly onerous to fix and they come with the biggest bang for your buck. If you’ve got sensible forms, sensible headings, alt text on images, and a solid document structure, you’ve already covered the vast majority of accessibility issues with very little overhead. Some of these checks can also be automated: alt text for images; labels for inputs.
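
As a sketch of what those automated checks might look like, runnable in a browser console:

    // Images missing an alt attribute (an empty alt is fine for
    // purely decorative images, but a missing one never is).
    const imagesMissingAlt = document.querySelectorAll('img:not([alt])');

    // Form fields with no associated label or ARIA labelling.
    const fieldsMissingLabels = [...document.querySelectorAll('input, select, textarea')]
      .filter((field) =>
        field.type !== 'hidden' &&
        !(field.labels && field.labels.length) &&
        !field.hasAttribute('aria-label') &&
        !field.hasAttribute('aria-labelledby')
      );

    console.log({ imagesMissingAlt, fieldsMissingLabels });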

Then there’s interactive stuff. If you only use native HTML elements you’re probably in the clear, but chances are you’ve got some bespoke interactivity on your site: a carousel; a mega dropdown for navigation; a tabbed interface. HTML doesn’t give you any of those out of the box so you’d need to make your own using a combination of HTML, CSS, JavaScript and ARIA. There’s plenty of testing you can do before launching—I always ask myself “What would Heydon do?”—but these components really benefit from being tested by real screen reader users.

So if you commission an accessibility audit, you should hope to get feedback that’s mostly in that third category—interactive widgets.

If you get feedback on document structure and other semantic issues with the HTML, you should fix those issues, sure, but you should also see what you can do to stop those issues going live again in the future. Perhaps you can add some steps in the build process. Or maybe it’s more about making sure the devs are aware of these low-hanging fruit. Or perhaps there’s a framework or content management system that’s stopping you from improving your HTML. Then you need to execute a plan for ditching that software.

If you get feedback about colour contrast issues, just fixing the immediate problem isn’t going to address the underlying issue. There’s a process problem, or perhaps a communication issue. In that case, don’t look for a technical solution. A design system, for example, will not magically fix a workflow issue or route around the problem of designers and developers not talking to each other.

When you commission an accessibility audit, you want to make sure you’re getting the most out of it. Don’t squander it on issues that you can catch and fix yourself. Make sure that the bulk of the audit is being spent on the specific issues that are unique to your site.

Work at Clearleft

A little while back, I wrote about how much I like the job description of a design engineer. I still have issues with the “engineer” part, but overall it’s a great way to describe a front-end developer who works on the front of the front end, the outputs that end users interact with: HTML, CSS, and JavaScript. If it’s delivered in a web browser, then it’s design engineering.

Perhaps you also prefer the front of the front end to the back of the front end. Perhaps you also like to spend your time thinking about resilience, performance, and accessibility rather than build pipelines and frameworks. Perhaps you’d like to work with like-minded people.

Clearleft is hiring a midweight design engineer. Perhaps it’s you.

If you’d like to use your development talents in the service of good design, you should apply. And remember, you’d be working for yourself: Clearleft is an employee-owned agency.

You don’t have to be based in Brighton. You can work remotely, although we’re expecting that a monthly face-to-face gathering will become the norm after The Situation ends. So if you’re based somewhere like London, that would work out nicely. That said, if you’re based somewhere like London, this might also be the ideal opportunity to make a move to the seaside.

You do have to be eligible to work in the UK. Alas, that pool has shrunk somewhat. Thanks, Brexit.

Perhaps you think you’re not qualified. Apply anyway. You’ve got nothing to lose.

Perhaps this role isn’t for you, but you know someone who might fit the bill. Please tell them. Spread the word.

We’d especially love to hear from people under-represented in design and technology.

Come and work with us.

The state of UX

There is much introspection and navel-gazing in the world of user experience design. More than usual, I mean.

Jesse James Garrett recently said:

I don’t think I know anyone that’s been in UX more than a decade who’s happy with how it’s going.

In a recent issue of the dConstruct newsletter—which you really should subscribe to—I pointed to three bowls of porridge left out by three different ursine experience designers.

Mark Hurst wrote Why I’m losing faith in UX. Too hot!

Scott Berkun wrote How To Put Faith in Design. Too cold!

Peter Merholz wrote Waking up from the dream of UX. Just right!

As an aside, does it bother anyone else that the Goldilocks story violates the laws of thermodynamics?

Anyway, this hand-wringing around the role of UX today seemed like a suitably hot topic for one of our regular roundtable chats at Clearleft. We invited Peter along too and he was kind enough to give us his time.

It was a fun discussion. Peter pointed out that whenever he hears an older designer bemoaning the current state of design, he has to wonder what’s happened in their lives to make them feel that way (it’s like when people complain about the music of today and how it’s not as good as the music of whenever they were teenagers). And let’s face it, the good ol’ days weren’t so good for everyone. It was overwhelmingly dominated by privileged white dudes. The more that changes, the better …and it needs to change far, far more.

There was a general agreement that the current gnashing of teeth isn’t unique to UX. It’s something that just about any discipline will inevitably go through. Peter’s epiphany was to compare it with the hand-wringing around Agile:

The frustration exhibited with the “dream of UX” is (I think) identical to the frustration the original Agile community sees with how it has been industrialized (koff-SAFe-koff).

Perhaps the industrialisation of what was once a cottage industry is the price of success. But that’s not necessarily bad, as long as you industrialise the right things. If UX has become the churning out of wireframes at scale, then something has gone very wrong. If UX has become the implementation of dark patterns at scale, then something has gone very wrong.

In some organisations, perhaps that’s exactly what’s happened. In which case, I can totally understand the disillusionment. But in other places, I see the opposite happening. I see UX designers bringing questions of ethics to the forefront. I see UX designers—dare I say it?—having their proverbial seat at the table.

Chris went so far as to claim that we are in fact in a golden age of user experience design. Controversial! But think about it, he said. Over the next few days, pay attention to interactions you have with technology, and consider the thought and skill that has gone into them.

I had Chris’s provocation in mind when I wrote about booking my vaccination appointment:

I just need to get in, accomplish my task, and get out again. This is where the World Wide Web shines.

Maybe Chris is right. Maybe the golden age of UX is here. It’s just not evenly distributed. Yet.

It’s an interesting time for the discipline of user experience design. I’ve always maintained that the best way to get a temperature check for your chosen field is to go to a really good conference. If you’re a UX designer and you want to understand the state of the UX nation, you should get a ticket for the online UX Fest in June. See you there!

Service worker weirdness in Chrome

I think I’ve found some more strange service worker behaviour in Chrome.

It all started when I was checking out the very nice new redesign of WebPageTest. I figured while I was there, I’d run some of my sites through it. I passed in a URL from The Session. When the test finished, I noticed that the “screenshot” tab said that something was being logged to the console. That’s odd! And the file doing the logging was the service worker script.

I fired up Chrome (which isn’t my usual browser), and started navigating around The Session with dev tools open to see what appeared in the console. Sure enough, there was a failed fetch attempt being logged. The only time my service worker script logs anything is in the catch clause of fetching pages from the network. So Chrome was trying to fetch a web page, failing, and logging this error:

The service worker navigation preload request failed with a network error.

But all my pages were loading just fine. So where was the error coming from?

After a lot of spelunking and debugging, I think I’ve figured out what’s happening…

First of all, I’m making use of navigation preloads in my service worker. That’s all fine.
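
For context, switching on navigation preloads happens in the service worker’s activate handler with something like this:

    // Inside the service worker: enable navigation preloads, if supported.
    addEventListener('activate', (event) => {
      event.waitUntil((async () => {
        if (self.registration.navigationPreload) {
          await self.registration.navigationPreload.enable();
        }
      })());
    });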

Secondly, the website is a progressive web app. It has a manifest file that specifies some metadata, including start_url. If someone adds the site to their home screen, this is the URL that will open.

Thirdly, Google recently announced that they’re tightening up the criteria for displaying install prompts for progressive web apps. If there’s no network connection, the site still needs to return a 200 OK response: either a cached copy of the URL or a custom offline page.

So here’s what I think is happening. When I navigate to a page on the site in Chrome, the service worker handles the navigation just fine. It also parses the manifest file I’ve linked to and checks to see if that start URL would load if there were no network connection. And that’s when the error gets logged.

I only noticed this behaviour because I had specified a query string on my start URL in the manifest file. Instead of a start_url value of /, I’ve set a start_url value of /?homescreen. And when the error shows up in the console, the URL being fetched is /?homescreen.
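
So the relevant part of the manifest file looks something like this (trimmed right down; the name and display values are just illustrative):

    {
      "name": "The Session",
      "start_url": "/?homescreen",
      "display": "standalone"
    }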

Crucially, I’m not seeing a warning in the console saying “Site cannot be installed: Page does not work offline.” So I think this is all fine. If I were actually offline, there would indeed be an error logged to the console and that start_url request would respond with my custom offline page. It’s just a bit confusing that the error is being logged when I’m online.

I thought I’d share this just in case anyone else is logging errors to the console in the catch clause of fetches and is seeing an error even when everything appears to be working fine. I think there’s nothing to worry about.

Update: Jake confirmed my diagnosis and agreed that the error is a bit confusing. The good news is that it’s changing. In Chrome Canary the error message has already been updated to:

DOMException: The service worker navigation preload request failed due to a network error. This may have been an actual network error, or caused by the browser simulating offline to see if the page works offline: see https://w3c.github.io/manifest/#installability-signals

Much better!

Remote work on the Clearleft podcast

The sixth episode of season two of the Clearleft podcast is available now. The last episode of the season!

The topic is remote work. The timing is kind of perfect. It was exactly one year ago today that Clearleft went fully remote. Having a podcast episode to mark the anniversary seems fitting.

I didn’t interview anyone specifically for this episode. Instead, whenever I was chatting to someone about some other topic—design systems, prototyping, or whatever—I’d wrap up by asking them to describe their surroundings and how they were adjusting to life at home. After two seasons’ worth of interviews, I had a decent library of responses. So this episode includes voices you last heard from back in season one: Paul, Charlotte, Amy, and Aarron.

Then the episode shifts. I’ve got excerpts from a panel discussion we held a while back on the future of work. These panel discussions used to happen up in London, but this one was, obviously, online. It’s got a terrific line-up: Jean, Holly, Emma, and Lola, all dialing in from different countries and all sharing their stories openly and honestly. (Fun fact: I first met Lola three years ago at the Pixel Up conference in South Africa and on this day in 2018 we were out on Safari together.)

I’m happy with how this episode turned out. It’s a fitting finish to the season. It’s just seventeen and a half minutes long so take a little time out of your day to have a listen.

As always, if you like what you hear, please spread the word.

Done

Remember how I said I was preparing an online conference talk? Well, I’m happy to say that not only is the talk prepared, but I’ve managed to successfully record it too.

If you want to see the finished results, come along to An Event Apart Spring Summit on April 19th. To sweeten the deal, I’ve got a discount code you can use when you buy any multi-day pass: AEAJEREMY.

Recording the talk took longer than I thought it would. I think it was because I said this:

It feels a bit different to prepare a talk for pre-recording rather than live delivery on stage. In fact, it feels less like preparing a conference talk and more like making a documentary.

Once I got that idea in my head, I think I became a lot fussier about the quality of the recording. “Would David Attenborough allow his documentaries to have the sound of a keyboard audibly being pressed? No! Start again!”

I’m pleased with the final results. And I’m really looking forward to the post-presentation discussion with questions from the audience. The talk gets provocative—and maybe a bit ranty—towards the end so it’ll be interesting to see how people react to that.

It feels good to have the presentation finished, but it also feels …weird. It’s like the feeling that conference organisers get once the conference is over. You spend all this time working towards something and then, one day, it’s in the past instead of looming in the future. It can make you feel kind of empty and listless. Maybe it’s the same for big product launches.

The two big projects I’ve been working on for the past few months were this talk and season two of the Clearleft podcast. The talk is in the can and so is the final episode of the podcast season, which drops tomorrow.

On the one hand, it’s nice to have my decks cleared. Nothing work-related to keep me up at night. But I also recognise the growing feeling of doubt and moodiness, just like the post-conference blues.

The obvious solution is to start another big project, something on the scale of making a brand new talk, or organising a conference, or recording another podcast season, or even writing a book.

The other option is to take a break for a while. Seeing as the UK government has extended its furlough scheme, maybe I should take full advantage of it. I went on furlough for a while last year and found it to be a nice change of pace.

When service workers met framesets

Oh boy, do I have some obscure browser behaviour for you!

To set the scene…

I’ve been writing here in my online journal for almost twenty years. The official anniversary will be on September 30th. But this website has been online even longer than that, just in a very different form.

Here’s the first version of adactio.com.

Like a tour guide taking you around the ruins of some lost ancient civilisation, let me point out some interesting features:

  • Observe the .shtml file extension. That means it was once using Apache’s server-side includes, a simple way of repeating chunks of markup across pages. Scientists have been trying to reproduce the wisdom of the ancients using modern technology ever since.
  • See how the layout is 100vw and 100vh? Well, this was long before viewport units existed. In fact there is no CSS at all on that page. It’s one big table element with 100% width and 100% height.
  • So if there’s no CSS, where is the border-radius coming from? Let me introduce you to an old friend—the non-animated GIF. It’s got just enough transparency (though not proper alpha transparency) to fake rounded corners between two solid colours.
  • The management takes no responsibility for any trauma that might befall you if you view source. There you will uncover JavaScript from the dawn of time; ancient runic writing like if (navigator.appName == "Netscape")

Now if your constitution was able to withstand that, brace yourself for what happens when you click on either of the two links, deutsch or english.

You find yourself inside a frameset. You may also experience some disorienting “DHTML”—the marketing term given to any combination of JavaScript and positioning in the late ’90s.

Note that these are not iframes, they are frames. Different thing. You could create single page apps long before Ajax was a twinkle in Jesse James Garrett’s eye.

If you view source, you’ll see a React-like component system. Each frameset component contains frame components that are isolated from one another. They’re like web components. Each frame has its own (non-shadow) DOM. That’s because each frame is actually a separate web page. If you right-click on any of the frames, your browser should give you the option to view the framed document in its own tab or window.
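In case you’ve never had the pleasure, a frameset document looks something like this (a generic example rather than my actual markup):

    <!-- The frameset element takes the place of the body element. -->
    <frameset cols="25%,75%">
      <!-- Each frame loads a completely separate web page. -->
      <frame src="navigation.html" name="nav">
      <frame src="content.html" name="main">
      <noframes>
        <body>This browser does not support frames.</body>
      </noframes>
    </frameset>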

Now for the part where modern and ancient technologies collide…

If you’re looking at the frameset URL in Firefox or Safari, everything displays as it should in all its ancient glory. But if you’re looking in Google Chrome and you’ve visited adactio.com before, something very odd happens.

Each frame of the frameset displays my custom offline page. The only way that could be served up is through my service worker script. You can verify this by opening the frameset URL in an incognito window—everything works fine when no service worker has been registered.

I have no idea why this is happening. My service worker logic is saying “if there’s a request for a web page, try fetching it from the network, otherwise look in the cache, otherwise show an offline page.” But if those page requests are initiated by a frame element, it goes straight to showing the offline page.
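Roughly speaking, the logic looks like this (a simplified sketch rather than my actual script; the '/offline' URL is a stand-in for wherever the offline page is cached):

    // A simplified sketch of the intended logic.
    addEventListener('fetch', (event) => {
      // Only apply this strategy to requests for web pages.
      if (event.request.mode === 'navigate') {
        event.respondWith(
          // Try fetching from the network…
          fetch(event.request)
            // …otherwise look in the cache…
            .catch(() => caches.match(event.request))
            // …otherwise show the offline page.
            .then((response) => response || caches.match('/offline'))
        );
      }
    });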

Is this a bug? Or perhaps this is the correct behaviour for some security reason? I have no idea.

I wonder if anyone has ever come across this before. It’s a very strange combination of factors:

  • a domain served over HTTPS,
  • that registers a service worker,
  • but also uses framesets and frames.

I could submit a bug report about this but I fear I would be laughed out of the bug tracker.

Still …the World Wide Web is remarkable for its backward compatibility. This behaviour is unusual because browser makers are at pains to support existing content and never break the web.

Technically a modern website (one that registers a service worker) shouldn’t be using deprecated technology like frames. But browsers still need to be able to support those old technologies in order to render old websites.

This situation has only arisen because the same domain—adactio.com—is host to a modern website and a really old one.

Maybe Chrome is behaving strangely because I’ve built my online home on ancient burial ground.

Update: Both Remy and Jake did some debugging and found the issue…

It’s all to do with navigation preload and the value of event.preloadResponse, which I believe is only supported in Chrome; that would explain the differences between browsers.

According to this post by Jake:

event.preloadResponse is a promise that resolves with a response, if:

  • Navigation preload is enabled.
  • The request is a GET request.
  • The request is a navigation request (which browsers generate when they’re loading pages, including iframes).

Otherwise event.preloadResponse is still there, but it resolves with undefined.

Notice that iframes are mentioned, but not frames.

My code was assuming that if event.preloadResponse exists in my block of code for responding to page requests, then there’d be a response. But if the request was initiated from a frameset, it is a request for a page and event.preloadResponse does exist …but it resolves with undefined.

I’ve updated my code now to check this assumption (and fall back to fetch).
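The updated logic looks something like this (again, a simplified sketch):

    // Inside the fetch event handler, when responding to a page request.
    event.respondWith((async () => {
      // event.preloadResponse is a promise, but it can resolve with
      // undefined (for example, when the request comes from a frame).
      const preloadResponse = await event.preloadResponse;
      if (preloadResponse) {
        return preloadResponse;
      }
      // No preloaded response? Fall back to a regular fetch.
      return fetch(event.request);
    })());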

This may technically still be a bug though. Shouldn’t a page loaded from a frameset count as a navigation request?

In the zone

I went to art college in my younger days. It didn’t take. I wasn’t very good and I didn’t work hard. So I dropped out before they could kick me out.

But I remember one instance where I actually ended up putting in more work than my fellow students—an exceptional situation.

In the first year of art college, we did a foundation course. That’s when you try a bit of everything to help you figure out what you want to concentrate on: painting, sculpture, ceramics, printing, photography, and so on. It was a bit of a whirlwind, which was generally a good thing. If you realised you really didn’t like a subject, you didn’t have to stick it out for long.

One of those subjects was animation—a relatively recent addition to the roster. On the first day, the tutor gave everyone a pack of typing paper: 500 sheets of A4. We were told to use them to make a piece of animation. Put something on the first piece of paper. Take a picture. Now put something slightly different on the second piece of paper. Take a picture of that. Repeat another 498 times. At 24 frames a second, the result would be just over 20 seconds of animation. No computers, no mobile phones. Everything by hand. It was so tedious.

And I loved it. I ended up asking for more paper.

(Actually, this was another reason why I ended up dropping out. I really, really enjoyed animation but I wasn’t able to major in it—I could only take it as a minor.)

I remember getting totally absorbed in the production. It was the perfect mix of tedium and creativity. My mind was simultaneously occupied and wandering free.

Recently I’ve been re-experiencing that same feeling. This time, it’s not in the world of visuals, but of audio. I’m working on season two of the Clearleft podcast.

For both seasons and episodes, this is what the process looks like:

  1. Decide on topics. This will come from a mix of talking to Alex, discussing work with my colleagues, and gut feelings about what might be interesting.
  2. Gather material. This involves arranging interviews with people; sometimes co-workers, sometimes peers in the wider industry. I also trawl through the archives of talks from Clearleft conferences for relevant presentations.
  3. Assemble the material. This is where I’m chipping away at the marble of audio interviews to get at the nuggets within. I play around with the flow of themes, trying different juxtapositions and narrative structures.
  4. Tie everything together. I add my own voice to introduce the topic and segue from point to point.
  5. Release. I upload the audio, update the RSS feed, and publish the transcript.

Lots of podcasts (that I really enjoy) stop at step two: record a conversation and then release it verbatim. Job done.

Being a glutton for punishment, I wanted to do more of an amalgamation for each episode, weaving multiple conversations together.

Right now I’m in step three. That’s where I’ve found the same sweet spot that I had back in my art college days. It’s somewhat mindless work, snipping audio waveforms and adjusting volume levels. At the same time, there’s the creativity of putting those audio snippets into a logical order. I find myself getting into the zone, losing track of time. It’s the same kind of flow state you get from just the right level of coding or design work. Normally this kind of work lends itself to having some background music, but that’s not an option with podcast editing. I’ve got my headphones on, but my ears are busy.

I imagine that is what life is like for an audio engineer or producer.

When I first started the Clearleft podcast, I thought I would need to use GarageBand for this work, arranging multiple tracks on a timeline. Then I discovered Descript. It’s been an enormous time-saver. It’s like having GarageBand and a text editor merged into one. I can see the narrative flow as a text document, as well as looking at the accompanying waveforms.

Descript isn’t perfect. The transcription accuracy is good enough to allow me to search through my corpus of material, but it’s not accurate enough to publish as is. Still, it gives me some nice shortcuts. I can eliminate ums and ahs in one stroke, or shorten any gaps that are too long.

But even with all those conveniences, this is still time-consuming work. If I spend three or four hours with my head down sculpting some audio and I get anything close to five minutes’ worth of usable content, I consider it time well spent.

Sometimes when I’m knee-deep in a piece of audio, trimming and arranging it just so to make a sentence flow just right, there’s a voice in the back of my head that says, “You know that no one is ever going to notice any of this, don’t you?” I try to ignore that voice. I mean, I know the voice is right, but I still think it’s worth doing all this fine tuning. Even if nobody else knows, I’ll have the satisfaction of transforming the raw audio into something a bit more polished.

If you aren’t already subscribed to the RSS feed of the Clearleft podcast, I recommend adding it now. New episodes will start showing up …sometime soon.

Yes, I’m being a little vague on the exact dates. That’s because I’m still in the process of putting the episodes together.

So if you’ll excuse me, I need to put my headphones on and enter the zone.