Journal tags: work

2021

2020 was the year of the virus. 2021 was the year of the vaccine …and the virus, obviously, but still it felt like the year we fought back. With science!

Whenever someone writes and shares one of these year-end retrospectives the result is, by its very nature, personal. These last two years are different though. We all still have our own personal perspectives, but we also all share a collective experience of The Ongoing Situation.

Like, I can point to three pivotal events in 2021 and I bet you could point to the same three for you:

  1. getting my first vaccine shot in March,
  2. getting my second vaccine shot in June, and
  3. getting my booster shot in December.

So while on the one hand we’re entering 2022 in a depressingly similar way to how we entered 2021 with COVID-19 running rampant, on the other hand, the odds have changed. We can calculate risk. We’ve got more information. And most of all, we’ve got these fantastic vaccines.

I summed up last year in terms of all the things I didn’t do. I could do the same for 2021, but there’s only one important thing that didn’t happen—I didn’t catch a novel coronavirus.

It’s not like I didn’t take some risks. While I was mostly a homebody, I did make excursions to Zürich and Lisbon. One long weekend in London was particularly risky.

At the end of the year, right as The Omicron Variant was ramping up, Jessica and I travelled to Ireland to see my mother, and then travelled to the States to see her family. We managed to dodge the Covid bullets both times, for which I am extremely grateful. My greatest fear throughout The Situation hasn’t been so much about catching Covid myself, but passing it on to others. If I were to give it to a family member or someone more vulnerable than me, I don’t think I could forgive myself.

Now that we’ve seen our families (after a two-year break!), I’m feeling more sanguine about this next stage. I’ll be hunkering down for the next while to ride out this wave, but if I still end up getting infected, at least I won’t have any travel plans to cancel.

But this is meant to be a look back at the year that’s just finished, not a battle plan for 2022.

There were some milestones in 2021:

  1. I turned 50,
  2. TheSession.org turned 20, and
  3. Adactio.com also turned 20.

This means that my websites are 2/5ths of my own age. In ten years’ time, my websites will be 1/2 of my own age.

Most of my work activities were necessarily online, though I did manage the aforementioned trips to Switzerland and Portugal to speak at honest-to-goodness real live in-person events. The major projects were:

  1. Publishing season two of the Clearleft podcast in February,
  2. Speaking at An Event Apart Online in April,
  3. Hosting UX Fest in June,
  4. Publishing season three of the Clearleft podcast in September, and
  5. Writing a course on responsive design in November.

Outside of work, my highlights of 2021 mostly involved getting to play music with other people—something that didn’t happen much in 2020. Band practice with Salter Cane resumed in late 2021, as did some Irish music sessions. Both are now under an Omicron hiatus but this too shall pass.

Another 2021 highlight was a visit by Tantek in the summer. He was willing to undergo quarantine to get to Brighton, which I really appreciate. It was lovely hanging out with him, even if all our social activities were by necessity outdoors.

But, like I said, the main achievement in 2021 was not catching COVID-19, and more importantly, not passing it on to anyone else. Time will tell whether or not that winning streak will be sustainable in 2022. But at least I feel somewhat prepared for it now, thanks to those magnificent vaccines.

2021 was a shitty year for a lot of people. I feel fortunate that for me, it was merely uneventful. If my only complaint is that I didn’t get to travel and speak as much as I’d like, well boo-fucking-hoo, I’ll take it. I’ve got my health. My family members have their health. I don’t take that for granted.

Maybe 2022 will turn out to be similar—shitty for a lot of people, and mostly uneventful for me. Or perhaps 2022 will be a year filled with joyful in-person activities, like conferences and musical gatherings. Either way, I’m ready.

4 + 3

I work a four-day week now.

It started with the first lockdown. Actually, for a while there, I was working just two days a week while we took a “wait and see” attitude at Clearleft to see how The Situation was going to affect work. We weathered that storm, but rather than going back to a full five-day week I decided to try switching to four days instead.

This meant taking a pay cut. Time is literally money when it comes to work. But I decided it was worth it. That’s a privileged position to be in, I know. I managed to pay off the mortgage on our home last year so that reduced some financial pressure. But I also turned fifty, which made me think that I should really be padding some kind of theoretical nest egg. Still, the opportunity to reduce working hours looked good to me.

The ideal situation would be to have everyone switch to a four-day week without any reduction in pay. Some companies have done that but it wasn’t an option for Clearleft, alas.

I’m not the only one working a four-day week at Clearleft. A few people were doing it even before The Situation. We all take Friday as our non-work day, which makes for a nice long three-day weekend.

What’s really nice is that Friday has been declared a “no meeting” day for everyone at Clearleft. That means that those of us working a four-day week know we’re not missing out on anything and it’s pretty nice for people working a five-day week to have a day free of appointments. We have our end-of-week all-hands wind-down on Thursday afternoons.

I haven’t experienced any reduction in productivity. Quite the opposite. There may be a corollary to Parkinson’s Law: work contracts to fill the time available.

At one time, a six-day work week was the norm. It may well be that a four-day work week becomes the default over time. That could dovetail nicely with increasing automation.

I’ve got to say, I’m really, really liking this. It’s quite nice that when Wednesday rolls around, I can say “it’s almost the weekend!”

A three-day weekend feels normal to me now. I could imagine tilting the balance even more over time. Maybe every few years I could reduce the working week by a day or half a day. So instead of going from a full-on five-day working week straight into retirement, it would be more of a gentle ratcheting down over the years.

Inertia

When I’ve spoken in the past about evaluating technology, I’ve mentioned two categories of tools for web development. I still don’t know quite what to call these categories. Internal and external? Developer-facing and user-facing?

The first category covers things like build tools, version control, transpilers, pre-processors, and linters. These are tools that live on your machine—or on the server—taking what you’ve written and transforming it into the raw materials of the web: HTML, CSS, and JavaScript.

The second category of tools are those that are made of the raw materials of the web: CSS frameworks and JavaScript libraries.

I think the criteria for evaluating these different kinds of tools should be very different.

For the first category, developer-facing tools, use whatever you want. Use whatever makes sense to you and your team. Use whatever’s effective for you.

But for the second category, user-facing tools, that attitude is harmful. If you make users download a CSS or JavaScript framework in order to benefit your workflow, then you’re making users pay a tax for your developer convenience. Instead, I firmly believe that user-facing tools should provide some direct benefit to end users.

When I’ve asked developers in the past why they’ve chosen to use a particular JavaScript framework, they’ve been able to give me plenty of good answers. But all of those answers involved the benefit to their developer workflow—efficiency, consistency, and so on. That would be absolutely fine if we were talking about the first category of tools, developer-facing tools. But those answers don’t hold up for the second category of tools, user-facing tools.

If a user-facing tool is only providing a developer benefit, is there any way to turn it into a developer-facing tool?

That’s very much the philosophy of Svelte. You can compare Svelte to other JavaScript frameworks like React and Vue but you’d be missing the most important aspect of Svelte: it is, by design, in that first category of tools—developer-facing tools:

Svelte takes a different approach from other frontend frameworks by doing as much as it can at the build step—when the code is initially compiled—rather than running client-side. In fact, if you want to get technical, Svelte isn’t really a JavaScript framework at all, as much as it is a compiler.

You install it on your machine, you write your code in Svelte, but what it spits out at the other end is HTML, CSS, and JavaScript. Unlike Vue or React, you don’t ship the library to end users.

In my opinion, this is an excellent design decision.

I know there are ways of getting React to behave more like a category one tool, but it is most definitely not the default behaviour. And default behaviour really, really matters. For React, the default behaviour is to assume all the code you write—and the tool you use to write it—will be sent over the wire to end users. For Svelte, the default behaviour is the exact opposite.

I’m sure you can find a way to get Svelte to send too much JavaScript to end users, but you’d be fighting against the grain of the tool. With React, you have to fight against the grain of the tool in order to not send too much JavaScript to end users.

But much as I love Svelte’s approach, I think it’s got its work cut out for it. It faces a formidable foe: inertia.

If you’re starting a greenfield project and you’re choosing a JavaScript framework, then Svelte is very appealing indeed. But how often do you get to start a greenfield project?

React has become so ubiquitous in the front-end development community that it’s often an unquestioned default choice for every project. It feels like enterprise software at this point. No one ever got fired for choosing React. Whether it’s appropriate or not becomes almost irrelevant. In much the same way that everyone is on Facebook because everyone is on Facebook, everyone uses React because everyone uses React.

That’s one of its biggest selling points to managers. If you’ve settled on React as your framework of choice, then hiring gets a lot easier: “If you want to work here, you need to know React.”

The same logic applies from the other side. If you’re starting out in web development, and you see that so many companies have settled on using React as their framework of choice, then it’s an absolute no-brainer: “if I want to work anywhere, I need to know React.”

This then creates a positive feedback loop. Everyone knows React because everyone is hiring React developers because everyone knows React because everyone is hiring React developers because…

At no point is there time to stop and consider if there’s a tool—like Svelte, for example—that would be less harmful for end users.

This is where I think Astro might have the edge over Svelte.

Astro has the same philosophy as Svelte. It’s a developer-facing tool by default. Have a listen to Drew’s interview with Matthew Phillips:

Astro does not add any JavaScript by default. You can add your own script tags obviously and you can do anything you can do in HTML, but by default, unlike a lot of the other component-based frameworks, we don’t actually add any JavaScript for you unless you specifically tell us to. And I think that’s one thing that we really got right early.

But crucially, unlike Svelte, Astro allows you to use the same syntax as the incumbent, React. So if you’ve learned React—because that’s what you needed to learn to get a job—you don’t have to learn a new syntax in order to use Astro.

I know you probably can’t take an existing React site and convert it to Astro with the flip of a switch, but at least there’s a clear upgrade path.

Astro reminds me of Sass. Specifically, it reminds me of the .scss syntax. You could take any CSS file, rename its file extension from .css to .scss and it was automatically a valid Sass file. You could start using Sass features incrementally. You didn’t have to rewrite all your style sheets.

Sass also has a .sass syntax. If you take a CSS file and rename it with a .sass file extension, it is not going to work. You need to rewrite all your CSS to use the .sass syntax. Some people used the .sass syntax but the overwhelming majority of people used .scss.

I remember talking with Hampton about this and he confirmed the proportions. It was also the reason why one of his creations, Sass, was so popular, but another of his creations, Haml, was not, comparatively speaking—Sass is a superset of CSS but Haml is not a superset of HTML; it’s a completely different syntax.

I’m not saying that Svelte is like Haml and Astro is like Sass. But I do think that Astro has inertia on its side.

Accessibility testing

I was doing some accessibility work with a client a little while back. It was mostly giving their site the once-over, highlighting any issues that we could then discuss. It was an audit of sorts.

While I was doing this I started to realise that not all accessibility issues are created equal. I don’t just mean in their severity. I mean that some issues can—and should—be caught early on, while other issues can only be found later.

Take colour contrast. This is something that should be checked before a line of code is written. When designs are being sketched out and then refined in a graphical editor like Figma, that’s the time to check the ratio between background and foreground colours to make sure there’s enough contrast between them. You can catch this kind of thing later on, but by then it’s likely to come with a higher cost—you might have to literally go back to the drawing board. It’s better to find the issue when you’re at the drawing board the first time.
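For anyone curious about what “enough contrast” means in numbers, here’s a minimal sketch of the WCAG 2.x contrast-ratio calculation in JavaScript. The luminance formula and the 4.5:1 threshold for body text come from the spec; the example colours are purely for illustration.

function relativeLuminance([r, g, b]) {
  // Convert each sRGB channel (0–255) to linear light.
  const channel = (value) => {
    const srgb = value / 255;
    return srgb <= 0.03928 ? srgb / 12.92 : Math.pow((srgb + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(foreground, background) {
  const l1 = relativeLuminance(foreground);
  const l2 = relativeLuminance(background);
  const [lighter, darker] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Grey text (#767676) on a white background only just scrapes past the 4.5:1 AA threshold.
console.log(contrastRatio([118, 118, 118], [255, 255, 255]).toFixed(2)); // 4.54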

Then there’s the HTML. Most accessibility issues here can be caught before the site goes live. Usually they’re issues of omission: form fields that don’t have an explicitly associated label element (using the for and id attributes); images that don’t have alt text; pages that don’t have sensible heading levels or landmark regions like main and nav. None of these are particularly onerous to fix and they come with the biggest bang for your buck. If you’ve got sensible forms, sensible headings, alt text on images, and a solid document structure, you’ve already covered the vast majority of accessibility issues with very little overhead. Some of these checks can also be automated: alt text for images; labels for inputs.
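As a rough illustration (not taken from any real project; the file names and field names are made up), this is the kind of markup that heads off those issues of omission:

<!-- Landmark regions and sensible heading levels -->
<header>
  <h1>Page title</h1>
</header>
<nav>
  <a href="/">Home</a>
</nav>
<main>
  <h2>Section heading</h2>
  <!-- Alt text on images -->
  <img src="/images/photo.jpg" alt="A description of what’s in the photo">
  <!-- A label explicitly associated with its form field -->
  <form>
    <label for="email">Email</label>
    <input type="email" id="email" name="email">
  </form>
</main>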

Then there’s interactive stuff. If you only use native HTML elements you’re probably in the clear, but chances are you’ve got some bespoke interactivity on your site: a carousel; a mega dropdown for navigation; a tabbed interface. HTML doesn’t give you any of those out of the box so you’d need to make your own using a combination of HTML, CSS, JavaScript and ARIA. There’s plenty of testing you can do before launching—I always ask myself “What would Heydon do?”—but these components really benefit from being tested by real screen reader users.

So if you commission an accessibility audit, you should hope to get feedback that’s mostly in that third category—interactive widgets.

If you get feedback on document structure and other semantic issues with the HTML, you should fix those issues, sure, but you should also see what you can do to stop those issues going live again in the future. Perhaps you can add some steps in the build process. Or maybe it’s more about making sure the devs are aware of these low-hanging fruit. Or perhaps there’s a framework or content management system that’s stopping you from improving your HTML. Then you need to execute a plan for ditching that software.

If you get feedback about colour contrast issues, just fixing the immediate problem isn’t going to address the underlying issue. There’s a process problem, or perhaps a communication issue. In that case, don’t look for a technical solution. A design system, for example, will not magically fix a workflow issue or route around the problem of designers and developers not talking to each other.

When you commission an accessibility audit, you want to make sure you’re getting the most out of it. Don’t squander it on issues that you can catch and fix yourself. Make sure that the bulk of the audit is being spent on the specific issues that are unique to your site.

Work at Clearleft

A little while back, I wrote about how much I like the job description of a design engineer. I still have issues with the “engineer” part, but overall it’s a great way to describe a front-end developer who works on the front of the front end, the outputs that end users interact with: HTML, CSS, and JavaScript. If it’s delivered in a web browser, then it’s design engineering.

Perhaps you also prefer the front of the front end to the back of the front end. Perhaps you also like to spend your time thinking about resilience, performance, and accessibility rather than build pipelines and frameworks. Perhaps you’d like to work with like-minded people.

Clearleft is hiring a midweight design engineer. Perhaps it’s you.

If you’d like to use your development talents in the service of good design, you should apply. And remember, you’d be working for yourself: Clearleft is an employee-owned agency.

You don’t have to be based in Brighton. You can work remotely, although we’re expecting that a monthly face-to-face gathering will become the norm after The Situation ends. So if you’re based somewhere like London, that would work out nicely. That said, if you’re based somewhere like London, this might also be the ideal opportunity to make a move to the seaside.

You do have to be eligible to work in the UK. Alas, that pool has shrunk somewhat. Thanks, Brexit.

Perhaps you think you’re not qualified. Apply anyway. You’ve got nothing to lose.

Perhaps this role isn’t for you, but you know someone who might fit the bill. Please tell them. Spread the word.

We’d especially love to hear from people under-represented in design and technology.

Come and work with us.

The state of UX

There is much introspection and navel-gazing in the world of user experience design. More than usual, I mean.

Jesse James Garrett recently said:

I don’t think I know anyone that’s been in UX more than a decade who’s happy with how it’s going.

In a recent issue of the dConstruct newsletter—which you really should subscribe to—I pointed to three bowls of porridge left out by three different ursine experience designers.

Mark Hurst wrote Why I’m losing faith in UX. Too hot!

Scott Berkun wrote How To Put Faith in Design. Too cold!

Peter Merholz wrote Waking up from the dream of UX. Just right!

As an aside, does it bother anyone else that the Goldilocks story violates the laws of thermodynamics?

Anyway, this hand-wringing around the role of UX today seemed like a suitably hot topic for one of our regular roundtable chats at Clearleft. We invited Peter along too and he was kind enough to give us his time.

It was a fun discussion. Peter pointed out that whenever he hears an older designer bemoaning the current state of design, he has to wonder what’s happened in their lives to make them feel that way (it’s like when people complain about the music of today and how it’s not as good as the music of whatever time period they were teenagers in). And let’s face it, the good ol’ days weren’t so good for everyone. It was overwhelmingly dominated by privileged white dudes. The more that changes, the better …and it needs to change far, far more.

There was a general agreement that the current gnashing of teeth isn’t unique to UX. It’s something that just about any discipline will inevitably go through. Peter’s epiphany was to compare it with the hand-wringing around Agile:

The frustration exhibited with the “dream of UX” is (I think) identical to the frustration the original Agile community sees with how it has been industrialized (koff-SAFe-koff).

Perhaps the industrialisation of what was once a cottage industry is the price of success. But that’s not necessarily bad, as long as you industrialise the right things. If UX has become the churning out of wireframes at scale, then something has gone very wrong. If UX has become the implementation of dark patterns at scale, then something has gone very wrong.

In some organisations, perhaps that’s exactly what’s happened. In which case, I can totally understand the disillusionment. But in other places, I see the opposite happening. I see UX designers bringing questions of ethics to the forefront. I see UX designers—dare I say it?—having their proverbial seat at the table.

Chris went so far as to claim that we are in fact in a golden age of user experience design. Controversial! But think about it, he said. Over the next few days, pay attention to interactions you have with technology, and consider the thought and skill that has gone into them.

I had Chris’s provocation in mind when I wrote about booking my vaccination appointment:

I just need to get in, accomplish my task, and get out again. This is where the World Wide Web shines.

Maybe Chris is right. Maybe the golden age of UX is here. It’s just not evenly distributed. Yet.

It’s an interesting time for the discipline of user experience design. I’ve always maintained that the best way to get a temperature check for your chosen field is to go to a really good conference. If you’re a UX designer and you want to understand the state of the UX nation, you should get a ticket for the online UX Fest in June. See you there!

Service worker weirdness in Chrome

I think I’ve found some more strange service worker behaviour in Chrome.

It all started when I was checking out the very nice new redesign of WebPageTest. I figured while I was there, I’d run some of my sites through it. I passed in a URL from The Session. When the test finished, I noticed that the “screenshot” tab said that something was being logged to the console. That’s odd! And the file doing the logging was the service worker script.

I fired up Chrome (which isn’t my usual browser), and started navigating around The Session with dev tools open to see what appeared in the console. Sure enough, there was a failed fetch attempt being logged. The only time my service worker script logs anything is in the catch clause of fetching pages from the network. So Chrome was trying to fetch a web page, failing, and logging this error:

The service worker navigation preload request failed with a network error.

But all my pages were loading just fine. So where was the error coming from?

After a lot of spelunking and debugging, I think I’ve figured out what’s happening…

First of all, I’m making use of navigation preloads in my service worker. That’s all fine.
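To give a sense of the moving parts, here’s a stripped-down sketch of that kind of service worker set-up (not my actual script, and the offline page path is just a placeholder): navigation preload switched on, and a catch clause that’s the only place anything gets logged.

addEventListener('activate', (event) => {
  event.waitUntil((async () => {
    if (self.registration.navigationPreload) {
      await self.registration.navigationPreload.enable();
    }
  })());
});

addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') return;
  event.respondWith((async () => {
    try {
      // Use the preloaded response if the browser has one; otherwise fetch from the network.
      const preloadResponse = await event.preloadResponse;
      return preloadResponse || await fetch(event.request);
    } catch (error) {
      // The catch clause is where the console logging happens.
      console.error(error);
      // Fall back to the cache, then to a custom offline page ('/offline' is illustrative).
      const cachedResponse = await caches.match(event.request);
      return cachedResponse || await caches.match('/offline');
    }
  })());
});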

Secondly, the website is a progressive web app. It has a manifest file that specifies some metadata, including start_url. If someone adds the site to their home screen, this is the URL that will open.

Thirdly, Google recently announced that they’re tightening up the criteria for displaying install prompts for progressive web apps. If there’s no network connection, the site still needs to return a 200 OK response: either a cached copy of the URL or a custom offline page.

So here’s what I think is happening. When I navigate to a page on the site in Chrome, the service worker handles the navigation just fine. It also parses the manifest file I’ve linked to and checks to see if that start URL would load if there were no network connection. And that’s when the error gets logged.

I only noticed this behaviour because I had specified a query string on my start URL in the manifest file. Instead of a start_url value of /, I’ve set a start_url value of /?homescreen. And when the error shows up in the console, the URL being fetched is /?homescreen.
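For context, the relevant bit of the manifest file looks something like this (trimmed right down; the real file also has icons, colours, and so on, and the display value here is just a typical choice):

{
  "name": "The Session",
  "start_url": "/?homescreen",
  "display": "standalone"
}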

Crucially, I’m not seeing a warning in the console saying “Site cannot be installed: Page does not work offline.” So I think this is all fine. If I were actually offline, there would indeed be an error logged to the console and that start_url request would respond with my custom offline page. It’s just a bit confusing that the error is being logged when I’m online.

I thought I’d share this just in case anyone else is logging errors to the console in the catch clause of fetches and is seeing an error even when everything appears to be working fine. I think there’s nothing to worry about.

Update: Jake confirmed my diagnosis and agreed that the error is a bit confusing. The good news is that it’s changing. In Chrome Canary the error message has already been updated to:

DOMException: The service worker navigation preload request failed due to a network error. This may have been an actual network error, or caused by the browser simulating offline to see if the page works offline: see https://w3c.github.io/manifest/#installability-signals

Much better!

Remote work on the Clearleft podcast

The sixth episode of season two of the Clearleft podcast is available now. The last episode of the season!

The topic is remote work. The timing is kind of perfect. It was exactly one year ago today that Clearleft went fully remote. Having a podcast episode to mark the anniversary seems fitting.

I didn’t interview anyone specifically for this episode. Instead, whenever I was chatting to someone about some other topic—design systems, prototyping, or whatever—I’d wrap up by asking them to describe their surroundings and asking them how they were adjusting to life at home. After two seasons’ worth of interviews, I had a decent library of responses. So this episode includes voices you last heard from back in season one: Paul, Charlotte, Amy, and Aarron.

Then the episode shifts. I’ve got excerpts from a panel discussion we held a while back on the future of work. These panel discussions used to happen up in London, but this one was, obviously, online. It’s got a terrific line-up: Jean, Holly, Emma, and Lola, all dialing in from different countries and all sharing their stories openly and honestly. (Fun fact: I first met Lola three years ago at the Pixel Up conference in South Africa and on this day in 2018 we were out on Safari together.)

I’m happy with how this episode turned out. It’s a fitting finish to the season. It’s just seventeen and a half minutes long so take a little time out of your day to have a listen.

As always, if you like what you hear, please spread the word.

Done

Remember how I said I was preparing an online conference talk? Well, I’m happy to say that not only is the talk prepared, but I’ve managed to successfully record it too.

If you want to see the finished results, come along to An Event Apart Spring Summit on April 19th. To sweeten the deal, I’ve got a discount code you can use when you buy any multi-day pass: AEAJEREMY.

Recording the talk took longer than I thought it would. I think it was because I said this:

It feels a bit different to prepare a talk for pre-recording rather than live delivery on stage. In fact, it feels less like preparing a conference talk and more like making a documentary.

Once I got that idea in my head, I think I became a lot fussier about the quality of the recording. “Would David Attenborough allow his documentaries to have the sound of a keyboard audibly being pressed? No! Start again!”

I’m pleased with the final results. And I’m really looking forward to the post-presentation discussion with questions from the audience. The talk gets provocative—and maybe a bit ranty—towards the end so it’ll be interesting to see how people react to that.

It feels good to have the presentation finished, but it also feels …weird. It’s like the feeling that conference organisers get once the conference is over. You spend all this time working towards something and then, one day, it’s in the past instead of looming in the future. It can make you feel kind of empty and listless. Maybe it’s the same for big product launches.

The two big projects I’ve been working on for the past few months were this talk and season two of the Clearleft podcast. The talk is in the can and so is the final episode of the podcast season, which drops tomorrow.

On the one hand, it’s nice to have my decks cleared. Nothing work-related to keep me up at night. But I also recognise the growing feeling of doubt and moodiness, just like the post-conference blues.

The obvious solution is to start another big project, something on the scale of making a brand new talk, or organising a conference, or recording another podcast season, or even writing a book.

The other option is to take a break for a while. Seeing as the UK government has extended its furlough scheme, maybe I should take full advantage of it. I went on furlough for a while last year and found it to be a nice change of pace.

When service workers met framesets

Oh boy, do I have some obscure browser behaviour for you!

To set the scene…

I’ve been writing here in my online journal for almost twenty years. The official anniversary will be on September 30th. But this website has been online even longer than that, just in a very different form.

Here’s the first version of adactio.com.

Like a tour guide taking you around the ruins of some lost ancient civilisation, let me point out some interesting features:

  • Observe the .shtml file extension. That means it was once using Apache’s server-side includes, a simple way of repeating chunks of markup across pages. Scientists have been trying to reproduce the wisdom of the ancients using modern technology ever since.
  • See how the layout is 100vw and 100vh? Well, this was long before viewport units existed. In fact there is no CSS at all on that page. It’s one big table element with 100% width and 100% height.
  • So if there’s no CSS, where is the border-radius coming from? Let me introduce you to an old friend—the non-animated GIF. It’s got just enough transparency (though not proper alpha transparency) to fake rounded corners between two solid colours.
  • The management takes no responsibility for any trauma that might befall you if you view source. There you will uncover JavaScript from the dawn of time; ancient runic writing like if (navigator.appName == "Netscape")

Now if your constitution was able to withstand that, brace yourself for what happens when you click on either of the two links, deutsch or english.

You find yourself inside a frameset. You may also experience some disorienting “DHTML”—the marketing term given to any combination of JavaScript and positioning in the late ’90s.

Note that these are not iframes, they are frames. Different thing. You could create single page apps long before Ajax was a twinkle in Jesse James Garrett’s eye.

If you view source, you’ll see a React-like component system. Each frameset component contains frame components that are isolated from one another. They’re like web components. Each frame has its own (non-shadow) DOM. That’s because each frame is actually a separate web page. If you right-click on any of the frames, your browser should give the option to view the framed document in its own tab or window.
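If you’ve never had the pleasure, here’s a bare-bones sketch of what a frameset document looks like (not the actual markup from the old site; the file names are invented). Note that there’s no body element on the top-level page: each frame pulls in a separate HTML page.

<html>
  <head>
    <title>Adactio</title>
  </head>
  <frameset cols="200,*">
    <frame src="navigation.html" name="nav">
    <frame src="content.html" name="main">
    <noframes>
      <body>This browser doesn’t support frames.</body>
    </noframes>
  </frameset>
</html>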

Now for the part where modern and ancient technologies collide…

If you’re looking at the frameset URL in Firefox or Safari, everything displays as it should in all its ancient glory. But if you’re looking in Google Chrome and you’ve visited adactio.com before, something very odd happens.

Each frame of the frameset displays my custom offline page. The only way that could be served up is through my service worker script. You can verify this by opening the frameset URL in an incognito window—everything works fine when no service worker has been registered.

I have no idea why this is happening. My service worker logic is saying “if there’s a request for a web page, try fetching it from the network, otherwise look in the cache, otherwise show an offline page.” But if those page requests are initiated by a frame element, it goes straight to showing the offline page.

Is this a bug? Or perhaps this is the correct behaviour for some security reason? I have no idea.

I wonder if anyone has ever come across this before. It’s a very strange combination of factors:

  • a domain served over HTTPS,
  • that registers a service worker,
  • but also uses framesets and frames.

I could submit a bug report about this but I fear I would be laughed out of the bug tracker.

Still …the World Wide Web is remarkable for its backward compatibility. This behaviour is unusual because browser makers are at pains to support existing content and never break the web.

Technically a modern website (one that registers a service worker) shouldn’t be using deprecated technology like frames. But browsers still need to be able to support those old technologies in order to render old websites.

This situation has only arisen because the same domain—adactio.com—is host to a modern website and a really old one.

Maybe Chrome is behaving strangely because I’ve built my online home on ancient burial ground.

Update: Both Remy and Jake did some debugging and found the issue…

It’s all to do with navigation preloads and the value of event.preloadResponse, which I believe is only supported in Chrome which would explain the differences between browsers.

According to this post by Jake:

event.preloadResponse is a promise that resolves with a response, if:

  • Navigation preload is enabled.
  • The request is a GET request.
  • The request is a navigation request (which browsers generate when they’re loading pages, including iframes).

Otherwise event.preloadResponse is still there, but it resolves with undefined.

Notice that iframes are mentioned, but not frames.

My code was assuming that if event.preloadResponse exists in my block of code for responding to page requests, then there’d be a response. But if the request was initiated from a frameset, it is a request for a page and event.preloadResponse does exist …but it’s undefined.

I’ve updated my code now to check this assumption (and fall back to fetch).
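The updated logic looks roughly like this (a simplified sketch rather than my script verbatim):

event.respondWith((async () => {
  const preloadResponse = await event.preloadResponse;
  if (preloadResponse) {
    return preloadResponse;
  }
  // For page requests coming from a frame (as opposed to an iframe),
  // preloadResponse resolves with undefined, so fall back to a regular fetch.
  return fetch(event.request);
})());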

This may technically still be a bug though. Shouldn’t a page loaded from a frameset count as a navigation request?

In the zone

I went to art college in my younger days. It didn’t take. I wasn’t very good and I didn’t work hard. So I dropped out before they could kick me out.

But I remember one instance where I actually ended up putting in more work than my fellow students—an exceptional situation.

In the first year of art college, we did a foundation course. That’s when you try a bit of everything to help you figure out what you want to concentrate on: painting, sculpture, ceramics, printing, photography, and so on. It was a bit of a whirlwind, which was generally a good thing. If you realised you really didn’t like a subject, you didn’t have to stick it out for long.

One of those subjects was animation—a relatively recent addition to the roster. On the first day, the tutor gave everyone a pack of typing paper: 500 sheets of A4. We were told to use them to make a piece of animation. Put something on the first piece of paper. Take a picture. Now put something slightly different on the second piece of paper. Take a picture of that. Repeat another 498 times. At 24 frames a second, the result would be just over 20 seconds of animation. No computers, no mobile phones. Everything by hand. It was so tedious.

And I loved it. I ended up asking for more paper.

(Actually, this was another reason why I ended up dropping out. I really, really enjoyed animation but I wasn’t able to major in it—I could only take it as a minor.)

I remember getting totally absorbed in the production. It was the perfect mix of tedium and creativity. My mind was simultaneously occupied and wandering free.

Recently I’ve been re-experiencing that same feeling. This time, it’s not in the world of visuals, but of audio. I’m working on season two of the Clearleft podcast.

For both seasons and episodes, this is what the process looks like:

  1. Decide on topics. This will come from a mix of talking to Alex, discussing work with my colleagues, and gut feelings about what might be interesting.
  2. Gather material. This involves arranging interviews with people; sometimes co-workers, sometimes peers in the wider industry. I also trawl through the archives of talks from Clearleft conferences for relevant presentations.
  3. Assemble the material. This is where I’m chipping away at the marble of audio interviews to get at the nuggets within. I play around with the flow of themes, trying different juxtapositions and narrative structures.
  4. Tie everything together. I add my own voice to introduce the topic and segue from point to point.
  5. Release. I upload the audio, update the RSS feed, and publish the transcript.

Lots of podcasts (that I really enjoy) stop at step two: record a conversation and then release it verbatim. Job done.

Being a glutton for punishment, I wanted to do more of an amalgamation for each episode, weaving multiple conversations together.

Right now I’m in step three. That’s where I’ve found the same sweet spot that I had back in my art college days. It’s somewhat mindless work, snipping audio waveforms and adjusting volume levels. At the same time, there’s the creativity of putting those audio snippets into a logical order. I find myself getting into the zone, losing track of time. It’s the same kind of flow state you get from just the right level of coding or design work. Normally this kind of work lends itself to having some background music, but that’s not an option with podcast editing. I’ve got my headphones on, but my ears are busy.

I imagine that is what life is like for an audio engineer or producer.

When I first started the Clearleft podcast, I thought I would need to use GarageBand for this work, arranging multiple tracks on a timeline. Then I discovered Descript. It’s been an enormous time-saver. It’s like having GarageBand and a text editor merged into one. I can see the narrative flow as a text document, as well as looking at the accompanying waveforms.

Descript isn’t perfect. The transcription accuracy is good enough to allow me to search through my corpus of material, but it’s not accurate enough to publish as is. Still, it gives me some nice shortcuts. I can eliminate ums and ahs in one stroke, or shorten any gaps that are too long.

But even with all those conveniences, this is still time-consuming work. If I spend three or four hours with my head down sculpting some audio and I get anything close to five minutes worth of usable content, I consider it time well spent.

Sometimes when I’m knee-deep in a piece of audio, trimming and arranging it just so to make a sentence flow just right, there’s a voice in the back of my head that says, “You know that no one is ever going to notice any of this, don’t you?” I try to ignore that voice. I mean, I know the voice is right, but I still think it’s worth doing all this fine tuning. Even if nobody else knows, I’ll have the satisfaction of transforming the raw audio into something a bit more polished.

If you aren’t already subscribed to the RSS feed of the Clearleft podcast, I recommend adding it now. New episodes will start showing up …sometime soon.

Yes, I’m being a little vague on the exact dates. That’s because I’m still in the process of putting the episodes together.

So if you’ll excuse me, I need to put my headphones on and enter the zone.

My typical day

Colin wrote about his typical day and suggested I do the same.

Y’know, in the Before Times I think this would’ve been trickier. What with travelling and speaking, I didn’t really have a “typical” day …and I liked it that way. Now, thanks to The Situation, my days are all pretty similar.

  • 8:30am — This is the time I’ve set my alarm for, but sometimes I wake up a bit earlier. I get up, fire up the coffee machine, go to the head and empty my bladder. Maybe I’ll have a shower.
  • 9am — I fire up email and Slack, wishing my co-workers a good morning. Over the course of each day, I’ve usually got short 1:1s booked in with two or three of my colleagues. Just fifteen minutes or so to catch up and find out what they’re working on, what’s interesting, what’s frustrating. The rest of the time, I’ll probably be working on the Clearleft podcast.
  • 1pm — Lunch time. Jessica takes her lunch break at the same time. We’ll usually have a toasted sandwich or a bowl of noodles. While we eat, Jessica will quiz me with the Learned League questions she’s already answered that morning. I get all the fun of testing my knowledge without the pressure of competing.
  • 2pm — If the weather’s okay, we might head out for a brisk walk, probably to the nearby park where we can watch good doggos. Otherwise, it’s back to the podcast mines. I’ve already amassed a fair amount of raw material from interviews, so I’m spending most of my time in Descript, crafting and editing each episode. In about three hours of work, I reckon I get four or five minutes of good audio together. I should really be working on my upcoming talk for An Event Apart too, but I’m procrastinating. But I’m procrastinating by doing the podcast, so I’ve kind of tricked myself into doing something I’m supposed to be doing by avoiding something else I’m supposed to be doing.
  • Sometime between 5pm and 6pm — I knock off work. I pick up my mandolin and play some tunes. If Jessica’s done with work too, we play some tunes together.
  • 7pm — If it’s a Tuesday or Thursday, then it’s a ballet night for Jessica. While she’s in the kitchen doing her class online, I chill out in the living room, enjoying a cold beer, listening to some music with headphones on, and doing some reading or writing. I might fire up NetNewsWire and read the latest RSS updates from my friends, or I might write a blog post.
  • 8pm — If it is a ballet night, then dinner will be something quick and easy to prepare; probably pasta. Otherwise there’s more time to prepare something with care and love. Jessica is the culinary genius so my contributions are mostly just making sure she’s got her mise en place ahead of time, and cleaning up afterwards. I choose a bottle of wine and set the table, and then we sit down to eat together. It is definitely the highlight of the day.
  • 9pm — After cleaning up, I make us both cups of tea and we settle in on the sofa to watch some television. Not broadcast television; something on the Apple TV from Netflix, Amazon Prime, Disney+, or BBC iPlayer most likely. If we’re in the right mood, we’ll watch a film.
  • Sometime between 11pm and midnight — I change into my PJs, brush and floss my teeth, and climb into bed with a good book. When I feel my eyelids getting heavy, I switch off the light and go to sleep. That’s where I’m a Viking!

That’s a typical work day. My work week is Monday to Thursday. I switched over to a four-day week when The Situation hit, and now I don’t ever want to go back. It means making less money, but it’s worth it for a three day weekend.

My typical weekend involves more mandolin playing, more reading, more movies, and even better meals. I’ll also do some chores: clean the floors; back up my data.

Portals and giant carousels

I posted something recently that I think might be categorised as a “shitpost”:

Most single page apps are just giant carousels.

Extreme, yes, but perhaps there’s a nugget of truth to it. And it seemed to resonate:

I’ve never actually seen anybody justify SPA transitions with actual business data. They generally don’t seem to increase sales, conversion, or retention.

For some reason, for SPAs, managers are all of a sudden allowed to make purely emotional arguments: “it feels snappier”

If businesses were run rationally, when somebody asks for an order of magnitude increase in project complexity, the onus would be on them to prove that it proportionally improves business results.

But I’ve never actually seen that happen in a software business.

A single page app architecture makes a lot of sense for interaction-heavy sites with lots of state to maintain, like twitter.com. But I’ve seen plenty of sites built as single page apps even though there’s little to no interactivity or state management. For some people, it’s the default way of building anything on the web, even a brochureware site.

It seems like there’s a consensus that single page apps may have long initial loading times, but then they have quick transitions between “pages” …just like a carousel really. But I don’t know if that consensus is based on reality. Whether you’re loading a page of HTML or loading a chunk of JSON, you’re still making a network request that will take time to resolve.

The argument for loading a chunk of JSON is that you don’t have to make any requests for the associated CSS and JavaScript—they’re already loaded. Whereas if you request a page of HTML, that HTML will also request CSS and JavaScript.

Leaving aside the fact that this is literally what the browser cache takes care of, I’ve seen some circular reasoning around this:

  1. We need to create a single page app because our assets, like our JavaScript dependencies, are so large.
  2. Why are the JavaScript dependencies so large?
  3. We need all that JavaScript to create the single page app functionality.

To be fair, in the past, the experience of going from page to page used to feel a little herky-jerky, even if the response times were quick. You’d get a flash of a white blank page between navigations. But that’s no longer the case. Browsers now perform something called “paint holding” which eliminates the herky-jerkiness.

So now if your pages are a reasonable size, there’s no practical difference in user experience between full page refreshes and single page app updates. Navigate around The Session if you want to see paint holding in action. Switching to a single page app architecture wouldn’t improve the user experience one jot.

Except…

If I were controlling everything with JavaScript, then I’d also have control over how to transition between the “pages” (or carousel items, if you prefer). There’s currently no way to do that with full page changes.

This is the problem that Jake set out to address in his proposal for navigation transitions a few years back:

Having to reimplement navigation for a simple transition is a bit much, often leading developers to use large frameworks where they could otherwise be avoided. This proposal provides a low-level way to create transitions while maintaining regular browser navigation.

I love this proposal. It focuses on user needs. It also asks why people reach for JavaScript frameworks instead of using what browsers provide. People reach for JavaScript frameworks because browsers don’t yet provide some functionality: components like tabs or accordions; DOM diffing; control over styling complex form elements; navigation transitions. The problems that JavaScript frameworks are solving today should be seen as the R&D departments for web standards of tomorrow. (And conversely, I strongly believe that the aim of any good JavaScript framework should be to make itself redundant.)

I linked to Jake’s excellent proposal in my shitpost saying:

bucketloads of JavaScript wouldn’t be needed if navigation transitions were available in browsers

But then I added—and I almost didn’t—this:

(not portals)

Now you might be asking yourself what Paul said out loud:

Excuse my ignorance but… WTF are portals!?

I replied with a link to the portals proposal and what I thought was an example use case:

Portals are a proposal from Google that would help their AMP use case (it would allow a web page to be pre-rendered, kind of like an iframe).

That was based on my reading of the proposal:

…show another page as an inset, and then activate it to perform a seamless transition to a new state, where the formerly-inset page becomes the top-level document.

It sounded like Google’s top stories carousel. And the proposal goes into a lot of detail around managing cross-origin requests. Again, that strikes me as something that would be more useful for a search engine than a single page app.

But Jake was not happy with my description. I didn’t intend to besmirch portals by mentioning Google AMP in the same sentence, but I can see how the transitive property of ickiness would apply. Because Google AMP is a nasty monopolistic project that harms the web and is an embarrassment to many open web advocates within Google, drawing any kind of comparison to AMP is kind of like Godwin’s Law for web stuff. I know that makes it sound like I’m comparing Google AMP to Hitler, and just to be clear, I’m not (though I have myself been called a fascist by one of the lead engineers on AMP).

Clearly, emotions run high when Google AMP is involved. I regret summoning its demonic presence.

After chatting with Jake some more, I tried to find a better use case to describe portals. Reading the proposal, portals sound a lot like “spicy iframes”. So here’s a different use case that I ran past Jake: say you’re on a website that has an iframe embedded in it—like a YouTube video, for example. With portals, you’d have the ability to transition the iframe to a fully-fledged page smoothly.

But Jake told me that even though the proposal talks a lot about iframes and cross-origin security, portals are conceptually more like using rel="prerender" …but then having scripting control over how the pre-rendered page becomes the current page.

Put like that, portals sound more like Jake’s original navigation transitions proposal. But I have to say, I never would’ve understood that use case just from reading the portals proposal. I get that the proposal is aimed more at implementors than authors, but in its current form, it doesn’t seem to address the use case of single page apps.

Kenji said:

we haven’t seen interest from SPA folks in portals so far.

I’m not surprised! He goes on:

Maybe, they are happy / benefits aren’t clear yet.

From my own reading of the portals proposal, I think the benefits are definitely not clear. It’s almost like the opposite of Jake’s original proposal for navigation transitions. Whereas that was grounded in user needs and real-world examples, the portals proposal seems to have jumped to the intricacies of implementation without covering the user needs.

Don’t get me wrong: if portals somehow end up leading to a solution more like Jake’s navigation transitions proposals, then I’m all for that. That’s the end result I care about. I’d love it if people had a lightweight option for getting the perceived benefits of single page apps without the costly overhead in performance that comes with JavaScripting all the things.

I guess the web I want includes giant carousels.

Owning Clearleft

Clearleft turned fifteen this year. We didn’t make a big deal of it. What with The Situation and all, it didn’t seem fitting to be self-congratulatory. Still, any agency that can survive for a decade and a half deserves some recognition.

Cassie marked the anniversary by designing and building a beautiful timeline of Clearleft’s history.

Here’s a post I wrote 15 years ago:

Most of you probably know this already, but I’ve joined forces with Andy and Richard. Collectively, we are known as Clearleft.

I didn’t make too much of a big deal of it back then. I think I was afraid I’d jinx it. I still kind of feel that way. Fifteen years of success? Beginner’s luck.

Despite being one of the three founders, I was never an owner of Clearleft. I let Andy and Rich take the risks and rewards on their shoulders while I take a salary, the same as any other employee.

But now, after fifteen years, I am also an owner of Clearleft.

So is Trys. And Cassie. And Benjamin. And everyone else at Clearleft.

Clearleft is now owned by an employee ownership trust. This isn’t like owning shares in a company—a common Silicon Valley honeypot. This is literally owning the company. Shares are transferable—this isn’t. As long as I’m an employee at Clearleft, I’m a part owner.

On a day-to-day basis, none of this makes much difference. Everyone continues to do great work, the same as before. The difference is in what happens to any profit produced as a result of that work. The owners decide what to do with that profit. The owners are us.

In most companies you’ve got a tension between a board representing the stakeholders and a union representing the workers. In the case of an employee ownership trust, the interests are one and the same. The stakeholders are the workers.

It’ll be fascinating to see how this plays out. Check back again in fifteen years.

Mind the gap

In May 2012, Brian LeRoux, the creator of PhoneGap, wrote a post setting out the beliefs, goals and philosophy of the project.

The beliefs are the assumptions that inform everything else. Brian stated two core tenets:

  1. The web solved cross platform.
  2. All technology deprecates with time.

That second belief then informed one of the goals of the PhoneGap project:

The ultimate purpose of PhoneGap is to cease to exist.

Last week, PhoneGap succeeded in its goal:

Since the project’s beginning in 2008, the market has evolved and Progressive Web Apps (PWAs) now bring the power of native apps to web applications.

Today, we are announcing the end of development for PhoneGap.

I think Brian was spot-on with his belief that all technology deprecates with time. I also think it was very astute of him to tie the goals of PhoneGap to that belief. Heck, it’s even in the project name: PhoneGap!

I recently wrote this about Sass and clamp:

I’ve said it before and I’ll say it again, the goal of any good library should be to get so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.

jQuery is the perfect example of this. jQuery is no longer needed because cross-browser DOM Scripting is now much easier …thanks to jQuery.
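To take one small example: the kind of DOM selection and class manipulation that jQuery made pleasant is now built in. (The selectors and class name here are made up for illustration.)

// Where you might once have written $('.journal a').addClass('external'),
// the native APIs now read almost as comfortably:
document.querySelectorAll('.journal a').forEach((link) => {
  link.classList.add('external');
});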

Successful libraries and frameworks point the way. They show what developers are yearning for, and that’s where web standards efforts can then focus. When a library or framework is no longer needed, that’s not something to mourn; it’s something to celebrate.

That’s particularly true if the library of code needs to be run by a web browser. The user pays a tax with that extra download so that the developer gets the benefit of the library. When web browsers no longer need the library in order to provide the same functionality, it’s a win for users.

In fact, if you’re providing a front-end library or framework, I believe you should be actively working towards making it obsolete. Think of your project as a polyfill. If it’s solving a genuine need, then you should be looking forward to the day when your code is made redundant by web browsers.

One more thing…

I think it was great that Brian documented PhoneGap’s beliefs, goals and philosophy. This is exactly why design principles can be so useful—to clearly set out the priorities of a project, so that there’s no misunderstanding or mixed signals.

If you’re working on a project, take the time to ask yourself what assumptions and beliefs are underpinning the work. Then figure out how those beliefs influence what you prioritise.

Ultimately, the code you produce is the output generated by your priorities. And your priorities are driven by your purpose.

You can make those priorities tangible in the form of design principles.

You can make those design principles visible by publishing them.

Netlify redirects and downloads

Making the Clearleft podcast is a lot of fun. Making the website for the Clearleft podcast was also fun.

Design wise, it’s a riff on the main Clearleft site in terms of typography and general layout. On the development side, it was an opportunity to try out an exciting tech stack. The workflow goes something like this:

  • Open a text editor and type out HTML and CSS.

Comparing this to other workflows I’ve used in the past, this is definitely the most productive way of working. Some stats:

  • Time spent setting up build tools: 00:00
  • Time spent wrangling the pipeline to do exactly what you want: 00:00
  • Time spent trying to get the damn build tools to work again when you return to the project after leaving it alone for more than a few months: 00:00:00

I have some files. Some images, three font files, a few pages of HTML, one RSS feed, one style sheet, and one minimal service worker script. I don’t need a web server to do anything more than serve up those files. No need for any dynamic server-side processing.

I guess this is JAMstack. Though, given that the J stands for JavaScript, the A stands for APIs, and I’m not using either, technically it’s Mstack.

Netlify suits my hosting needs nicely. It also provides the added benefit that, should I need to update my CSS, I don’t need to add a query string or anything to the link elements in the HTML that point to the style sheet: Netlify does cache invalidation for you!

The mp3 files of the actual podcast episodes are stored on S3. I link to those mp3 files from enclosure elements in the RSS feed, which is what makes it a podcast. I also point to the mp3 files from audio elements on the individual episode pages—just above the transcript of each episode. Here’s the page for the most recent episode.
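If you’ve never peeked inside a podcast feed, the enclosure element is pleasingly simple. The file size here is a placeholder, but each item in the feed contains something like this:

<enclosure url="https://clearleft-audio.s3.amazonaws.com/podcast/season01episode06.mp3" length="12345678" type="audio/mpeg"/>

The length is the size of the file in bytes and the type is its MIME type. That’s what podcast apps use to go and fetch the audio.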

I also want people to be able to download the mp3 file directly if they want (or if they want to huffduff an episode). So I provide a link to the mp3 file with a good ol’-fashioned a element with an href attribute.

I throw in one more attribute on that link. The download attribute tells the browser that the URL in the href attribute should be downloaded instead of visited. If you give a value for the download attribute, it will override the file name:

<a href="/files/ugly-file-name.xyz" download="nice-file-name.xyz">download</a>

Or you can use it as a Boolean attribute without any value if you’re happy with the file name:

<a href="/files/nice-file-name.xyz" download>download</a>

There’s one catch though. The download attribute only works for files on the same origin. That’s an issue for me. My site is podcast.clearleft.com but my audio files are hosted on clearleft-audio.s3.amazonaws.com—the download attribute will be ignored and the mp3 files will play in the browser instead of downloading.

Trys pointed me to the solution. It turns out that Netlify can do some server-side processing. It can do redirects.

I added a file called _redirects to the root of my project. It contains one line:

/download/*  https://clearleft-audio.s3.amazonaws.com/podcast/:splat  200

That says that any URLs beginning with /download/ should redirect to clearleft-audio.s3.amazonaws.com/podcast/. Everything after the closing slash is captured with that wildcard asterisk. That’s then passed along to the redirect URL as :splat. That’s a new one on me. I hadn’t come across that terminology, but as someone who can never remember the syntax of regular expressions, it works for me.

Oh, and the 200 at the end is the status code: okay.

Now I can use this /download/ path in my link:

<a href="/download/season01episode06.mp3" download>Download mp3</a>

Because this URL is on the same origin, the download attribute works just fine.
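To make that concrete: the link above asks for /download/season01episode06.mp3, and the redirect rule means Netlify fetches https://clearleft-audio.s3.amazonaws.com/podcast/season01episode06.mp3 behind the scenes. Because the rule uses a 200 status rather than a 3xx, Netlify proxies the file instead of bouncing the browser over to S3, so the browser only ever sees a URL on podcast.clearleft.com.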

Lightweight

It’s been fascinating to see how television programmes have adapted to The Situation. It’s like there’s been a weird inversion with the YouTube aesthetic. Instead of YouTubers doing their utmost to emulate the look of professional television, now everyone on professional television looks like a YouTuber.

No more lighting or audio technicians. No more studio audiences. Heck, no more studios.

There are some kinds of TV programmes that are showing the strain. A lot of comedy formats just fall flat without the usual production values. But a lot of programmes work just fine. In fact, some of them might be better. Watching Mary Beard present Front Row Late from her house is an absolute delight. It feels more direct and honest without the artifice of a television studio. It kind of makes you wonder whether expensive production costs are really necessary when what you really care about is the content.

All of this is one big belaboured metaphor for websites.

In times of crisis, informational websites sometimes offer a “lite” version. Max has even made an emergency website kit:

The site contains only the bare minimum - no webfonts, no tracking, no unnecessary images. The entire thing should fit in a single HTTP request. It’s basically just a small, ultra-lean blog focused on maximum resilience and accessibility. The Service Worker takes it a step further from there so if you’ve visited the site once, the information is still accessible even if you lose network coverage.

Eric emphasises the importance of performance in his post Get Static:

I’m thinking here of sites for places like health departments (and pretty much all government services), hospitals and clinics, utility services, food delivery and ordering, and I’m sure there are more that haven’t occurred to me.  As much as you possibly can, get it down to static HTML and CSS and maybe a tiny bit of enhancing JS, and pare away every byte you can.

Tom Loosemore offers this advice to teams building new coronavirus services:

  1. Get a 4 year-old Android phone, and use it as your test/demo device.
  2. https://design-system.service.gov.uk is your friend.
  3. Full React isn’t your friend if it makes your service slow & inaccessible

Remember: This is for everyone.

Indeed, Gov.uk are usually a paragon of best practices in just about any situation. But they dropped the ball recently, as Matthew attests:

coronavirus.data.gov.uk is a static site, fetching and displaying remote data. It is also a 100% client-side JavaScript React site.

http://dracos.co.uk/made/coronavirus.data.gov.uk/ is 238K vs 770K (basics) on load. I’ve removed about 550K of JavaScript. It seems to work the same.

As Tom says:

One sign that your website isn’t meeting the needs of all your users is when Matthew Somerville gets sufficiently grumpy about it to do a proper version himself.

It’s true enough that Matthew excels at creating lightweight, accessible versions of services that are too bloated or buggy to use. His accessible Odeon project from back in the day is legendary. And I use his slimline version of the National Rail website all the time: traintimes.org.uk—it’s a terrifically performant progressive web app.

It’s thankless work though. It flies in the face of everything considered “modern” web development. (If you want to know the cost of “modern” framework-driven JavaScript-first web development, Tim has the numbers.) But Matthew is kind of a hero to me. I wish more developers would follow his example.

Maybe now, with this rush to make lightweight versions of valuable services, we might stop and reflect on whether we ever really needed all those added extras in the first place.

Hope springs eternal.

Update: Matthew has written about his process in Looking at coronavirus.data.gov.uk.

Apple’s attack on service workers

Apple aren’t the best at developer relations. But, bad as their communications can be, I’m willing to cut them some slack. After all, they’re not used to talking with the developer community.

John Wilander wrote a blog post that starts with some excellent news: Full Third-Party Cookie Blocking and More. Safari is catching up to Firefox and disabling third-party cookies by default. Wonderful! I’ve had third-party cookies disabled for a few years now, and while something occasionally breaks, it’s honestly a pretty great experience all around. Denying companies the ability to track users across sites is A Good Thing.

In the same blog post, John said that client-side cookies will be capped to a seven-day lifespan, as previously announced. Just to be clear, this only applies to client-side cookies. If you’re setting a cookie on the server, using PHP or some other server-side language, it won’t be affected. So persistent logins are still doable.
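To illustrate the difference (this is my example, not John’s): a cookie written by JavaScript in the page is the kind that gets capped, while the exact same cookie sent from the server keeps whatever expiry you ask for.

<script>
// Written by script on the client: Safari will cap this to seven days
document.cookie = "theme=dark; max-age=31536000; path=/";
</script>

Whereas a Set-Cookie: theme=dark; Max-Age=31536000 header in the server’s response isn’t affected, which is why those persistent logins are safe.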

Then, in an audacious example of burying the lede, towards the end of the blog post, John announces that a whole bunch of other client-side storage technologies will also be capped to seven days. Most of the technologies are APIs that, like cookies, can be used to store data: Indexed DB, Local Storage, and Session Storage (though there’s no mention of the Cache API). At the bottom of the list is this:

Service Worker registrations

Okay, let’s clear up a few things here (because they have been so poorly communicated in the blog post)…

The seven day timer refers to seven days of Safari usage, not seven calendar days (although, given how often most people use their phones, the two are probably interchangeable). So if someone returns to your site within a seven day period of using Safari, the timer resets to zero, and your service worker gets a stay of execution. Lucky you.

This only applies to Safari. So if your site has been added to the home screen and your web app manifest has a value for the “display” property like “standalone” or “fullscreen”, the seven day timer doesn’t apply.

That piece of information was missing from the initial blog post. Since the blog post was updated to include this clarification, some people have taken this to mean that progressive web apps aren’t affected by the upcoming change. Not true. Only progressive web apps that have been added to the home screen (and that have an appropriate “display” value) will be spared. That’s a vanishingly small percentage of progressive web apps, especially on iOS. To add a site to the home screen on iOS, you need to dig and scroll through the share menu to find the right option. And you need to do this unprompted. There is no ambient badging in Safari to indicate that a site is installable. Chrome’s install banner isn’t perfect, but it’s better than nothing.

Just a reminder: a progressive web app is a website that

  • runs on HTTPS,
  • has a service worker,
  • and a web manifest.

Adding to the home screen is something you can do with a progressive web app (or any other website). It is not what defines progressive web apps.
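For the record, the moving parts are tiny. Here’s a minimal sketch (the file names are placeholders): link to a manifest in the head of your pages, put "display": "standalone" in that manifest’s JSON, and register a service worker with one line of JavaScript.

<link rel="manifest" href="/manifest.json">
<script>
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}
</script>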

In any case, this move to delete service workers after seven days of using Safari is very odd, and I’m struggling to find the connection to the rest of the blog post, which is about technologies that can store data.

As I understand it, with the crackdown on setting third-party cookies, trackers are moving to first-party technologies. So whereas in the past, a tracking company could tell its customers “Add this script element to your pages”, now they have to say “Add this script element and this script file to your pages.” That JavaScript file can then store a unique identifier on the client. This could be done with a cookie, with Local Storage, or with Indexed DB, for example. But I’m struggling to understand how a service worker script could be used in this way. I’d really like to see some examples of this actually happening.
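For what it’s worth, the Local Storage version of that trick is trivial. Something like this hypothetical snippet, included as a first-party script, would persist an identifier across visits:

<script>
// Hypothetical first-party tracker: keep an identifier around between visits
let id = localStorage.getItem('uid');
if (!id) {
  id = Math.random().toString(36).slice(2);
  localStorage.setItem('uid', id);
}
</script>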

The best explanation I can come up with for this move by Apple is that it feels like the neatest solution. That’s neat as in tidy, not as in nifty. It is definitely not a nifty solution.

If some technologies set by a specific domain are being purged after seven days, then the tidy thing to do is purge all technologies from that domain. Service workers are getting included in that dragnet.

Now, to be fair, browsers and operating systems are free to clean up storage space as they see fit. Caches, Local Storage, Indexed DB—all of those are subject to eventually getting cleaned up.

So I was curious. Wanting to give Apple the benefit of the doubt, I set about trying to find out how long service worker registrations currently last before getting deleted. Maybe this announcement of a seven day time limit would turn out to be not such a big change from current behaviour. Maybe currently service workers last for 90 days, or 60, or just 30.

Nope:

There was no time limit previously.

This is not a minor change. This is a crippling attack on service workers, a technology specifically designed to improve the user experience for return visits, whether it’s through improved performance or offline access.

I wouldn’t be so stunned had this announcement come with an accompanying feature that would allow Safari users to know when a website is a progressive web app that can be added to the home screen. But Safari continues to ignore the existence of progressive web apps. And now it will actively discourage people from using service workers.

If you’d like to give feedback on this ludicrous development, you can file a bug (down in the cellar in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying “Beware of the Leopard”).

No doubt there will still be plenty of Apple apologists telling us why it’s good that Safari has wished service workers into the cornfield. But make no mistake. This is a terrible move by Apple.

I will say this though: given The Situation we’re all living in right now, some good ol’ fashioned Hot Drama by a browser vendor behaving badly feels almost comforting.

Home

Clearleft is a remote-working company right now. I mean, that’s hardly surprising—just about everyone I know is working from home.

We made this decision on Friday. It was clear that the spread of COVID-19 was going exponential (even with the very incomplete data available in the UK). Despite the wishy-washy advice from the government—which has since pivoted drastically—we made the decision to literally get ahead of the curve. We had one final get-together in the studio yesterday to plan logistics and pick up equipment. Then it was time to start this chapter.

I’ve purloined:

  • one Aeron chair,
  • one big monitor,
  • a wired keyboard,
  • a wireless mouse, and
  • noise-cancelling headphones.

Cassie kindly provided the use of her van to get that stuff home. The Aeron chair proved to be extremely tricky to get through my narrow front door. For a while there, it looked like I’d need to take the door off the hinges but with a whole lotta pushin’ and a-pullin’ Jessica and I managed to somehow get it in.

Now I’ve got a reasonable home studio set up, I can get back to working on that conference talk I’m prepar… Oh.

Yeah, I guess I’ve got a stay of execution on that. For the past few months I’ve had my head down preparing a new hour-long talk for An Event Apart. Yes, it takes me that long to put a talk together—it feels kinda like writing a book. I’d like to think it’s because I’m so meticulous, but the truth is that I’m just very slow at most things. Also, I’m a bad one for procrastinating.

For the past week or so, while I’ve been making pretty darn good progress on the talk, a voice in the back of my head has been whispering “Hey, maybe the conference won’t even happen!” In answer to which, the voice in the front of my head has been saying “Would you shut the fuck up? I’m trying to work here!”

I was due to debut the talk at An Event Apart Seattle in May. Sure enough, that event has quite rightly been cancelled. So have a lot of other excellent events. It’s a real shame, and my heart goes out to event organisers who pour so much of themselves into their events; their love, their care, and not least, their financial risk.

I speak at quite a few events every year. I really, really enjoy it. For one, public speaking is one of the few things I think I’m actually any good at. Also, I just love the chance to meet my peers and collectively nerd out together for a short while.

Then there’s the travel. This year I was planning on drastically reducing my plane travel. I had bagged myself speaking slots in European cities that I could reach by train: Cologne, Strasbourg, even Lisbon, along with domestic destinations like Nottingham, Manchester, and Plymouth. I was quite looking forward to some train adventures. But those will have to wait.

Right now I’m going to settle into this new home routine. It’s not entirely new to me. Back in the early 2000s, I worked from home as a freelancer. Back then, Jessica and I worked not just in the same room, but on opposite ends of the same table!

We’ve got more room now. Jessica has her own office space. I’m getting used to mine. But as Jessica pointed out:

I’ve worked from home for 20(!) years, I love it, I’m made for it, I can’t imagine not doing it—and I’ve gotten absolutely nothing done for the past week.

Newly WFH folks, the situation now is totally unconducive to concentration and productivity. Be gentle with yourselves.

A curl in every port

A few years back, Zach Bloom wrote The History of the URL: Path, Fragment, Query, and Auth. He recently expanded on it and republished it on the Cloudflare blog as The History of the URL. It’s well worth the time to read the whole thing. It’s packed full of fascinating tidbits.

In the section on ports, Zach says:

The timeline of Gopher and HTTP can be evidenced by their default port numbers. Gopher is 70, HTTP 80. The HTTP port was assigned (likely by Jon Postel at the IANA) at the request of Tim Berners-Lee sometime between 1990 and 1992.

Ooh, I can give you an exact date! It was January 24th, 1992. I know this because of the hack week at CERN last year to recreate the first ever web browser.

Kimberly was spelunking down the original source code when she came across this line in the HTUtils.h file:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

We showed this to Jean-François Groff, who worked on the original web technologies like libwww, the forerunner to libcurl. He remembers that day. It felt like they had “made it”, receiving the official blessing of Jon Postel (in the same RFC, incidentally, that gave port 70 to Gopher).

Then he told us something interesting about the next line of code:

#define OLD_TCP_PORT 2784 /* Try the old one if no answer on 80 */

Port 2784? That seems like an odd choice. Most of us would choose something easy to remember.

Well, it turns out that 2784 is easy to remember if you’re Tim Berners-Lee.

Those were the last four digits of his parents’ phone number.