Journal tags: frontend

Secure tunes

The caching strategy for The Session that I wrote about is working a treat.

There are currently about 45,000 different tune settings on the site. One week after introducing the server-side caching, over 40,000 of those settings are already cached.

But even though it’s currently working well, I’m going to change the caching mechanism.

The eagle-eyed amongst you might have raised an eagle eyebrow when I described how the caching happens:

The first time anyone hits a tune page, the ABCs get converted to SVGs as usual. But now there’s one additional step. I grab the generated markup and send it as an Ajax payload to an endpoint on my server. That endpoint stores the sheetmusic as a file in a cache.

I knew when I came up with this plan that there was a flaw. The endpoint that receives the markup via Ajax is accepting data from the client. That data could be faked by a malicious actor.

Sure, I’m doing a whole bunch of checks and sanitisation on the server, but there’s always going to be a way of working around that. You can never trust data sent from the client. I was kind of relying on security through obscurity …except it wasn’t even that obscure because I blogged about it.

So I’m switching over to using a headless browser to extract the sheetmusic. You may recall that I wrote:

I could spin up a headless browser, run the JavaScript and take a snapshot. But that’s a bit beyond my backend programming skills.

That’s still true. So I’m outsourcing the work to Browserless.

There’s a reason I didn’t go with that solution to begin with. Like I said, over 40,000 tune settings have already been cached. If I had used the Browserless API to do that work, it would’ve been quite pricey. But now that the flood is over and there’s just a trickle of caching happening, Browserless is a reasonable option.
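
For what it’s worth, here’s a rough sketch of that kind of outsourcing, using puppeteer-core pointed at Browserless’s WebSocket endpoint. The URL, token, and selector are placeholders, not the actual code I’m running:

import puppeteer from 'puppeteer-core';

// Connect to a remote headless Chrome instance hosted by Browserless
const browser = await puppeteer.connect({
  browserWSEndpoint: `wss://chrome.browserless.io?token=${process.env.BROWSERLESS_TOKEN}`
});

const page = await browser.newPage();
await page.goto('https://thesession.org/tunes/123', { waitUntil: 'networkidle0' });

// Wait for abcjs to render, then grab the generated sheetmusic markup
await page.waitForSelector('svg');
const sheetmusic = await page.$eval('svg', (el) => el.outerHTML);

await browser.close();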

Anyway, that security hole has now been closed. Thank you to everyone who wrote in to let me know about it. Like I said, I was aware of it, but it was good to have it confirmed.

Funnily enough, the security lesson here is the same as my conclusion when talking about performance:

If that means shifting the work from the browser to the server, do it!

Speedy tunes

Performance is a high priority for me with The Session. It needs to work for people all over the world using all kinds of devices.

My main strategy for ensuring good performance is to diligently apply progressive enhancement. The core content is available to any device that can render HTML.

To keep things performant, I’ve avoided as many assets (or, more accurately, liabilities) as possible. No unnecessary images. No superfluous JavaScript libraries. Not even any web fonts (gasp!). And definitely no third-party resources.

The pay-off is a speedy site. If you want to see lab data, run a page from The Session through Lighthouse. To see field data, take a look at data from the Chrome UX Report (CrUX).

But the devil is in the details. Even though most pages on The Session are speedy, the outliers have bothered me for a while.

Take a typical tune page on the site. The data is delivered from the server as HTML, which loads nice and quick. That data includes the notes for the tune settings, written in ABC notation, a nice lightweight text format.

Then the enhancement happens. Using Paul Rosen’s brilliant abcjs JavaScript library, those ABCs are converted into SVG sheetmusic.

So on tune pages there’s an additional download for that JavaScript library. That’s not so bad though—I’m using a service worker to cache that file so there’ll only ever be one initial network request.
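
The service worker logic can be a straightforward cache-first handler, something like this sketch (the cache name and file name here are hypothetical):

self.addEventListener('fetch', (event) => {
  // Serve the abcjs library from the cache if we already have it
  if (event.request.url.endsWith('abcjs-basic-min.js')) {
    event.respondWith(
      caches.open('static').then((cache) =>
        cache.match(event.request).then(
          (cached) =>
            cached ||
            fetch(event.request).then((response) => {
              cache.put(event.request, response.clone());
              return response;
            })
        )
      )
    );
  }
});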

If a tune has just a few different versions, the page remains nice and zippy. But if a tune has lots of settings, the computation starts to add up. Converting all those settings from ABC to SVG starts to take a cumulative toll on the main thread.

I pondered ways to avoid that conversion step. Was there some way of pre-generating the SVGs on the server rather than doing it all on the client?

In theory, yes. I could spin up a headless browser, run the JavaScript and take a snapshot. But that’s a bit beyond my backend programming skills, so I’ve taken a slightly different approach.

The first time anyone hits a tune page, the ABCs get converted to SVGs as usual. But now there’s one additional step. I grab the generated markup and send it as an Ajax payload to an endpoint on my server. That endpoint stores the sheetmusic as a file in a cache.
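
In code, that additional step might look something like this sketch. The endpoint and selector are hypothetical, and the real endpoint also does a whole bunch of checks and sanitisation:

// After abcjs has rendered, send the generated markup back to the server
const sheetmusic = document.querySelector('.notation svg'); // hypothetical selector
if (sheetmusic) {
  fetch('/cache-sheetmusic', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      page: location.pathname,
      markup: sheetmusic.outerHTML
    })
  });
}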

Next time someone hits that page, there’s a server-side check to see if the sheetmusic has been cached. If it has, send that down the wire embedded directly in the HTML.

The idea is that over time, most of the sheetmusic on the site will transition from being generated in the browser to being stored on the server.

So far it’s working out well.

Take a really popular tune like The Lark In The Morning. There are twenty settings, and each one has four parts. Previously that would have resulted in a few seconds of parsing and rendering time on the main thread. Now everything is delivered up-front.

I’m not out of the woods. A page like that with lots of sheetmusic and plenty of comments is going to have a hefty page weight and a large DOM size. I’ve still got a fair bit of main-thread work happening, but now the bulk of it is style and layout, whereas previously I had the JavaScript overhead on top of that.

I’ll keep working on it. But overall, the speed improvement is great. A typical tune page is now very speedy indeed.

It’s like a microcosm of web performance in general: respect your users’ time, network connection and battery life. If that means shifting the work from the browser to the server, do it!

Multi-page web apps

I received this email recently:

Subject: multi-page web apps

Hi Jeremy,

lately I’ve been following you through videos and texts and I’m curious as to why you advocate the use of multi-page web apps and not single-page ones.

Perhaps you can refer me to some sources where your position and reasoning is evident?

Here’s the response I sent…

Hi,

You can find a lot of my reasoning laid out in this (short and free) online book I wrote called Resilient Web Design:

https://resilientwebdesign.com/

The short answer to your question is this: user experience.

The slightly longer answer…

For most use cases, a website (or multi-page app if you prefer) is going to provide the most robust experience for the largest number of users. That’s because a user’s web browser takes care of most of the heavy lifting.

Navigating from one page to another? That’s taken care of with links.

Gathering information from a user to process on a server? That’s taken care of with forms.

This frees me up to concentrate on the content and the design without having to reinvent the wheels of links and form fields.

These (let’s call them) multi-page apps are stateless, and for most use cases that’s absolutely fine.

There are some cases where you’d want a state to persist across pages. Let’s say you’re playing a song, or a podcast episode. Ideally you’d want that player to continue seamlessly playing even as the user navigates around the site. In that situation, a single-page app would be a suitable architecture.

But that architecture comes at a cost. Now you’ve got to stop the browser doing what it would normally do with links and forms. It’s up to you to recreate that functionality. And you can’t do it with HTML, a robust fault-tolerant declarative language. You need to reimplement all that functionality in JavaScript, a less tolerant, more brittle language.

Then you’ve got to ship all that code to the user before they can use your site. It might be JavaScript code you’ve written yourself or it might be a third-party library designed for building single-page apps. Either way, the user pays a download tax (and a parsing tax, and an execution tax). Whereas with links and forms, all of that functionality is pre-bundled into the user’s web browser.

So that’s my reasoning. At least nine times out of ten, a multi-page approach is leaner, more robust, and simpler.

Like I said, there are times when a single-page approach makes sense—it all comes down to whether state needs to be constantly preserved. But these use cases are the exceptions, not the rule.

That’s why I find the framing of your question a little concerning. It should be inverted. The default approach should be to assume a multi-page approach (which is the way the web works by default). Deciding to take a JavaScript-driven single-page approach should be the exception.

It’s kind of like when people ask, “Why don’t you have children?” Surely the decision to have a child should require deliberation and commitment, rather than the other way around.

When it comes to front-end development, I’m worried that we’ve reached a state where the more complex over-engineered approach is viewed as the default.

I may be committing a fundamental attribution error here, but I think that we’ve reached this point not because of any consideration for users, but rather because of how it makes us developers feel. Perhaps building an old-fashioned website that uses HTML for navigation feels too easy, like it’s beneath us. But building an “app” that requires JavaScript just to render text on a screen feels like real programming.

I hope I’m wrong. I hope that other developers will start to consider user experience first and foremost when making architectural decisions.

Anyway. That’s my answer. User experience.

Cheers,

Jeremy

Performative performance

Web Summer Camp in Croatia finished with an interesting discussion. It was labelled a town-hall meeting, but it was more like an Oxford debating club.

Two speakers had two minutes each to speak for or against a particular statement. Their stances were assigned to them so they didn’t necessarily believe what they said.

One of the propositions was something like:

In the future, sustainable design will be as important as UX or performance.

That’s a tough one to argue against! But that’s what Sophia had to do.

She actually made a fairly compelling argument. She said that real impact isn’t going to come from individual websites changing their colour schemes. Real impact is going to come from making server farms run on renewable energy. She advocated for political action to change the system rather than having the responsibility heaped on the shoulders of the individuals making websites.

It’s a fair point. Much like the concept of a personal carbon footprint started life at BP to distract from corporate responsibility, perhaps we’re going to end up navel-gazing into our individual websites when we should be collectively lobbying for real change.

It’s akin to clicktivism—thinking you’re taking action by sharing something on social media, when real action requires hassling your political representative.

I’ve definitely seen some examples of performative sustainability on websites.

For example, at the start of this particular debate at Web Summer Camp we were shown a screenshot of a municipal website that has a toggle. The toggle supposedly enables a low-carbon mode. High resolution images are removed and for some reason the colour scheme goes grayscale. But even if those measures genuinely reduced energy consumption, it’s a bit late to enact them only after the toggle has been activated. Those hi-res images have already been downloaded by then.

Defaults matter. To be truly effective, the toggle needs to work the other way. Start in low-carbon mode, and only download the hi-res images when someone specifically requests them. (Hopefully browsers will implement prefers-reduced-data soon so that we can have our sustainable cake and eat it.)
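
Here’s the shape that default could take in CSS once prefers-reduced-data is supported. This is a sketch: the media query is still behind flags in most browsers, and the image URLs are placeholders:

.banner {
  background-image: url("banner-lo-res.jpg");
}

/* Only fetch the heavier image if the user hasn't asked for reduced data */
@media (prefers-reduced-data: no-preference) {
  .banner {
    background-image: url("banner-hi-res.jpg");
  }
}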

Likewise I’ve seen statistics bandied about around the energy-savings that could be made if we used dark colour schemes. I’m sure the statistics are correct, but I’d like to see them presented side-by-side with, say, the energy impact of Google Tag Manager or React or any other wasteful dependencies that impact performance invisibly.

That’s the crux. Most of the important work around energy usage on websites is invisible. It’s the work done to not add more images, more JavaScript or more web fonts.

And it’s not just performance. I feel like the more important the work, the more likely it is to be invisible: privacy, security, accessibility …those matter enormously but you can’t see when a website is secure, or accessible, or not tracking you.

I suspect this is why those areas are all frustratingly under-resourced. Why pour time and effort into something you can’t point at?

Now that I think about it, this could explain the rise of web accessibility overlays. If you do the real work of actually making a website accessible, your work will be invisible. But if you slap an overlay on your website, it looks like you’re making a statement about how much you care about accessibility (even though the overlay is total shit and does more harm than good).

I suspect there might be a similar mindset at work when it comes to interface toggles for low-carbon mode. It might make you feel good. It might make you look good. But it’s a poor substitute for making your website carbon-neutral by default.

Coding prototypes

We do quite a bit of prototyping at Clearleft. There’s no better way to reduce risk than to get something in front of users as quickly as possible to test whether you’re on the right track or not.

As Benjamin said in the podcast episode on prototyping:

It’s something to look at, something to prod. And ideally you’re trying to work out what works and what doesn’t.

Sometimes the prototype is mocked up in Figma. Preferably it’s built in code—HTML, CSS, and JavaScript. Having a prototype built in the materials of the medium helps establish a plausible suspension of disbelief during testing.

Also, as Trys said in that same podcast episode:

Prototypical code isn’t production code. It’s quick and it’s often a little bit dirty and it’s not really fit for purpose in that final deliverable. But it’s also there to be inspiring and to gather a team and show that something is possible.

I can’t reiterate that enough: prototype code isn’t production code.

I’ve written about the two different mindsets before:

So these two kinds of work require very different attitudes. For production work, quality is key. For prototyping, making something quickly is what matters.

Addy recently wrote an excellent blog post on the topic of prototyping. The value of a prototype is in the insight it imparts, not the code.

It’s crucial to remember that in a prototype, the code serves merely as a medium—a way to facilitate understanding. It’s a means to an end, not the end itself. The code of a prototype is disposable and mutable. In contrast, the lessons learned from a prototype, the insights gained from user interaction and feedback, are far more durable and impactful.

This!

It can be tempting to re-use code from a prototype. I get it. It seems like a waste to throw away code and build something from scratch. But trust me—and I speak from experience here—it will take more time to wrangle prototype code into something that’s production-ready.

The problem is that quality is often invisible. Think about semantics, performance, security, privacy, and accessibility. Those matter—for production code—but they’re under the surface. For someone who doesn’t understand the importance of those hidden qualities, a prototype that looks like it works seems ready to ship. It’s understandable that they’d balk at the idea of just throwing that code away and writing new code. Sometimes the suspension of disbelief that a prototype is aiming for works too well.

As is so often the case, this isn’t a technical problem. It’s a communication issue.

Relative times

Last week Phil posted a little update about his excellent site, ooh.directory:

If you’re in the habit of visiting the Recently Updated Blogs page, and leaving it open, the times when each blog was updated will now keep up with the relentless passing of time.

Does that make sense? “3 minutes ago” will change to “4 minutes ago” and so on and on and on, until you refresh the page.

I thought that was a nice little addition, and I immediately thought of The Session. There are time elements all over the site with relative times as the text content: 2 minutes ago, 7 hours ago, 1 year ago, and so on. Those strings of text are generated on the server. But I figured it would be a nice enhancement to periodically update them in the browser after the page has loaded.

I viewed source to see how Phil was doing it. The code is nice and short, using a library called Day.js with a plug-in for relative time.

“Hang on”, I thought, “isn’t there some web standard for doing this kind of thing?” I had a vague memory of some JavaScript API for formatting dates and times.

Sure enough, we’ve now got the Intl.RelativeTimeFormat object. It’s got browser support across the board.

Here’s the code I wrote.

I’ve got a function that loops through all the time elements with datetime attributes. It compares the current timestamp to that value to get the elapsed time. Then that’s formatted using the format() method and output as innerText.

You need to tell the format() method which units you want to use: seconds, minutes, hours, days, etc. So there’s a little bit of looping to figure out which unit is most appropriate. If the elapsed time is less than a minute, use seconds. If the elapsed time is less than an hour, use minutes. If the elapsed time is less than a day, use hours. You get the idea.

It’s a pity there isn’t some kind of magic unit like “auto” to do this, but it’s not much extra code to figure it out.

Anyway, that function runs periodically using setInterval(). I’ve set it to run every 30 seconds in my gist. On The Session I’ve set it to one minute.
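
Here’s a simplified sketch of the whole thing (not the exact code from the gist):

const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });

// Units from largest to smallest, with their lengths in seconds
const units = [
  ['year', 365 * 24 * 60 * 60],
  ['month', 30 * 24 * 60 * 60],
  ['day', 24 * 60 * 60],
  ['hour', 60 * 60],
  ['minute', 60],
  ['second', 1]
];

function updateRelativeTimes() {
  document.querySelectorAll('time[datetime]').forEach((time) => {
    const elapsed = (new Date(time.getAttribute('datetime')) - Date.now()) / 1000;
    // Find the largest unit that fits the elapsed time
    for (const [unit, seconds] of units) {
      if (Math.abs(elapsed) >= seconds || unit === 'second') {
        time.innerText = rtf.format(Math.round(elapsed / seconds), unit);
        break;
      }
    }
  });
}

updateRelativeTimes();
setInterval(updateRelativeTimes, 30000); // every 30 seconds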

You’ll notice that I’m grabbing all the relevant time elements—using document.querySelectorAll('time[datetime]')—every time the function is run. That may seem inefficient. Couldn’t I just grab them once and then keep them stored as an array? But I want this to work even if the page contents have been updated with Ajax. (Do people even say “Ajax” any more? Get off my lawn, you pesky kids!)

I think I’ve written this code in an abstract way so that you should be able to drop it into any web page. For the calculations to work, make sure your datetime attributes include timezone information; if there’s no timezone info, UTC is assumed.

This was a fun little piece of functionality to play around with. Now I know a little more about this Intl.RelativeTimeFormat object. The way I’m using it is a classic example of progressive enhancement. If a browser doesn’t support it, or if my code breaks, it’s no big deal. The functionality is a little bonus that almost nobody will notice anyway. Just a small delighter …if you’re the kind of person who finds it delightful when relative time strings automatically update.

Button types

I’ve been banging the drum for a button type="share" for a while now.

I’ve also written about other potential button types. The pattern I noticed was that, if a JavaScript API first requires a user interaction—like the Web Share API—then that’s a good hint that a declarative option would be useful:

The Fullscreen API has the same restriction. You can’t make the browser go fullscreen unless you’re responding to user gesture, like a click. So why not have button type="fullscreen" in HTML to encapsulate that? And again, the fallback in non-supporting browsers is predictable—it behaves like a regular button—so this is trivial to polyfill.

There’s another “smell” that points to some potential button types: what functionality do browsers provide in their interfaces?

Some browsers provide a print button. So how about button type="print"? The functionality is currently doable with button onclick="window.print()" so this would be a nicer, more declarative way of doing something that’s already possible.

It’s the same with back buttons, forward buttons, and refresh buttons. The functionality is available through a browser interface, and it’s also scriptable, so why not have a declarative equivalent?
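
Until any of this lands in browsers, a polyfill could be as simple as this sketch. The type values here are hypothetical, and in a non-supporting browser they’d fall back to regular button behaviour:

document.addEventListener('click', (event) => {
  const button = event.target.closest('button');
  if (!button) return;
  const type = button.getAttribute('type');
  const actions = {
    print: () => window.print(),
    back: () => history.back(),
    forward: () => history.forward(),
    refresh: () => location.reload()
  };
  if (actions[type]) {
    event.preventDefault(); // unknown types default to submit inside a form
    actions[type]();
  }
});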

How about bookmarking?

And remember, the browser interface isn’t always visible: progressive web apps that launch with minimal browser UI need to provide this functionality.

Šime Vidas was wondering about button type="copy" for copying to clipboard. Again, it’s something that’s currently scriptable and requires a user gesture. It’s a little more complex than the other actions because there needs to be some way of providing the text to be copied, but it’s definitely a valid use case.

  • button type="share"
  • button type="fullscreen"
  • button type="print"
  • button type="bookmark"
  • button type="back"
  • button type="forward"
  • button type="refresh"
  • button type="copy"

Any more?

Days of style and standards

I first spoke at CSS Day in Amsterdam back in 2016. Well, technically it was the HTML Day preceding CSS Day, when I talked about the A element. I spoke at CSS Day again last year, when I gave a presentation about alternative histories of styling.

One of the advantages to having spoken at the event in the past is that I’m offered a complimentary ticket to the event every year. That’s an offer I’ve made the most of.

I’ve just returned from the latest iteration of CSS Day. It was, as always, excellent. I’ve said it before and I’ll say it again, but I just love the way that this event treats CSS with the respect it deserves. I always attend thinking “I know CSS”, but I always leave thinking “I learned a lot about CSS!”

The past few years have been incredibly exciting for the language. We’ve been handed feature after feature, including capabilities we were told just weren’t possible: container queries; :has; cascade layers; view transitions!

As Paul points out in his write-up, there’s been a shift in how these features feel too. In the past, the feeling was “there’s some great stuff arriving and it’ll be so cool once we’ve got browser support.” Now the feeling is finally catching up to the reality: these features are here now. If browser support for an exciting feature is still an issue, wait a few weeks.

Mind you, as Paul also points out, maybe that’s down to the decreased diversity in rendering engines. If a feature ships in Chromium, WebKit, and Gecko, then it’s universally supported. On the one hand, that’s great for developers. But on the other hand, it’s not ideal for the ecosystem of the web.

Anyway, as expected, there was a ton of mind-blowing stuff at CSS Day 2023. Most of the talks were deep dives into specific features. Those deep dives were bookended by big-picture opening and closing talks.

Manuel closed out the show by talking about how he’s changing the way he writes and thinks about CSS. I think that’s a harbinger of what’s to come in the next year or so. We’ve had this wonderful burst of powerful new features over the past couple of years; I think what we’ll see next is consolidation. Understanding how these separate pieces play well together is going to be very powerful.

Heck, just exploring all the possibilities of custom properties and :has could be revolutionary. When you add in the architectural implications of cascade layers and container queries, it feels like a whole new paradigm waiting to happen.

That was the vibe of Una’s opening talk too. It was a whistle-stop tour of all the amazing features that have already landed, and some that will be with us very soon.

But Una also highlighted the heartbreaking disparity between the brilliant reality of CSS in browsers today versus how the language is perceived.

Look at almost any job posting for front-end development and you’ll see that CSS still isn’t valued as its own skill. Never mind that you could specialise in a subset of CSS—layout, animation, architecture—and provide 10× value to an organisation, the recruiters are going to play it safe and ask you if you know React.

Rachel Nabors and I were chatting about this gap between the real and perceived value of modern CSS. She astutely pointed out that CSS is kind of a victim of its own resilience. The way you wrote CSS ten years ago still works, and will continue to work. That’s by design. Yes, you can write much better, more resilient CSS today, but if those qualities aren’t valued by an organisation, then you’re casting your pearls before swine.

That said, it’s also true that the JavaScript you wrote ten years ago also continues to work today and will continue to work in the future. So why is it that devs seem downright eager to try the latest JavaScript hotness but are reluctant to use CSS that’s been stable for years?

Or perhaps that’s not an accurate representation of the JavaScript ecosystem. It may well be that the eagerness only extends to libraries and frameworks. There’s reluctance to embrace native JavaScript APIs like Proxy or web components. There’s a weird lack of trust in web standards, and an undeserved faith in third-party libraries.

Una speculated that CSS needs a rebranding, like we did back in the days of CSS3, a term which didn’t have any technical meaning but helped galvanise excitement.

I’m not so sure. A successful rebranding today becomes a millstone tomorrow. Again, see CSS3.

Una finished with a call-to-action. Let’s work on building the CSS community.

She compared the number of “front-end” conferences dedicated to JavaScript—over 50 listed on one website—to the number of conferences dedicated to CSS. There’s just one. CSS Day.

Heydon wrote:

It occurs to me there are two types of web conferences: know-your-craft conferences and get-ahead conferences. It’s no coincidence there are simultaneously more get-ahead conferences and more JS-framework conferences.

Una encouraged us to organise more gatherings. It doesn’t need to be a conference. It could just be a local meet-up.

I think that’s an excellent suggestion. As Manuel puts it:

My biggest takeaway: The CSS community needs you!

For me, the value of CSS Day was partly in the excellent content being presented, but it was also in the opportunity to gather with like-minded individuals and realise I’m not alone. It’s also too easy to get gaslit by the grift of “modern web development”, which seems to be built on a foundation of user-hostile priorities that don’t make sense to me—over-engineering, intertwingling of concerns, and developer experience über alles. CSS Day was a welcome reminder to fuck that noise.

Five questions

In just a couple of weeks, I’ll be heading to Bristol for Pixel Pioneers. The line-up looks really, really good …with the glaring exception of the opening talk, which I’ll be delivering. But once that’s done, I’m very much looking forward to enjoying the rest of the day’s talks.

There are still tickets available if you fancy joining me.

This will be my second time speaking at this conference. I spoke at the inaugural conference back in 2017 when I gave a talk called Evaluating Technology. This time my talk is called Declarative Design.

A few weeks back, Oliver asked me some questions about my upcoming talk. I figured I’d post my answers here…

Welcome back to Pixel Pioneers! You return with another keynote - how do you manage to stay so ever-enthusiastic about designing for the web?

Well, I’d say my enthusiasm is mixed with frustration. And that’s always been the case. Just as I’ve always found new things that excite me about the World Wide Web, there are just as many things that upset me.

But that’s okay. Both forces can be motivating. When I find myself writing a blog post or preparing a talk, the impetus might be “This is so cool! Check this out!” or it might be “This is so maddening! What’s happening!?” …or perhaps a mix of both.

But to answer your question, the World Wide Web never stays still so there’s always something to get excited about. Equally, the longer the web exists, the more sense it makes to examine the fundamental bedrock—HTML, accessibility, progressive enhancement—and see how they’re just as important as ever. And that’s also something to get excited about!

Without too many spoilers, what can we expect to take away from your talk?

I’m hoping to provide people with a lens that they can use to examine their tools, processes, and approaches to designing for the web. It’s a fairly crude lens—it divides the world into a binary split that I’ve borrowed from the world of programming: imperative and declarative languages. But it’s a surprisingly thought-provoking angle.

Along the way I’ll also be pointing out some of the incredible things that we can do with CSS now. In the past few years there’s been an explosion in capabilities.

But this won’t be a code-heavy presentation. It’s mostly about the ideas. I’ll be referencing some projects by other people that I’m very excited by.

What other web design and development tools, techniques and technologies are you currently most excited about?

Outside of the world of CSS—which is definitely where a lot of the most exciting developments are happening—I’m really interested in the View Transitions API. If it delivers on its promise, it could be a very useful nail in the coffin of unnecessary single page apps. But I’m a little nervous. Right now the implementation only works for single page apps, which makes it an incentive to use that model. I really, really hope that the multi-page version ships soon.

But honestly, I probably get most excited about discovering some aspect of HTML that I wasn’t aware of. Even after all these years the language can still surprise me.

And on the flipside, what bugs you most about the web at the moment?

How much time have you got?

Seriously though, the thing that’s really bugged me for the past decade is the increasing complexity of “modern” frontend development when it isn’t driven by user needs. Yes, I’m talking about JavaScript frameworks like React and the assumption that everything should be a single page app.

Honestly, the mindset became so ubiquitous that I felt like I must be missing something. But no, the situation really has spiralled out of control, much to the detriment of end users.

Luckily we’re starting to see the pendulum swing back. The proponents of trickle-down developer convenience are having to finally admit that it’s bollocks.

I don’t care if the move back to making websites is re-labelled as “isomorphic server-rendered multi-page apps.” As long as we make sensible architectural decisions, that’s all that matters.

What’s next, Jeremy?

Right now I’m curating the line-up for this year’s UX London conference which is the week after Pixel Pioneers. As you know, conference curation is a lot of work, but it’s also very rewarding. I’m really proud of the line-up.

It’s been a while since the last season of the Clearleft podcast. I hope to remedy that soon. It takes a lot of effort to make even one episode, but again, it’s very rewarding.

The intersectionality of web performance

Web performance is an unalloyed good. No one has ever complained that a website is too fast.

So the benefit is pretty obvious. Users like fast websites. But there are other benefits to web performance. And they don’t all get equal airtime.

Business

A lot of good web performance practices come down to the first half of Postel’s Law: be conservative in what you send. Images, fonts, JavaScript …remove what you don’t need and optimise the hell out of what’s left.

That can translate to savings. If you’re paying for the bandwidth every time a hefty file is downloaded, your monthly bill could get pretty big.

So apart from the indirect business benefits of happy users converting to happy customers, there can be a real nuts’n’bolts bottom-line saving to be made by having a snappy website.

Sustainability

This is related to the cost-savings benefit. If you’re shipping less stuff down the wire, and you’re optimising what you do send, then there’s less energy required.

Whether less energy directly translates to a smaller carbon footprint depends on how the energy is being generated. If your servers are running on 100% renewable energy sources, then reducing the output of your responses won’t reduce your carbon footprint.

But there’s an energy cost at the other end too. Think of all the devices making requests to your server. If you’re making those devices work hard—by downloading, parsing, executing lots of JavaScript, for example—then you’re draining battery life. And you can’t guarantee that the battery will be replenished from renewable energy sources.

That’s why sites like the website carbon calculator have so much crossover with web performance:

From data centres to transmission networks to the billions of connected devices that we hold in our hands, it is all consuming electricity, and in turn producing carbon emissions equal to or greater than the global aviation industry. Yikes!

Inclusivity

There comes a point when a slow website isn’t just inconvenient, it’s inaccessible.

I’ve always liked the German phrase for accessible: barrierefrei—free of barriers. With every file you add to a website’s dependencies, you’re adding one more barrier. Eventually the barrier is insurmountable for people with older devices or slower internet connections. If they can no longer access your website, your website is quite literally inaccessible.

Making the case

I’ve noticed that when it comes to making the argument in favour of better web performance, people often default to the business benefits.

I get it. We’re always being told to speak the language of business. The psychology seems pretty straightforward; if you think that the people you’re trying to convince are mostly concerned with the bottom line, use the language of commerce to change their minds.

But that’s always felt reductive to me.

Sure, those people almost certainly do care about the business. Who doesn’t? But they’re also humans. I feel like if you really want to convince them, speak to their hearts. Show them the bigger picture.

Eliel Saarinen said:

Always design a thing by considering it in its next larger context; a chair in a room, a room in a house, a house in an environment, an environment in a city plan.

I think the same could apply to making the case for web performance. Don’t stop at the obvious benefits. Go wider. Show the big-picture implications.

Assumption

While I’m talking about the SVGs on The Session, I thought I’d share something else related to the rendering of the sheet music.

Like I said, I use the brilliant abcjs JavaScript library. It converts ABC notation into sheet music on the fly, which still blows my mind.

If you view source on the rendered SVG, you’ll see that the path and rect elements have been hard-coded with a colour value of #000000. That makes sense. You’d want to display sheet music on a light background, probably white. So it seems like a safe assumption.

Ah, but when it comes to front-end development, assumptions are like little hidden bombs just waiting to go off!

I got an email the other day:

Hi Jeremy,

I have vision problems, so I need to use high-contrast mode (using Windows 11). In high-contrast mode, the sheet-music view is just black!

Doh! All my CSS adapts just fine to high-contrast mode, but those hardcoded hex values in the SVG aren’t going to be affected by high-contrast mode.

Stepping back, the underlying problem was that I didn’t have a full separation of concerns. Most of my styling information was in my CSS, but not all. Those hex values in the SVG should really be encoded in my style sheet.

I couldn’t remove the hardcoded hex values—not without messing around with JavaScript beyond my comprehension—so I made the fix in CSS:

/* Override the hard-coded black so the SVG inherits the page's text colour */
[fill="#000000"] {
  fill: currentColor;
}
[stroke="#000000"] {
  stroke: currentColor;
}

That seemed to do the trick. I wrote back to the person who had emailed me, and they were pleased as punch:

Well done, Thanks!  The staff, dots, etc. all appear as white on a black background.  When I click “Print”, it looks like it still comes out black on a white background, as expected.

I’m very grateful that they brought the issue to my attention. If they hadn’t, that assumption would still be lying in wait, preparing to ambush someone else.

Workaround

Two weeks ago, I wrote:

I woke up today to a very annoying new bug in Firefox. The browser shits the bed in an unpredictable fashion when rounding up single pixel line widths in SVG. That’s quite a problem on The Session where all the sheet music is rendered in SVG. Those thin lines in sheet music are kind of important.

Paul Rosen, who makes abcjs, the JavaScript library that renders sheet music on The Session, managed to get a fix out pretty quickly. But I use an older version of the library and updating it would introduce some side-effects that would take me a while to work around. So that option wasn’t available to me.

In this situation, when the problem is caused by a browser bug, the correct course of action is to file a bug with the browser. That had already been done. Now all I could do was twiddle my thumbs and wait for the next release of the browser, which would hopefully ship with the fix.

But I figured I may as well try to find a temporary workaround in the meantime.

At first, I looked at diving into the internals of the JavaScript—that’s where the instructions are given for drawing the SVGs.

But then I stopped and thought, “If the problem is with the rendering of the SVG, maybe CSS can help.”

I started messing around with SVG-specific CSS properties like stroke, fill, and so on. With dev tools open, I started targeting the paths that acted as bar lines in the sheet music, playing around with widths, opacities, and fills.

It was the debugging equivalent of throwing spaghetti at the wall. Remarkably, it actually worked.

I found a solution with this nonsensical bit of CSS:

stroke: currentColor;
stroke-opacity: 0;

For some reason, rather than making all the barlines disappear, this ensured they were visible.

It’s the worst kind of hacky fix—the kind where you have no idea why it works, but it does.

So I shipped it.

And at pretty much exactly the same time, a new version of Firefox dropped …with the bug fixed.

I can’t deny that there was a certain satisfaction in being able to work around a browser bug. But there’s much more satisfaction in deleting the hacky workaround when it’s no longer needed.

Read-only web apps

The most cartoonish misrepresentation of progressive enhancement is that it means making everything work without JavaScript.

No. Progressive enhancement means making sure your core functionality works without JavaScript.

In my book Resilient Web Design, I quoted Wilto:

Lots of cool features on the Boston Globe don’t work when JS breaks; “reading the news” is not one of them.

That’s an example where the core functionality is readily identifiable. It’s a newspaper. The core functionality is reading the news.

It isn’t always so straightforward though. A lot of services that self-identify as “apps” will claim that even their core functionality requires JavaScript.

Surely I don’t expect Gmail or Google Docs to provide core functionality without JavaScript?

In those particular cases, I actually do. I believe that a textarea in a form would do the job nicely. But I get it. That might take a lot of re-engineering.

So how about this compromise…

Your app should work in a read-only mode without JavaScript.

Without JavaScript I should still be able to read my email in Gmail, even if you don’t let me compose, reply, or organise my messages.

Without JavaScript I should still be able to view a document in Google Docs, even if you don’t let me comment or edit the document.

Even with something as interactive as Figma or Photoshop, I think I should still be able to view a design file without JavaScript.

Making this distinction between read-only mode and read/write mode could be very useful, especially at the start of a project.

Begin by creating the read-only mode that doesn’t require JavaScript. That alone will make for a solid foundation to build upon. Now you’ve built a fallback for any unexpected failures.

Now start adding the read/write functionality. You’re enhancing what’s already there. Progressively.

Heck, you might even find some opportunities to provide some read/write functionality that doesn’t require JavaScript. But if JavaScript is needed, that’s absolutely fine.

So if you’re about to build a web app and you’re pretty sure it requires JavaScript, why not pause and consider whether you can provide a read-only version?

Progressive disclosure with HTML

Robin penned a little love letter to the details element. I agree. It is a joyous piece of declarative power.

That said, don’t go overboard with it. It’s not a drop-in replacement for more complex widgets. But it is a handy encapsulation of straightforward progressive disclosure.

Just last week I added a couple of more details elements to The Session …kind of. There’s a bit of server-side conditional logic involved to determine whether details is the right element.

When you’re looking at a tune, one of the pieces of information you see is how many recordings there are of that tune. Now if there are a lot of recordings, then there’s some additional information about which other tunes this one gets recorded with. That information is extra. Mere details, if you will.

You can see it in action on this tune listing. Thanks to the details element, the extra information is available to those who want it, but by default that information is tucked away—very handy for not clogging up that part of the page.

<details>
<summary>There are 181 recordings of this tune.</summary>
This tune has been recorded together with
<ul>
<li>…</li>
<li>…</li>
<li>…</li>
</ul>
</details>

Likewise, each tune page includes any aliases for the tune (in Irish music, the same tune can have many different titles—and the same title can be attached to many different tunes). If a tune has just a handful of aliases, they’re displayed in situ. But once you start listing out more than twenty names, it gets overwhelming.

The details element rides to the rescue once again.

Compare the tune I mentioned above, which only has a few aliases, to another tune that is known by many names.

Again, the main gist is immediately available to everyone—how many aliases are there? But if you want to go through them all, you can toggle that details element open.

You can effectively think of the summary element as the TL;DR of HTML.

<details>
<summary>There are 31 other names for this tune.</summary>
<p>Also known as…</p>
</details>

There’s another classic use of the details element: frequently asked questions. In the case of The Session, I’ve marked up the house rules and FAQs inside details elements, with the rule or question as the summary.

But there’s one house rule that’s most important (“Be civil”) so that details element gets an additional open attribute.

<details open>
<summary>Be civil</summary>
<p>Contributions should be constructive and polite, not mean-spirited or contributed with the intention of causing trouble.</p>
</details>

Web Audio API update on iOS

I documented a weird bug with web audio on iOS a while back:

On some pages of The Session, as well as the audio player for tunes (using the Web Audio API) there are also embedded YouTube videos (using the video element). Press play on the audio player; no sound. Press play on the YouTube video; you get sound. Now go back to the audio player and suddenly you do get sound!

It’s almost like playing a video or audio element “kicks” the browser into realising it should be playing the sound from the Web Audio API too.

This was happening on iOS devices set to mute, but I was also getting reports of it happening on devices with the sound on. But it’s that annoyingly intermittent kind of bug that’s really hard to reproduce consistently. Sometimes the sound doesn’t play. Sometimes it does.

I found a workaround but it was really hacky. By playing a one-second long silent mp3 file using audio, you could “kick” the sound into behaving. Then you can use the Web Audio API and it would play consistently.
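
Reconstructed from memory, the hack looked something like this sketch (the file path is a placeholder, not the site’s actual code):

const audioContext = new (window.AudioContext || window.webkitAudioContext)();

// Play a silent audio file on the first user interaction to "kick"
// the browser into playing Web Audio API sound reliably
const silence = new Audio('/audio/silence.mp3'); // one second of silence
document.addEventListener('click', () => {
  silence.play().then(() => audioContext.resume());
}, { once: true });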

Well, that’s all changed with the latest release of Mobile Safari. Now what happens is that the Web Audio stuff plays …for one second. And then stops.

I removed the hacky workaround and the Web Audio API started behaving itself again …but your device can’t be set to silent.

The good news is that the Web Audio behaviour seems to be consistent now. It only plays if the device isn’t muted. This restriction doesn’t apply to video and audio elements; they will still play even if your device is set to silent.

This discrepancy between the two different ways of playing audio is kind of odd, but at least now the Web Audio behaviour is predictable.

You can hear the Web Audio API in action by going to any tune on The Session and pressing the “play audio” button.

Culture and style

Ever get the urge to style a good document?

No? Just me, then.

Well, the urge came over me recently so I started styling this single-page site:

A Few Notes On The Culture by Iain M Banks

I’ve followed this document across multiple locations over the years. It started life as a newsgroup post on rec.arts.sf.written in 1994. Ken MacLeod published it there on Iain M Banks’s behalf.

The post complements the epic series of space opera books that Iain M Banks set in the anarcho-utopian society of The Culture. It’s a fascinating piece of world building, as well as an insight into the author’s mind.

I first became aware of it many years later, after a copy had been posted to the web. That URL died, but Adrian Hon kept a copy on his site. Lots of copies keep stuff safe, so after contemplating linkrot, I made a copy on this site too.

But I recently thought that maybe it deserved a bit of art direction, so I rolled up my sleeves and started messing around, designing in the browser and following happy little accidents.

The finished result is still fairly sparse. It’s still entirely text, except for a background image that shows up if your screen is wide enough. That image of a planet originally started as an infra-red snapshot of Jupiter by the James Webb Space Telescope that I worked over until it was unrecognisable.

The text itself is the main focus of the design though. I knew I wanted to play around with a variable font. Mona Sans from GitHub was one of the first ones I tried and I found it instantly suitable. I had a lot of fun playing with different weights and widths.

After a bit of messing around, I realised that the heading styles were reminding me of some later reissues of The Culture novels, so I leant into that, deliberately styling the byline to resemble the treatment of the author’s name on those book covers.

There isn’t all that much CSS. I’ve embedded it in the head of the HTML rather than linking to a separate style sheet, so feel free to view source and poke around in there. You’ll see that I’m making liberal use of custom properties, the clamp function, and logical properties.
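
To give a flavour of the kind of thing in there (with made-up values for illustration; view source for the real styles):

:root {
  --flow-space: clamp(1rem, 0.5rem + 2vw, 2rem);
}

main {
  /* Logical properties instead of left/right */
  padding-inline: var(--flow-space);
  /* Fluid type that scales with the viewport */
  font-size: clamp(1rem, 0.75rem + 1vw, 1.5rem);
}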

Originally I had a light mode and dark mode but I found that the dark mode was much more effective so I ditched the lighter option.

I did make sure to include some judicious styles for print, so if you fancy reading on paper, it should print out nicely.

Oh, and of course it’s a progressive web app that works offline.

I didn’t want to mess with the original document other than making some typographic tweaks to punctuation, but I wanted to break up the single wall of text. I wasn’t about to start using pull quotes on the web so in the end I decided to introduce some headings that weren’t in the original document:

  1. Government
  2. Economics
  3. Technology
  4. Philosophy
  5. Lifestyle
  6. Travel
  7. Habitat
  8. Legal System
  9. Politics
  10. Identity
  11. Nomenclature
  12. Cosmology

If your browser viewport is tall enough, the heading for the current section you’re reading will remain sticky as you scroll. No JavaScript required.
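
All it takes is position: sticky, along these lines (assuming the section headings are h2 elements):

/* Keep the current section's heading pinned while its section scrolls */
h2 {
  position: sticky;
  top: 0;
}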

I’m pretty pleased with how this little project turned out. It was certainly fun to experiment with fluid type and a nice variable font.

I can add this to my little collection of single-page websites I’ve whittled over the years.

Three attributes for better web forms

Forms on the web are an opportunity to make big improvements to the user experience with very little effort. The effort can be as little as sprinkling in a smattering of humble HTML attributes. But the result can be a turbo-charged experience for the user, allowing them to sail through their task.

This is particularly true on mobile devices where people have to fill in forms using a virtual keyboard. Any improvement you can make to their flow is worth investigating. But don’t worry: you don’t need to add a complex JavaScript library or write convoluted code. Well-written HTML will get you very far.

If you’re using the right input type value, you’re most of the way there. Browsers on mobile devices can use this value to infer which version of the virtual keyboard is best. So think beyond the plain text value, and use search, email, url, tel, or number when they’re appropriate.

But you can offer more hints to those browsers. Here are three attributes you can add to input elements. All three are enumerated values, which means they have a constrained vocabulary. You don’t need to have these vocabularies memorised. You can look them up when you need to.

inputmode

The inputmode attribute is the most direct hint you can give about the virtual keyboard you want. Some of the values are redundant if you’re already using an input type of search, email, tel, or url.

But there might be occasions where you want a keyboard optimised for numbers but the input should also accept other characters. In that case you can use an input type of text with an inputmode value of numeric. This also means you don’t get the spinner controls on desktop browsers that you’d normally get with an input type of number. It can be quite useful to suppress the spinner controls for numbers that aren’t meant to be incremented.

If you combine inputmode="numeric" with pattern="[0-9]", you’ll get a numeric keypad with no other characters.
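
For example (using the * quantifier in the pattern so that any number of digits will validate):

<!-- Numeric keypad, no spinner controls, digits only -->
<input type="text" inputmode="numeric" pattern="[0-9]*">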

The list of possible values for inputmode is none, text, numeric, decimal, search, email, tel, and url.

enterkeyhint

Whereas the inputmode attribute provides a hint about which virtual keyboard to show, the enterkeyhint attribute provides an additional hint about one specific key on that virtual keyboard: the enter key.

For search forms, you’ve got an enterkeyhint option of search, and for contact forms, you’ve got send.

The enterkeyhint only changes the labelling of the enter key. On some browsers that label is text. On others it’s an icon. But the attribute by itself doesn’t change the functionality. Even though there are enterkeyhint values of previous and next, by default the enter key will still submit the form. So those two values are less useful on long forms where the user is going from field to field, and more suitable for a series of short forms.

The list of possible values is enter, done, next, previous, go, search, and send.
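
A search form, for instance, might look like this:

<form action="/search" role="search">
  <!-- The enter key gets labelled (or iconified) as "search" -->
  <input type="search" name="q" enterkeyhint="search">
</form>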

autocomplete

The autocomplete attribute doesn’t have anything to do with the virtual keyboard. Instead it provides a hint to the browser about values that could be pre-filled from the user’s browser profile.

Most browsers try to guess when they can do this, but they don’t always get it right, which can be annoying. If you explicitly provide an autocomplete hint, browsers can confidently prefill the appropriate value.

Just think about how much time this can save your users!

There’s a name value you can use to get full names pre-filled. But if you have form fields for different parts of names—which I wouldn’t recommend—you’ve also got:

  • given-name,
  • additional-name,
  • family-name,
  • nickname,
  • honorific-prefix, and
  • honorific-suffix.

You might be tempted to use the nickname field for usernames, but no need; there’s a separate username value.

As with names, there’s a single tel value for telephone numbers, but also an array of sub-values if you’ve split telephone numbers up into separate fields:

  • tel-country-code,
  • tel-national,
  • tel-area-code,
  • tel-local, and
  • tel-extension.

There’s a whole host of address-related values too:

  • street-address,
  • address-line1,
  • address-line2, and
  • address-line3, but also
  • address-level1,
  • address-level2,
  • address-level3, and
  • address-level4.

If you have an international audience, addresses can get very messy if you’re trying to split them into separate parts like this.

There’s also postal-code (that’s a ZIP code for Americans), but again, if you have an international audience, please don’t make this a required field. Not every country has postal codes.

Speaking of countries, you’ve got a country-name value, but also a country value for the country’s ISO code.

Remember, the autocomplete value is specifically for the details of the current user. If someone is filling in their own address, use autocomplete. But if someone has specified that, say, a billing address and a shipping address are different, that shipping address might not be the address associated with that person.

On the subject of billing, if your form accepts credit card details, definitely use autocomplete. The values you’ll probably need are:

  • cc-name for the cardholder,
  • cc-number for the credit card number itself,
  • cc-exp for the expiry date, and
  • cc-csc for the security code.

Again, some of these values can be broken down further if you need them: cc-exp-month and cc-exp-year for the month and year of the expiry date, for example.

The autocomplete attribute is really handy for log-in forms. Definitely use the values of email or username as appropriate.

If you’re using two-factor authentication, be sure to add an autocomplete value of one-time-code to your form field. That way, the browser can offer to prefill a value from a text message. That saves the user a lot of fiddly copying and pasting. Phil Nash has more details on the Twilio blog.
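
The markup is as simple as this:

<!-- The browser can offer to fill this from an incoming text message -->
<input type="text" inputmode="numeric" autocomplete="one-time-code">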

Not every mobile browser offers this functionality, but that’s okay. This is classic progressive enhancement. Adding an autocomplete value won’t do any harm to a browser that doesn’t yet understand the value.

Use an autocomplete value of current-password for password fields in log-in forms. This is especially useful for password managers.

But if a user has logged in and is editing their profile to change their password, use a value of new-password. This will prevent the browser from pre-filling that field with the existing password.

That goes for sign-up forms too: use new-password. With this hint, password managers can offer to automatically generate a secure password.
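
Side by side, the two cases look like this:

<!-- Log-in form: let password managers fill the existing password -->
<input type="password" autocomplete="current-password">

<!-- Sign-up or change-password form: offer to generate a new one -->
<input type="password" autocomplete="new-password">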

There you have it. Three little HTML attributes that can help users interact with your forms. All you have to do is type a few more characters in your input elements, and users automatically get a better experience.

This is a classic example of letting the browser do the hard work for you. As Andy puts it, be the browser’s mentor, not its micromanager:

Give the browser some solid rules and hints, then let it make the right decisions for the people that visit it, based on their device, connection quality and capabilities.

Chain of tools

I shared this link in Slack with my co-workers today:

Cultivating depth and stillness in research by Andy Matuschak.

I wasn’t sure whether it belonged in the #research or the #design channel. While it’s ostensibly about research, I think it applies to design more broadly. Heck, it probably applies to most fields. I should have put it in the Slack channel I created called #iiiiinteresting.

The article is all about that feeling of frustration when things aren’t progressing quickly, even when you know intellectually that not everything should always progress quickly.

The article is filled with advice for battling this feeling, including this observation on curiosity:

Curiosity can also totally change my relationship to setbacks. Say I’ve run an experiment, collected the data, done the analysis, and now I’m writing an essay about what I’ve found. Except, halfway through, I notice that one column of the data really doesn’t support the conclusion I’d drawn. Oops. It’s tempting to treat this development as a frustrating impediment—something to be overcome expediently. Of course, that’s exactly the wrong approach, both emotionally and epistemically. Everything becomes much better when I react from curiosity instead: “Oh, wait, wow! Fascinating! What is happening here? What can this teach me? How might this change what I try next?”

But what really resonated with me was this footnote attached to that paragraph:

I notice that I really struggle to generate curiosity about problems in programming. Maybe it’s because I’ve been doing it so long, but I think it’s because my problems are usually with ephemeral ideas, incidental to what I actually care about. When I’m fighting some godforsaken Javascript build system, I don’t feel even slightly curious to “really” understand those parochial machinations. I know they’re just going to be replaced by some new tool next year.

I feel seen.

I know I’m not alone. I know people who were driven out of front-end development because they felt the unspoken ultimatum was to either become a “full stack” developer or see yourself out.

Remember Chris’s excellent post, The Great Divide? Zach referenced it recently. He wrote:

The question I keep asking though: is the divide borne from a healthy specialization of skills or a symptom of unnecessary tooling complexity?

Mostly I feel sad about the talented people we’ve lost because they felt their front-of-the-front-end work wasn’t valued.

But wait! Can I turn my frown upside down? Can I take Andy Matuschak’s advice and say, “Oh, wait, wow! Fascinating! What is happening here? What can this teach me?”

Here’s one way of squinting at the situation…

There’s an opportunity here. If many people—myself included—feel disheartened and ground down by the amount of time they need to spend dealing with toolchains and build systems, what kind of system would allow us to get on with making websites without having to deal with that stuff?

I’m not proposing that we get rid of these complex toolchains, but I am wondering if there’s a way to make it someone else’s job.

I guess this job is DevOps. In theory, it’s a specialised field. In practice, everyone who adds anything to a codebase ends up doing continual partial DevOps, because they must understand the toolchains and build processes just to change one line of HTML.

I’m not saying “Don’t Make Me Think” when it comes to the tooling. I totally get that some working knowledge is probably required. But the ratio has gotten out of whack. You need a lot of working knowledge of the toolchains and build processes.

In fact, that’s mostly what companies hire for these days. If you’re well versed in HTML, CSS, and vanilla JavaScript, but you’re not up to speed on pipelines and frameworks, you’re going to have a hard time.

That doesn’t seem right. We should change it.

Negativity bias

When I wrote about my hopes and fears for the View Transitions API, a few people latched on to this sentiment:

If the View Transitions API only works for single page apps, it could be the single worst thing to happen to the web in years.

But I also wrote:

If the View Transitions API works across page navigations, it could be the single best thing to happen to the web in years.

I think it’s worth focusing on that.

Part of the problem is that I gave my hopes and fears an equal airing. But they’re not equally likely.

Take the possibility that the View Transitions API only ships for single page apps, but never ships for regular page transitions. The consequences of that would be big—the API would act as an incentive to build single page apps. But the likelihood of that happening is small. In fact, according to Jake, there’s already an implementation for page transitions in the works at Chrome.

Now what if the View Transitions API ships for pages? The consequences would be equally big—the API would act as an incentive to ditch single page apps and build in a more performant, resilient way. Best of all, the chances of that happening are very large indeed (pretty much a certainty now, given Jake’s update).

So I made a comparison between the two consequences, which are equally large, but I didn’t make a corresponding comparison of the likelihoods, which are not equally large. Mea culpa!

I should’ve made it clearer that, although the consequences would be really bad if the View Transitions API only supports single page apps, the actual likelihood of that is pretty slim.

That’s probably my negativity bias showing through. (The reason I have a negativity bias is because I am a human. Like, have you ever noticed that if you get feedback on something and 98% of it is positive, you inevitably fixate on the 2%?)

Anyway, the real takeaway here is that if the View Transitions API ships for pages, then the consequences will be really, really good! It would be another nail in the coffin for monolithic JavaScript frameworks slowing down the web. And best of all, the likelihood of this happening is very high!

So let me amend my closing sentences from my previous post:

If the View Transitions API only works for single page apps—which is very unlikely—it could be the single worst thing to happen to the web in years.

If the View Transitions API works across page navigations—which is very, very likely—it could be the single best thing to happen to the web in years.

The glass is half full and it’s only going to get fuller. Time to start planning for a turbo-charged web now.

If you’ve got a website with full page navigations, start thinking about how you’ll be able to apply the View Transitions API as a progressive enhancement to improve the user experience.
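To give a flavour, here’s a sketch of the kind of opt-in that has been proposed for cross-document transitions. The exact syntax was still in flux at the time of writing, so treat this as illustrative rather than definitive:

  <style>
    /* Opt this document in to cross-document view transitions. */
    @view-transition {
      navigation: auto;
    }

    /* Optionally tweak the default cross-fade. Browsers without
       support ignore all of this, so it degrades gracefully. */
    ::view-transition-old(root),
    ::view-transition-new(root) {
      animation-duration: 0.25s;
    }
  </style>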

If you’ve got a single page app, start thinking about how to ditch a whole bunch of unnecessary dependencies to make a more lightweight foundation of HTML instead of JavaScript, while still getting all those slick single-page-app transitions!

Time for transitions

I am simultaneously very excited and very nervous about the View Transitions API.

You may know it by its former name—Shared Element Transitions. The name change is very recent.

I’ve been saying for years that some kind of API like this would be brilliant:

I honestly think if browsers implemented this, 80% of client-rendered Single Page Apps could be done as regular good ol’-fashioned websites.

Miriam Suzanne describes the theory of View Transitions succinctly:

Shared-element transitions are designed to work with standard web navigation across multiple page loads, as well as page transitions in ‘single-page’ apps (often called SPAs).

This all sounds brilliant. But the devil is in the implementation details. Right now, the API only works for single page apps. This is totally understandable. For purely pragmatic reasons, single page apps are a simple use case to solve for. It’s going to take a lot more work to get this API to work for multi-page apps (or as we used to call them, websites).
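For what it’s worth, the single page app version works by wrapping your DOM update in a call to document.startViewTransition. Here’s a minimal sketch, where updateTheDOM is a hypothetical function that swaps in the new content:

  <script>
    function updateTheDOM() {
      // Hypothetical: replace the page content here.
    }

    if (document.startViewTransition) {
      // The browser snapshots the old state, runs the callback,
      // then animates between the old and new states.
      document.startViewTransition(() => updateTheDOM());
    } else {
      // Progressive enhancement: update without a transition.
      updateTheDOM();
    }
  </script>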

If we get a View Transitions API that works across page navigations, it could potentially turbo-charge the web. It will act as a disincentive to building single page apps—you’d be able to provide swish transitions without sacrificing performance or resilience at the altar of a heavy-handed JavaScript-only architecture.

But if the API only ever works for single page apps (which is the current situation), then it will act as an incentive to make any kind of website into a single page app, regardless of whether it’s actually the appropriate architecture.

That prospect has me very worried indeed.

I’m making my feelings on this known just in case any of the implementors out there are thinking, “Hey, maybe it’s fine that this API only works for single page apps—I’m sure most people would be happy with that.”

If the View Transitions API works across page navigations, it could be the single best thing to happen to the web in years.

If the View Transitions API only works for single page apps, it could be the single worst thing to happen to the web in years.

Update: Jake says:

We’re currently landing code in Chrome for the MPA version.

Very happy to hear that! It’s already in the spec, but it’s good to hear that the implementation isn’t going to lag too much.

Also, read this follow-up.