Journal tags: frontend

Portals and giant carousels

I posted something recently that I think might be categorised as a “shitpost”:

Most single page apps are just giant carousels.

Extreme, yes, but perhaps there’s a nugget of truth to it. And it seemed to resonate:

I’ve never actually seen anybody justify SPA transitions with actual business data. They generally don’t seem to increase sales, conversion, or retention.

For some reason, for SPAs, managers are all of a sudden allowed to make purely emotional arguments: “it feels snappier”

If businesses were run rationally, when somebody asks for an order of magnitude increase in project complexity, the onus would be on them to prove that it proportionally improves business results.

But I’ve never actually seen that happen in a software business.

A single page app architecture makes a lot of sense for interaction-heavy sites with lots of state to maintain, like twitter.com. But I’ve seen plenty of sites built as single page apps even though there’s little to no interactivity or state management. For some people, it’s the default way of building anything on the web, even a brochureware site.

It seems like there’s a consensus that single page apps may have long initial loading times, but then they have quick transitions between “pages” …just like a carousel really. But I don’t know if that consensus is based on reality. Whether you’re loading a page of HTML or loading a chunk of JSON, you’re still making a network request that will take time to resolve.

The argument for loading a chunk of JSON is that you don’t have to make any requests for the associated CSS and JavaScript—they’re already loaded. Whereas if you request a page of HTML, that HTML will also request CSS and JavaScript.

Leaving aside the fact that this is literally what the browser cache takes care of, I’ve seen some circular reasoning around this:

  1. We need to create a single page app because our assets, like our JavaScript dependencies, are so large.
  2. Why are the JavaScript dependencies so large?
  3. We need all that JavaScript to create the single page app functionality.

To be fair, in the past, the experience of going from page to page used to feel a little herky-jerky, even if the response times were quick. You’d get a flash of a white blank page between navigations. But that’s no longer the case. Browsers now perform something called “paint holding”, which eliminates the herky-jerkiness.

So now if your pages are a reasonable size, there’s no practical difference in user experience between full page refreshes and single page app updates. Navigate around The Session if you want to see paint holding in action. Switching to a single page app architecture wouldn’t improve the user experience one jot.

Except…

If I were controlling everything with JavaScript, then I’d also have control over how to transition between the “pages” (or carousel items, if you prefer). There’s currently no way to do that with full page changes.

This is the problem that Jake set out to address in his proposal for navigation transitions a few years back:

Having to reimplement navigation for a simple transition is a bit much, often leading developers to use large frameworks where they could otherwise be avoided. This proposal provides a low-level way to create transitions while maintaining regular browser navigation.

I love this proposal. It focuses on user needs. It also asks why people reach for JavaScript frameworks instead of using what browsers provide. People reach for JavaScript frameworks because browsers don’t yet provide some functionality: components like tabs or accordions; DOM diffing; control over styling complex form elements; navigation transitions. The problems that JavaScript frameworks are solving today should be seen as the R&D departments for web standards of tomorrow. (And conversely, I strongly believe that the aim of any good JavaScript framework should be to make itself redundant.)

I linked to Jake’s excellent proposal in my shitpost saying:

bucketloads of JavaScript wouldn’t be needed if navigation transitions were available in browsers

But then I added—and I almost didn’t—this:

(not portals)

Now you might be asking yourself what Paul said out loud:

Excuse my ignorance but… WTF are portals!?

I replied with a link to the portals proposal and what I thought was an example use case:

Portals are a proposal from Google that would help their AMP use case (it would allow a web page to be pre-rendered, kind of like an iframe).

That was based on my reading of the proposal:

…show another page as an inset, and then activate it to perform a seamless transition to a new state, where the formerly-inset page becomes the top-level document.

It sounded like Google’s top stories carousel. And the proposal goes into a lot of detail around managing cross-origin requests. Again, that strikes me as something that would be more useful for a search engine than a single page app.

But Jake was not happy with my description. I didn’t intend to besmirch portals by mentioning Google AMP in the same sentence, but I can see how the transitive property of ickiness would apply. Because Google AMP is a nasty monopolistic project that harms the web and is an embarrassment to many open web advocates within Google, drawing any kind of comparison to AMP is kind of like Godwin’s Law for web stuff. I know that makes it sound like I’m comparing Google AMP to Hitler, and just to be clear, I’m not (though I have myself been called a fascist by one of the lead engineers on AMP).

Clearly, emotions run high when Google AMP is involved. I regret summoning its demonic presence.

After chatting with Jake some more, I tried to find a better use case to describe portals. Reading the proposal, portals sound a lot like “spicy iframes”. So here’s a different use case that I ran past Jake: say you’re on a website that has an iframe embedded in it—like a YouTube video, for example. With portals, you’d have the ability to transition the iframe to a fully-fledged page smoothly.

But Jake told me that even though the proposal talks a lot about iframes and cross-origin security, portals are conceptually more like using rel="prerender" …but then having scripting control over how the pre-rendered page becomes the current page.

Put like that, portals sound more like Jake’s original navigation transitions proposal. But I have to say, I never would’ve understood that use case just from reading the portals proposal. I get that the proposal is aimed more at implementers than authors, but in its current form, it doesn’t seem to address the use case of single page apps.

Kenji said:

we haven’t seen interest from SPA folks in portals so far.

I’m not surprised! He goes on:

Maybe, they are happy / benefits aren’t clear yet.

From my own reading of the portals proposal, I think the benefits are definitely not clear. It’s almost like the opposite of Jake’s original proposal for navigation transitions. Whereas that was grounded in user needs and real-world examples, the portals proposal seems to have jumped to the intricacies of implementation without covering the user needs.

Don’t get me wrong: if portals somehow end up leading to a solution more like Jake’s navigation transitions proposals, then I’m all for that. That’s the end result I care about. I’d love it if people had a lightweight option for getting the perceived benefits of single page apps without the costly overhead in performance that comes with JavaScripting all the things.

I guess the web I want includes giant carousels.

ARIA in CSS

Sara tweeted something recently that resonated with me:

Also, Pro Tip: Using ARIA attributes as CSS hooks ensures your component will only look (and/or function) properly if said attributes are used in the HTML, which, in turn, ensures that they will always be added (otherwise, the component will obv. be broken)

Yes! I didn’t mention it when I wrote about accessible interactions but this is my preferred way of hooking up CSS and JavaScript interactions. Here’s an old Codepen where you can see it in action:

[aria-hidden='true'] {
  display: none;
}

In order for the functionality to work for everyone—screen reader users or not—I have to make sure that I’m toggling the value of aria-hidden in my JavaScript.
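
Here’s a rough sketch of the kind of toggle I mean—the real Codepen is linked above, and the class names here are just for illustration:

var trigger = document.querySelector('.disclosure-button');
var target = document.querySelector('.disclosure-content');
trigger.addEventListener('click', function () {
  // Flip the ARIA state; the CSS rule above takes care of showing and hiding
  var hidden = (target.getAttribute('aria-hidden') === 'true');
  target.setAttribute('aria-hidden', hidden ? 'false' : 'true');
}, false);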

There’s another advantage to this technique. Generally, ARIA attributes—like aria-hidden—are added by JavaScript at runtime (rather than being hard-coded in the HTML). If something goes wrong with the JavaScript, the aria-hidden value isn’t set to “true”, which means that the CSS never kicks in. So the default state is for content to be displayed. There’s no assumption that the JavaScript has to work in order for the CSS to make sense.

It’s almost as though accessibility and progressive enhancement are connected somehow…

Accessible interactions

Accessibility on the web is easy. Accessibility on the web is also hard.

I think it’s one of those 80/20 situations. The most common accessibility problems turn out to be very low-hanging fruit. Take, for example, Holly Tuke’s list of the 5 most annoying website features she faces as a blind person every single day:

  • Unlabelled links and buttons
  • No image descriptions
  • Poor use of headings
  • Inaccessible web forms
  • Auto-playing audio and video

None of those problems are hard to fix. That’s what I mean when I say that accessibility on the web is easy. As long as you’re providing a logical page structure with sensible headings, associating form fields with labels, and providing alt text for images, you’re at least 80% of the way there (you’re also doing way better than the majority of websites, sadly).

Ah, but that last 20% or so—that’s where things get tricky. Instead of easy-to-follow rules (“Always provide alt text”, “Always label form fields”, “Use sensible heading levels”), you enter an area of uncertainty and doubt where there are no clear answers. Different combinations of screen readers, browsers, and operating systems might yield very different results.

This is the domain of interaction design. Here be dragons. ARIA can help you …but if you overuse its power, it may cause more harm than good.

When I start to feel overwhelmed by this, I find it’s helpful to take a step back. Instead of trying to imagine all the possible permutations of screen readers and browsers, I start with a more straightforward use case: keyboard users. Keyboard users are (usually) a superset of screen reader users.

The pattern that comes up the most is to do with toggling content. I suppose you could categorise this as progressive disclosure, but I’m talking about quite a wide range of patterns:

  • accordions,
  • menus (including mega menu monstrosities),
  • modal dialogs,
  • tabs.

In each case, there’s some kind of “trigger” that toggles the appearance of a “target”—some chunk of content.

The first question I ask myself is whether the trigger should be a button or a link (at the very least you can narrow it down to that shortlist—you can discount divs, spans, and most other elements immediately; use a trigger that’s focusable and interactive by default).

As is so often the case, the answer is “it depends”, but generally you can’t go wrong with a button. It’s an element designed for general-purpose interactivity. It carries the expectation that when it’s activated, something somewhere happens. That’s certainly true in all the examples I’ve listed above.

That said, I think that links can also make sense in certain situations. It’s related to the second question I ask myself: should the target automatically receive focus?

Again, the answer is “it depends”, but here’s the litmus test I give myself: how far away from each other are the trigger and the target?

If the target content is right after the trigger in the DOM, then a button is almost certainly the right element to use for the trigger. And you probably don’t need to automatically focus the target when the trigger is activated: the content already flows nicely.

<button>Trigger Text</button>
<div id="target">
<p>Target content.</p>
</div>

But if the target is far away from the trigger in the DOM, I often find myself using a good old-fashioned hyperlink with a fragment identifier.

<a href="#target">Trigger Text</a>
…
<div id="target">
<p>Target content.</p>
</div>

Let’s say I’ve got a “log in” link in the main navigation. But it doesn’t go to a separate page. The design shows it popping open a modal window. In this case, the markup for the log-in form might be right at the bottom of the page. This is when I think there’s a reasonable argument for using a link. If, for any reason, the JavaScript fails, the link still works. But if the JavaScript executes, then I can hijack that link and show the form in a modal window. I’ll almost certainly want to automatically focus the form when it appears.

The expectation with links (as opposed to buttons) is that you will be taken somewhere. Let’s face it, modal dialogs are like fake web pages so following through on that expectation makes sense in this context.

So I can answer my first two questions:

  • “Should the trigger be a link or button?” and
  • “Should the target be automatically focused?”

…by answering a different question:

  • “How far away from each other are the trigger and the target?”

It’s not a hard and fast rule, but it helps me out when I’m unsure.

At this point I can write some JavaScript to make sure that both keyboard and mouse users can interact with the interactive component. There’ll certainly be an addEventListener(), some tabindex action, and maybe a focus() method.

Now I can start to think about making sure screen reader users aren’t getting left out. At the very least, I can toggle an aria-expanded attribute on the trigger that corresponds to whether the target is being shown or not. I can also toggle an aria-hidden attribute on the target.

When the target isn’t being shown:

  • the trigger has aria-expanded="false",
  • the target has aria-hidden="true".

When the target is shown:

  • the trigger has aria-expanded="true",
  • the target has aria-hidden="false".

There’s also an aria-controls attribute that allows me to explicitly associate the trigger and the target:

<button aria-controls="target">Trigger Text</button>
<div id="target">
<p>Target content.</p>
</div>

But don’t assume that’s going to help you. As Heydon put it, aria-controls is poop. Still, Léonie points out that you can still go ahead and use it. Personally, I find it a useful “hook” to use in my JavaScript so I know which target is controlled by which trigger.
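
In practice, the wiring might look something like this rough sketch (not the actual example code linked below, just the shape of it):

var trigger = document.querySelector('button[aria-controls]');
var target = document.getElementById(trigger.getAttribute('aria-controls'));

// Start with the target hidden and the trigger marked as collapsed
trigger.setAttribute('aria-expanded', 'false');
target.setAttribute('aria-hidden', 'true');

trigger.addEventListener('click', function () {
  var expanded = (trigger.getAttribute('aria-expanded') === 'true');
  trigger.setAttribute('aria-expanded', expanded ? 'false' : 'true');
  target.setAttribute('aria-hidden', expanded ? 'true' : 'false');
}, false);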

Here’s some example code I wrote a while back. And here are some old Codepens I made that use this pattern: one with a button and one with a link. See the difference? In the example with a link, the target automatically receives focus. But in this situation, I’d choose the example with a button because the trigger and target are close to each other in the DOM.

At this point, I’ve probably reached the limits of what can be abstracted into a single trigger/target pattern. Depending on the specific component, there might be much more work to do. If it’s a modal dialog, for example, you’ve got to figure out where to put the focus, how to trap the focus, and where the focus should return to when the modal dialog is closed.

I’ve mostly been talking about websites that have some interactive components. If you’re building a single page app, then pretty much every single interaction needs to be made accessible. Good luck with that. (Pro tip: consider not building a single page app—let the browser do what it has been designed to do.)

Anyway, I hope this little stroll through my thought process is useful. If nothing else, it shows how I attempt to cope with an accessibility landscape that looks daunting and ever-changing. Remember though, the fact that you’re even considering this stuff means you care more than most web developers. And you are not alone. There are smart people out there sharing what they learn. The A11y Project is a great hub for finding resources.

And when it comes to interactive patterns like the trigger/target examples I’ve been talking about, there’s one more question I ask myself: what would Heydon do?

Standards processing

I’ve been like a dog with a bone the way I’ve been pushing for a declarative option for the Web Share API in the shape of button type="share". It’s been an interesting window into the world of web standards.

The story so far…

That’s the situation currently. The general consensus seems to be that it’s probably too soon to be talking about implementation at this stage—the Web Share API itself is still pretty new—but gathering data to inform future work is good.

In planning for the next TPAC meeting (the big web standards gathering), Marcos summarised the situation like this:

Not blocking: but a proposal was made by @adactio to come up with a declarative solution, but at least two implementers have said that now is not the appropriate time to add such a thing to the spec (we need more implementation experience + and also to see how devs use the API) - but it would be great to see a proposal incubated at the WICG.

Now this is where things can get a little confusing because it used to be that if you wanted to incubate a proposal, you’d have to do it on Discourse, which is a steaming pile of crap that requires JavaScript in order to put text on a screen. But Šime pointed out that proposals can now be submitted on Github.

So that’s where I’ve submitted my proposal, linking through to the explainer document.

Like I said, I’m not expecting anything to happen anytime soon, but it would be really good to gather as much data as possible around existing usage of the Web Share API. If you’re using it, or you know anyone who’s using it, please, please, please take a moment to provide a quick description. And if you could help spread the word to get that issue in front of as many devs as possible, I’d be very grateful.

(Many thanks to everyone who’s already contributed to that issue—much appreciated!)

Continuous partial browser support

Vendor prefixes didn’t work. The theory was sound. It was a way of marking CSS and JavaScript features as being experimental. Developers could use the prefixed properties as long as they understood that those features weren’t to be relied upon.

That’s not what happened though. Developers used vendor-prefixed properties as though they were stable. Tutorials were published that basically said “Go ahead and use these vendor-prefixed properties and ship it!” There were even tools that would add the prefixes for you so you didn’t have to type them out for yourself.

Browsers weren’t completely blameless either. Long after features were standardised, they would only be supported in their prefixed form. Apple was and is the worst for this. To this day, if you want to use the clip-path property in your CSS, you’ll need to duplicate your declaration with -webkit-clip-path if you want to support Safari. It’s been like that for seven years and counting.

Like capitalism, vendor prefixes were one of those ideas that sounded great in theory but ended up being unworkable in practice.

Still, developers need some way to get their hands on experimental features. But we don’t want browsers to ship experimental features without some kind of safety mechanism.

The current thinking involves something called origin trials. Here’s the explainer from Microsoft Edge and here’s Google Chrome’s explainer:

  • Developers are able to register for an experimental feature to be enabled on their origin for a fixed period of time measured in months. In exchange, they provide us their email address and agree to give feedback once the experiment ends.
  • Usage of these experiments is constrained to remain below Chrome’s deprecation threshold (< 0.5% of all Chrome page loads) by a system which automatically disables the experiment on all origins if this threshold is exceeded.

I think it works pretty well. If you’re really interested in kicking the tyres on an experimental feature, you can opt in to the origin trial. But it’s very clear that you wouldn’t want to ship it to production.

That said…

You could ship something that’s behind an origin trial, but you’d have to make sure you’re putting safeguards in place. At the very least, you’d need to do feature detection. You certainly couldn’t use an experimental feature for anything mission critical …but you could use it as an enhancement.

And that is a pretty great way to think about all web features, experimental or otherwise. Don’t assume the feature will be supported. Use feature detection (or @supports in the case of CSS). Try to use the feature as an enhancement rather than a dependency.
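
For example—navigator.share and @supports are real, but the enableSharing() function and .gallery class here are just placeholders:

if (navigator.share) {
  // Only enhance the page with a share button if the Web Share API exists
  enableSharing();
}

@supports (display: grid) {
  /* Grid layout as an enhancement on top of a simpler default */
  .gallery {
    display: grid;
  }
}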

If you treat all browser features as though they’re behind an origin trial, then suddenly the landscape of browser support becomes more navigable. Instead of looking at the support table for something on caniuse.com and thinking, “I wish more browsers supported this feature so that I could use it!”, you can instead think “I’m going to use this feature today, but treat it as an experimental feature.”

You can also do it for well-established features like querySelector, addEventListener, and geolocation. Instead of assuming that browser support is universal, it doesn’t hurt to take a more defensive approach. Assume nothing. Acknowledge and embrace unpredictability.

The debacle with vendor prefixes shows what happens if we treat experimental features as though they’re stable. So let’s flip that around. Let’s treat stable features as though they’re experimental. If you cultivate that mindset, your websites will be more robust and resilient.

Saving forms

I added a long-overdue enhancement to The Session recently. Here’s the scenario…

You’re on a web page with a comment form. You type your well-considered thoughts into a textarea field. But then something happens. Maybe you accidentally navigate away from the page or maybe your network connection goes down right when you try to submit the form.

This is a textbook case for storing data locally on the user’s device …at least until it has safely been transmitted to the server. So that’s what I set about doing.

My first decision was choosing how to store the data locally. There are multiple APIs available: sessionStorage, IndexedDB, localStorage. It was clear that sessionStorage wasn’t right for this particular use case: I needed the data to be saved across browser sessions. So it was down to IndexedDB or localStorage. IndexedDB is the more versatile and powerful—because it’s asynchronous—but localStorage is nice and straightforward so I decided on that. I’m not sure if that was the right decision though.

Alright, so I’m going to store the contents of a form in localStorage. It accepts key/value pairs. I’ll make the key the current URL. The value will be the contents of that textarea. I can store other form fields too. Even though localStorage technically only stores one string value per key, that string can be a serialised JSON object, so in reality you can store multiple values with one key (just remember to parse the JSON when you retrieve it).
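
Something along these lines—the field references here are placeholders:

// One key (the page URL), multiple values (serialised as a JSON string)
localStorage.setItem(window.location.href, JSON.stringify({
  comment: commentField.value,
  name: nameField.value
}));

// …and parse the JSON again when retrieving it
var stored = JSON.parse(localStorage.getItem(window.location.href));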

Now I know what I’m going to store (the textarea contents) and how I’m going to store it (localStorage). The next question is when should I do it?

I could play it safe and store the comment whenever the user presses a key within the textarea. But that seems like overkill. It would be more efficient to only save when the user leaves the current page for any reason.

Alright then, I’ll use the unload event. No! Bad Jeremy! If I use that then the browser can’t reliably add the current page to the cache it uses for faster back-forwards navigations. The page life cycle is complicated.

So beforeunload then? Well, maybe. But modern browsers also support a pagehide event that looks like a better option.

In either case, just adding a listener for the event could screw up the caching of the page for back-forwards navigations. I should only listen for the event if I know that I need to store the contents of the textarea. And in order to know if the user has interacted with the textarea, I’m back to listening for key presses again.

But wait a minute! I don’t have to listen for every key press. If the user has typed anything, that’s enough for me. I only need to listen for the first key press in the textarea.

Handily, addEventListener accepts an object of options. One of those options is called “once”. If I set that to true, then the event listener is only fired once.

So I set up a cascade of event listeners. If the user types anything into the textarea, that fires an event listener (just once) that then adds the event listener for when the page is unloaded—and that’s when the textarea contents are put into localStorage.
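
In outline, that cascade looks something like this—a simplified sketch, not the exact code in the gist:

var textarea = document.querySelector('textarea');
var key = window.location.href;

var storeContents = function () {
  localStorage.setItem(key, textarea.value);
};

// The first key press is enough to know the user has started typing…
textarea.addEventListener('keydown', function () {
  // …so only now do we start caring about the page being unloaded
  window.addEventListener('pagehide', storeContents, false);
}, {once: true});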

I’ve abstracted my code into a gist. Here’s what it does:

  1. Cut the mustard. If this browser doesn’t support localStorage, bail out.
  2. Set the localStorage key to be the current URL.
  3. If there’s already an entry for the current URL, update the textarea with the value in localStorage.
  4. Write a function to store the contents of the textarea in localStorage but don’t call the function yet.
  5. The first time that a key is pressed inside the textarea, start listening for the page being unloaded.
  6. When the page is being unloaded, invoke that function that stores the contents of the textarea in localStorage.
  7. When the form is submitted, remove the entry in localStorage for the current URL.

That last step isn’t something I’m doing on The Session. Instead I’m relying on getting something back from the server to indicate that the form was successfully submitted. If you can do something like that, I’d recommend that instead of listening to the form submission event. After all, something could still go wrong between the form being submitted and the data being received by the server.

Still, this bit of code is better than nothing. Remember, it’s intended as an enhancement. You should be able to drop it into any project and improve the user experience a little bit. Ideally, no one will ever notice it’s there—it’s the kind of enhancement that only kicks in when something goes wrong. A little smidgen of resilient web design. A defensive enhancement.

The reason for a share button type

If you’re at all interested in what I wrote about a declarative Web Share API—and its sequel, a polyfill for button type="share"—then you might be interested in an explainer document I’ve put together.

It’s a useful exercise for me to enumerate the reasoning for button type="share" in one place. If you have any feedback, feel free to fork it or create an issue.

The document is based on my initial blog posts and the discussion that followed in this issue on the repo for the Web Share API. In that thread I got some pushback from Marcos. There are three points he makes. I think that two of them lack merit, but the third one is actually spot on.

Here’s the first bit of pushback:

Apart from placing a button in the content, I’m not sure what the proposal offers over what (at least one) browser already provides? For instance, Safari UI already provides a share button by default on every page

But that is addressed in the explainer document for the Web Share API itself:

The browser UI may not always be available, e.g., when a web app has been installed as a standalone/fullscreen app.

That’s exactly what I wanted to address. Browser UI is not always available and as progressive web apps become more popular, authors will need to provide a way for users to share the current URL—something that previously was handled by browsers.

That use-case of sharing the current page leads nicely into the second bit of pushback:

The API is specialized… using it to share the same page is kinda pointless.

But again, the explainer document for the Web Share API directly contradicts this:

Sharing the page’s own URL (a very common case)…

Rather than being a difference of opinion, this is something that could be resolved with data. I’d really like to find out how people are currently using the Web Share API. How much of the current usage falls into the category of “share the current page”? I don’t know the best way to gather this data though. If you have any ideas, let me know. I’ve started an issue where you can share how you’re using the Web Share API. Or if you’re not using the Web Share API, but you know someone who is, please let them know.

Okay, so those first two bits of pushback directly contradict what’s in the explainer document for the Web Share API. The third bit of pushback is more philosophical and, I think, more interesting.

The Web Share API explainer document does a good job of explaining why a declarative solution is desirable:

The link can be placed declaratively on the page, with no need for a JavaScript click event handler.

That’s also my justification for having a declarative alternative: it would be easier for more people to use. I said:

At a fundamental level, declarative technologies have a lower barrier to entry than imperative technologies.

But Marcos wrote:

That’s demonstrably false and a common misconception: See OWL, XForms, SVG, or any XML+namespace spec. Even HTML is poorly understood, but it just happens to have extremely robust error recovery (giving the illusion of it being easy). However, that’s not a function of it being “declarative”.

He’s absolutely right.

It’s not so much that I want a declarative option—I want an option that has robust error recovery. After all, XML is a declarative language but its error handling is as strict as an imperative language like JavaScript: make one syntactical error and nothing works. XML has a brittle error-handling model by design. HTML and CSS have extremely robust error recovery by design. It’s that error-handling model that gives HTML and CSS their robustness.

I’ve been using the word “declarative” when I actually meant “robust in handling errors”.

I guess that when I’ve been talking about “a declarative solution”, I’ve been thinking in terms of the three languages parsed by browsers: HTML, CSS, and JavaScript. Two of those languages are declarative, and those two also happen to have much more forgiving error-handling than the third language. That’s the important part—the error handling—not the fact that they’re declarative.

I’ve been using “declarative” as a shorthand for “either HTML or CSS”, but really I should try to be more precise in my language. The word “declarative” covers a wide range of possible languages, and not all of them lower the barrier to entry. A declarative language with a brittle error-handling model is as daunting as an imperative language.

I should try to use a more descriptive word than “declarative” when I’m describing HTML or CSS. Resilient? Robust?

With that in mind, button type="share" is worth pursuing. Yes, it’s a declarative option for using the Web Share API, but more important, it’s a robust option for using the Web Share API.

I invite you to read the explainer document for a share button type and I welcome your feedback …especially if you’re currently using the Web Share API!

Downloading from Google Fonts

If you’re using web fonts, there are good performance (and privacy) reasons for hosting your own font files. And fortunately, Google Fonts gives you that option. There’s a “Download family” button on every specimen page.

But if you go ahead and download a font family from Google Fonts, you’ll notice something a bit odd. The .zip file only contains .ttf files. You can serve those on the web, but it’s far from the best choice. Woff2 is far leaner in file size.

This means you need to manually convert the downloaded .ttf files into .woff or .woff2 files using something like Font Squirrel’s generator. That’s fine, but I’m curious as to why this step is necessary. Why doesn’t Google Fonts provide .woff or .woff2 files in the downloaded folder? After all, if you choose to use Google Fonts as a third-party hosting service for your fonts, it most definitely serves up the appropriate file formats.

I thought maybe it was something to do with the licensing. Maybe some licenses only allow for unmodified truetype files to be distributed? But I’ve looked at fonts with different licenses—some have Apache 2 licensing, some have Open Font licensing—and they’re all quite permissive and definitely allow for modification.

Maybe the thinking is that, if you’re hosting your own font files, then you know what you’re doing and you should be able to do your own file conversion and subsetting. But I’ve come across more than one website in the wild serving up .ttf files. And who can blame them? They want to host their own font files. They downloaded those files from Google Fonts. Why shouldn’t they assume that they’re good to go?

It’s all a bit strange. If anyone knows why Google Fonts only provides .ttf files for download, please let me know. In a pinch, I will also accept rampant speculation.

Trys also pointed out some weird default behaviour if you do let Google Fonts do the hosting for you. Specifically if it’s a variable font. Let’s say it’s a font with weight as a variable axis. You specify in advance which weights you’ll be using, and then it generates separate font files to serve for each different weight.

Doesn’t that defeat the whole point of using a variable font? I mean, I can see how it could result in smaller file sizes if you’re just using one or two weights, but isn’t half the fun of having a weight axis that you can go crazy with as many weights as you want and it’s all still one font file?

Like I said, it’s all very strange.

Unobtrusive feedback

Ten years ago I gave a talk at An Event Apart all about interaction design. It was called Paranormal Interactivity. You can watch the video, listen to the audio or read the transcript if you like.

I think it holds up pretty well. There’s one interaction pattern in particular that I think has stood the test of time. In the talk, I introduce this pattern as something you can see in action on Huffduffer:

I was thinking about how to tell the user that something’s happened without distracting them from their task, and I thought beyond the web. I thought about places that provide feedback mechanisms on screens, and I thought of video games.

So we all know Super Mario, right? And if you think about when you’re collecting coins in Super Mario, it doesn’t stop the game and pop up an alert dialogue and say, “You have just collected ten points, OK, Cancel”, right? It just does it. It does it in the background, but it does provide you with a feedback mechanism.

The feedback you get in Super Mario is about the number of points you’ve just gained. When you collect an item that gives you more points, the number of points you’ve gained appears where the item was …and then drifts upwards as it disappears. It’s unobtrusive enough that it won’t distract you from the gameplay you’re concentrating on but it gives you the reassurance that, yes, you have just gained points.

I think this is a neat little feedback mechanism that we can borrow for subtle Ajax interactions on the web. These are actions that don’t change much of the content. The user needs to be able to potentially do lots of these actions on a single page without waiting for feedback every time.

On Huffduffer, for example, you might be looking at a listing of people that you can choose to follow or unfollow. The mechanism for doing that is a button per person. You might potentially be clicking lots of those buttons in quick succession. You want to know that each action has taken effect but you don’t want to be interrupted from your following/unfollowing spree.

You get some feedback in any case: the button changes. Maybe the text updates from “follow” to “unfollow” accompanied by a change in colour (this is what you’ll see on Twitter). The Super Mario style feedback is in addition to that, rather than instead of.

I’ve made a Codepen so you can see a reduced test case of the Super Mario feedback in action.

See the Pen Unobtrusive feedback by Jeremy Keith (@adactio) on CodePen.

Here’s the code available as a gist.

It’s a function that takes two arguments: the element that the feedback originates from (pass in a DOM node reference for this), and the contents of the feedback (this can be a string of text or it can be HTML …or SVG). When you call the function with those two arguments, this is what happens:

  1. The JavaScript generates a span element and puts the feedback contents inside it.
  2. Then it positions that element right over the element that the feedback originates from.
  3. Then there’s a CSS transform. The feedback gets a translateY applied so it drifts upward. At the same time it gets its opacity reduced from 1 to 0 so it’s fading away.
  4. Finally there’s a transitionend event that fires when the animation is over. Once that event fires, the generated span is destroyed.
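
If you don’t fancy clicking through, here’s roughly how a function like that could be put together—a simplified sketch rather than the actual code in the gist (the name showFeedback is made up):

var showFeedback = function (origin, contents) {
  var feedback = document.createElement('span');
  feedback.innerHTML = contents;
  // Position the feedback right over the element it originates from
  var rect = origin.getBoundingClientRect();
  feedback.style.position = 'absolute';
  feedback.style.left = (rect.left + window.scrollX) + 'px';
  feedback.style.top = (rect.top + window.scrollY) + 'px';
  feedback.style.transition = 'transform 1s ease-out, opacity 1s ease-out';
  document.body.appendChild(feedback);
  // Force a reflow so the starting styles apply before the transition begins
  void feedback.offsetWidth;
  feedback.style.transform = 'translateY(-2em)';
  feedback.style.opacity = '0';
  // Tidy up the generated element once the animation is over
  feedback.addEventListener('transitionend', function () {
    feedback.remove();
  }, false);
};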

When I first used this pattern on Huffduffer, I’m pretty sure I was using jQuery. A few years later I rewrote it in vanilla JavaScript. That was four years ago so I wonder if the code could be improved. Have a go if you fancy it.

Still, even if the code could benefit from an update, I’m pleased that the underlying pattern still holds true. I used it recently on The Session and it’s working a treat for a new Ajax interaction there (bookmarking or unbookmarking an item).

If you end up using this unobtrusive feedback pattern anyway, please let me know—I’d love to see more examples of it in the wild.

Performance and people

I was helping a client with a bit of a performance audit this week. I really, really enjoy this work. It’s such a nice opportunity to get my hands in the soil of a website, so to speak, and suggest changes that will have a measurable effect on the user’s experience.

Not only is web performance a user experience issue, it may well be the user experience issue. Page speed has a demonstrable, direct effect on user experience (and revenue and customer satisfaction and whatever other metrics you’re using).

It struck me that there’s a continuum of performance challenges. On one end of the continuum, you’ve got technical issues. These can be solved with technical solutions. On the other end of the continuum, you’ve got human issues. These can be solved with discussions, agreement, empathy, and conversations (often dreaded or awkward).

I think that, as developers, we tend to gravitate towards the technical issues. That’s our safe space. But I suspect that bigger gains can be reaped by tackling the uncomfortable human issues.

This week, for example, I uncovered three performance issues. One was definitely technical. One was definitely human. One was halfway between.

The technical issue was with web fonts. It’s a lot of fun to dive into this aspect of web performance because quite often there’s some low-hanging fruit: a relatively simple technical fix that will boost the performance (or perceived performance) of a website. That might be through resource hints (using link rel="preload" in the HTML) or adjusting the font loading (using font-display in the CSS) or even nerdier stuff like subsetting.

In this case, the issue was with the file format of the font itself. By switching to woff2, there were significant file size savings. And the great thing is that @font-face rules allow you to specify multiple file formats so you can still support older browsers that can’t handle woff2. A win all ‘round!
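
For example, something like this (the family name and file paths are placeholders) lets browsers that understand woff2 take the smaller file while older ones fall back to woff:

@font-face {
  font-family: "Example Sans";
  src: url("/fonts/example-sans.woff2") format("woff2"),
       url("/fonts/example-sans.woff") format("woff");
  font-display: swap;
}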

The performance issue that was right in the middle of the technical/human continuum was with images. At first glance it looked like a similar issue to the fonts. Some images were being served in the wrong formats. When I say “wrong”, I guess I mean inappropriate. A photographic image, for example, is probably going to be best served as a JPG rather than a PNG.

But unlike the fonts, the images weren’t in the direct control of the developers. These images were coming from a Content Management System. And while there’s a certain amount of processing you can do on the server, a human still makes the decision about what file format they’re uploading.

I’ve seen this happen at Clearleft. We launched an event site with lean performant code, but then someone uploaded an image that’s megabytes in size. The solution in that case wasn’t technical. We realised there was a knowledge gap around image file formats—which, let’s face it, is kind of a techy topic that most normal people shouldn’t be expected to know.

But it was extremely gratifying to see that people were genuinely interested in knowing a bit more about choosing the right format for the right image. I was able to provide a few rules of thumb and point to free software for converting images. It empowered those people to feel more confident using the Content Management System.

It was a similar situation with the client site I was looking at this week. Nobody is uploading oversized images in order to deliberately make the site slower. They probably don’t realise the difference that image formats can make. By having a discussion and giving them some pointers, they’ll have more knowledge and the site will be faster. Another win all ‘round!

At the other end of continuum was an issue that wasn’t technical. From a technical point of view, there was just one teeny weeny little script. But that little script is Google Tag Manager which then calls many, many other scripts that are not so teeny weeny. Third party scripts …the bane of web performance!

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

Remember when I did a countdown of the top four web performance challenges? At the number one spot is other people’s JavaScript.

Now one technical solution would be to remove the Google Tag Manager script. But that’s probably not very practical—you’ll probably just piss off some other department. That said, if you can’t find out which department was responsible for adding the Google Tag Manager script in the first place, it might well be an option to remove it and then wait and see who complains. If no one notices it’s gone, job done!

More realistically, there’s someone who’s added that Google Tag Manager script for their own valid reasons. You’ll need to talk to them and understand their needs.

Again, as with images uploaded in a Content Management System, they may not be aware of the performance problems caused by third-party scripts. You could try throwing numbers at them, but I think you get better results by telling the story of performance.

Use tools like Request Map Generator to help them visualise the impact that third-party scripts are having. Talk to them. More importantly, listen to them. Find out why those scripts are being requested. What are the outcomes they’re working towards? Can you offer an alternative way of providing the data they need?

I think many of us developers are intimidated or apprehensive about approaching people to have those conversations. But it’s necessary. And in its own way, it can be as rewarding as tinkering with code. If the end result is a faster website, then the work is definitely worth doing—whether it’s technical work or people work.

Personally, I just really enjoy working on anything that will end up improving a website’s performance, and by extension, the user experience. If you fancy working with me on your site, you should get in touch with Clearleft.

A polyfill for button type="share"

After writing about a declarative Web Share API here yesterday I thought I’d better share the idea (see what I did there?).

I opened an issue on the Github repo for the spec.

(I hope that’s the right place for this proposal. I know that in the past ideas were kicked around on the Discourse site for the Web Platform Incubator Community Group but I can’t stand Discourse. It literally requires JavaScript to render anything to the screen even though the entire content is text. If it turns out that that is the place I should’ve posted, I guess I’ll hold my nose and do it using the most over-engineered reinvention of the browser I’ve ever seen. But I believe that the plan is for WICG to migrate proposals to Github anyway.)

I also realised that, as the JavaScript Web Share API already exists, I can use it to polyfill my suggestion for:

<button type="share">

The polyfill also demonstrates how feature detection could work. Here’s the code.

This polyfill takes an Inception approach to feature detection. There are three nested levels:

  1. This browser supports button type="share". Great! Don’t do anything. Otherwise proceed to level two.
  2. This browser supports the JavaScript Web Share API. Use that API to share the current page URL and title. Otherwise proceed to level three.
  3. Use a mailto: link to prefill an email with the page title as the subject and the URL in the body. Ya basic!

The idea is that, as long as you include the 20 lines of polyfill code, you could start using button type="share" in your pages today.
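
If you’re curious, the shape of the polyfill is roughly this—a sketch of the three levels rather than the actual code linked above:

// Level one: does this browser understand button type="share"?
var testButton = document.createElement('button');
testButton.setAttribute('type', 'share');
if (testButton.type !== 'share') {
  document.querySelectorAll('button[type="share"]').forEach(function (button) {
    button.addEventListener('click', function (event) {
      // Stop the button behaving like a submit button inside a form
      event.preventDefault();
      if (navigator.share) {
        // Level two: the JavaScript Web Share API
        navigator.share({
          title: document.title,
          url: window.location.href
        }).catch(function () {
          // The user dismissed the share sheet—nothing to do
        });
      } else {
        // Level three: prefill an email. Ya basic!
        window.location.href = 'mailto:?subject=' +
          encodeURIComponent(document.title) +
          '&body=' + encodeURIComponent(window.location.href);
      }
    }, false);
  });
}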

I’ve made a test page on Codepen. I’m just using plain text in the button but you could use a nice image or SVG or combination. You can use the Codepen test page to observe two of the three possible behaviours browsers could exhibit:

  1. A browser supports button type="share". Currently that’s none because I literally made this shit up yesterday.
  2. A browser supports the JavaScript Web Share API. This is Safari on Mac, Edge on Windows, Safari on iOS, and Chrome, Samsung Internet, and Firefox on Android.
  3. A browser supports neither button type="share" nor the existing JavaScript Web Share API. This is Firefox and Chrome on desktop (and Edge if you’re on a Mac).

See the Pen Polyfill for button type="share" by Jeremy Keith (@adactio) on CodePen.

The polyfill doesn’t support Internet Explorer 11 or lower because it uses the DOM closest() method. Feel free to fork and rewrite if you need to support old IE.

A declarative Web Share API

I’ve written about the Web Share API before. It’s a handy little bit of JavaScript that—if supported—brings up a system-level way of sharing a page. Seeing as it probably won’t be long before we won’t be able to see full URLs in browsers anymore, it’s going to fall on us as site owners to provide this kind of fundamental access.

Right now the Web Share API exists entirely in JavaScript. There are quite a few browser APIs like that, and it always feels like a bit of a shame to me. Ideally there should be a JavaScript API and a declarative option, even if the declarative option isn’t as powerful.

Take form validation. To cover the most common use cases, you probably only need to use declarative markup like input type="email" or the required attribute. But if your use case gets a bit more complicated, you can reach for the Constraint Validation API in JavaScript.
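
For instance—the input’s name and the custom message here are just examples:

<label for="email">Email</label>
<input type="email" id="email" name="email" required>

// Only needed for the more complicated cases
var field = document.getElementById('email');
field.addEventListener('input', function () {
  // Constraint Validation API: layer a custom message on the built-in check
  if (field.validity.typeMismatch) {
    field.setCustomValidity('Please enter a valid email address.');
  } else {
    field.setCustomValidity('');
  }
}, false);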

I like that pattern. I wish it were an option for JavaScript-only APIs. Take the Geolocation API, for example. Right now it’s only available through JavaScript. But what if there were an input type="geolocation"? It wouldn’t cover all use cases, but it wouldn’t have to. For the simple case of getting someone’s location (like getting someone’s email or telephone number), it would do. For anything more complex than that, that’s what the JavaScript API is for.

I was reminded of this recently when Ada Rose Cannon tweeted:

It really feels like there should be a semantic version of the share API, like a mailto: link

I think she’s absolutely right. The Web Share API has one primary use case: let the user share the current page. If that use case could be met in a declarative way, then it would have a lower barrier to entry. And for anyone who needs to do something more complicated, that’s what the JavaScript API is for.

But Pete LePage asked:

How would you feature detect for it?

Good question. With the JavaScript API you can do feature detection—if the API isn’t supported you can either bail or provide your own implementation.

There are a few different ways of extending HTML that allow you to provide a fallback for non-supporting browsers.

You could mint a new element with a content model that says “Browsers, if you do support this element, ignore everything inside the opening and closing tags.” That’s the way that the canvas element works. It’s the same for audio and video—they ignore everything inside them that isn’t a source element. So developers can provide a fallback within the opening and closing tags.

But forging a new element would be overkill for something like the Web Share API (or Geolocation). There are more subtle ways of extending HTML that I’ve already alluded to.

Making a new element is a lot of work. Making a new attribute could also be a lot of work. But making a new attribute value might hit the sweet spot. That’s why I suggested something like input type="geolocation" for the declarative version of the Geolocation API. There’s prior art here; this is how we got input types for email, url, tel, color, date, etc. The key piece of behaviour is that non-supporting browsers will treat any value they don’t understand as “text”.

I don’t think there should be input type="share". The action of sharing isn’t an input. But I do think we could find an existing HTML element with an attribute that currently accepts a list of possible values. Adding one more value to that list feels like an inexpensive move.

Here’s what I suggested:

<button type="share" value="title,text">

For non-supporting browsers, it’s a regular button and needs polyfilling, no different to the situation with the JavaScript API. But if supported, no JS needed?

The type attribute of the button element currently accepts three possible values: “submit”, “reset”, or “button”. If you give it any other value, it will behave as though you gave it a type of “submit” or “button” (depending on whether it’s in a form or not) …just like an unknown type value on an input element will behave like “text”.

If a browser supports button type="share", then when the user clicks on it, the browser can go “Right, I’m handing over to the operating system now.”

There’s still the question of how to pass data to the operating system on what gets shared. Currently the JavaScript API allows you to share any combination of URL, text, and title.

Initially I was thinking that the value attribute could be used to store this data in some kind of key/value pairing, but the more I think about it, the more I think that this aspect should remain the exclusive domain of the JavaScript API. The declarative version could grab the current URL and the value of the page’s title element and pass those along to the operating system. If you need anything more complex than that, use the JavaScript API.

So what I’m proposing is:

<button type="share">

That’s it.

But how would you test for browser support? The same way as you can currently test for supported input types. Make use of the fact that an element’s attribute value and an element’s property value (which 99% of the time are the same) will be different if the attribute value isn’t supported:

var testButton = document.createElement("button");
testButton.setAttribute("type","share");
if (testButton.type != "share") {
// polyfill
}

So that’s my modest proposal. Extend the list of possible values for the type attribute on the button element to include “share” (or something like that). In supporting browsers, it triggers a very bare-bones handover to the OS (the current URL and the current page title). In non-supporting browsers, it behaves like a button currently behaves.

Submitting a form with datalist

I’m a big fan of HTML5’s datalist element and its elegant design. It’s a way to progressively enhance any input element into a combobox.

You use the list attribute on the input element to point to the ID of the associated datalist element.

<label for="homeworld">Your home planet</label>
<input type="text" name="homeworld" id="homeworld" list="planets">
<datalist id="planets">
 <option value="Mercury">
 <option value="Venus">
 <option value="Earth">
 <option value="Mars">
 <option value="Jupiter">
 <option value="Saturn">
 <option value="Uranus">
 <option value="Neptune">
</datalist>

It even works on input type="color", which is pretty cool!

The most common use case is as an autocomplete widget. That’s how I’m using it over on The Session, where the datalist is updated via Ajax every time the input is updated.

But let’s stick with a simple example, like the list of planets above. Suppose the user types “jup” …the datalist will show “Jupiter” as an option. The user can click on that option to automatically complete their input.

It would be handy if you could automatically submit the form when the user chooses a datalist option like this.

Well, tough luck.

The datalist element emits no events. There’s no way of telling if it has been clicked. This is something I’ve been trying to find a workaround for.

I got my hopes up when I read Amber’s excellent article about document.activeElement. But no, the focus stays on the input when the user clicks on an option in a datalist.

So if I can’t detect whether a datalist has been used, the best I can do is try to infer it. I know it’s not exactly the same thing, and it won’t be as reliable as true detection, but here’s my logic:

  • Keep track of the character count in the input element.
  • Every time the input is updated in any way, check the current character count against the last character count.
  • If the difference is greater than one, something interesting happened! Maybe the user pasted a value in …or maybe they used the datalist.
  • Loop through each of the options in the datalist.
  • If there’s an exact match with the current value of the input element, chances are the user chose that option from the datalist.
  • So submit the form!

Here’s how that translates into DOM scripting code:

document.querySelectorAll('input[list]').forEach( function (formfield) {
  var datalist = document.getElementById(formfield.getAttribute('list'));
  // Remember how many characters were in the field last time we looked
  var lastlength = formfield.value.length;
  var checkInputValue = function (inputValue) {
    // A jump of more than one character suggests a paste …or a datalist selection
    if (inputValue.length - lastlength > 1) {
      datalist.querySelectorAll('option').forEach( function (item) {
        // An exact match with an option means the datalist was (probably) used
        if (item.value === inputValue) {
          formfield.form.submit();
        }
      });
    }
    lastlength = inputValue.length;
  };
  formfield.addEventListener('input', function () {
    checkInputValue(this.value);
  }, false);
});

I’ve made a gist with some added feature detection and mustard-cutting at the start. You should be able to drop it into just about any page that’s using datalist. It works even if the options in the datalist are dynamically updated, like the example on The Session.

It’s not foolproof. The inference relies on the difference between what was previously typed and what’s autocompleted to be more than one character. So in the planets example, if someone has typed “Jupite” and then they choose “Jupiter” from the datalist, the form won’t automatically submit.

But still, I reckon it covers most common use cases. And like the datalist element itself, you can consider this functionality a progressive enhancement.

Mind the gap

In May 2012, Brian LeRoux, the creator of PhoneGap, wrote a post setting out the beliefs, goals and philosophy of the project.

The beliefs are the assumptions that inform everything else. Brian stated two core tenets:

  1. The web solved cross platform.
  2. All technology deprecates with time.

That second belief then informed one of the goals of the PhoneGap project:

The ultimate purpose of PhoneGap is to cease to exist.

Last week, PhoneGap succeeded in its goal:

Since the project’s beginning in 2008, the market has evolved and Progressive Web Apps (PWAs) now bring the power of native apps to web applications.

Today, we are announcing the end of development for PhoneGap.

I think Brian was spot-on with his belief that all technology deprecates with time. I also think it was very astute of him to tie the goals of PhoneGap to that belief. Heck, it’s even in the project name: PhoneGap!

I recently wrote this about Sass and clamp:

I’ve said it before and I’ll say it again, the goal of any good library should be to get so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.

jQuery is the perfect example of this. jQuery is no longer needed because cross-browser DOM Scripting is now much easier …thanks to jQuery.

Successful libraries and frameworks point the way. They show what developers are yearning for, and that’s where web standards efforts can then focus. When a library or framework is no longer needed, that’s not something to mourn; it’s something to celebrate.

That’s particularly true if the library of code needs to be run by a web browser. The user pays a tax with that extra download so that the developer gets the benefit of the library. When web browsers no longer need the library in order to provide the same functionality, it’s a win for users.

In fact, if you’re providing a front-end library or framework, I believe you should be actively working towards making it obsolete. Think of your project as a polyfill. If it’s solving a genuine need, then you should be looking forward to the day when your code is made redundant by web browsers.

One more thing…

I think it was great that Brian documented PhoneGap’s beliefs, goals and philosophy. This is exactly why design principles can be so useful—to clearly set out the priorities of a project, so that there’s no misunderstanding or mixed signals.

If you’re working on a project, take the time to ask yourself what assumptions and beliefs are underpinning the work. Then figure out how those beliefs influence what you prioritise.

Ultimately, the code you produce is the output generated by your priorities. And your priorities are driven by your purpose.

You can make those priorities tangible in the form of design principles.

You can make those design principles visible by publishing them.

Netlify redirects and downloads

Making the Clearleft podcast is a lot of fun. Making the website for the Clearleft podcast was also fun.

Design wise, it’s a riff on the main Clearleft site in terms of typography and general layout. On the development side, it was an opportunity to try out an exciting tech stack. The workflow goes something like this:

  • Open a text editor and type out HTML and CSS.

Comparing this to other workflows I’ve used in the past, this is definitely the most productive way of working. Some stats:

  • Time spent setting up build tools: 00:00
  • Time spent wrangling the pipeline to do exactly what you want: 00:00
  • Time spent trying to get the damn build tools to work again when you return to the project after leaving it alone for more than a few months: 00:00:00

I have some files. Some images, three font files, a few pages of HTML, one RSS feed, one style sheet, and one minimal service worker script. I don’t need a web server to do anything more than serve up those files. No need for any dynamic server-side processing.

I guess this is JAMstack. Though, given that the J stands for JavaScript, the A stands for APIs, and I’m not using either, technically it’s Mstack.

Netlify suits my hosting needs nicely. It also provides the added benefit that, should I need to update my CSS, I don’t need to add a query string or anything to the link elements in the HTML that point to the style sheet: Netlify does cache invalidation for you!

The mp3 files of the actual podcast episodes are stored on S3. I link to those mp3 files from enclosure elements in the RSS feed, which is what makes it a podcast. I also point to the mp3 files from audio elements on the individual episode pages—just above the transcript of each episode. Here’s the page for the most recent episode.

I also want people to be able to download the mp3 file directly if they want (or if they want to huffduff an episode). So I provide a link to the mp3 file with a good ol’-fashioned a element with an href attribute.

I throw in one more attribute on that link. The download attribute tells the browser that the URL in the href attribute should be downloaded instead of visited. If you give a value for the download attribute, it will override the file name:

<a href="/files/ugly-file-name.xyz" download="nice-file-name.xyz">download</a>

Or you can use it as a Boolean attribute without any value if you’re happy with the file name:

<a href="/files/nice-file-name.xyz" download>download</a>

There’s one catch though. The download attribute only works for files on the same origin. That’s an issue for me. My site is podcast.clearleft.com but my audio files are hosted on clearleft-audio.s3.amazonaws.com—the download attribute will be ignored and the mp3 files will play in the browser instead of downloading.

Trys pointed me to the solution. It turns out that Netlify can do some server-side processing. It can do redirects.

I added a file called _redirects to the root of my project. It contains one line:

/download/*  https://clearleft-audio.s3.amazonaws.com/podcast/:splat  200

That says that any URLs beginning with /download/ should redirect to clearleft-audio.s3.amazonaws.com/podcast/. Everything after the closing slash is captured with that wild card asterisk. That’s then passed along to the redirect URL as :splat. That’s a new one on me. I hadn’t come across that terminology, but as someone who can never remember the syntax of regular expressions, it works for me.

Oh, and the 200 at the end is the status code: okay.

Now I can use this /download/ path in my link:

<a href="/download/season01episode06.mp3" download>Download mp3</a>

Because this URL is on the same origin, the download attribute works just fine.

Influence

Hidde gave a great talk recently called On the origin of cascades (by means of natural selectors):

It’s been 25 years since the first people proposed a language to style the web. Since the late nineties, CSS lived through years of platform evolution.

It’s a lovely history lesson that reminded me of that great post by Zach Bloom a while back called The Languages Which Almost Became CSS.

The TL;DR timeline of CSS goes something like this:

Håkon and Bert joined forces and that’s what led to the Cascading Style Sheet language we use today.

Hidde looks at how the concept of the cascade evolved from those early days. But there’s another idea in Håkon’s proposal that fascinates me:

While the author (or publisher) often wants to give the documents a distinct look and feel, the user will set preferences to make all documents appear more similar. Designing a style sheet notation that fill both groups’ needs is a challenge.

The proposed solution is referred to as “influence”.

The user supplies the initial sheet which may request total control of the presentation, but — more likely — hands most of the influence over to the style sheets referenced in the incoming document.

So an author could try demanding that their lovely styles are to be implemented without question by specifying an influence of 100%. The proposed syntax looked like this:

h1.font.size = 24pt 100%

More reasonably, the author could specify, say, 40% influence:

h2.font.size = 20pt 40%

Here, the requested influence is reduced to 40%. If a style sheet later in the cascade also requests influence over h2.font.size, up to 60% can be granted. When the document is rendered, a weighted average of the two requests is calculated, and the final font size is determined.

Okay, that sounds pretty convoluted but then again, so is specificity.

This idea of influence in CSS reminds me of Cap’s post about The Sliding Scale of Giving a Fuck:

Hold on a second. I’m like a two-out-of-ten on this. How strongly do you feel?

I’m probably a six-out-of-ten, I replied after a couple moments of consideration.

Cool, then let’s do it your way.

In the end, the concept of influence in CSS died out, but user style sheets survived …for a while. Now they too are as dead as a dodo. Most people today aren’t aware that browsers used to provide a mechanism for applying your own visual preferences for browsing the web (kind of like Neopets or MySpace but for literally every single web page …just think of how empowering that was!).

Even if you don’t mourn the death of user style sheets—you can dismiss them as a power-user feature—I think it’s such a shame that the concept of shared influence has fallen by the wayside. Web design today is dictatorial. Designers and developers issue their ultimata in the form of CSS, even though technically every line of CSS you write is a suggestion to a web browser—not a demand.

I wish that web design were more of a two-way street, more of a conversation between designer and end user.

There are occasional glimpses of this mindset. Like I said when I added a dark mode to my website:

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.

Hey now

Progressive enhancement is at the heart of everything I do on the web. It’s the bedrock of my speaking and writing too. Whether I’m writing about JavaScript, Ajax, HTML, or service workers, it’s always through the lens of progressive enhancement. Sometimes I explicitly bang the drum, like with Resilient Web Design. Other times I don’t mention it by name at all, and instead talk only about its benefits.

I sometimes get asked to name some examples of sites that still offer their core functionality even when JavaScript fails. I usually mention Amazon.com, although that has other issues. But quite often I find that a lot of the examples I might mention are dismissed as not being “web apps” (whatever that means).

The pushback I get usually takes the form of “Well, that approach is fine for websites, but it wouldn’t work for something like Gmail.”

It’s always Gmail. Which is odd. Because if you really wanted to flummox me with a product or service that defies progressive enhancement, I’d have a hard time with something like, say, a game (although it would be pretty cool to build a text adventure that’s progressively enhanced into a first-person shooter). But an email client? That would work.

Identify core functionality.

Read emails. Write emails.

Make that functionality available using the simplest possible technology.

HTML for showing a list of emails, HTML for displaying the contents of the emails, HTML for the form you write the response in.

Enhance!

Now add all the enhancements that improve the experience—keyboard shortcuts; Ajax instead of full-page refreshes; local storage, all that stuff.

Can you build something that works just like Gmail without using any JavaScript? No. But that’s not what progressive enhancement is about. It’s about providing the core functionality (reading and writing emails) with the simplest possible technology (HTML) and then enhancing using more powerful technologies (like JavaScript).

Progressive enhancement isn’t about making a choice between using simpler more robust technologies or using more advanced features; it’s about using simpler more robust technologies and then using more advanced features. Have your cake and eat it.

Fortunately I no longer need to run this thought experiment to imagine what it would be like if something like Gmail were built with a progressive enhancement approach. That’s what HEY is.

Sam Stephenson describes the approach they took:

HEY’s UI is 100% HTML over the wire. We render plain-old HTML pages on the server and send them to your browser encoded as text/html. No JSON APIs, no GraphQL, no React—just form submissions and links.

If you think that sounds like the web of 25 years ago, you’re right! Except the HEY front-end stack progressively enhances the “classic web” to work like the “2020 web,” with all the fidelity you’d expect from a well-built SPA.

See? It’s not either resilient or modern—it’s resilient and modern. Have your cake and eat it.

And yet this supremely sensible approach is not considered “modern” web development:

The architecture astronauts who, for the past decade, have been selling us on the necessity of React, Redux, and megabytes of JS, cannot comprehend the possibility of building an email app in 2020 with server-rendered HTML.

HEY isn’t perfect by any means—they’ve got a lot of work to do on their accessibility. But it’s good to have a nice short answer to the question “But what about something like Gmail?”

It reminds me of responsive web design:

When Ethan Marcotte demonstrated the power of responsive design, it was met with resistance. “Sure, a responsive design might work for a simple personal site but there’s no way it could scale to a large complex project.”

Then the Boston Globe launched its responsive site. Microsoft made their homepage responsive. The floodgates opened again.

It’s a similar story today. “Sure, progressive enhancement might work for a simple personal site, but there’s no way it could scale to a large complex project.”

The floodgates are ready to open. We just need you to create the poster child for resilient web design.

It looks like HEY might be that poster child.

I have to wonder if it’s coincidence or connection that this is a service that’s also tackling ethical issues like tracking? Their focus is very much on people above technology. They’ve taken a human-centric approach to their product and a human-centric approach to web development …because ultimately, that’s what progressive enhancement is.

Accessibility

There’s a new project from Igalia called Open Prioritization:

An experiment in crowd-funding prioritization of new feature implementations for web browsers.

There is some precedent for this. There was a crowd-funding campaign for Yoav Weiss to implement responsive images in Blink a while back. The difference with the Open Prioritization initiative is that it’s also a kind of marketplace for which web standards will get the funding.

Examples include implementing the CSS lab() colour function in Firefox or implementing the :not() pseudo-class in Chrome. There are also some accessibility features like the :focus-visible pseudo-class and the inert HTML attribute.

I must admit, it makes me queasy to see accessibility features go head to head with other web standards. I don’t think a marketplace is the right arena for prioritising accessibility.

I get a similar feeling of discomfort when a presentation or article on accessibility spends a fair bit of time describing the money that can be made by ensuring your website is accessible. I mean, I get it: you’re literally leaving money on the table if you turn people away. But that’s not the reason to ensure your website is accessible. The reason to ensure that your website is accessible is that it’s the right thing to do.

I know that people are uncomfortable with moral arguments, but in this case, I believe it’s important that we keep sight of that.

I understand how it’s useful to have the stats and numbers to hand should you need to convince a sociopath in your organisation, but when numbers are used as the justification, you’re playing the numbers game from then on. You’ll probably have to field questions like “Well, how many screen reader users are visiting our site anyway?” (To which the correct answer is “I don’t know and I don’t care”—even if the number is 1, the website should still be accessible because it’s the right thing to do.)

It reminds me of when I was having a discussion with a god-bothering friend of mine about the existence or not of a deity. They made the mistake of trying to argue the case for God based on logic and reason. Those arguments didn’t hold up. But had they made their case based on the real reason for their belief—which is faith—then their position would have been unassailable. I literally couldn’t argue against faith. But instead, by engaging in the rules of logic and reason, they were applying the wrong justification to their stance.

Okay, that’s a bit abstract. How about this…

In a similar vein to talks or articles about accessibility, talks or articles about diversity often begin by pointing out the monetary gain to be had. It’s true. The data shows that companies that are more diverse are also more profitable. But again, that’s not the reason for having a diverse group of people in your company. The reason for having a diverse group of people in your company is that it’s the right thing to do. If you tie the justification for diversity to data, then what happens should the data change? If a new study showed that diverse companies were less profitable, is that a reason to abandon diversity? Absolutely not! If your justification isn’t tied to numbers, then it hardly matters what the numbers say (though it does admittedly feel good to have your stance backed up).

By the way, this is also why I don’t think it’s a good idea to “sell” design systems on the basis of efficiency and cost-savings if the real reason you’re building one is to foster better collaboration and creativity. The fundamental purpose of a design system needs to be shared, not swapped out based on who’s doing the talking.

Anyway, back to accessibility…

A marketplace, to me, feels like exactly the wrong kind of place for accessibility to defend its existence. By its nature, accessibility isn’t a mainstream issue. I mean, think about it: it’s good that accessibility issues affect a minority of people. The fewer, the better. But even if the number of people affected by accessibility were to trend downwards and dwindle, the importance of accessibility should remain unchanged. Accessibility is important regardless of the numbers.

Look, if I make a website for a client, I don’t offer accessibility as a line item with a price tag attached. I build in accessibility by default because it’s the right thing to do. The only way to ensure that accessibility doesn’t get negotiated away is to make sure it’s not up for negotiation.

So that’s why I feel uncomfortable seeing accessibility features in a popularity contest.

I think that markets are great. I think competition is great. But I don’t think it works for everything (like, could you imagine applying marketplace economics to healthcare or prisons? Nightmare!). I concur with Iain M. Banks:

The market is a good example of evolution in action; the try-everything-and-see-what-works approach. This might provide a perfectly morally satisfactory resource-management system so long as there was absolutely no question of any sentient creature ever being treated purely as one of those resources.

If Igalia or Mozilla or Google or Apple implement an accessibility feature because they believe that accessibility is important and deserves prioritisation, that’s good. If they implement the same feature just because it received a lot of votes …that doesn’t strike me as a good thing.

I guess it doesn’t matter what the reason is as long as the end result is the same, right? But I suspect that what we’ll see is that the accessibility features up for bidding on Open Prioritization won’t be the winners.

Putting design principles into action

I was really looking forward to speaking at An Event Apart this year. I was going to be on the line-up for Seattle, Boston, and Minneapolis; three cities I really like.

At the start of the year, I decided to get a head-start on my new talk so I wouldn’t be too stressed out when the first event approached. I spent most of January and February going through the chaotic process of assembling a semi-coherent presentation out of a katamari of vague thoughts.

I was making good progress. Then The Situation happened. One by one, the in-person editions of An Event Apart were cancelled (quite rightly). But my talk preparation hasn’t been in vain. I’ll be presenting my talk at an online edition of An Event Apart on Monday, August 17th.

You should attend. Not for my talk, but for Ire’s talk on Future-Proof CSS which sounds like it was made for me:

In this talk, we’ll cover how to write CSS that stands the test of time. From progressive enhancement techniques to accessibility considerations, we’ll learn how to write CSS for 100 years in the future (and, of course, today).

My talk will be about design principles …kinda. As usual, it will be quite a rambling affair. At this point I almost take pride in evoking a reaction of “where’s he going with this?” during the first ten minutes of a talk.

When I do actually get around to the point of the talk—design principles—I ask whether it’s possible to have such a thing as universal principles. After all, the whole point of design principles is that they’re specific to an endeavour, whether that’s a company, an organisation, or a product.

I think that some principles are, if not universal, then at least very widely applicable. I’ve written before about two of my favourites: the robustness principle and the principle of least power:

There’s no shortage of principles, laws, and rules out there, and I find many of them very useful, but if I had to pick just two that are particularly applicable to my work, they would be the robustness principle and the rule of least power.

What’s interesting about both of those principles is that they are imperative. They tell you how to act:

Be conservative in what you send, be liberal in what you accept.

Choose the least powerful language suitable for a given purpose.

Other principles are imperative, but they tell you what not to do. Take the razors of Occam and Hanlon, for example:

Entities are not to be multiplied without necessity.

Never attribute to malice that which is adequately explained by stupidity.

But these imperative principles are exceptions. The vast majority of “universal” principles take the form of laws that are observations. They describe the state of the world without providing any actions to take.

There’s Hofstadter’s Law, for example:

It always takes longer than you expect, even when you take into account Hofstadter’s Law.

Or Clarke’s third law:

Any sufficiently advanced technology is indistinguishable from magic.

By themselves, these observational laws are interesting but they leave it up to you to decide on a course of action. On the other hand, imperative principles tell you what to do but don’t tell you why.

It strikes me that it could be fun (and useful) to pair up observational and imperative principles:

Because of observation A, apply action B.

For example:

Because of Murphy’s Law, apply the principle of least power.

Or in its full form:

Because anything that can go wrong will go wrong, choose the least powerful language suitable for a given purpose.

I feel like the Jevons paradox is another observational principle that should inform our work on the web:

The Jevons paradox occurs when technological progress increases the efficiency with which a resource is used, but the rate of consumption of that resource rises because of increasing demand.

For example, even though devices, browsers, and networks are much, much better now than they were, say, ten years ago, that doesn’t mean that websites have become better or faster. Instead, it’s precisely because there’s more power available that people think nothing of throwing megabytes of JavaScript at users. See Scott’s theory that 5G Will Definitely Make the Web Slower, Maybe:

JavaScript size has ballooned as networks have improved.

This problem would be addressed if web developers were more conservative in what they sent. The robustness principle in action.

Because of the Jevons paradox, apply the robustness principle.

Admittedly, the expanded version of that is far too verbose:

Because technological progress increases the efficiency with which a resource is used, but the rate of consumption of that resource rises because of increasing demand, be conservative in what you send, be liberal in what you accept.

I’m sure there are more and better pairings to be made: an observational principle to tell you why you should take action, and an imperative principle to tell you what action you should take.

Custom properties

I made the website for the Clearleft podcast last week. The design is mostly lifted straight from the rest of the Clearleft website. The main difference is the masthead. If the browser window is wide enough, there’s a background image on the right hand side.

I mostly added that because I felt like the design was a bit imbalanced without something there. On the home page, it’s a picture of me. Kind of cheesy. But the image can be swapped out. On other pages, there are different photos. All it takes is a different class name on that masthead.

I thought about having the image be completely random (and I still might end up doing this). I’d need to use a bit of JavaScript to choose a class name at random from a list of possible values. Something like this:

var names = ['jeremy','katie','rich','helen','trys','chris'];
var name = names[Math.floor(Math.random() * names.length)];
document.querySelector('.masthead').classList.add(name);

(You could paste that into the dev tools console to see it in action on the podcast site.)

Then I read something completely unrelated. Cassie wrote a fantastic article on her site called Making lil’ me - part 1. In it, she describes how she made the mouse-triggered animation of her avatar in the footer of her home page.

It’s such a well-written technical article. She explains the logic of what she’s doing, and translates that logic into code. Then, after walking you through the native code, she shows how you could use the Greensock library to achieve the same effect. That’s the way to do it! Instead of saying, “Here’s a library that will save you time—don’t worry about how it works!”, she’s saying “Here’s how it works without a library; here’s how it works with a library; now you can make an informed choice about what to use.” It’s a very empowering approach.

Anyway, in the article, Cassie demonstrates how you can use custom properties as a bridge between JavaScript and CSS. JavaScript reads the mouse position and updates some custom properties accordingly. Those same custom properties are used in CSS for positioning. Voila! Now you’ve got the position of an element responding to mouse movements.
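As a rough sketch of that pattern (the property names here are my own invention for illustration, not taken from Cassie’s code):

// JavaScript: read the mouse position and hand it to CSS as custom properties
document.addEventListener('mousemove', function (event) {
  document.documentElement.style.setProperty('--pointer-x', event.clientX + 'px');
  document.documentElement.style.setProperty('--pointer-y', event.clientY + 'px');
}, false);

// CSS: use those same custom properties for positioning, something like
// .avatar {
//   transform: translate(var(--pointer-x, 0), var(--pointer-y, 0));
// }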

That’s what made me think of the code snippet I wrote above to update a class name from JavaScript. I automatically thought of updating a class name because, frankly, that’s how I’ve always done it. I’d say about 90% of the DOM scripting I’ve ever done involves toggling the presence of class values: accordions, fly-out menus, tool-tips, and other progressive disclosure patterns.

That’s fine. But really, I should try to avoid touching the DOM at all. It can have performance implications, possibly triggering unnecessary repaints and reflows.

Now with custom properties, there’s a direct line of communication between JavaScript and CSS. No need to use the HTML as a courier.
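So, hypothetically, my random-masthead snippet could set a custom property instead of toggling a class name (the property name and image path here are made up for the sake of the example):

var names = ['jeremy','katie','rich','helen','trys','chris'];
var name = names[Math.floor(Math.random() * names.length)];
// Set a custom property instead of adding a class…
document.querySelector('.masthead').style.setProperty('--masthead-image', 'url(/images/' + name + '.jpg)');

// …and the CSS only needs one rule, something like
// .masthead {
//   background-image: var(--masthead-image);
// }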

This made me realise that I need to be aware of automatically reaching for a solution just because that’s the way I’ve done something in the past. I should step back and think about the more efficient solutions that are possible now.

It also made me realise that “CSS variables” is a very limiting way of thinking about custom properties. The fact that they can be updated in real time—in CSS or JavaScript—makes them much more powerful than, say, Sass variables (which are more like constants).

But I too have been guilty of underselling them. I almost always refer to them as “CSS custom properties” …but a lot of their potential comes from the fact that they’re not confined to CSS. From now on, I’m going to try calling them custom properties, without any qualification.