Journal tags: browsers

Service worker weirdness in Chrome

I think I’ve found some more strange service worker behaviour in Chrome.

It all started when I was checking out the very nice new redesign of WebPageTest. I figured while I was there, I’d run some of my sites through it. I passed in a URL from The Session. When the test finished, I noticed that the “screenshot” tab said that something was being logged to the console. That’s odd! And the file doing the logging was the service worker script.

I fired up Chrome (which isn’t my usual browser), and started navigating around The Session with dev tools open to see what appeared in the console. Sure enough, there was a failed fetch attempt being logged. The only time my service worker script logs anything is in the catch clause of fetching pages from the network. So Chrome was trying to fetch a web page, failing, and logging this error:

The service worker navigation preload request failed with a network error.

But all my pages were loading just fine. So where was the error coming from?

After a lot of spelunking and debugging, I think I’ve figured out what’s happening…

First of all, I’m making use of navigation preloads in my service worker. That’s all fine.
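For reference, navigation preloads get switched on in the service worker’s activate handler. This is a minimal sketch of that pattern rather than my actual code:

addEventListener('activate', event => {
  event.waitUntil((async () => {
    if (self.registration.navigationPreload) {
      // start the network request in parallel with the service worker booting up
      await self.registration.navigationPreload.enable();
    }
  })());
});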

Secondly, the website is a progressive web app. It has a manifest file that specifies some metadata, including start_url. If someone adds the site to their home screen, this is the URL that will open.

Thirdly, Google recently announced that they’re tightening up the criteria for displaying install prompts for progressive web apps. If there’s no network connection, the site still needs to return a 200 OK response: either a cached copy of the URL or a custom offline page.

So here’s what I think is happening. When I navigate to a page on the site in Chrome, the service worker handles the navigation just fine. It also parses the manifest file I’ve linked to and checks to see if that start URL would load if there were no network connection. And that’s when the error gets logged.

I only noticed this behaviour because I had specified a query string on my start URL in the manifest file. Instead of a start_url value of /, I’ve set a start_url value of /?homescreen. And when the error shows up in the console, the URL being fetched is /?homescreen.
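So the relevant part of the manifest looks something like this (the name and display values here are purely illustrative):

{
  "name": "The Session",
  "start_url": "/?homescreen",
  "display": "standalone"
}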

Crucially, I’m not seeing a warning in the console saying “Site cannot be installed: Page does not work offline.” So I think this is all fine. If I were actually offline, there would indeed be an error logged to the console and that start_url request would respond with my custom offline page. It’s just a bit confusing that the error is being logged when I’m online.

I thought I’d share this just in case anyone else is logging errors to the console in the catch clause of fetches and is seeing an error even when everything appears to be working fine. I think there’s nothing to worry about.

Update: Jake confirmed my diagnosis and agreed that the error is a bit confusing. The good news is that it’s changing. In Chrome Canary the error message has already been updated to:

DOMException: The service worker navigation preload request failed due to a network error. This may have been an actual network error, or caused by the browser simulating offline to see if the page works offline: see https://w3c.github.io/manifest/#installability-signals

Much better!

When service workers met framesets

Oh boy, do I have some obscure browser behaviour for you!

To set the scene…

I’ve been writing here in my online journal for almost twenty years. The official anniversary will be on September 30th. But this website has been online even longer than that, just in a very different form.

Here’s the first version of adactio.com.

Like a tour guide taking you around the ruins of some lost ancient civilisation, let me point out some interesting features:

  • Observe the .shtml file extension. That means it was once using Apache’s server-side includes, a simple way of repeating chunks of markup across pages. Scientists have been trying to reproduce the wisdom of the ancients using modern technology ever since.
  • See how the layout is 100vw and 100vh? Well, this was long before viewport units existed. In fact there is no CSS at all on that page. It’s one big table element with 100% width and 100% height.
  • So if there’s no CSS, where is the border-radius coming from? Let me introduce you to an old friend—the non-animated GIF. It’s got just enough transparency (though not proper alpha transparency) to fake rounded corners between two solid colours.
  • The management takes no responsibility for any trauma that might befall you if you view source. There you will uncover JavaScript from the dawn of time; ancient runic writing like if (navigator.appName == "Netscape")

Now if your constitution was able to withstand that, brace yourself for what happens when you click on either of the two links, deutsch or english.

You find yourself inside a frameset. You may also experience some disorienting “DHTML”—the marketing term given to any combination of JavaScript and positioning in the late ’90s.

Note that these are not iframes, they are frames. Different thing. You could create single page apps long before Ajax was a twinkle in Jesse James Garrett’s eye.

If you view source, you’ll see a React-like component system. Each frameset component contains frame components that are isolated from one another. They’re like web components. Each frame has its own (non-shadow) DOM. That’s because each frame is actually a separate web page. If you right-click on any of the frames, your browser should give the option to view the framed document in its own tab or window.
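For anyone who has never seen one, a frameset document has no body element at all; it’s just frames pointing at separate pages. This isn’t my actual markup, just the general shape:

<html>
  <head>
    <title>adactio</title>
  </head>
  <frameset cols="25%,75%">
    <!-- each frame is a separate document with its own DOM -->
    <frame src="navigation.html" name="nav">
    <frame src="content.html" name="main">
  </frameset>
</html>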

Now for the part where modern and ancient technologies collide…

If you’re looking at the frameset URL in Firefox or Safari, everything displays as it should in all its ancient glory. But if you’re looking in Google Chrome and you’ve visited adactio.com before, something very odd happens.

Each frame of the frameset displays my custom offline page. The only way that could be served up is through my service worker script. You can verify this by opening the frameset URL in an incognito window—everything works fine when no service worker has been registered.

I have no idea why this is happening. My service worker logic is saying “if there’s a request for a web page, try fetching it from the network, otherwise look in the cache, otherwise show an offline page.” But if those page requests are initiated by a frame element, it goes straight to showing the offline page.

Is this a bug? Or perhaps this is the correct behaviour for some security reason? I have no idea.

I wonder if anyone has ever come across this before. It’s a very strange combination of factors:

  • a domain served over HTTPS,
  • that registers a service worker,
  • but also uses framesets and frames.

I could submit a bug report about this but I fear I would be laughed out of the bug tracker.

Still …the World Wide Web is remarkable for its backward compatibility. This behaviour is unusual because browser makers are at pains to support existing content and never break the web.

Technically a modern website (one that registers a service worker) shouldn’t be using deprecated technology like frames. But browsers still need to be able to support those old technologies in order to render old websites.

This situation has only arisen because the same domain—adactio.com—is host to a modern website and a really old one.

Maybe Chrome is behaving strangely because I’ve built my online home on ancient burial ground.

Update: Both Remy and Jake did some debugging and found the issue…

It’s all to do with navigation preloads and the value of event.preloadResponse, which I believe is only supported in Chrome. That would explain the differences between browsers.

According to this post by Jake:

event.preloadResponse is a promise that resolves with a response, if:

  • Navigation preload is enabled.
  • The request is a GET request.
  • The request is a navigation request (which browsers generate when they’re loading pages, including iframes).

Otherwise event.preloadResponse is still there, but it resolves with undefined.

Notice that iframes are mentioned, but not frames.

My code was assuming that if event.preloadResponse exists in my block of code for responding to page requests, then there’d be a response. But if the request was initiated from a frameset, it is a request for a page and event.preloadResponse does exist …but it’s undefined.

I’ve updated my code now to check this assumption (and fall back to fetch).
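Here’s a simplified sketch of that updated logic rather than my actual service worker script. The /offline URL is a stand-in for wherever your custom offline page lives:

addEventListener('fetch', event => {
  if (event.request.mode !== 'navigate') {
    return;
  }
  event.respondWith((async () => {
    try {
      // preloadResponse always exists, but for frames it resolves with undefined
      const preloadResponse = await event.preloadResponse;
      if (preloadResponse) {
        return preloadResponse;
      }
      // fall back to a regular fetch if there was no preloaded response
      return await fetch(event.request);
    } catch (error) {
      // the network request failed: try the cache, then the offline page
      const cachedResponse = await caches.match(event.request);
      return cachedResponse || caches.match('/offline');
    }
  })());
});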

This may technically still be a bug though. Shouldn’t a page loaded from a frameset count as a navigation request?

Authentication

Two-factor authentication is generally considered A Good Thing™️ when you’re logging in to some online service.

The word “factor” here basically means “kind” so you’re doing two kinds of authentication. Typical factors are:

  • Something you know (like a password),
  • Something you have (like a phone or a USB key),
  • Something you are (biometric Black Mirror shit).

Asking for a password and an email address isn’t two-factor authentication. They’re two pieces of identification, but they’re the same kind (something you know). Same goes for supplying your fingerprint and your face: two pieces of information, but of the same kind (something you are).

None of these kinds of authentication are foolproof. All of them can change. All of them can be spoofed. But when you combine factors, it gets a lot harder for an attacker to breach both kinds of authentication.

The most common kind of authentication on the web is password-based (something you know). When a second factor is added, it’s often connected to your phone (something you have).

Every security bod I’ve talked to recommends using an authenticator app for this if that option is available. Otherwise there’s SMS—short message service, or text message to most folks—but SMS has a weakness. Because it’s tied to a phone number, technically you’re only proving that you have access to a SIM (subscriber identity module), not a specific phone. In the US in particular, it’s all too easy for an attacker to use social engineering to get a number transferred to a different SIM card.

Still, authenticating with SMS is an option as a second factor of authentication. When you first sign up to a service, as well as providing the first-factor details (a password and a username or email address), you also verify your phone number. Then when you subsequently attempt to log in, you input your password and on the next screen you’re told to input a string that’s been sent by text message to your phone number (I say “string” but it’s usually a string of numbers).

There’s an inevitable friction for the user here. But then, there’s a fundamental tension between security and user experience.

In the world of security, vigilance is the watchword. Users need to be aware of their surroundings. Is this web page being served from the right domain? Is this email coming from the right address? Friction is an ally.

But in the world of user experience, the opposite is true. “Don’t make me think” is the rallying cry. Friction is an enemy.

With SMS authentication, the user has to manually copy the numbers from the text message (received in a messaging app) into a form on a website (in a different app—a web browser). But if the messaging app and the browser are on the same device, it’s possible to improve the user experience without sacrificing security.

If you’re building a form that accepts a passcode sent via SMS, you can use the autocomplete attribute with a value of “one-time-code”. For a six-digit passcode, your input element might look something like this:

<input type="text" maxlength="6" inputmode="numeric" autocomplete="one-time-code">

With one small addition to one HTML element, you’ve saved users some tedious drudgery.

There’s one more thing you can do to improve security, but it’s not something you add to the HTML. It’s something you add to the text message itself.

Let’s say your website is example.com and the text message you send reads:

Your one-time passcode is 123456.

Add this to the end of the text message:

@example.com #123456

So the full message reads:

Your one-time passcode is 123456.

@example.com #123456

The first line is for humans. The second line is for machines. Using the @ symbol, you’re telling the device to only pre-fill the passcode for URLs on the domain example.com. Using the # symbol, you’re telling the device the value of the passcode. Combine this with autocomplete="one-time-code" in your form and the user shouldn’t have to lift a finger.

I’m fascinated by these kinds of emergent conventions in text messages. Remember that the @ symbol and # symbol in Twitter messages weren’t ideas from Twitter—they were conventions that users started and the service then adopted.

It’s a bit different with the one-time code convention as there is a specification brewing from representatives of both Google and Apple.

Tess is leading from the Apple side and she’s got another iron in the fire to make security and user experience play nicely together using the convention of the /.well-known directory on web servers.

You can add a URL for /.well-known/change-password which redirects to the form a user would use to update their password. Browsers and password managers can then use this information if they need to prompt a user to update their password after a breach. I’ve added this to The Session.

Oh, and on that page where users can update their password, the autocomplete attribute is your friend again:

<input type="password" autocomplete="new-password">

If you want them to enter their current password first, use this:

<input type="password" autocomplete="current-password">

All of the things I’ve mentioned—the autocomplete attribute, origin-bound one-time codes in text messages, and a well-known URL for changing passwords—have good browser support. But even if they were only supported in one browser, they’d still be worth adding. These additions do absolutely no harm to browsers that don’t yet support them. That’s progressive enhancement.

Get safe

The verbs of the web are GET and POST. In theory there’s also PUT, DELETE, and PATCH but in practice POST often does those jobs.

I’m always surprised when front-end developers don’t think about these verbs (or request methods, to use the technical term). Knowing when to use GET and when to use POST is crucial to having a solid foundation for whatever you’re building on the web.

Luckily it’s not hard to know when to use each one. If the user is requesting something, use GET. If the user is changing something, use POST.

That’s why links are GET requests by default. A link “gets” a resource and delivers it to the user.

<a href="/items/id">

Most forms use the POST method because they’re changing something—creating, editing, deleting, updating.

<form method="post" action="/items/id/edit">

But not all forms should use POST. A search form should use GET.

<form method="get" action="/search">
<input type="search" name="term">

When a user performs a search, they’re still requesting a resource (a page of search results). It’s just that they need to provide some specific details for the GET request. Those details get translated into a query string appended to the URL specified in the action attribute.

/search?term=value

I sometimes see the GET method used incorrectly:

  • “Log out” links that should be forms with a “log out” button—you can always style it to look like a link if you want (there’s a rough sketch of this after the list).
  • “Unsubscribe” links in emails that immediately trigger the action of unsubscribing instead of going to a form where the POST method does the unsubscribing. I realise that this turns unsubscribing into a two-step process, which is a bit annoying from a usability point of view, but a destructive action should never be baked into a GET request.
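To illustrate that first point, here’s the kind of markup I mean for logging out: a form making a POST request, with a button you can style to look like a link (the /logout URL is just a placeholder):

<form method="post" action="/logout">
  <button type="submit">Log out</button>
</form>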

When it was first created, the World Wide Web was stateless by design. If you requested one web page, and then subsequently requested another web page, the server had no way of knowing that the same user was making both requests. After serving up a page in response to a GET request, the server promptly forgot all about it.

That’s how web browsing should still work. In fact, it’s one of the Web Platform Design Principles: It should be safe to visit a web page:

The Web is named for its hyperlinked structure. In order for the web to remain vibrant, users need to be able to expect that merely visiting any given link won’t have implications for the security of their computer, or for any essential aspects of their privacy.

The expectation of safe stateless browsing has been eroded over time. Every time you click on a search result in Google, or you tap on a recommended video in YouTube, or—heaven help us—you actually click on an advertisement, you just know that you’re adding to a dossier of your online profile. That’s not how the web is supposed to work.

Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies? With POST actions—fave, rate, save—the user is in charge. With GET requests, no one is supposed to be in charge—it’s meant to be a neutral transaction. Alas, the reality of today’s web is that many GET requests give more power to the dossier-building servers at the expense of the user’s agency.

The very first of the Web Platform Design Principles is Put user needs first:

If a trade-off needs to be made, always put user needs above all.

The current abuse of GET requests is damage that the web needs to route around.

Browsers are helping to a certain extent. Most browsers have the concept of private browsing, allowing you some level of statelessness, or at least time-limited statefulness. But it’s kind of messed up that private browsing is the exception, while surveillance is the default. It should be the other way around.

Firefox and Safari are taking steps to reduce tracking and fingerprinting. Rejecting third-party cookies by default is a good move. I’d love it if third-party JavaScript were also rejected by default:

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

I imagine if JavaScript were being specced today, it would almost certainly be restricted to the same origin by default.

Chrome has different priorities, which is understandable given that it comes from a company with a business model that is currently tied to tracking and surveillance (though it needn’t remain that way). With anti-trust proceedings rumbling in the background, there’s talk of breaking up Google to avoid monopolistic abuses of power. I honestly think it would be the best thing that could happen to Chrome if it were an independent browser that could fully focus on user needs without having to consider the surveillance needs of an advertising broker.

But we needn’t wait for the browsers to make the web a safer place for users.

Developers write the code that updates those dossiers. Developers add those oh-so-harmless-looking third-party scripts to page templates.

What if we refused?

Front-end developers in particular should be the last line of defence for users. The entire field of front-end development is supposed to be predicated on the prioritisation of user needs.

And if the moral argument isn’t enough, perhaps the technical argument can get through. Tracking users based on their GET requests violates the very bedrock of the web’s architecture. Stop doing that.

Audio

I spent the last couple of weekends rolling out a new feature on The Session. It involves playing audio in a web page. No big deal these days, right? But the history involves some old file formats…

The first venerable format is ABC notation. File extension: .abc, mime type: text/vnd.abc. It’s an ingenious text format for musical notation using ASCII. The metadata of the piece of music is defined in JSON-like key/value pairs. Then the contents are encoded with letters: A, B, C, etc. Uppercase and lowercase denote different octaves. Numbers can be used for note lengths.

The format was created by Chris Walshaw in 1997 when dial-up was the norm. With ABC, people were able to swap tunes on email lists or bulletin boards without transferring weighty image or sound files. If you had ABC software on your computer, you could convert that lightweight text file into sheet music …or audio.

That brings me to the second old format: midi files. File extension: .mid, mime-type: audio/midi. Like ABC, it’s a lightweight format for encoding the instructions for music instead of the music itself.

Think of it like SVG: instead of storing the final pixels of an image, SVG stores the instructions for drawing the image instead. The instructions in a midi file are like “play this note for this long on this instrument.” Again, as with ABC, you need some software to turn the instructions into sound.

There was a time when lots of software could play midi files. Quicktime on the Mac, for example. You could even embed midi files in web pages. I mean literally embed them …with the embed element. No Geocities page was complete without an autoplaying midi file.

On The Session, people submit tunes in ABC format. Then, using the amazing ABCJS JavaScript library, the ABC is turned into SVG on the fly! For years I’ve also offered midi files, generated on the server from the ABC notation.

But times have changed. These days it’s hard to find software that plays midi files. Quicktime doesn’t do it anymore. And you’d need to go to the app store on iOS to find a midi file player. It’s time to phase out the midi files on The Session.

I still want to provide automatically-generated audio though. Fortunately ABCJS gives me a way to do this. But instead of using the old technology of midi files, it uses a more modern browser feature: the Web Audio API.

The end result sounds like a midi file, but the underlying technique is more like a synthesiser. There’s a separate mp3 file for each note. The JavaScript figures out how long each “sample” needs to be played for, strings them all together, and outputs them with Web Audio. So you’ve got cutting-edge browser technology recreating a much older file format. Paul Rosen—the creator of ABCJS—has a presentation explaining how it all works under the hood.
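I’m glossing over plenty of detail, but the core Web Audio technique of fetching a short sample, decoding it, and scheduling it to play looks roughly like this. The file path is invented for the sake of the example:

const audioContext = new AudioContext();

async function playNote(url, when) {
  // fetch the mp3 sample for a single note and decode it into an audio buffer
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
  // schedule the sample to start playing at the given time
  const source = audioContext.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioContext.destination);
  source.start(when);
}

playNote('/soundfonts/accordion/A4.mp3', audioContext.currentTime);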

Not only is there a separate short mp3 file for each note in seven octaves, but if you want the sound of a different instrument, you need samples for all seven octaves in that instrument. They’re called soundfonts.

Paul provides soundfonts for ABCJS. It’s a repo that was forked from this repo from Benjamin Gleitzman. And here’s where it gets small worldy…

The reason why Benjamin has a repo of soundfonts is because he needed to create midi-like audio in the browser. He wanted to do this for a project on September 28th and 29th, 2013 …at Science Hack Day San Francisco!

I was there too—working on my own audio-related hack—and I remember the excellent (and winning) hack that Benjamin worked on. It was called Symphony of Satellites and it’s still online along with the promo video. Here’s Benjamin’s post-hackday write-up from seven years ago.

It’s rare that the worlds of the web and Irish music cross over. When I got to meet Paul—creator of ABCJS—at a web conference a couple of years ago it kind of blew my mind. Last weekend when I set out to dabble with a feature on The Session, I certainly didn’t expect to stumble on a connection to Science Hack Day! (Aside: the first Science Hack Day was ten years ago—yowzers!)

Anyway, I was able to get that audio playback working on The Session. Except for some weirdness on iOS that I had to fix. But that’s a hack for another day.

Lists

We often have brown bag lunchtime presentations at Clearleft. In the Before Times, this would involve a trip to Pret or Itsu to get a lunch order in, which we would then proceed to eat in front of whoever was giving the presentation. Often it’s someone from Clearleft demoing something or playing back a project, but whenever possible we’d rope in other people to swing by and share what they’re up to.

We’ve continued this tradition since making the switch to working remotely. Now the brown bag presentations happen over Zoom. This has two advantages. Firstly, if you don’t want the presenter watching you eat your lunch, you can switch your camera off. Secondly, because the presenter doesn’t have to be in Brighton, there’s no geographical limit on who could present.

Our most recent brown bag was truly excellent. I asked Léonie if she’d be up for it, and she very kindly agreed. As well as giving us a whirlwind tour of how assistive technology works on the web, she then invited us to observe her interacting with websites using a screen reader.

I’ve seen Léonie do this before and it’s always struck me as a very open and vulnerable thing to do. Think about it: the audience has more information than the presenter. We can see the website at the same time as we’re listening to Léonie and her screen reader.

We got to nominate which websites to visit. One of them—a client’s current site that we haven’t yet redesigned—was a textbook example of how important form controls are. There was a form where almost everything was hunky-dory: form fields, labels, it was all fine. But one of the inputs was a combo box. Instead of using a native select with a datalist, this was made with JavaScript. Because it was lacking the requisite ARIA additions to make it accessible, it was pretty much unusable to Léonie.
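By way of contrast, a native combo box can be built with an input and a datalist, and the accessibility comes baked in. The values here are made up for the sake of illustration:

<label for="tune-type">Tune type</label>
<input id="tune-type" name="type" list="tune-types">
<datalist id="tune-types">
  <option value="jig">
  <option value="reel">
  <option value="hornpipe">
</datalist>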

And that’s why you use the right HTML element wherever possible, kids!

The other site Léonie visited was Clearleft’s own. That was all fine. Léonie demonstrated how she’d form a mental model of a page by getting the screen reader to read out the headings. Interestingly, the nesting of headings on the Clearleft site is technically wrong—there’s a jump from an h1 to an h3—probably a result of the component-driven architecture where you don’t quite know where in the page a heading will appear. But this didn’t seem to be an issue. The fact that headings are being used at all was the more important fact. As Léonie said, there’s a lot of incorrect HTML out there so it’s no wonder that screen readers aren’t necessarily sticklers for nesting.

I’ve said it before and I’ll say it again: if you’re using headings, labelling form fields, and providing alternative text for images, you’re already doing a better job than most websites.

Headings weren’t the only way that Léonie got a feel for the page architecture. Landmark roles—like header and nav—really helped too. Inside the nav element, she also heard how many items there were. That’s because the navigation was marked up as a list: “List: six items.”

And that reminded me of the Webkit issue. On Webkit browsers like Safari, the list on the Clearleft site would not be announced as a list. That’s because the list’s bullets have been removed using CSS.

Now this isn’t the only time that screen readers pay attention to styling. If you use display: none to hide an element from sight, it will also be unavailable to screen reader users. Makes sense. But removing the semantic meaning of lists based on CSS? That seems a bit much.

There are good reasons for it though. Here’s a thread from James Craig on where this decision came from (James, by the way, is an absolute unsung hero of accessibility). It turns out that developers went overboard with lists a while back and that’s why we can’t have nice things. In over-compensating from divitis, developers ended up creating listitis, marking up anything vaguely list-like as an unordered list with styling adjusted. That was very annoying for screen reader users trying to figure out what was actually a list.

And James also asks:

If a sighted user doesn’t need to know it’s a list, why would a screen reader user need to know or want to know? Stated another way, if the visible list markers (bullets, image markers, etc.) are deemed by the designers to be visually burdensome or redundant for sighted users, why burden screen reader users with those semantics?

That’s a fair point, but the thing is …bullets maketh not the list. There are many ways of styling something that is genuinely a list that doesn’t involve bullets or image markers. White space, borders, keylines—these can all indicate visually that something is a list of items.

If you look at, say, the tunes page on The Session, you can see that there are numerous lists—newest tunes, latest comments, etc. In this case, as a sighted visitor, you would be at an advantage over a screen reader user in that you can, at a glance, see that there’s a list of five items here, a list of ten items there.

So I’m not disagreeing with the thinking behind the Webkit decision, but I do think the heuristics probably aren’t going to be quite good enough to make the call on whether something is truly a list or not.

Still, while I used to be kind of upset about the Webkit behaviour, I’ve become more equanimous about it over time. There are two reasons for this.

Firstly, there’s something that Eric said:

We have come so far to agree that websites don’t need to look the same in every browser mostly due to bugs in their rendering engines or preferences of the user.

I think the same is true for screen readers and other assistive technology: Websites don’t need to sound the same in every screen reader.

That’s a really good point. If we agree that “pixel perfection” isn’t attainable—or desirable—in a fluid, user-centred medium like the web, why demand the aural equivalent?

The second reason why I’m not storming the barricades about this is something that James said:

Of course, heuristics are imperfect, so authors have the ability to explicitly override the heuristically determined role by adding role="list".

That means more work for me as a developer, and that’s …absolutely fine. If I can take something that might be a problem for a user, and turn it into something that’s a problem for me, I’ll choose to make it my problem every time.

I don’t have to petition Webkit to change their stance or update their heuristics. If I feel strongly that a list styled without bullets should still be announced as a list, I can specify that in the markup.

It does feel very redundant to write ul role="list". The whole point of having HTML elements with built-in semantics is that you don’t need to add any ARIA roles. But we did it for a while when new structural elements were introduced in HTML5—main role="main", nav role="navigation", etc. So I’m okay with a little bit of redundancy. I think the important thing is that you really stop and think about whether something should be announced as a list or not, regardless of styling. There isn’t a one-size-fits-all answer (hence why it’s nigh-on impossible to get the heuristics right). Each list needs to be marked up on a case-by-case basis.
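In practice that redundancy amounts to a single attribute. This is just a minimal illustration, not the actual markup on The Session:

<!-- the URLs are made up for this example -->
<ul role="list">
  <li><a href="/tunes/new">Newest tunes</a></li>
  <li><a href="/tunes/popular">Most popular tunes</a></li>
</ul>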

And I wouldn’t advise spending too much time thinking about this either. There are other, more important areas to consider. Like I said, headings, forms, and images really matter. I’d prioritise those elements above thinking about lists. And it’s worth pointing out that Webkit doesn’t remove all semantic meaning from styled lists—it updates the role value from list to group. That seems sensible to me.

In the case of that page on The Session, I don’t think I’m guilty of listitis. Yes, there are seven lists on that page (two for navigation, five for content) but I’m reasonably confident that they all look like lists even without bullets or markers. So I’ve added role="list" to some ul elements.

As with so many things related to accessibility—and the web in general—this is a situation where the only answer I can confidently come up with is …it depends.

Insecure …again

Back in March, I wrote about a dilemma I was facing. I could make the certificates on The Session more secure. But if I did that, people using older Android and iOS devices could no longer access the site:

As a site owner, I can either make security my top priority, which means you’ll no longer be able to access my site. Or I can provide you access, which makes my site less secure for everyone.

In the end, I decided in favour of access. But now this issue has risen from the dead. And this time, it doesn’t matter what I think.

Let’s Encrypt are changing the way their certificates work and once again, it’s people with older devices who are going to suffer:

Most notably, this includes versions of Android prior to 7.1.1. That means those older versions of Android will no longer trust certificates issued by Let’s Encrypt.

This makes me sad. It’s another instance of people being forced to buy new devices. Last time ‘round, my dilemma was choosing between security and access. This time, access isn’t an option. It’s a choice between security and the environment (assuming that people are even in a position to get new devices—not an assumption I’m willing to make).

But this time it’s out of my hands. Let’s Encrypt certificates will stop working on older devices and a whole lotta websites are suddenly going to be inaccessible.

I could look at using a different certificate authority, one I’d have to pay for. It feels a bit galling to have to go back to the scammy world of paying for security—something that Let’s Encrypt has taught us should quite rightly be free. But accessing a website should also be free. It shouldn’t come with the price tag of getting a new device.

Caching and storing

When I was speaking at conferences last year about service workers, I’d introduce the Cache API. I wanted some way of explaining the difference between caching and other kinds of storage.

The way I explained it was that, while you might store stuff for a long time, you’d only cache stuff that you knew you were going to need again. So according to that definition, when you make a backup of your hard drive, that’s not caching …because you hope you’ll never need to use the backup.

But that explanation never sat well with me. Then more recently, I was chatting with Amber about caching. Once again, we were trying to define the difference between, say, the Cache API and things like LocalStorage and IndexedDB. At some point, we realised the fundamental difference: caches are for copies.

Think about it. If you store something in LocalStorage or IndexedDB, that’s the canonical home for that data. But anything you put into a cache must be a copy of something that exists elsewhere. That’s true of the Cache API, the browser cache, and caches on the server. An item in one of those caches is never the original—it’s always a copy of something that has a canonical home elsewhere.
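Here’s a quick sketch of the distinction in code. The key and URL are made up for the example, but the principle is that localStorage holds the original while the cache holds a copy:

// storing: this is the canonical home for the user's preference
localStorage.setItem('theme', 'dark');

// caching: this is a copy of a page whose canonical home is the server
(async () => {
  const cache = await caches.open('pages');
  const response = await fetch('/about');
  await cache.put('/about', response.clone());
})();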

By that definition, backing up your hard drive definitely is caching.

Anyway, I was glad to finally have a working definition to differentiate between caching and storing.

Upgrades and polyfills

I started getting some emails recently from people having issues using The Session. The issues sounded similar—an interactive component that wasn’t, well …interacting.

When I asked what device or browser they were using, the answer came back the same: Safari on iPad. But not a new iPad. These were older iPads running older operating systems.

Now, remember, even if I wanted to recommend that they use a different browser, that’s not an option:

Safari is the only browser on iOS devices.

I don’t mean it’s the only browser that ships with iOS devices. I mean it’s the only browser that can be installed on iOS devices.

You can install something called Chrome. You can install something called Firefox. Those aren’t different web browsers. Under the hood they’re using Safari’s rendering engine. They have to.

It gets worse. Not only is there no choice when it comes to rendering engines on iOS, but the rendering engine is also tied to the operating system.

If you’re on an old Apple laptop, you can at least install an up-to-date version of Firefox or Chrome. But you can’t install an up-to-date version of Safari. An up-to-date version of Safari requires an up-to-date version of the operating system.

It’s the same on iOS devices—you can’t install a newer version of Safari without installing a newer version of iOS. But unlike the laptop scenario, you can’t install any version of Firefox or Chrome.

It’s disgraceful.

It’s particularly frustrating when an older device can’t upgrade its operating system. Operating system upgrades generally come with hardware requirements. If your device doesn’t meet those requirements, you can’t upgrade your operating system. That wouldn’t matter so much except for the Safari issue. Without an upgraded operating system, your web browsing experience stagnates unnecessarily.

For want of a nail

  • A website feature isn’t working so
  • you need to upgrade your browser which means
  • you need to upgrade your operating system but
  • you can’t upgrade your operating system so
  • you need to buy a new device.

Apple doesn’t allow other browsers to be installed on iOS devices so people have to buy new devices if they want to use the web. Handy for Apple. Bad for users. Really bad for the planet.

It’s particularly galling when it comes to iPads. Those are exactly the kind of casual-use devices that shouldn’t need to be caught in the wasteful cycle of being used for a while before getting thrown away. I mean, I get why you might want to have a relatively modern phone—a device that’s constantly with you that you use all the time—but an iPad is the perfect device to just have lying around. You shouldn’t feel pressured to have the latest model if the older version still does the job:

An older tablet makes a great tableside companion in your living room, an effective e-book reader, or a light-duty device for reading mail or checking your favorite websites.

Hang on, though. There’s another angle to this. Why should a website demand an up-to-date browser? If the website has been built using the tried and tested approach of progressive enhancement, then everyone should be able to achieve their goals regardless of what browser or device or operating system they’re using.

On The Session, I’m using progressive enhancement and feature detection everywhere I can. If, for example, I’ve got some JavaScript that’s going to use querySelectorAll and addEventListener, I’ll first test that those methods are available.

if (!document.querySelectorAll || !window.addEventListener) {
  // doesn't cut the mustard.
  return;
}

I try not to assume that anything is supported. So why was I getting emails from people with older iPads describing an interaction that wasn’t working? A JavaScript error was being thrown somewhere and—because of JavaScript’s brittle error-handling—that was causing all the subsequent JavaScript to fail.

I tracked the problem down to a function that was using some DOM methods—matches and closest—as well as the relatively recent JavaScript forEach method. But I had polyfills in place for all of those. Here’s the polyfill I’m using for matches and closest. And here’s the polyfill I’m using for forEach.

Then I spotted the problem. I was using forEach to loop through the results of querySelectorAll. But the polyfill works on arrays. Technically, the output of querySelectorAll isn’t an array. It looks like an array, it quacks like an array, but it’s actually a node list.

So I added this polyfill from Chris Ferdinandi.
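The gist of that kind of polyfill is tiny: if NodeList doesn’t already have a forEach method, borrow the one from Array (this is the general pattern rather than a copy of any particular polyfill):

if (window.NodeList && !NodeList.prototype.forEach) {
  // node lists and arrays are similar enough that Array's forEach works here
  NodeList.prototype.forEach = Array.prototype.forEach;
}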

That did the trick. I checked with the people with those older iPads and everything is now working just fine.

For the record, here’s the small collection of polyfills I’m using. Polyfills are supposed to be temporary. At some stage, as everyone upgrades their browsers, I should be able to remove them. But as long as some people are stuck with using an older browser, I have to keep those polyfills around.

I wish that Apple would allow other rendering engines to be installed on iOS devices. But if that’s a hell-freezing-over prospect, I wish that Safari updates weren’t tied to operating system updates.

Apple may argue that their browser rendering engine and their operating system are deeply intertwingled. That line of defence worked out great for Microsoft in the ‘90s.

Portals and giant carousels

I posted something recently that I think might be categorised as a “shitpost”:

Most single page apps are just giant carousels.

Extreme, yes, but perhaps there’s a nugget of truth to it. And it seemed to resonate:

I’ve never actually seen anybody justify SPA transitions with actual business data. They generally don’t seem to increase sales, conversion, or retention.

For some reason, for SPAs, managers are all of a sudden allowed to make purely emotional arguments: “it feels snappier”

If businesses were run rationally, when somebody asks for an order of magnitude increase in project complexity, the onus would be on them to prove that it proportionally improves business results.

But I’ve never actually seen that happen in a software business.

A single page app architecture makes a lot of sense for interaction-heavy sites with lots of state to maintain, like twitter.com. But I’ve seen plenty of sites built as single page apps even though there’s little to no interactivity or state management. For some people, it’s the default way of building anything on the web, even a brochureware site.

It seems like there’s a consensus that single page apps may have long initial loading times, but then they have quick transitions between “pages” …just like a carousel really. But I don’t know if that consensus is based on reality. Whether you’re loading a page of HTML or loading a chunk of JSON, you’re still making a network request that will take time to resolve.

The argument for loading a chunk of JSON is that you don’t have to make any requests for the associated CSS and JavaScript—they’re already loaded. Whereas if you request a page of HTML, that HTML will also request CSS and JavaScript.

Leaving aside the fact that this is literally what the browser cache takes care of, I’ve seen some circular reasoning around this:

  1. We need to create a single page app because our assets, like our JavaScript dependencies, are so large.
  2. Why are the JavaScript dependencies so large?
  3. We need all that JavaScript to create the single page app functionality.

To be fair, in the past, the experience of going from page to page used to feel a little herky-jerky, even if the response times were quick. You’d get a flash of a blank white page between navigations. But that’s no longer the case. Browsers now perform something called “paint holding” which eliminates the herky-jerkiness.

So now if your pages are a reasonable size, there’s no practical difference in user experience between full page refreshes and single page app updates. Navigate around The Session if you want to see paint holding in action. Switching to a single page app architecture wouldn’t improve the user experience one jot.

Except…

If I were controlling everything with JavaScript, then I’d also have control over how to transition between the “pages” (or carousel items, if you prefer). There’s currently no way to do that with full page changes.

This is the problem that Jake set out to address in his proposal for navigation transitions a few years back:

Having to reimplement navigation for a simple transition is a bit much, often leading developers to use large frameworks where they could otherwise be avoided. This proposal provides a low-level way to create transitions while maintaining regular browser navigation.

I love this proposal. It focuses on user needs. It also asks why people reach for JavaScript frameworks instead of using what browsers provide. People reach for JavaScript frameworks because browsers don’t yet provide some functionality: components like tabs or accordions; DOM diffing; control over styling complex form elements; navigation transitions. The problems that JavaScript frameworks are solving today should be seen as the R&D departments for web standards of tomorrow. (And conversely, I strongly believe that the aim of any good JavaScript framework should be to make itself redundant.)

I linked to Jake’s excellent proposal in my shitpost saying:

bucketloads of JavaScript wouldn’t be needed if navigation transitions were available in browsers

But then I added—and I almost didn’t—this:

(not portals)

Now you might be asking yourself what Paul said out loud:

Excuse my ignorance but… WTF are portals!?

I replied with a link to the portals proposal and what I thought was an example use case:

Portals are a proposal from Google that would help their AMP use case (it would allow a web page to be pre-rendered, kind of like an iframe).

That was based on my reading of the proposal:

…show another page as an inset, and then activate it to perform a seamless transition to a new state, where the formerly-inset page becomes the top-level document.

It sounded like Google’s top stories carousel. And the proposal goes into a lot of detail around managing cross-origin requests. Again, that strikes me as something that would be more useful for a search engine than a single page app.

But Jake was not happy with my description. I didn’t intend to besmirch portals by mentioning Google AMP in the same sentence, but I can see how the transitive property of ickiness would apply. Because Google AMP is a nasty monopolistic project that harms the web and is an embarrassment to many open web advocates within Google, drawing any kind of comparison to AMP is kind of like Godwin’s Law for web stuff. I know that makes it sound like I’m comparing Google AMP to Hitler, and just to be clear, I’m not (though I have myself been called a fascist by one of the lead engineers on AMP).

Clearly, emotions run high when Google AMP is involved. I regret summoning its demonic presence.

After chatting with Jake some more, I tried to find a better use case to describe portals. Reading the proposal, portals sound a lot like “spicy iframes”. So here’s a different use case that I ran past Jake: say you’re on a website that has an iframe embedded in it—like a YouTube video, for example. With portals, you’d have the ability to transition the iframe to a fully-fledged page smoothly.

But Jake told me that even though the proposal talks a lot about iframes and cross-origin security, portals are conceptually more like using rel="prerender" …but then having scripting control over how the pre-rendered page becomes the current page.

Put like that, portals sound more like Jake’s original navigation transitions proposal. But I have to say, I never would’ve understood that use case just from reading the portals proposal. I get that the proposal is aimed more at implementators than authors, but in its current form, it doesn’t seem to address the use case of single page apps.

Kenji said:

we haven’t seen interest from SPA folks in portals so far.

I’m not surprised! He goes on:

Maybe, they are happy / benefits aren’t clear yet.

From my own reading of the portals proposal, I think the benefits are definitely not clear. It’s almost like the opposite of Jake’s original proposal for navigation transitions. Whereas that was grounded in user needs and real-world examples, the portals proposal seems to have jumped to the intricacies of implementation without covering the user needs.

Don’t get me wrong: if portals somehow end up leading to a solution more like Jake’s navigation transitions proposals, then I’m all for that. That’s the end result I care about. I’d love it if people had a lightweight option for getting the perceived benefits of single page apps without the costly overhead in performance that comes with JavaScripting all the things.

I guess the web I want includes giant carousels.

Standards processing

I’ve been like a dog with a bone the way I’ve been pushing for a declarative option for the Web Share API in the shape of button type=“share”. It’s been an interesting window into the world of web standards.

The story so far…

That’s the situation currently. The general consensus seems to be that it’s probably too soon to be talking about implementation at this stage—the Web Share API itself is still pretty new—but gathering data to inform future work is good.

In planning for the next TPAC meeting (the big web standards gathering), Marcos summarised the situation like this:

Not blocking: but a proposal was made by @adactio to come up with a declarative solution, but at least two implementers have said that now is not the appropriate time to add such a thing to the spec (we need more implementation experience + and also to see how devs use the API) - but it would be great to see a proposal incubated at the WICG.

Now this is where things can get a little confusing because it used to be that if you wanted to incubate a proposal, you’d have to do it on Discourse, which is a steaming pile of crap that requires JavaScript in order to put text on a screen. But Šime pointed out that proposals can now be submitted on GitHub.

So that’s where I’ve submitted my proposal, linking through to the explainer document.

Like I said, I’m not expecting anything to happen anytime soon, but it would be really good to gather as much data as possible around existing usage of the Web Share API. If you’re using it, or you know anyone who’s using it, please, please, please take a moment to provide a quick description. And if you could help spread the word to get that issue in front of as many devs as possible, I’d be very grateful.

(Many thanks to everyone who’s already contributed to that issue—much appreciated!)

Continuous partial browser support

Vendor prefixes didn’t work. The theory was sound. It was a way of marking CSS and JavaScript features as being experimental. Developers could use the prefixed properties as long as they understood that those features weren’t to be relied upon.

That’s not what happened though. Developers used vendor-prefixed properties as though they were stable. Tutorials were published that basically said “Go ahead and use these vendor-prefixed properties and ship it!” There were even tools that would add the prefixes for you so you didn’t have to type them out for yourself.

Browsers weren’t completely blameless either. Long after features were standardised, they would only be supported in their prefixed form. Apple was and is the worst for this. To this day, if you want to use the clip-path property in your CSS, you’ll need to duplicate your declaration with -webkit-clip-path if you want to support Safari. It’s been like that for seven years and counting.

Like capitalism, vendor prefixes were one of those ideas that sounded great in theory but ended up being unworkable in practice.

Still, developers need some way to get their hands on experimental features. But we don’t want browsers to ship experimental features without some kind of safety mechanism.

The current thinking involves something called origin trials. Here’s the explainer from Microsoft Edge and here’s Google Chrome’s explainer:

  • Developers are able to register for an experimental feature to be enabled on their origin for a fixed period of time measured in months. In exchange, they provide us their email address and agree to give feedback once the experiment ends.
  • Usage of these experiments is constrained to remain below Chrome’s deprecation threshold (< 0.5% of all Chrome page loads) by a system which automatically disables the experiment on all origins if this threshold is exceeded.

I think it works pretty well. If you’re really interested in kicking the tyres on an experimental feature, you can opt in to the origin trial. But it’s very clear that you wouldn’t want to ship it to production.

That said…

You could ship something that’s behind an origin trial, but you’d have to make sure you’re putting safeguards in place. At the very least, you’d need to do feature detection. You certainly couldn’t use an experimental feature for anything mission critical …but you could use it as an enhancement.

And that is a pretty great way to think about all web features, experimental or otherwise. Don’t assume the feature will be supported. Use feature detection (or @supports in the case of CSS). Try to use the feature as an enhancement rather than a dependency.
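By way of illustration, here’s what that defensive pattern looks like in CSS (the grid example is just a stand-in for whatever feature you’re enhancing with):

@supports (display: grid) {
  /* only opt in to the enhancement if the browser understands it */
  .layout {
    display: grid;
  }
}

And the equivalent mindset in JavaScript, using the Web Share API purely as an example:

if (navigator.share) {
  // enhancement: offer a native share option
}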

If you treat all browser features as though they’re behind an origin trial, then suddenly the landscape of browser support becomes more navigable. Instead of looking at the support table for something on caniuse.com and thinking, “I wish more browsers supported this feature so that I could use it!”, you can instead think “I’m going to use this feature today, but treat it as an experimental feature.”

You can also do it for well-established features like querySelector, addEventListener, and geolocation. Instead of assuming that browser support is universal, it doesn’t hurt to take a more defensive approach. Assume nothing. Acknowledge and embrace unpredictability.

The debacle with vendor prefixes shows what happens if we treat experimental features as though they’re stable. So let’s flip that around. Let’s treat stable features as though they’re experimental. If you cultivate that mindset, your websites will be more robust and resilient.

The reason for a share button type

If you’re at all interested in what I wrote about a declarative Web Share API—and its sequel, a polyfill for button type=“share”—then you might be interested in an explainer document I’ve put together.

It’s a useful exercise for me to enumerate the reasoning for button type=“share” in one place. If you have any feedback, feel free to fork it or create an issue.

The document is based on my initial blog posts and the discussion that followed in this issue on the repo for the Web Share API. In that thread I got some pushback from Marcos. There are three points he makes. I think that two of them lack merit, but the third one is actually spot on.

Here’s the first bit of pushback:

Apart from placing a button in the content, I’m not sure what the proposal offers over what (at least one) browser already provides? For instance, Safari UI already provides a share button by default on every page

But that is addressed in the explainer document for the Web Share API itself:

The browser UI may not always be available, e.g., when a web app has been installed as a standalone/fullscreen app.

That’s exactly what I wanted to address. Browser UI is not always available and as progressive web apps become more popular, authors will need to provide a way for users to share the current URL—something that previously was handled by browsers.

That use-case of sharing the current page leads nicely into the second bit of pushback:

The API is specialized… using it to share the same page is kinda pointless.

But again, the explainer document for the Web Share API directly contradicts this:

Sharing the page’s own URL (a very common case)…

Rather than being a difference of opinion, this is something that could be resolved with data. I’d really like to find out how people are currently using the Web Share API. How much of the current usage falls into the category of “share the current page”? I don’t know the best way to gather this data though. If you have any ideas, let me know. I’ve started an issue where you can share how you’re using the Web Share API. Or if you’re not using the Web Share API, but you know someone who is, please let them know.

Okay, so those first two bits of pushback directly contradict what’s in the explainer document for the Web Share API. The third bit of pushback is more philosophical and, I think, more interesting.

The Web Share API explainer document does a good job of explaining why a declarative solution is desirable:

The link can be placed declaratively on the page, with no need for a JavaScript click event handler.

That’s also my justification for having a declarative alternative: it would be easier for more people to use. I said:

At a fundamental level, declarative technologies have a lower barrier to entry than imperative technologies.

But Marcos wrote:

That’s demonstrably false and a common misconception: See OWL, XForms, SVG, or any XML+namespace spec. Even HTML is poorly understood, but it just happens to have extremely robust error recovery (giving the illusion of it being easy). However, that’s not a function of it being “declarative”.

He’s absolutely right.

It’s not so much that I want a declarative option—I want an option that has robust error recovery. After all, XML is a declarative language but its error handling is as strict as an imperative language like JavaScript: make one syntactical error and nothing works. XML has a brittle error-handling model by design. HTML and CSS have extremely robust error recovery by design. It’s that error-handling model that gives HTML and CSS their robustness.

I’ve been using the word “declarative” when I actually meant “robust in handling errors”.

I guess that when I’ve been talking about “a declarative solution”, I’ve been thinking in terms of the three languages parsed by browsers: HTML, CSS, and JavaScript. Two of those languages are declarative, and those two also happen to have much more forgiving error-handling than the third language. That’s the important part—the error handling—not the fact that they’re declarative.

I’ve been using “declarative” as a shorthand for “either HTML or CSS”, but really I should try to be more precise in my language. The word “declarative” covers a wide range of possible languages, and not all of them lower the barrier to entry. A declarative language with a brittle error-handling model is as daunting as an imperative language.

I should try to use a more descriptive word than “declarative” when I’m describing HTML or CSS. Resilient? Robust?

With that in mind, button type="share" is worth pursuing. Yes, it’s a declarative option for using the Web Share API, but more important, it’s a robust option for using the Web Share API.

I invite you to read the explainer document for a share button type and I welcome your feedback …especially if you’re currently using the Web Share API!

Web browsers on iOS

Safari is the only browser on iOS devices.

I don’t mean it’s the only browser that ships with iOS devices. I mean it’s the only browser that can be installed on iOS devices.

You can install something called Chrome. You can install something called Firefox. Those aren’t different web browsers. Under the hood they’re using Safari’s rendering engine. They have to. The app store doesn’t allow other browsers to be listed. The apps called Chrome and Firefox are little more than skinned versions of Safari.

If you’re a web developer, there are two possible reactions to hearing this. One is “Duh! Everyone knows that!”. The other is “What‽ I never knew that!”

If you fall into the first category, I’m guessing you’ve been a web developer for a while. The fact that Safari is the only browser on iOS devices is something you’ve known for years, and something you assume everyone else knows. It’s common knowledge, right?

But if you’re relatively new to web development—heck, if you’ve been doing web development for half a decade—you might fall into the second category. After all, why would anyone tell you that Safari is the only browser on iOS? It’s common knowledge, right?

So that’s the situation. Safari is the only browser that can run on iOS. The obvious follow-on question is: why?

Apple at this point will respond with something about safety and security, which are certainly important priorities. So let me rephrase the question: why on iOS?

Why can I install Chrome or Firefox or Edge on my Macbook running macOS? If there are safety or security reasons for preventing me from installing those browsers on my iOS device, why don’t those same concerns apply to my macOS device?

At one time, the mobile operating system—iOS—was quite different to the desktop operating system—OS X. Over time the gap has narrowed. At this point, the operating systems are converging. That makes sense. An iPhone, an iPad, and a Macbook aren’t all that different apart from the form factor. It makes sense that computing devices from the same company would share an underlying operating system.

As this convergence continues, the browser question is going to have to be decided in one direction or the other. As it is, Apple’s laptops and desktops strongly encourage you to install software from their app store, though it is still possible to install software by other means. Perhaps they’ll decide that their laptops and desktops should only be able to install software from their app store—a decision they could justify with safety and security concerns.

Imagine that situation. You buy a computer. It comes with one web browser pre-installed. You can’t install a different web browser on your computer.

You wouldn’t stand for it! I mean, Microsoft got fined for anti-competitive behaviour when they pre-bundled their web browser with Windows back in the 90s. You could still install other browsers, but just the act of pre-bundling was seen as an abuse of power. Imagine if Windows never allowed you to install Netscape Navigator?

And yet that’s exactly the situation in 2020.

You buy a computing device from Apple. It might be a Macbook. It might be an iPad. It might be an iPhone. But you can only install your choice of web browser on one of those devices. For now.

It is contradictory. It is hypocritical. It is indefensible.

A polyfill for button type="share"

After writing about a declarative Web Share API here yesterday I thought I’d better share the idea (see what I did there?).

I opened an issue on the Github repo for the spec.

(I hope that’s the right place for this proposal. I know that in the past ideas were kicked around on the Discourse site for the Web Platform Incubator Community Group but I can’t stand Discourse. It literally requires JavaScript to render anything to the screen even though the entire content is text. If it turns out that that is the place I should’ve posted, I guess I’ll hold my nose and do it using the most over-engineered reinvention of the browser I’ve ever seen. But I believe that the plan is for WICG to migrate proposals to Github anyway.)

I also realised that, as the JavaScript Web Share API already exists, I can use it to polyfill my suggestion for:

<button type="share">

The polyfill also demonstrates how feature detection could work. Here’s the code.

This polyfill takes an Inception approach to feature detection. There are three nested levels:

  1. This browser supports button type="share". Great! Don’t do anything. Otherwise proceed to level two.
  2. This browser supports the JavaScript Web Share API. Use that API to share the current page URL and title. Otherwise proceed to level three.
  3. Use a mailto: link to prefill an email with the page title as the subject and the URL in the body. Ya basic!

The idea is that, as long as you include the 20 lines of polyfill code, you could start using button type="share" in your pages today.
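If you’re curious, here’s a sketch of that three-level approach (not necessarily the exact code linked above, but the same idea):

document.addEventListener('click', function (event) {
  var button = event.target.closest('button[type="share"]');
  if (!button) return;
  // Level one: a browser with native support reports type === "share"
  // and handles the click itself, so the polyfill steps aside.
  if (button.type === 'share') return;
  event.preventDefault();
  var title = document.title;
  var url = location.href;
  if (navigator.share) {
    // Level two: hand the current page over to the Web Share API.
    navigator.share({ title: title, url: url });
  } else {
    // Level three: prefill an email with the title and URL. Ya basic!
    location.href = 'mailto:?subject=' + encodeURIComponent(title) +
      '&body=' + encodeURIComponent(url);
  }
});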

I’ve made a test page on Codepen. I’m just using plain text in the button but you could use a nice image or SVG or a combination. You can use the Codepen test page to observe two of the three possible behaviours browsers could exhibit:

  1. A browser supports button type="share". Currently that’s none because I literally made this shit up yesterday.
  2. A browser supports the JavaScript Web Share API. This is Safari on Mac, Edge on Windows, Safari on iOS, and Chrome, Samsung Internet, and Firefox on Android.
  3. A browser supports neither button type="share" nor the existing JavaScript Web Share API. This is Firefox and Chrome on desktop (and Edge if you’re on a Mac).

See the Pen Polyfill for button type="share" by Jeremy Keith (@adactio) on CodePen.

The polyfill doesn’t support Internet Explorer 11 or lower because it uses the DOM closest() method. Feel free to fork and rewrite if you need to support old IE.

A declarative Web Share API

I’ve written about the Web Share API before. It’s a handy little bit of JavaScript that—if supported—brings up a system-level way of sharing a page. Seeing as it probably won’t be long before we won’t be able to see full URLs in browsers anymore, it’s going to fall on us as site owners to provide this kind of fundamental access.
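For reference, the imperative usage looks something like this (with a placeholder selector for the button):

var shareButton = document.querySelector('.share-button'); // placeholder selector
if (shareButton && navigator.share) {
  shareButton.addEventListener('click', function () {
    navigator.share({
      title: document.title,
      url: location.href
    });
  });
}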

Right now the Web Share API exists entirely in JavaScript. There are quite a few browser APIs like that, and it always feels like a bit of a shame to me. Ideally there should be a JavaScript API and a declarative option, even if the declarative option isn’t as powerful.

Take form validation. To cover the most common use cases, you probably only need to use declarative markup like input type="email" or the required attribute. But if your use case gets a bit more complicated, you can reach for the Constraint Validation API in JavaScript.
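For example, the declarative part is just markup (the field is made up for illustration):

<label for="email">Email</label>
<input type="email" id="email" name="email" required>

…and if that’s not enough, a little Constraint Validation API in JavaScript (a made-up rule, just to show the two layers working together):

var email = document.getElementById('email');
email.addEventListener('input', function () {
  // add a custom rule on top of the declarative ones
  if (email.value.endsWith('@example.com')) {
    email.setCustomValidity('That looks like a placeholder address.');
  } else {
    email.setCustomValidity('');
  }
});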

I like that pattern. I wish it were an option for JavaScript-only APIs. Take the Geolocation API, for example. Right now it’s only available through JavaScript. But what if there were an input type="geolocation" ? It wouldn’t cover all use cases, but it wouldn’t have to. For the simple case of getting someone’s location (like getting someone’s email or telephone number), it would do. For anything more complex than that, that’s what the JavaScript API is for.

I was reminded of this recently when Ada Rose Cannon tweeted:

It really feels like there should be a semantic version of the share API, like a mailto: link

I think she’s absolutely right. The Web Share API has one primary use case: let the user share the current page. If that use case could be met in a declarative way, then it would have a lower barrier to entry. And for anyone who needs to do something more complicated, that’s what the JavaScript API is for.

But Pete LePage asked:

How would you feature detect for it?

Good question. With the JavaScript API you can do feature detection—if the API isn’t supported you can either bail or provide your own implementation.

There are a few different ways of extending HTML that allow you to provide a fallback for non-supporting browsers.

You could mint a new element with a content model that says “Browsers, if you do support this element, ignore everything inside the opening and closing tags.” That’s the way that the canvas element works. It’s the same for audio and video—they ignore everything inside them that isn’t a source element. So developers can provide a fallback within the opening and closing tags.

But forging a new element would be overkill for something like the Web Share API (or Geolocation). There are more subtle ways of extending HTML that I’ve already alluded to.

Making a new element is a lot of work. Making a new attribute could also be a lot of work. But making a new attribute value might hit the sweet spot. That’s why I suggested something like input type="geolocation" for the declarative version of the Geolocation API. There’s prior art here; this is how we got input types for email, url, tel, color, date, etc. The key piece of behaviour is that non-supporting browsers will treat any value they don’t understand as “text”.
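That fallback behaviour is what makes the classic feature-detection trick for input types work:

var testInput = document.createElement("input");
testInput.setAttribute("type", "date");
if (testInput.type != "date") {
  // the browser fell back to "text", so provide your own date picker here
}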

I don’t think there should be input type="share". The action of sharing isn’t an input. But I do think we could find an existing HTML element with an attribute that currently accepts a list of possible values. Adding one more value to that list feels like an inexpensive move.

Here’s what I suggested:

<button type="share" value="title,text">

For non-supporting browsers, it’s a regular button and needs polyfilling, no different to the situation with the JavaScript API. But if supported, no JS needed?

The type attribute of the button element currently accepts three possible values: “submit”, “reset”, or “button”. If you give it any other value, it will behave as though you gave it a type of “submit” or “button” (depending on whether it’s in a form or not) …just like an unknown type value on an input element will behave like “text”.

If a browser supports button type="share", then when the user clicks on it, the browser can go “Right, I’m handing over to the operating system now.”

There’s still the question of how to pass data to the operating system on what gets shared. Currently the JavaScript API allows you to share any combination of URL, text, and title.

Initially I was thinking that the value attribute could be used to store this data in some kind of key/value pairing, but the more I think about it, the more I think that this aspect should remain the exclusive domain of the JavaScript API. The declarative version could grab the current URL and the value of the page’s title element and pass those along to the operating system. If you need anything more complex than that, use the JavaScript API.

So what I’m proposing is:

<button type="share">

That’s it.

But how would you test for browser support? The same way as you can currently test for supported input types. Make use of the fact that an element’s attribute value and an element’s property value (which 99% of the time are the same) will be different if the attribute value isn’t supported:

var testButton = document.createElement("button");
testButton.setAttribute("type","share");
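// in a non-supporting browser, the type property falls back to its default ("submit")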
if (testButton.type != "share") {
// polyfill
}

So that’s my modest proposal. Extend the list of possible values for the type attribute on the button element to include “share” (or something like that). In supporting browsers, it triggers a very bare-bones handover to the OS (the current URL and the current page title). In non-supporting browsers, it behaves like a button currently behaves.

BetrayURL

Back in February, I wrote about an excellent proposal by Jake for how browsers could display URLs in a safer way. Crucially, this involved highlighting the important part of the URL, but didn’t involve hiding any part. It’s a really elegant solution.

Turns out it was a Trojan horse. Chrome are now running an experiment where they will do the exact opposite: they will hide parts of the URL instead of highlighting the important part.

You can change this behaviour if you’re in the less than 1% of people who ever change default settings in browsers.

I’m really disappointed to see that Jake’s proposal isn’t going to be implemented. It was a much, much better solution.

No doubt I will hear rejoinders that the “solution” that Chrome is experimenting with is pretty similar to what Jake proposed. Nothing could be further from the truth. Jake’s solution empowered users with knowledge without taking anything away. What Chrome will be doing is the opposite of that, infantilising users and making decisions for them “for their own good.”

Seeing a complete URL is going to become a power-user feature, like View Source or user style sheets.

I’m really sad about that because, as Jake’s proposal demonstrates, it doesn’t have to be that way.

Influence

Hidde gave a great talk recently called On the origin of cascades (by means of natural selectors):

It’s been 25 years since the first people proposed a language to style the web. Since the late nineties, CSS lived through years of platform evolution.

It’s a lovely history lesson that reminded me of that great post by Zach Bloom a while back called The Languages Which Almost Became CSS.

The TL;DR timeline of CSS goes something like this:

Håkon and Bert joined forces and that’s what led to the Cascading Style Sheet language we use today.

Hidde looks at how the concept of the cascade evolved from those early days. But there’s another idea in Håkon’s proposal that fascinates me:

While the author (or publisher) often wants to give the documents a distinct look and feel, the user will set preferences to make all documents appear more similar. Designing a style sheet notation that fill both groups’ needs is a challenge.

The proposed solution is referred to as “influence”.

The user supplies the initial sheet which may request total control of the presentation, but — more likely — hands most of the influence over to the style sheets referenced in the incoming document.

So an author could try demanding that their lovely styles are to be implemented without question by specifying an influence of 100%. The proposed syntax looked like this:

h1.font.size = 24pt 100%

More reasonably, the author could specify, say, 40% influence:

h2.font.size = 20pt 40%

Here, the requested influence is reduced to 40%. If a style sheet later in the cascade also requests influence over h2.font.size, up to 60% can be granted. When the document is rendered, a weighted average of the two requests is calculated, and the final font size is determined.
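To put some made-up numbers on that: say one style sheet requests 20pt with 40% influence and another sheet later in the cascade requests 16pt with the remaining 60%. The weighted average works out as 0.4 × 20pt + 0.6 × 16pt = 17.6pt, so that’s the size that gets rendered. Neither side gets exactly what it asked for, but both have a say.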

Okay, that sounds pretty convoluted but then again, so is specificity.

This idea of influence in CSS reminds me of Cap’s post about The Sliding Scale of Giving a Fuck:

Hold on a second. I’m like a two-out-of-ten on this. How strongly do you feel?

I’m probably a six-out-of-ten, I replied after a couple moments of consideration.

Cool, then let’s do it your way.

In the end, the concept of influence in CSS died out, but user style sheets survived …for a while. Now they too are as dead as a dodo. Most people today aren’t aware that browsers used to provide a mechanism for applying your own visual preferences for browsing the web (kind of like Neopets or MySpace but for literally every single web page …just think of how empowering that was!).

Even if you don’t mourn the death of user style sheets—you can dismiss them as a power-user feature—I think it’s such a shame that the concept of shared influence has fallen by the wayside. Web design today is dictatorial. Designers and developers issue their ultimata in the form of CSS, even though technically every line of CSS you write is a suggestion to a web browser—not a demand.

I wish that web design were more of a two-way street, more of a conversation between designer and end user.

There are occasional glimpses of this mindset. Like I said when I added a dark mode to my website:

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.

Accessibility

There’s a new project from Igalia called Open Prioritization:

An experiment in crowd-funding prioritization of new feature implementations for web browsers.

There is some precedent for this. There was a crowd-funding campaign for Yoav Weiss to implement responsive images in Blink a while back. The difference with the Open Prioritization initiative is that it’s also a kind of marketplace for deciding which web standards will get the funding.

Examples include implementing the CSS lab() colour function in Firefox or implementing the :not() pseudo-class in Chrome. There are also some accessibility features like the :focus-visible pseudo-class and the inert HTML attribute.

I must admit, it makes me queasy to see accessibility features go head to head with other web standards. I don’t think a marketplace is the right arena for prioritising accessibility.

I get a similar feeling of discomfort when a presentation or article on accessibility spends a fair bit of time describing the money that can be made by ensuring your website is accessible. I mean, I get it: you’re literally leaving money on the table if you turn people away. But that’s not the reason to ensure your website is accessible. The reason to ensure that your website is accessible is that it’s the right thing to do.

I know that people are uncomfortable with moral arguments, but in this case, I believe it’s important that we keep sight of that.

I understand how it’s useful to have the stats and numbers to hand should you need to convince a sociopath in your organisation, but when numbers are used as the justification, you’re playing the numbers game from then on. You’ll probably have to field questions like “Well, how many screen reader users are visiting our site anyway?” (To which the correct answer is “I don’t know and I don’t care”—even if the number is 1, the website should still be accessible because it’s the right thing to do.)

It reminds me of when I was having a discussion with a god-bothering friend of mine about the existence or not of a deity. They made the mistake of trying to argue the case for God based on logic and reason. Those arguments didn’t hold up. But had they made their case based on the real reason for their belief—which is faith—then their position would have been unassailable. I literally couldn’t argue against faith. But instead, by engaging in the rules of logic and reason, they were applying the wrong justification to their stance.

Okay, that’s a bit abstract. How about this…

In a similar vein to talks or articles about accessibility, talks or articles about diversity often begin by pointing out the monetary gain to be had. It’s true. The data shows that companies that are more diverse are also more profitable. But again, that’s not the reason for having a diverse group of people in your company. The reason for having a diverse group of people in your company is that it’s the right thing to do. If you tie the justification for diversity to data, then what happens should the data change? If a new study showed that diverse companies were less profitable, is that a reason to abandon diversity? Absolutely not! If your justification isn’t tied to numbers, then it hardly matters what the numbers say (though it does admittedly feel good to have your stance backed up).

By the way, this is also why I don’t think it’s a good idea to “sell” design systems on the basis of efficiency and cost-savings if the real reason you’re building one is to foster better collaboration and creativity. The fundamental purpose of a design system needs to be shared, not swapped out based on who’s doing the talking.

Anyway, back to accessibility…

A marketplace, to me, feels like exactly the wrong kind of place for accessibility to defend its existence. By its nature, accessibility isn’t a mainstream issue. I mean, think about it: it’s good that accessibility issues affect a minority of people. The fewer, the better. But even if the number of people affected by accessibility were to trend downwards and dwindle, the importance of accessibility should remain unchanged. Accessibility is important regardless of the numbers.

Look, if I make a website for a client, I don’t offer accessibility as a line item with a price tag attached. I build in accessibility by default because it’s the right thing to do. The only way to ensure that accessibility doesn’t get negotiated away is to make sure it’s not up for negotiation.

So that’s why I feel uncomfortable seeing accessibility features in a popularity contest.

I think that markets are great. I think competition is great. But I don’t think it works for everything (like, could you imagine applying marketplace economics to healthcare or prisons? Nightmare!). I concur with Iain M. Banks:

The market is a good example of evolution in action; the try-everything-and-see-what-works approach. This might provide a perfectly morally satisfactory resource-management system so long as there was absolutely no question of any sentient creature ever being treated purely as one of those resources.

If Igalia or Mozilla or Google or Apple implement an accessibility feature because they believe that accessibility is important and deserves prioritisation, that’s good. If they implement the same feature just because it received a lot of votes …that doesn’t strike me as a good thing.

I guess it doesn’t matter what the reason is as long as the end result is the same, right? But I suspect that what we’ll see is that the accessibility features up for bidding on Open Prioritization won’t be the winners.

Implementors

The latest newsletter from The History Of The Web is a good one: The Browser Engine That Could. It’s all about the history of browsers and more specifically, rendering engines.

Jay quotes from a 1992 email by Tim Berners-Lee when there was real concern about having too many different browsers. But as history played out, the concern shifted to having too few different browsers.

I wrote about this—back when Edge switched to using Chromium—in a post called Unity where I compared it to political parties:

If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!

I talked about this some more with Brian and Stuart on the Igalia Chats podcast: Web Ecosystem Health (here’s the mp3 file).

In the discussion we dive deeper into the nuances of browser engine diversity; how it’s not the numbers that matter, but representation. The danger with one dominant rendering engine is that it would reflect one dominant set of priorities.

I think we’re starting to see this kind of battle between different sets of priorities playing out in the browser rendering engine landscape.

WebKit published a list of APIs they won’t be implementing in their current form because of security concerns around fingerprinting. Mozilla is taking the same stand. Google is much more gung-ho about implementing those APIs.

I think it’s safe to say that every implementor wants to ship powerful APIs and ensure security and privacy. The issue is with which gets priority. Using the language of principles and priorities, you could crudely encapsulate Apple and Mozilla’s position as:

Privacy, even over capability.

That design principle would pass the reversibility test. In fact, Google’s position might be represented as:

Capability, even over privacy.

I’m not saying Apple and Mozilla don’t value powerful APIs. I’m not saying Google doesn’t value privacy. I’m saying that Google’s priorities are different to Apple’s and Mozilla’s.

Alas, Alex is saying that Apple and Mozilla don’t value capability:

There is a contingent of browser vendors today who do not wish to expand the web platform to cover adjacent use-cases or meaningfully close the relevance gap that the shift to mobile has opened.

That’s very disappointing. It’s a cheap shot. As cheap as saying that, given Google’s business model, Chrome wouldn’t want to expand the web platform to provide better privacy and security.