Journal tags: web

A polyfill for button type="share"

After writing about a declarative Web Share API here yesterday I thought I’d better share the idea (see what I did there?).

I opened an issue on the Github repo for the spec.

(I hope that’s the right place for this proposal. I know that in the past ideas were kicked around on the Discourse site for the Web Platform Incubator Community Group but I can’t stand Discourse. It literally requires JavaScript to render anything to the screen even though the entire content is text. If it turns out that that is the place I should’ve posted, I guess I’ll hold my nose and do it using the most over-engineered reinvention of the browser I’ve ever seen. But I believe that the plan is for WICG to migrate proposals to Github anyway.)

I also realised that, as the JavaScript Web Share API already exists, I can use it to polyfill my suggestion for:

<button type="share">

The polyfill also demonstrates how feature detection could work. Here’s the code.

This polyfill takes an Inception approach to feature detection. There are three nested levels:

  1. This browser supports button type="share". Great! Don’t do anything. Otherwise proceed to level two.
  2. This browser supports the JavaScript Web Share API. Use that API to share the current page URL and title. Otherwise proceed to level three.
  3. Use a mailto: link to prefill an email with the page title as the subject and the URL in the body. Ya basic!

The idea is that, as long as you include the 20 lines of polyfill code, you could start using button type="share" in your pages today.
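
If you’d rather not click through, here’s a rough sketch of the approach (an illustration of the idea rather than the exact code):

document.addEventListener('click', function (event) {
  var button = event.target.closest('button[type="share"]');
  if (!button) return;
  // Level one: a browser with native support will reflect the
  // attribute in the property, so there's nothing to do.
  if (button.type === 'share') return;
  event.preventDefault();
  var title = document.title;
  var url = window.location.href;
  if (navigator.share) {
    // Level two: the JavaScript Web Share API.
    navigator.share({title: title, url: url});
  } else {
    // Level three: prefill an email. Ya basic!
    window.location.href = 'mailto:?subject=' +
      encodeURIComponent(title) + '&body=' + encodeURIComponent(url);
  }
});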

I’ve made a test page on Codepen. I’m just using plain text in the button but you could use a nice image or SVG or combination. You can use the Codepen test page to observe two of the three possible behaviours browsers could exhibit:

  1. A browser supports button type="share". Currently that’s none because I literally made this shit up yesterday.
  2. A browser supports the JavaScript Web Share API. This is Safari on Mac, Edge on Windows, Safari on iOS, and Chrome, Samsung Internet, and Firefox on Android.
  3. A browser supports neither button type="share" nor the existing JavaScript Web Share API. This is Firefox and Chrome on desktop (and Edge if you’re on a Mac).

See the Pen Polyfill for button type="share" by Jeremy Keith (@adactio) on CodePen.

The polyfill doesn’t support Internet Explorer 11 or lower because it uses the DOM closest() method. Feel free to fork and rewrite if you need to support old IE.

A declarative Web Share API

I’ve written about the Web Share API before. It’s a handy little bit of JavaScript that—if supported—brings up a system-level way of sharing a page. Seeing as it probably won’t be long before we won’t be able to see full URLs in browsers anymore, it’s going to fall on us as site owners to provide this kind of fundamental access.

Right now the Web Share API exists entirely in JavaScript. There are quite a few browser APIs like that, and it always feels like a bit of a shame to me. Ideally there should be a JavaScript API and a declarative option, even if the declarative option isn’t as powerful.

Take form validation. To cover the most common use cases, you probably only need to use declarative markup like input type="email" or the required attribute. But if your use case gets a bit more complicated, you can reach for the Constraint Validation API in JavaScript.
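
As a hypothetical sketch of that hand-off (the domain rule here is made up): the markup handles the common checks, while the Constraint Validation API covers a rule the attributes can’t express.

var input = document.querySelector('input[type="email"][required]');
input.addEventListener('input', function () {
  if (input.value.endsWith('@example.com')) {
    // A rule that type="email" and required can't express on their own:
    input.setCustomValidity('Please use your work address.');
  } else {
    input.setCustomValidity(''); // an empty string means the value is acceptable
  }
});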

I like that pattern. I wish it were an option for JavaScript-only APIs. Take the Geolocation API, for example. Right now it’s only available through JavaScript. But what if there were an input type="geolocation"? It wouldn’t cover all use cases, but it wouldn’t have to. For the simple case of getting someone’s location (like getting someone’s email or telephone number), it would do. For anything more complex than that, that’s what the JavaScript API is for.

I was reminded of this recently when Ada Rose Cannon tweeted:

It really feels like there should be a semantic version of the share API, like a mailto: link

I think she’s absolutely right. The Web Share API has one primary use case: let the user share the current page. If that use case could be met in a declarative way, then it would have a lower barrier to entry. And for anyone who needs to do something more complicated, that’s what the JavaScript API is for.

But Pete LePage asked:

How would you feature detect for it?

Good question. With the JavaScript API you can do feature detection—if the API isn’t supported you can either bail or provide your own implementation.

There are a few different ways of extending HTML that allow you to provide a fallback for non-supporting browsers.

You could mint a new element with a content model that says “Browsers, if you do support this element, ignore everything inside the opening and closing tags.” That’s the way that the canvas element works. It’s the same for audio and video—they ignore everything inside them that isn’t a source element. So developers can provide a fallback within the opening and closing tags.

But forging a new element would be overkill for something like the Web Share API (or Geolocation). There are more subtle ways of extending HTML that I’ve already alluded to.

Making a new element is a lot of work. Making a new attribute could also be a lot of work. But making a new attribute value might hit the sweet spot. That’s why I suggested something like input type="geolocation" for the declarative version of the Geolocation API. There’s prior art here; this is how we got input types for email, url, tel, color, date, etc. The key piece of behaviour is that non-supporting browsers will treat any value they don’t understand as “text”.

I don’t think there should be input type="share". The action of sharing isn’t an input. But I do think we could find an existing HTML element with an attribute that currently accepts a list of possible values. Adding one more value to that list feels like an inexpensive move.

Here’s what I suggested:

<button type="share" value="title,text">

For non-supporting browsers, it’s a regular button and needs polyfilling, no different to the situation with the JavaScript API. But if supported, no JS needed?

The type attribute of the button element currently accepts three possible values: “submit”, “reset”, or “button”. If you give it any other value, it will behave as though you gave it a type of “submit” or “button” (depending on whether it’s in a form or not) …just like an unknown type value on an input element will behave like “text”.

If a browser supports button type="share", then when the user clicks on it, the browser can go “Right, I’m handing over to the operating system now.”

There’s still the question of how to pass data to the operating system about what gets shared. Currently the JavaScript API allows you to share any combination of URL, text, and title.

Initially I was thinking that the value attribute could be used to store this data in some kind of key/value pairing, but the more I think about it, the more I think that this aspect should remain the exclusive domain of the JavaScript API. The declarative version could grab the current URL and the value of the page’s title element and pass those along to the operating system. If you need anything more complex than that, use the JavaScript API.

So what I’m proposing is:

<button type="share">

That’s it.

But how would you test for browser support? The same way as you can currently test for supported input types. Make use of the fact that an element’s attribute value and an element’s property value (which 99% of the time are the same) will be different if the attribute value isn’t supported:

var testButton = document.createElement("button");
testButton.setAttribute("type","share");
if (testButton.type != "share") {
  // polyfill
}

So that’s my modest proposal. Extend the list of possible values for the type attribute on the button element to include “share” (or something like that). In supporting browsers, it triggers a very bare-bones handover to the OS (the current URL and the current page title). In non-supporting browsers, it behaves like a button currently behaves.

Web on the beach

It was very hot here in England last week. By late afternoon, the stuffiness indoors was too much to take.

If you can’t stand the heat, get out of the kitchen. That’s exactly what Jessica and I did. The time had come for us to avail of someone else’s kitchen. For the first time in many months, we ventured out for an evening meal. We could take advantage of the government discount scheme with the very unfortunate slogan, “eat out to help out.” (I can’t believe that no one in that meeting said something.)

Just to be clear, we wanted to dine outdoors. The numbers are looking good in Brighton right now, but we’re both still very cautious about venturing into indoor spaces, given everything we know now about COVID-19 transmission.

Fortunately for us, there’s a new spot on the seafront called Shelter Hall Raw. It’s a collective of multiple local food outlets and it has ample outdoor seating.

We found a nice table for two outside. Then we didn’t flag down a waiter.

Instead, we followed the instructions on the table. I say instructions, but it was a bit simpler than that. It was a URL: shelterhall.co.uk (there was also a QR code next to the URL that I could’ve just pointed my camera at, but I’ve developed such a case of QR code blindness that I blanked that out initially).

Just to be clear, under the current circumstances, this is the only way to place an order at this establishment. The only (brief) interaction you’ll have with another person is when someone brings your order.

It worked a treat.

We had frosty beverages chosen from the excellent selection of local beers. We also had fried chicken sandwiches from Lost Boys chicken, purveyors of the best wings in town.

The whole experience was a testament to what the web can do. You browse the website. You make your choice on the website. You pay on the website (you can create an account but you don’t have to).

Thinking about it, I can see why they chose the web over a native app. Online ordering is the only way to place your order at this place. Telling people “You have to go to this website” …that seems reasonable. But telling people “You have to download this app” …that’s too much friction.

It hasn’t been a great week for the web. Layoffs at Mozilla. Google taking aim at URLs. It felt good to experience an instance of the web really shining.

And it felt really good to have that cold beer.

Checked in at Shelter Hall Raw. Having a beer on the beach — with Jessica

Influence

Hidde gave a great talk recently called On the origin of cascades (by means of natural selectors):

It’s been 25 years since the first people proposed a language to style the web. Since the late nineties, CSS lived through years of platform evolution.

It’s a lovely history lesson that reminded me of that great post by Zach Bloom a while back called The Languages Which Almost Became CSS.

The TL;DR timeline of CSS goes something like this:

A number of competing styling languages were proposed; eventually Håkon and Bert joined forces, and that’s what led to the Cascading Style Sheet language we use today.

Hidde looks at how the concept of the cascade evolved from those early days. But there’s another idea in Håkon’s proposal that fascinates me:

While the author (or publisher) often wants to give the documents a distinct look and feel, the user will set preferences to make all documents appear more similar. Designing a style sheet notation that fill both groups’ needs is a challenge.

The proposed solution is referred to as “influence”.

The user supplies the initial sheet which may request total control of the presentation, but — more likely — hands most of the influence over to the style sheets referenced in the incoming document.

So an author could try demanding that their lovely styles are to be implemented without question by specifying an influence of 100%. The proposed syntax looked like this:

h1.font.size = 24pt 100%

More reasonably, the author could specify, say, 40% influence:

h2.font.size = 20pt 40%

Here, the requested influence is reduced to 40%. If a style sheet later in the cascade also requests influence over h2.font.size, up to 60% can be granted. When the document is rendered, a weighted average of the two requests is calculated, and the final font size is determined.
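
To put made-up numbers on that: if the user’s style sheet asks for 16pt with the remaining 60% influence, the final size works out as (20pt × 0.4) + (16pt × 0.6) = 17.6pt.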

Okay, that sounds pretty convoluted but then again, so is specificity.

This idea of influence in CSS reminds me of Cap’s post about The Sliding Scale of Giving a Fuck:

Hold on a second. I’m like a two-out-of-ten on this. How strongly do you feel?

I’m probably a six-out-of-ten, I replied after a couple moments of consideration.

Cool, then let’s do it your way.

In the end, the concept of influence in CSS died out, but user style sheets survived …for a while. Now they too are as dead as a dodo. Most people today aren’t aware that browsers used to provide a mechanism for applying your own visual preferences for browsing the web (kind of like Neopets or MySpace but for literally every single web page …just think of how empowering that was!).

Even if you don’t mourn the death of user style sheets—you can dismiss them as a power-user feature—I think it’s such a shame that the concept of shared influence has fallen by the wayside. Web design today is dictatorial. Designers and developers issue their ultimata in the form of CSS, even though technically every line of CSS you write is a suggestion to a web browser—not a demand.

I wish that web design were more of a two-way street, more of a conversation between designer and end user.

There are occasional glimpses of this mindset. Like I said when I added a dark mode to my website:

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.

Feeds

A little while back, Marcus Herrmann wrote about making RSS more visible again with a /feeds page. Here’s his feeds page. Here’s Remy’s.

Seems like a good idea to me. I’ve made mine:

adactio.com/feeds

As well as linking to the usual RSS feeds (blog posts, links, notes), it’s also got an explanation of how you can subscribe to a customised RSS feed using tags.

Then, earlier today, I was chatting with Matt on Twitter and he asked:

btw do you share your blogroll anywhere?

So now I’ve added another URL:

adactio.com/feeds/subscriptions

That’s got a link to my OPML file, exported from my feed reader, and a list of the (current) RSS feeds that I’m subscribed to.

I like the idea of blogrolls making a comeback. And webrings.

Dark mode revisited

I added a dark mode to my website a while back. It was a fun thing to do during Indie Web Camp Amsterdam last year.

I tied the colour scheme to the operating system level. If you choose a dark mode in your OS, my website will adjust automatically thanks to the prefers-color-scheme: dark media query.

But I’ve seen notes from a few friends, not about my site specifically, but about how they like having an explicit toggle for dark mode (as well as the media query). Whenever I read those remarks, I’d think “I’m really not sure I’ve got time to deal with adding that kind of toggle to my site.”

But then I realised, “Jeremy, you absolute muffin! You’ve had a theme switcher on your website for almost two decades now!”

Doh! I had forgotten about that theme switcher. It dates back to the early days of CSS. I wanted my site to be a demonstration of how you could apply different styles to the same underlying markup (this was before the CSS Zen Garden came along). Those themes are very dated now, but if you like you can view my site with a Zeldman theme or a sci-fi theme.

To offer a dark-mode theme for my site, all I had to do was take the default stylesheet, pull out the custom properties from the prefers-color-scheme: dark media query, and done. It took less than five minutes.
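
In sketch form (with illustrative property names), the declarations simply move out of the media query in the default stylesheet and into the top level of the new theme’s stylesheet:

/* In the default stylesheet: */
@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #0e0e0e;
    --text-color: #fafafa;
  }
}

/* In the dark theme stylesheet: the same declarations, unwrapped. */
:root {
  --background-color: #0e0e0e;
  --text-color: #fafafa;
}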

So if you want to view my site in dark mode, it’s one of the options in the “Customise” dropdown on every page of the website.

A decade apart

Today marks ten years since the publication of HTML5 For Web Designers, the very first book from A Book Apart.

I’m so proud of that book, and so honoured that I was the first author published by the web’s finest purveyors of brief books. I mean, just look at the calibre of their output since my stumbling start!

Here’s what I wrote ten years ago.

Here’s what Jason wrote ten years ago.

Here’s what Mandy wrote ten years ago.

Here’s what Jeffrey wrote ten years ago.

They started something magnificent. Ten years on, with Katel at the helm, it’s going from strength to strength.

Happy birthday, little book! And happy birthday, A Book Apart! Here’s to another decade!

A Book Apart authors, 1-6

Reading

At the beginning of the year, Remy wrote about extracting Goodreads metadata so he could create his end-of-year reading list. More recently, Mark Llobrera wrote about how he created a visualisation of his reading history. In his case, he’s using JSON to store the information.

This kind of JSON storage is exactly what Tom Critchlow proposes in his post, Library JSON - A Proposal for a Decentralized Goodreads:

Thinking through building some kind of “web of books” I realized that we could use something similar to RSS to build a kind of decentralized GoodReads powered by indie sites and an underlying easy to parse format.

His proposal looks kind of similar to what Mark came up with. There’s a title, an author, an image, and some kind of date for when you started and/or finished reading the book.

Matt then points out that RSS gets close to the data format being suggested and asks how about using RSS?:

Rather than inventing a new format, my suggestion is that this is RSS plus an extension to deal with books. This is analogous to how the podcast feeds are specified: they are RSS plus custom tags.

Like Matt, I’m in favour of re-using existing wheels rather than inventing new ones, mostly to avoid a 927 situation.

But all of these proposals—whether JSON or RSS—involve the creation of a separate file, and yet the information is originally published in HTML. Along the lines of Matt’s idea, I could imagine extending the h-entry collection of class names to allow for books (or films, or other media). It already handles images (with u-photo). I think the missing fields are the date-related ones: when you start and finish reading. Those fields are present in a different microformat, h-event, in the form of dt-start and dt-end. Maybe they could be combined:


<article class="h-entry h-event h-review">
  <h1 class="p-name p-item">Book title</h1>
  <img class="u-photo" src="image.jpg" alt="Book cover.">
  <p class="p-summary h-card">Book author</p>
  <time class="dt-start" datetime="YYYY-MM-DD">Start date</time>
  <time class="dt-end" datetime="YYYY-MM-DD">End date</time>
  <div class="e-content">Remarks</div>
  <data class="p-rating" value="5">★★★★★</data>
  <time class="dt-published" datetime="YYYY-MM-DDThh:mm">Date of this post</time>
</article>

That markup is simultaneously a post (h-entry) and an event (h-event) and you can even throw in h-card for the book author (as well as h-review if you like to rate the books you read). It can be converted to RSS and also converted to .ics for calendars—those parsers are already out there. It’s ready for aggregation and it’s ready for visualisation.

I publish very minimal reading posts here on adactio.com. What little data is there isn’t very structured—I don’t even separate the book title from the author. But maybe I’ll have a little play around with turning these h-entries into combined h-entry/event posts.

Future Sync 2020

I was supposed to be in Plymouth yesterday, giving the opening talk at this year’s Future Sync conference. Obviously, that train journey never happened, but the conference did.

The organisers gave us speakers the option of pre-recording our talks, which I jumped on. It meant that I wouldn’t be reliant on a good internet connection at the crucial moment. It also meant that I was available to provide additional context—mostly in the form of a deluge of hyperlinks—in the chat window that accompanied the livestream.

The whole thing went very smoothly indeed. Here’s the video of my talk. It was The Layers Of The Web, which I’ve only given once before, at Beyond Tellerrand Berlin last November (in the Before Times).

As well as answering questions in the chat room, people were also asking questions in Sli.do. But rather than answering those questions there, I was supposed to respond in a social medium of my choosing. I chose my own website, with copies syndicated to Twitter.

Here are those questions and answers…

The first few questions were about last year’s CERN project, which opens the talk:

Based on what you now know from the CERN 2019 WorldWideWeb Rebuild project—what would you have done differently if you had been part of the original 1989 Team?

I responded:

Actually, I think the original WWW project got things mostly right. If anything, I’d correct what came later: cookies and JavaScript—those two technologies (which didn’t exist on the web originally) are the source of tracking & surveillance.

The one thing I wish had been done differently is I wish that JavaScript were a same-origin technology from day one:

https://adactio.com/journal/16099

Next question:

How excited were you when you initially got the call for such an amazing project?

My predictable response:

It was an unbelievable privilege! I was so excited the whole time—I still can hardly believe it really happened!

https://adactio.com/journal/14803

https://adactio.com/journal/14821

Later in the presentation, I talked about service workers and progressive web apps. I got a technical question about that:

Is there a limit to the amount of local storage a PWA can use?

I answered:

Great question! Yes, there are limits, but we’re generally talking megabytes here. It varies from browser to browser and depends on the available space on the device.

But files stored using the Cache API are less likely to be deleted than files stored in the browser cache.

More worrying is the announcement from Apple to only store files for a week of browser use:

https://adactio.com/journal/16619

Finally, there was a question about the over-arching theme of the talk…

Great talk, Jeremy. Do you encounter push-back when using the term “Progressive Enhancement”?

My response:

Yes! …And that’s why I never once used the phrase “progressive enhancement” in my talk. 🙂

There’s a lot of misunderstanding of the term. Rather than correct it, I now avoid it:

https://adactio.com/journal/9195

Instead of using the phrase “progressive enhancement”, I now talk about the benefits and effects of the technique: resilience, universality, etc.

Future Sync Distributed 2020

Web Share API test

Remember a while back I wrote about some odd behaviour with the Web Share API in Safari on iOS?

When the share() method is triggered, iOS provides multiple ways of sharing: Messages, Airdrop, email, and so on. But the simplest option is the one labelled “copy”, which copies to the clipboard.

Here’s the thing: if you’ve provided a text parameter to the share() method then that’s what’s going to get copied to the clipboard—not the URL.
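
For illustration, here’s the kind of call that triggers the problem (the text value is made up):

if (navigator.share) {
  // share() must be called in response to a user action, like a click.
  navigator.share({
    title: document.title,
    text: 'A short description of the page',
    url: window.location.href
  });
}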

That’s a shame. Personally, I think the url field should take precedence.

Tess filed a bug soon after, which was very gratifying to see.

Now Phil has put together a test case:

  1. Share URL, title, and text
  2. Share URL and title
  3. Share URL and text

Very handy! The results (using the “copy” to clipboard action) are somewhat like rock, paper, scissors:

  • URL beats title,
  • text beats URL,
  • nothing beats text.

So it’s more like rock, paper, high explosives.

Plumbing

On Monday, I linked to Tom’s latest video. It uses a clever trick whereby the title of the video is updated to match the number of views the video has had. But there’s a lot more to the video than that. Stick around and you’ll be treated to a meditation on the changing nature of APIs, from a shared open lake to a closed commercial drybed.

It reminds me of (other) Tom’s post from a couple of years ago called Pouring one out for the Boxmakers, wherein he talks about Twitter’s crackdown on fun bots:

Web 2.0 really, truly, is over. The public APIs, feeds to be consumed in a platform of your choice, services that had value beyond their own walls, mashups that merged content and services into new things… have all been replaced with heavyweight websites to ensure a consistent, single experience, no out-of-context content, and maximising the views of advertising. That’s it: back to single-serving websites for single-serving use cases.

A shame. A thing I had always loved about the internet was its juxtapositions, the way it supported so many use-cases all at once. At its heart, a fundamental one: it was a medium which you could both read and write to. From that flow others: it’s not only work and play that coexisted on it, but the real and the fictional; the useful and the useless; the human and the machine.

Both Toms echo the sentiment in Anil’s The Web We Lost, written back in 2012:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

I know, I know. We’re a bunch of old men shouting at The Cloud. But really, Anil is right:

This isn’t our web today. We’ve lost key features that we used to rely on, and worse, we’ve abandoned core values that used to be fundamental to the web world. To the credit of today’s social networks, they’ve brought in hundreds of millions of new participants to these networks, and they’ve certainly made a small number of people rich.

But they haven’t shown the web itself the respect and care it deserves, as a medium which has enabled them to succeed. And they’ve now narrowed the possibilities of the web for an entire generation of users who don’t realize how much more innovative and meaningful their experience could be.

In his video, Tom mentions Yahoo Pipes as an example of a service that has been shut down for commercial and ideological reasons. In many ways, it was the epitome of what Anil was talking about—a sort of meta-API that allowed you to connect different services together. Kinda like IFTTT but with a visual interface that made it as empowering as something like the Scratch programming language.

There are services today that provide some of that functionality, but they’re more developer-focused. Trys pointed me to Pipedream, which looks good but you need to know how to write Node.js code and import npm packages. I’m sure it’s great if you’re into serverless Jamstack lambda thingamybobs but I don’t think it’s going to unlock the potential for non-coders to create cool stuff.

On the more visual pipes-esque Scratchy side, Cassie pointed me to Cables:

Cables is a tool for creating beautiful interactive content.

It isn’t about making mashups, but it does look like something that non-coders could potentially use to make something that looks cool. It reminds me a bit of Bret Victor and his classic talk on Inventing On Principle—always worth revisiting!

Apple’s attack on service workers

Apple aren’t the best at developer relations. But, bad as their communications can be, I’m willing to cut them some slack. After all, they’re not used to talking with the developer community.

John Wilander wrote a blog post that starts with some excellent news: Full Third-Party Cookie Blocking and More. Safari is catching up to Firefox and disabling third-party cookies by default. Wonderful! I’ve had third-party cookies disabled for a few years now, and while something occasionally breaks, it’s honestly a pretty great experience all around. Denying companies the ability to track users across sites is A Good Thing.

In the same blog post, John said that client-side cookies will be capped to a seven-day lifespan, as previously announced. Just to be clear, this only applies to client-side cookies. If you’re setting a cookie on the server, using PHP or some other server-side language, it won’t be affected. So persistent logins are still doable.
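
To illustrate the distinction (with made-up cookie names and values):

// Set client-side, so its lifespan gets capped at seven days:
document.cookie = 'theme=dark; max-age=31536000';

// Set in the HTTP response from the server, so it's unaffected:
// Set-Cookie: sessionid=abc123; Max-Age=31536000; HttpOnly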

Then, in an audacious example of burying the lede, towards the end of the blog post, John announces that a whole bunch of other client-side storage technologies will also be capped to seven days. Most of the technologies are APIs that, like cookies, can be used to store data: Indexed DB, Local Storage, and Session Storage (though there’s no mention of the Cache API). At the bottom of the list is this:

Service Worker registrations

Okay, let’s clear up a few things here (because they have been so poorly communicated in the blog post)…

The seven day timer refers to seven days of Safari usage, not seven calendar days (although, given how often most people use their phones, the two are probably interchangeable). So if someone returns to your site within a seven day period of using Safari, the timer resets to zero, and your service worker gets a stay of execution. Lucky you.

This only applies to Safari. So if your site has been added to the home screen and your web app manifest has a value for the “display” property like “standalone” or “full screen”, the seven day timer doesn’t apply.

That piece of information was missing from the initial blog post. Since the blog post was updated to include this clarification, some people have taken this to mean that progressive web apps aren’t affected by the upcoming change. Not true. Only progressive web apps that have been added to the home screen (and that have an appropriate “display” value) will be spared. That’s a vanishingly small percentage of progressive web apps, especially on iOS. To add a site to the home screen on iOS, you need to dig and scroll through the share menu to find the right option. And you need to do this unprompted. There is no ambient badging in Safari to indicate that a site is installable. Chrome’s install banner isn’t perfect, but it’s better than nothing.

Just a reminder: a progressive web app is a website that

  • runs on HTTPS,
  • has a service worker,
  • and a web manifest.

Adding to the home screen is something you can do with a progressive web app (or any other website). It is not what defines progressive web apps.

In any case, this move to delete service workers after seven days of using Safari is very odd, and I’m struggling to find the connection to the rest of the blog post, which is about technologies that can store data.

As I understand it, with the crackdown on setting third-party cookies, trackers are moving to first-party technologies. So whereas in the past, a tracking company could tell its customers “Add this script element to your pages”, now they have to say “Add this script element and this script file to your pages.” That JavaScript file can then store a unique identifier on the client. This could be done with a cookie, with Local Storage, or with Indexed DB, for example. But I’m struggling to understand how a service worker script could be used in this way. I’d really like to see some examples of this actually happening.
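
For those other storage technologies, the mechanism is easy enough to picture. A hypothetical first-party tracking script (with a made-up endpoint and key name) might do something like this:

var id = localStorage.getItem('visitor-id');
if (!id) {
  // Mint a unique identifier on the first visit...
  id = Date.now() + '-' + Math.random().toString(36).slice(2);
  localStorage.setItem('visitor-id', id);
}
// ...and phone home with it on every page view.
new Image().src = 'https://tracker.example.com/pixel?id=' + encodeURIComponent(id);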

The best explanation I can come up with for this move by Apple is that it feels like the neatest solution. That’s neat as in tidy, not as in nifty. It is definitely not a nifty solution.

If some technologies set by a specific domain are being purged after seven days, then the tidy thing to do is purge all technologies from that domain. Service workers are getting included in that dragnet.

Now, to be fair, browsers and operating systems are free to clean up storage space as they see fit. Caches, Local Storage, Indexed DB—all of those are subject to eventually getting cleaned up.

So I was curious. Wanting to give Apple the benefit of the doubt, I set about trying to find out how long service worker registrations currently last before getting deleted. Maybe this announcement of a seven day time limit would turn out to be not such a big change from current behaviour. Maybe currently service workers last for 90 days, or 60, or just 30.

Nope:

There was no time limit previously.

This is not a minor change. This is a crippling attack on service workers, a technology specifically designed to improve the user experience for return visits, whether it’s through improved performance or offline access.

I wouldn’t be so stunned had this announcement come with an accompanying feature that would allow Safari users to know when a website is a progressive web app that can be added to the home screen. But Safari continues to ignore the existence of progressive web apps. And now it will actively discourage people from using service workers.

If you’d like to give feedback on this ludicrous development, you can file a bug (down in the cellar in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying “Beware of the Leopard”).

No doubt there will still be plenty of Apple apologists telling us why it’s good that Safari has wished service workers into the cornfield. But make no mistake. This is a terrible move by Apple.

I will say this though: given The Situation we’re all living in right now, some good ol’ fashioned Hot Drama by a browser vendor behaving badly feels almost comforting.

Outlet

We’re all hunkering down in our homes. That seems to be true of our online homes too.

People are sharing their day-to-day realities on their websites and I’m here for it. Like, I’m literally here for it. I can’t go anywhere.

On an episode of the Design Observer podcast, Jessica Helfand puts this into context:

During times of crisis, people want to make things. There’s a surge in the keeping of journals when there’s a war… it’s a response to the feeling of vulnerability, like corporeal vulnerability. My life is under attack. I am imprisoned in my house. I have to make something to say I was here, to say I mattered, to say this day happened… It’s like visual graphic reassurance.

It’s not just about crisis though. Scott Kelly talks about the value of keeping a journal during prolonged periods of repetition. And he should know—he spent a year in space:

NASA has been studying the effects of isolation on humans for decades, and one surprising finding they have made is the value of keeping a journal. Throughout my yearlong mission, I took the time to write about my experiences almost every day. If you find yourself just chronicling the days’ events (which, under the circumstances, might get repetitive) instead try describing what you are experiencing through your five senses or write about memories. Even if you don’t wind up writing a book based on your journal like I did, writing about your days will help put your experiences in perspective and let you look back later on what this unique time in history has meant.

That said, just stringing a coherent sentence together can seem like too much during The Situation. That’s okay. Your online home can also provide relief and distraction through tidying up. As Ethan puts it:

let a website be a worry stone

It can be comforting to get into the zone doing housekeeping on your website. How about a bit of a performance audit? Or maybe look into more fluid typography? Or perhaps now is the time to tinker about with that dark mode you’ve been planning?

Whatever you end up doing, my point is that your website is quite literally an outlet. While you’re stuck inside, your website is not just a place you can go to, it’s a place you can control, a place you can maintain, a place you can tidy up, a place you can expand. Most of all, it’s a place you can lose yourself in, even if it’s just for a little while.

A curl in every port

A few years back, Zach Bloom wrote The History of the URL: Path, Fragment, Query, and Auth. He recently expanded on it and republished it on the Cloudflare blog as The History of the URL. It’s well worth the time to read the whole thing. It’s packed full of fascinating tidbits.

In the section on ports, Zach says:

The timeline of Gopher and HTTP can be evidenced by their default port numbers. Gopher is 70, HTTP 80. The HTTP port was assigned (likely by Jon Postel at the IANA) at the request of Tim Berners-Lee sometime between 1990 and 1992.

Ooh, I can give you an exact date! It was January 24th, 1992. I know this because of the hack week in CERN last year to recreate the first ever web browser.

Kimberly was spelunking down the original source code, when she came across this line in the HTUtils.h file:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

We showed this to Jean-François Groff, who worked on the original web technologies like libwww, the forerunner to libcurl. He remembers that day. It felt like they had “made it”, receiving the official blessing of Jon Postel (in the same RFC, incidentally, that gave port 70 to Gopher).

Then he told us something interesting about the next line of code:

#define OLD_TCP_PORT 2784 /* Try the old one if no answer on 80 */

Port 2784? That seems like an odd choice. Most of us would choose something easy to remember.

Well, it turns out that 2784 is easy to remember if you’re Tim Berners-Lee.

Those were the last four digits of his parents’ phone number.

Telling the story of performance

At Clearleft, we’ve worked with quite a few clients on site redesigns. It’s always a fascinating process, particularly in the discovery phase. There’s that excitement of figuring out what’s currently working, what’s not working, and what’s missing completely.

The bulk of this early research phase is spent diving into the current offering. But it’s also the perfect time to do some competitor analysis—especially if we want some answers to the “what’s missing?” question.

It’s not all about missing features though. Execution is equally important. Our clients want to know how their users’ experience shapes up compared to the competition. And when it comes to user experience, performance is a huge factor. As Andy says, performance is a UX problem.

There’s no shortage of great tools out there for measuring (and monitoring) performance metrics, but they’re mostly aimed at developers. Quite rightly. Developers are the ones who can solve most performance issues. But that does make the tools somewhat impenetrable if you don’t speak the language of “time to first byte” and “first contentful paint”.

When we’re trying to show our clients the performance of their site—or their competitors—we need to tell a story.

Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.

You can go to webpagetest.org/easy if you don’t need to tweak settings much beyond the typical site visit (slow 3G on mobile). Pop in your client’s URL and, when the test is done, you get a valuable but impenetrable waterfall chart. It’s not exactly the kind of thing I’d want to present to a client.

Fortunately there’s an attention-grabbing output from each test: video. Download the video of your client’s site loading. Then repeat the test with the URL of a competitor. Download that video too. Repeat for as many competitor URLs as you think appropriate.

Now take those videos and play them side by side. Presentation software like Keynote is perfect for showing multiple videos like this.

This is so much more effective than showing a table of numbers! Clients get to really feel the performance difference between their site and their competitors.

Running all those tests can take time though. But there are some other tools out there that can give a quick dose of performance information.

SpeedCurve recently unveiled Page Speed Benchmarks. You can compare the performance of sites within a particular sector like travel, retail, or finance. By default, you’ll get a filmstrip view of all the sites loading side by side. Click through on each one and you can get the video too. It might take a little while to gather all those videos, but it’s quicker than using Web Page Test directly. And it might be that the filmstrip view is impactful enough for telling your performance story.

If, during your discovery phase, you find that performance is being badly affected by third-party scripts, you’ll need some way to communicate that. Request Map Generator is fantastic for telling that story in a striking visual way. Pop the URL in there and then take a screenshot of the resulting visualisation.

The beginning of a redesign project is also the time to take stock of current performance metrics so that you can compare the numbers after your redesign launches. Crux.run is really great for tracking performance over time. You won’t get any videos but you will get some very appealing charts and graphs.

Web Page Test, Page Speed Benchmarks, and Request Map Generator are great for telling the story of what’s happening with performance right now. Crux.run balances that with the story of performance over time.

Measuring performance is important. Communicating the story of performance is equally important.

Indie Web Camp London 2020

Do you have plans for the weekend of March 14th and 15th?

If you live anywhere near London, might I suggest that you sign up for Indie Web Camp.

Cheuk and Ana are putting it together with assistance from Calum. As always, there will be one day of Barcamp-style discussions, followed by a fun hands-on day of making.

If you’re wondering whether this is for you, ask yourself if any of these situations apply:

  • You don’t have your own website yet, but you want one.
  • You have your own website, but you need some help with it.
  • You have some ideas about the independent web.
  • You have your own website but you never seem to find the time to update it.
  • You’d like to help other people with their websites.

If you recognise yourself in any one of those scenarios, then you should definitely come along to Indie Web Camp London 2020!

Install prompt

There’s an interesting thread on Github about the tongue-twistingly named beforeinstallprompt JavaScript event.

Let me back up…

Progressive web apps. You know what they are, right? They’re websites that have taken their vitamins. Specifically, they’re responsive websites that:

  1. are served over HTTPS,
  2. have a web app manifest, and
  3. have a service worker handling the offline scenario.

The web app manifest—a JSON file of metadata—is particularly useful for describing how your site should behave if someone adds it to their home screen. You can specify what icon should be used. You can specify whether the site should launch in a browser or as a standalone app (practically indistinguishable from a native app). You can specify which URL on the site should be used as the starting point when the site is launched from the home screen.
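
A minimal manifest covering those three things might look like this (the values are illustrative):

{
  "name": "Example Site",
  "icons": [
    {
      "src": "/icon-512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ],
  "display": "standalone",
  "start_url": "/"
}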

So progressive web apps work just fine when you visit them in a browser, but they really shine when you add them to your home screen. It seems like pretty much everyone is in agreement that adding a progressive web app to your home screen shouldn’t be an onerous task. But how does the browser let the user know that it might be a good idea to “install” the web site they’re looking at?

The Samsung Internet browser does ambient badging—a + symbol shows up to indicate that a website can be installed. This is a great approach!

I hope that Chrome on Android will also use ambient badging at some point. To start with though, Chrome notified users that a site was installable by popping up a notification at the bottom of the screen. I think these might be called “toasts”.

Screenshots: the “add to home screen” prompt for https://huffduffer.com/ and for https://html5forwebdesigners.com/ in Chrome on Android. HTTPS + manifest.json + Service Worker = “Add to Home Screen” prompt.

Needless to say, the toast notification wasn’t very effective. That’s because we web designers and developers have spent years teaching people to immediately dismiss those notifications without even reading them. Accept our cookies! Sign up to our newsletter! Install our native app! Just about anything that’s user-hostile gets put in a notification (either a toast or an overlay) and shoved straight in the user’s face before they’ve even had time to start reading the content they came for in the first place. Users will then either:

  1. turn around and leave, or
  2. use muscle memory to reach for that X in the corner of the notification.

A tiny fraction of users might actually click on the call to action, possibly by mistake.

Chrome didn’t abandon the toast notification for progressive web apps, but it did change when they would appear. Rather than the browser deciding when to show the prompt—usually when the user has just arrived on the site—a new JavaScript event called beforeinstallprompt can be used.

It’s a bit weird though. You have to “capture” the event that fires when the prompt would have normally been shown, subdue it, hold on to that event, and then re-release it when you think it should be shown (like when the user has completed a transaction, for example, and having your site on the home screen would genuinely be useful). That’s a lot of hoops. Here’s the code I use on The Session to only show the installation prompt to users who are logged in.
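
The pattern goes something like this (a simplified sketch, not the actual code from The Session):

var deferredPrompt;

window.addEventListener('beforeinstallprompt', function (event) {
  // Capture and subdue the event...
  event.preventDefault();
  // ...and hold on to it for later.
  deferredPrompt = event;
});

// Then, at a moment of your choosing:
function showInstallPrompt() {
  if (deferredPrompt) {
    deferredPrompt.prompt();
    deferredPrompt = null;
  }
}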

The end result is that the user is still shown a toast notification, but at least this time it’s the site owner who has decided when it will be shown. The Chrome team call this notification “the mini-info bar”, and Pete acknowledges that it’s not ideal:

The mini-infobar is an interim experience for Chrome on Android as we work towards creating a consistent experience across all platforms that includes an install button into the omnibox.

I think “an install button in the omnibox” means ambient badging in the browser interface, which would be great!

Anyway, back to that thread on Github. Basically, neither Apple nor Mozilla are going to implement the beforeinstallprompt event (well, technically Mozilla have implemented it but they’re not going to ship it). That’s fair enough. It’s an interim solution that’s not ideal for all the reasons I’ve already covered.

But there’s a lot of pushback. Even if the details of beforeinstallprompt are troublesome, surely there should be some way for site owners to let users know that they can—or should—install a progressive web app? As a site owner, I have a lot of sympathy for that viewpoint. But I also understand the security and usability issues that can arise from bad actors abusing this mechanism.

Still, I have to hand it to Chrome: even if we put the beforeinstallprompt event to one side, the browser still has a mechanism for letting users know that a progressive web app can be installed—the mini info bar. It’s not a great mechanism, but it’s better than nothing. Nothing is precisely what Firefox and Safari currently offer (though Firefox is experimenting with something).

In the case of Safari, not only do they not provide a mechanism for letting the user know that a site can be installed, but since the last iOS update, they’ve buried the “add to home screen” option even deeper in the “sharing sheet” (the list of options that comes up when you press the incomprehensible rectangle-with-arrow-emerging-from-it icon). You now have to scroll below the fold just to find the “add to home screen” option.

So while I totally get the misgivings about beforeinstallprompt, I feel that a constructive alternative wouldn’t go amiss.

And that’s all I have to say about that.

Except… there’s another interesting angle to that Github thread. There’s talk of allowing sites that are launched from the home screen to have access to more features than a site inside a web browser. Usually permissions on the web are explicitly granted or denied on a case-by-case basis: geolocation; notifications; camera access, etc. I think this is the first time I’ve heard of one action—adding to the home screen—being used as a proxy for implicitly granting more access. Very interesting. Although that idea seems to be roundly rejected here:

A key argument for using installation in this manner is that some APIs are simply so powerful that the drive-by web should not be able to ask for them. However, this document takes the position that installation alone as a restriction is undesirable.

Then again:

I understand that Chromium or Google may hold such a position but Apple’s WebKit team may not necessarily agree with such a position.

2019 in numbers

I posted to adactio.com 1,600 times in 2019.

In amongst those notes were:

If you like, you can watch all that activity plotted on a map.

Away from this website in 2019:

Words I wrote in 2019

I wrote just over one hundred blog posts in 2019. That’s even more than I wrote in 2018, which I’m very happy with.

Here are eight posts from the year that I think are a good representative sample. I like how these turned out.

I hope that I’ll write as many blog posts in 2020.

I’m pretty sure that I will also continue to refer to them as blog posts, not blogs. I may be the last holdout of this nomenclature in 2020. I never planned to die on this hill, but here we are.

Actually, seeing as this is technically my journal rather than my blog, I’ll just call them journal entries.

Here’s to another year of journal entries.

On this day

I’m in San Francisco to speak at An Event Apart, which kicks off tomorrow. But I arrived a few days early so that I could attend Indie Web Camp SF.

Yesterday was the discussion day. Most of the attendees were seasoned indie web campers, so quite a few of the discussions went deep on some of the building blocks. It was a good opportunity to step back and reappraise technology decisions.

Today is the day for making, tinkering, fiddling, and hacking. I had a few different ideas of what to do, mostly around showing additional context on my blog posts. I could, for instance, show related posts—other blog posts (or links) that have similar tags attached to them.

But I decided that a nice straightforward addition would be to show a kind of “on this day” context. After all, I’ve been writing blog posts here for eighteen years now; chances are that if I write a blog post on any given day, there will be something in the archives from that same day in previous years.

So that’s what I’ve done. I’ll be demoing it shortly here at Indie Web Camp, but you can see it in action now. If you look at the page for this blog post, you should see a section at the end with the heading “Previously on this day”. There you’ll see links to other posts I’ve written on December 8th in years gone by.

It’s quite a mixed bag. There’s a post about when I used to have a webcam from sixteen years ago. There’s a report from the Flash On The Beach conference from thirteen years ago (I wrote that post while I was in Berlin). And five years ago, I was writing about markup patterns for web components.

I don’t know if anyone other than me will find this feature interesting (but as it’s my website, I don’t really care). Personally, I find it fascinating to see how my writing has changed, both in terms of subject matter and tone.

Needless to say, the further back in time you go, the more chance there is that the links in my blog posts will no longer work. That’s a real shame. But then it’s a pleasant surprise when I find something that I linked to that is still online after all this time. And I can take comfort from the fact that if anyone has ever linked to anything I’ve written on my website, then those links still work.