Journal tags: development

Apple’s attack on service workers

Apple aren’t the best at developer relations. But, bad as their communications can be, I’m willing to cut them some slack. After all, they’re not used to talking with the developer community.

John Wilander wrote a blog post that starts with some excellent news: Full Third-Party Cookie Blocking and More. Safari is catching up to Firefox and disabling third-party cookies by default. Wonderful! I’ve had third-party cookies disabled for a few years now, and while something occasionally breaks, it’s honestly a pretty great experience all around. Denying companies the ability to track users across sites is A Good Thing.

In the same blog post, John said that client-side cookies will be capped to a seven-day lifespan, as previously announced. Just to be clear, this only applies to client-side cookies. If you’re setting a cookie on the server, using PHP or some other server-side language, it won’t be affected. So persistent logins are still doable.
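
To make the distinction concrete, here’s a sketch (not code from John’s post): a cookie set like this from client-side JavaScript is the kind that Safari will now expire after seven days, whereas a cookie set by the server in a Set-Cookie response header is left alone.

// Client-side cookie: subject to Safari's seven-day cap
document.cookie = 'theme=dark; max-age=' + (60 * 60 * 24 * 365) + '; path=/';

// A cookie set on the server (for example with PHP's setcookie()) arrives in a
// Set-Cookie response header instead, and isn't affected by this change.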

Then, in an audacious example of burying the lede, towards the end of the blog post, John announces that a whole bunch of other client-side storage technologies will also be capped to seven days. Most of the technologies are APIs that, like cookies, can be used to store data: Indexed DB, Local Storage, and Session Storage (though there’s no mention of the Cache API). At the bottom of the list is this:

Service Worker registrations

Okay, let’s clear up a few things here (because they have been so poorly communicated in the blog post)…

The seven day timer refers to seven days of Safari usage, not seven calendar days (although, given how often most people use their phones, the two are probably interchangeable). So if someone returns to your site within a seven day period of using Safari, the timer resets to zero, and your service worker gets a stay of execution. Lucky you.

This only applies to Safari. So if your site has been added to the home screen and your web app manifest has a value for the “display” property like “standalone” or “fullscreen”, the seven day timer doesn’t apply.

That piece of information was missing from the initial blog post. Since the blog post was updated to include this clarification, some people have taken this to mean that progressive web apps aren’t affected by the upcoming change. Not true. Only progressive web apps that have been added to the home screen (and that have an appropriate “display” value) will be spared. That’s a vanishingly small percentage of progressive web apps, especially on iOS. To add a site to the home screen on iOS, you need to dig and scroll through the share menu to find the right option. And you need to do this unprompted. There is no ambient badging in Safari to indicate that a site is installable. Chrome’s install banner isn’t perfect, but it’s better than nothing.

Just a reminder: a progressive web app is a website that

  • runs on HTTPS,
  • has a service worker,
  • and a web app manifest.

Adding to the home screen is something you can do with a progressive web app (or any other website). It is not what defines progressive web apps.

In any case, this move to delete service workers after seven days of using Safari is very odd, and I’m struggling to find the connection to the rest of the blog post, which is about technologies that can store data.

As I understand it, with the crackdown on setting third-party cookies, trackers are moving to first-party technologies. So whereas in the past, a tracking company could tell its customers “Add this script element to your pages”, now they have to say “Add this script element and this script file to your pages.” That JavaScript file can then store a unique identifier on the client. This could be done with a cookie, with Local Storage, or with Indexed DB, for example. But I’m struggling to understand how a service worker script could be used in this way. I’d really like to see some examples of this actually happening.
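
Hypothetically (the key name and endpoint here are invented by me), all that first-party script needs to do is something like this:

// Reuse an existing identifier, or mint one on the first visit
let id = localStorage.getItem('tracking-id');
if (!id) {
  id = Math.random().toString(36).slice(2) + Date.now().toString(36);
  localStorage.setItem('tracking-id', id);
}
// Phone home with the identifier attached (made-up endpoint)
new Image().src = 'https://tracker.example.com/collect?id=' + encodeURIComponent(id);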

The best explanation I can come up with for this move by Apple is that it feels like the neatest solution. That’s neat as in tidy, not as in nifty. It is definitely not a nifty solution.

If some technologies set by a specific domain are being purged after seven days, then the tidy thing to do is purge all technologies from that domain. Service workers are getting included in that dragnet.

Now, to be fair, browsers and operating systems are free to clean up storage space as they see fit. Caches, Local Storage, Indexed DB—all of those are subject to eventually getting cleaned up.

So I was curious. Wanting to give Apple the benefit of the doubt, I set about trying to find out how long service worker registrations currently last before getting deleted. Maybe this announcement of a seven day time limit would turn out to be not such a big change from current behaviour. Maybe currently service workers last for 90 days, or 60, or just 30.

Nope:

There was no time limit previously.

This is not a minor change. This is a crippling attack on service workers, a technology specifically designed to improve the user experience for return visits, whether it’s through improved performance or offline access.

I wouldn’t be so stunned had this announcement come with an accompanying feature that would allow Safari users to know when a website is a progressive web app that can be added to the home screen. But Safari continues to ignore the existence of progressive web apps. And now it will actively discourage people from using service workers.

If you’d like to give feedback on this ludicrous development, you can file a bug (down in the cellar in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying “Beware of the Leopard”).

No doubt there will still be plenty of Apple apologists telling us why it’s good that Safari has wished service workers into the cornfield. But make no mistake. This is a terrible move by Apple.

I will say this though: given The Situation we’re all living in right now, some good ol’ fashioned Hot Drama by a browser vendor behaving badly feels almost comforting.

Oh, embed!

I wrote yesterday about how messing about on your own website can be a welcome distraction. I did some tinkering with adactio.com on the weekend that you might be interested in.

Let me set the scene…

I’ve started recording and publishing a tune a day. I grab my mandolin, open up QuickTime and make a movie of me playing a jig, a reel, or some other type of Irish tune. I include a link to that tune on The Session and a screenshot of the sheet music for anyone who wants to play along. And I embed the short movie clip that I’ve uploaded to YouTube.

Now it’s not the first time I’ve embedded YouTube videos into my site. But with the increased frequency of posting a tune a day, the front page of adactio.com ended up with multiple embeds. That is not good for performance—my Lighthouse score took quite a hit. Worst of all, if a visitor doesn’t end up playing an embedded video, all of the markup, CSS, and JavaScript in the embedded iframe has been delivered for nothing.

Meanwhile over on The Session, I’ve got a strategy for embedding YouTube videos that’s better for performance. Whenever somebody posts a link to a video on YouTube, the thumbnail of the video is embedded. Only when you click the thumbnail does that image get swapped out for the iframe with the video.

That’s what I needed to do here on adactio.com.

First off, I should explain how I’m embedding things generally ‘round here. Whenever I post a link or a note that has a URL in it, I run that URL through a little PHP script called getEmbedCode.php.

That code checks to see if the URL is from a service that provides an oEmbed endpoint. A what-Embed? oEmbed!

oEmbed is like a minimum viable read-only API. It was specced out by Leah and friends years back. You ping a URL like this:

http://example.com/oembed?url=https://example.com/thing

In this case http://example.com/oembed is the endpoint and url is the value of a URL from that provider. Here’s a real life example from YouTube:

https://www.youtube.com/oembed?url=https://www.youtube.com/watch?v=-eiqhVmSPcs

So https://www.youtube.com/oembed is the endpoint and url is the address of any video on YouTube.

You get back some JSON with a pre-defined list of values like title and html. That html payload is the markup for your embed code.
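
If you want to poke at that JSON yourself, here’s a rough sketch of the same request made with client-side JavaScript (on my site this all happens on the server in PHP, and you might hit cross-origin restrictions doing it in a browser):

const video = 'https://www.youtube.com/watch?v=-eiqhVmSPcs';
fetch('https://www.youtube.com/oembed?format=json&url=' + encodeURIComponent(video))
  .then((response) => response.json())
  .then((data) => {
    console.log(data.title); // the title of the video
    console.log(data.html);  // the default iframe embed code
  });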

By default, YouTube sends back markup like this:

<iframe
width="480"
height="270"
src="https://www.youtube.com/embed/-eiqhVmSPcs?feature=oembed"
frameborder="0
allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen>
</iframe>

But now I want to use an img instead of an iframe. One of the other values returned is thumbnail_url. That’s the URL of a thumbnail image that looks something like this:

https://i.ytimg.com/vi/-eiqhVmSPcs/hqdefault.jpg

In fact, once you know the ID of a YouTube video (the ?v= bit in a YouTube URL), you can figure out the path to multiple images of different sizes:

https://i.ytimg.com/vi/-eiqhVmSPcs/default.jpg
https://i.ytimg.com/vi/-eiqhVmSPcs/mqdefault.jpg
https://i.ytimg.com/vi/-eiqhVmSPcs/hqdefault.jpg
https://i.ytimg.com/vi/-eiqhVmSPcs/maxresdefault.jpg

(Although that last one—maxresdefault.jpg—might not work for older videos.)

Okay, so I need to extract the ID from the YouTube URL. Here’s the PHP I use to do that:

parse_str(parse_url($url, PHP_URL_QUERY), $arguments);
$id = $arguments['v'];

Then I can put together some HTML like this:

<div>
<a class="videoimglink" href="'.$url.'">
<img width="100%" loading="lazy"
src="https://i.ytimg.com/vi/'.$id.'/default.jpg"
alt="'.$response['title'].'"
srcset="
https://i.ytimg.com/vi/'.$id.'/mqdefault.jpg 320w,
https://i.ytimg.com/vi/'.$id.'/hqdefault.jpg 480w,
https://i.ytimg.com/vi/'.$id.'/maxresdefault.jpg 1280w
">
</a>
</div>

Now I’ve got a clickable responsive image that links through to the video on YouTube. Time to enhance. I’m going to add a smidgen of JavaScript to listen for a click on that link.

Over on The Session, I’m using addEventListener but here on adactio.com I’m going to be dirty and listen for the event directly in the markup using the onclick attribute.

When the link is clicked, I nuke the link and the image using innerHTML. This injects an iframe where the link used to be (by updating the innerHTML value of the link’s parentNode).

onclick="event.preventDefault();
this.parentNode.innerHTML='<iframe src=https://www.youtube-nocookie.com/embed/'.$id.'?autoplay=1></iframe>'"
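
For completeness, the less dirty addEventListener equivalent would be something along these lines (a sketch, assuming the markup above):

document.querySelectorAll('.videoimglink').forEach(function (link) {
  link.addEventListener('click', function (event) {
    event.preventDefault();
    // Pull the video ID out of the YouTube URL (the ?v= parameter)
    var id = new URL(link.href).searchParams.get('v');
    link.parentNode.innerHTML = '<iframe src="https://www.youtube-nocookie.com/embed/' + id + '?autoplay=1"></iframe>';
  });
});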

But notice that I’m not using the default YouTube URL for the iframe. That would be:

https://www.youtube.com/embed/-eiqhVmSPcs

Instead I’m swapping out the domain youtube.com for youtube-nocookie.com:

https://www.youtube-nocookie.com/embed/-eiqhVmSPcs

I can’t remember where I first came across this undocumented parallel version of YouTube that has, yes, you guessed it, no cookies. It turns out that, not only is the default YouTube embed code bad for performance, it is—unsurprisingly—bad for privacy too. So the youtube-nocookie.com domain can protect your site’s visitors from intrusive tracking. Pass it on.

Anyway, I’ve got the markup I want now:

<div>
<a class="videoimglink" href="https://www.youtube.com/watch?v=-eiqhVmSPcs"
onclick="event.preventDefault();
this.parentNode.innerHTML='<iframe src=https://www.youtube-nocookie.com/embed/-eiqhVmSPcs?autoplay=1></iframe>'">
<img width="100%" loading="lazy"
src="https://i.ytimg.com/vi/-eiqhVmSPcs/default.jpg"
alt="The Banks Of Lough Gowna (jig) on mandolin"
srcset="
https://i.ytimg.com/vi/-eiqhVmSPcs/mqdefault.jpg 320w,
https://i.ytimg.com/vi/-eiqhVmSPcs/hqdefault.jpg 480w,
https://i.ytimg.com/vi/-eiqhVmSPcs/maxresdefault.jpg 1280w
">
</a>
</div>

The functionality is all there. But I want to style the embedded images to look more like playable videos. Time to break out some CSS (this is why I added the videoimglink class to the YouTube link).

.videoimglink {
    display: block;
    position: relative;
}

I’m going to use generated content to create a play button icon. Because I can’t use generated content on an img element, I’m applying these styles to the containing .videoimglink a element.

.videoimglink::before {
    content: '▶';
}

I was going to make an SVG but then I realised I could just be lazy and use the unicode character instead.

Right. Time to draw the rest of the fucking owl:

.videoimglink::before {
    content: '▶';
    display: inline-block;
    position: absolute;
    background-color: var(--background-color);
    color: var(--link-color);
    border-radius: 50%;
    width: 10vmax;
    height: 10vmax;
    top: calc(50% - 5vmax);
    left: calc(50% - 5vmax);
    font-size: 6vmax;
    text-align: center;
    text-indent: 1vmax;
    opacity: 0.5;
}

That’s a bunch of instructions for sizing and positioning. I’d explain it, but that would require me to understand it and frankly, I’m not entirely sure I do. But it works. I think.

With a translucent play icon positioned over the thumbnail, all that’s left is to add a :hover style to adjust the opacity:

.videoimglink:hover::before,
.videoimglink:focus::before {
    opacity: 0.75;
}

Wheresoever thou useth :hover, thou shalt also useth :focus.

Okay. It’s good enough. Ship it!

The Banks Of Lough Gowna (jig) on mandolin

If you embed YouTube videos on your site, and you’d like to make them more performant, check out this custom element that Paul made: Lite YouTube Embed. And here’s a clever technique that uses the srcdoc attribute to get a similar result (but don’t forget to use the youtube-nocookie.com domain).

Lighthouse bookmarklet

I use Firefox. You should too. It’s fast, secure, and more privacy-focused than the leading browser from the big G.

When it comes to web development, the CSS developer tooling in Firefox is second to none. But when it comes to JavaScript and network-related debugging (like service workers), Chrome’s tools are better than Firefox’s (for now). For example, Chrome has a tab in its developer tools that lets you run Lighthouse on the currently open tab.

Yesterday, I got the Calibre newsletter, which always has handy performance-related links from Karolina. She pointed to a Lighthouse extension for Firefox. “Excellent!”, I thought, and I immediately installed it. But I had some qualms about installing a plug-in from Google into a browser from Mozilla, particularly as the plug-in page says:

This is not a Recommended Extension. Make sure you trust it before installing

Well, I gave it a go. It turns out that all it actually does is redirect to the online version of Lighthouse. “Hang on”, I thought. “This could just be a bookmarklet!”

So I immediately uninstalled the browser extension and made this bookmarklet:

Lighthouse

Drag that up to your desktop browser’s bookmarks toolbar. Press it whenever you’re on a site that you want to test.
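
In case you’re wondering what’s inside it: a bookmarklet is just a javascript: URL. Something along these lines would do the job (this sketch opens PageSpeed Insights, one hosted version of Lighthouse; swap in whichever URL your bookmarklet of choice uses):

javascript:window.open('https://developers.google.com/speed/pagespeed/insights/?url='+encodeURIComponent(location.href))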

Telling the story of performance

At Clearleft, we’ve worked with quite a few clients on site redesigns. It’s always a fascinating process, particularly in the discovery phase. There’s that excitement of figuring out what’s currently working, what’s not working, and what’s missing completely.

The bulk of this early research phase is spent diving into the current offering. But it’s also the perfect time to do some competitor analysis—especially if we want some answers to the “what’s missing?” question.

It’s not all about missing features though. Execution is equally important. Our clients want to know how their users’ experience shapes up compared to the competition. And when it comes to user experience, performance is a huge factor. As Andy says, performance is a UX problem.

There’s no shortage of great tools out there for measuring (and monitoring) performance metrics, but they’re mostly aimed at developers. Quite rightly. Developers are the ones who can solve most performance issues. But that does make the tools somewhat impenetrable if you don’t speak the language of “time to first byte” and “first contentful paint”.

When we’re trying to show our clients the performance of their site—or their competitors—we need to tell a story.

Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.

You can go to webpagetest.org/easy if you don’t need to tweak settings much beyond the typical site visit (slow 3G on mobile). Pop in your client’s URL and, when the test is done, you get a valuable but impenetrable waterfall chart. It’s not exactly the kind of thing I’d want to present to a client.

Fortunately there’s an attention-grabbing output from each test: video. Download the video of your client’s site loading. Then repeat the test with the URL of a competitor. Download that video too. Repeat for as many competitor URLs as you think appropriate.

Now take those videos and play them side by side. Presentation software like Keynote is perfect for showing multiple videos like this.

This is so much more effective than showing a table of numbers! Clients get to really feel the performance difference between their site and their competitors.

Running all those tests can take time though. But there are some other tools out there that can give a quick dose of performance information.

SpeedCurve recently unveiled Page Speed Benchmarks. You can compare the performance of sites within a particular sector like travel, retail, or finance. By default, you’ll get a filmstrip view of all the sites loading side by side. Click through on each one and you can get the video too. It might take a little while to gather all those videos, but it’s quicker than using Web Page Test directly. And it might be that the filmstrip view is impactful enough for telling your performance story.

If, during your discovery phase, you find that performance is being badly affected by third-party scripts, you’ll need some way to communicate that. Request Map Generator is fantastic for telling that story in a striking visual way. Pop the URL in there and then take a screenshot of the resulting visualisation.

The beginning of a redesign project is also the time to take stock of current performance metrics so that you can compare the numbers after your redesign launches. Crux.run is really great for tracking performance over time. You won’t get any videos but you will get some very appealing charts and graphs.

Web Page Test, Page Speed Benchmarks, and Request Map Generator are great for telling the story of what’s happening with performance right now. Crux.run balances that with the story of performance over time.

Measuring performance is important. Communicating the story of performance is equally important.

Utopia

Trys and James recently unveiled their Utopia project. They’ve been tinkering away at it behind the scenes for quite a while now.

You can check out the website and read the blog to get the details of how it accomplishes its goal:

Elegantly scale type and space without breakpoints.

I may well be biased, but I really like this project. I’ve been asking myself why I find it so appealing. Here are a few of the attributes of Utopia that strike a chord with me…

It’s collaborative

Collaboration is at the heart of Clearleft’s work. I know everyone says that, but we’ve definitely seen a direct correlation: projects with high levels of collaboration are invariably more successful than projects where people are siloed.

The genesis for Utopia came about after Trys and James worked together on a few different projects. It’s all too easy to let design and development splinter off into their own caves, but on these projects, Trys and James were working (literally) side by side. This meant that they could easily articulate frustrations to one another, and more important, they could easily share their excitement.

The end result of their collaboration is some very clever code. There’s an irony here. This code could be used to discourage collaboration! After all, why would designers and developers sit down together if they can just pass these numbers back and forth?

But I don’t think that Utopia will appeal to designers and developers who work in that way. Born in the spirit of collaboration, I suspect that it will mostly benefit people who value collaboration.

It’s intrinsic

If you’re a control freak, you may not like Utopia. The idea is that you specify the boundaries of what you’re trying to accomplish—minimum/maximum font sizes, minimum/maximum screen sizes, and some modular scales. Then you let the code—and the browser—do all the work.

On the one hand, this feels like surrendering control. But on the other hand, because the underlying system is so robust, it’s a way of guaranteeing quality, even in situations you haven’t accounted for.

If someone asks you, “What size will the body copy be when the viewport is 850 pixels wide?”, your answer would have to be “I don’t know …but I do know that it will be appropriate.”

This feels like a very declarative way of designing. It reminds me of the ethos behind Andy and Heydon’s site, Every Layout. They call it algorithmic layout design:

Employing algorithmic layout design means doing away with @media breakpoints, “magic numbers”, and other hacks, to create context-independent layout components. Your future design systems will be more consistent, terser in code, and more malleable in the hands of your users and their devices.

See how breakpoints are mentioned as being a very top-down approach to layout? Remember the tagline for Utopia, which aims for fluid responsive design?

Elegantly scale type and space without breakpoints.

Unsurprisingly, Andy really likes Utopia:

As the co-author of Every Layout, my head nearly fell off from all of the nodding when reading this because this is the exact sort of approach that we preach: setting some rules and letting the browser do the rest.

Heydon describes this mindset as automating intent. I really like that. I think that’s what Utopia does too.

As Heydon said at Patterns Day:

Be your browser’s mentor, not its micromanager.

The idea is that you give it rules, you give it axioms or principles to work on, and you let it do the calculation. You work with the in-built algorithms of the browser and of CSS itself.

This is all possible thanks to improvements to CSS like calc, flexbox and grid. Jen calls this approach intrinsic web design. Last year, I liveblogged her excellent talk at An Event Apart called Designing Intrinsic Layouts.

Utopia feels like it has the same mindset as algorithmic layout design and intrinsic web design. Trys and James are building on the great work already out there, which brings me to the final property of Utopia that appeals to me…

It’s iterative

There isn’t actually much that’s new in Utopia. It’s a combination of existing techniques. I like that. As I said recently:

I’m a great believer in the HTML design principle, Evolution Not Revolution:

It is better to evolve an existing design rather than throwing it away.

First of all, Utopia uses the idea of modular scales in typography. Tim Brown has been championing this idea for years.

Then there’s the idea of typography being fluid and responsive—just like Jason Pamental has been speaking and writing about.

On the code side, Utopia wouldn’t be possible without the work of Mike Riethmuller and his breakthroughs on responsive and fluid typography, which led to Tim’s work on CSS locks.

Utopia takes these building blocks and combines them. So if you’re wondering if it would be a good tool for one of your projects, you can take an equally iterative approach by asking some questions…

Are you using fluid type?

Do your font-sizes increase in proportion to the width of the viewport? I don’t mean in sudden jumps with @media breakpoints—I mean some kind of relationship between font size and the vw (viewport width) unit. If so, you’re probably using some kind of mechanism to cap the minimum and maximum font sizes—CSS locks.

I’m using that technique on Resilient Web Design. But I’m not changing the relative difference between different sized elements—body copy, headings, etc.—as the screen size changes.

Are you using modular scales?

Does your type system have some kind of ratio that describes the increase in type sizes? You probably have more than one ratio (unlike Resilient Web Design). The ratio for small screens should probably be smaller than the ratio for big screens. But rather than jump from one ratio to another at an arbitrary breakpoint, Utopia allows the ratio to be fluid.

So it’s not just that font sizes are increasing as the screen gets larger; the comparative difference is also subtly changing. That means there’s never a sudden jump in font size at any time.

Are you using custom properties?

A technical detail this, but the magic of Utopia relies on two powerful CSS features: calc() and custom properties. These two workhorses are used by Utopia to generate some CSS that you can stick at the start of your stylesheet. If you ever need to make changes, all the parameters are defined at the top of the code block. Tweak those numbers and watch everything cascade.

You’ll see that there’s one—and only one—media query in there. This is quite clever. Usually with CSS locks, you’d need to have a media query for every different font size in order to cap its growth at the maximum screen size. With Utopia, the maximum screen size—100vw—is abstracted into a variable (a custom property). The media query then changes its value to be the upper end of your CSS lock. So it doesn’t matter how many different font sizes you’re setting: because they all use that custom property, one single media query takes care of capping the growth of every font size declaration.

If you’re already using CSS locks, modular scales, and custom properties, Utopia is almost certainly going to be a good fit for you.

If you’re not yet using those techniques, but you’d like to, I highly recommend using Utopia on your next project.

Hydration

As you may have noticed, I’m a fan of progressive enhancement.

It’s not cool. It’s often at odds with “modern” web development, so I end up looking like an old man yelling at a cloud to get off my lawn. Or something.

At its heart though, progressive enhancement seems fairly uncontroversial and inoffensive to me. It’s an approach. A mindset. Here’s how I describe it in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Progressive enhancement makes use of the principle of least power:

Choose the least powerful language suitable for a given purpose.

That’s step two of the three-step process. But the third step is vital.

I think a lot of the hostility towards progressive enhancement comes from a misunderstanding of that three-step process, perhaps thinking that it stops at step two. I’m sure that some have interpreted progressive enhancement as preventing developers from using the latest and greatest technology. Nothing could be further from the truth!

Taking a layered approach to building on the web gives you permission to try cutting‐edge JavaScript APIs, regardless of how many or how few browsers currently implement them.

The most common misunderstanding of progressive enhancement is that it’s inherently about JavaScript. That’s not true. You can apply progressive enhancement at every step of front-end development: HTML, CSS, and JavaScript.

But because of JavaScript’s strict error-handling model (at least compared to HTML and CSS), it’s in the JavaScript layer that the lack of a progressive enhancement mindset is most often felt.

That’s why I was saddened by the rise of frameworks and mindsets that assume the availability of JavaScript. Single page apps generally follow this assumption. Everything is delivered via JavaScript: content, markup, styles, and behaviour.

This leads to a terrible situation for performance. The user is left staring at a blank screen, waiting for something—anything!—to appear. Browsers are optimised to stream HTML as soon as they can. Delivering your content via JavaScript rather than HTML means you’re not taking advantage of that optimisation. Your users suffer.

But I was very heartened when I saw the pendulum start to swing back the other way a bit…

Let’s say you’re using a JavaScript framework like React. But the reason you’re using it isn’t because you’re doing anything particularly complex in the browser involving state management. You might be using React because you really like the way it encourages modularity and componentisation.

A few years ago, making a single page app was pretty much the only way you could use React. For you as a developer to experience the benefits of modularity and componentisation, users had to pay the price in the payload (and fragility) of client-side JavaScript.

That’s no longer the case. Now that we can run JavaScript on the server, it’s possible to build in a modular, componentised way and still use progressive enhancement.

When I first heard about Gatsby and Next.js, I thought that was the selling point. Run React on the server; send pre-generated HTML down the wire to the user; then enhance with client-side JavaScript.

But that’s not exactly how it works. The pre-generated HTML isn’t functional. It still needs a bucketload of JavaScript before it can do anything. The actual process is: Run React on the server; send pre-generated HTML down the wire to the user; then send everything again but this time in JavaScript, bundled with the entire React library.

This leads to a situation for users that’s almost worse than before. Instead of staring at a blank screen, now they get HTML lickety-split—excellent! But if they try to interact with what’s on screen, they’ll find that nothing is working yet. Even worse, once the JavaScript is delivered, and is being parsed, they probably can’t even scroll—their device is too busy interpreting all that JavaScript. Your users suffer.

All your content is sent twice. First HTML is sent from the server. These days this is called “server-side rendering”, even though for decades the technical term was “serving a web page” (I’m pretty sure the rendering part happens in a browser). Then a JavaScript library—plus all your bespoke JavaScript—is loaded. Then all your content is loaded again as JSON.

So you’ve got a facade of an interface that you can’t actually interact with until a deluge of JavaScript has been loaded, parsed and executed. The term used for this stage of the process is “hydration”, which makes it sound more like a relaxing treatment from Gwyneth Paltrow than the horrible user experience it is.

The idea is that subsequent navigations—which will happen with Ajax—should be snappy. But the price has already been paid by then. The initial loading experience is jagged and frustrating.

Don’t get me wrong: server-side rendering is great …if what you’re sending from the server is functional. It’s the combination of hollow HTML sent from the server, followed by a huge browser-freezing dump of JavaScript that is an anti-pattern.

This use of server-side rendering followed by hydration feels like progressive enhancement, because it separates out the delivery of markup and scripts. But it’s missing the mindset.

The layered approach of progressive enhancement echoes the separation of concerns in the front-end stack: HTML, CSS, and JavaScript—each layer expressing more power. But while these concepts are related, they’re not interchangeable. Separating out the layers of your tech stack isn’t necessarily progressive enhancement. If you have some HTML that relies on JavaScript to be useful, then there’s no benefit in separating that HTML into a separate payload. The HTML that you initially send down the wire needs to be functional (at least at a basic level) before the JavaScript arrives.

I was a little disappointed to see Kyle Simpson—who I admire greatly—conflate separation of concerns with progressive enhancement in his talk from JSCamp 2019:

This content is here. I can see it, and it’s even styled. But I can’t click on the damn button because nothing has loaded in the JavaScript layer yet.

Anybody experienced that where you’ve been on a web page and it’s not really fully functional yet? I can see something but I can’t actually make any usage of it yet.

These are all things that cropped out of our thought process that said: “Let’s build the web in layers. Let’s deliver it progressively in layers. Because that’s morally right. We call this progressive enhancement. And let’s not worry too much about all these potential user experience flaws that may happen.”

That’s a spot-on description of server-side rendering and hydration, but it’s a gross mischaracterisation of progressive enhancement.

That button that requires JavaScript to work? That should’ve been generated with JavaScript. (For example, if you’re building a complex web app, consider sending a read-only view down the wire in HTML—then add any interactive interface elements with JavaScript in the browser.)
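
Here’s a sketch of what I mean (the class name and function are hypothetical):

// The read-only view is already in the HTML sent from the server.
// Only add the interactive control once the script that powers it is running.
var form = document.querySelector('.edit-form'); // hypothetical hook in the markup
if (form) {
  var button = document.createElement('button');
  button.type = 'button';
  button.textContent = 'Edit';
  button.addEventListener('click', enterEditMode);
  form.appendChild(button);
}

function enterEditMode() {
  // hypothetical: swap the read-only view for an editable interface
}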

If people are equating progressive enhancement with thoughtless server-side rendering and hydration, then I can see why they’d be hostile towards it.

Users would be better served with unprogressive non-enhancement:

You take some structured content, which follows the vertical flow of the document in a way that everyone understands.

Which people traverse easily by either dragging their scroll bar with their mouse, or operating the keyboard using the up and down keys, or using the spacebar.

Or if they’re using a touch device, simply flicking backwards and forwards in that easy way that we’ve all become used to. What you do is you take that, and you fucking well leave it alone.

Alas, that’s not what tools like Gatsby offer. The latest post on their blog is called Why Gatsby is better with JavaScript:

But what about sites or pages where there is no client-side interactivity? Even for those pages, Gatsby offers performance benefits by including JavaScript.

I beg to differ.

(By the way, that same blog post also initially tried to equate the performance hit of client-side JavaScript with the performance hit of images. Andy explains why that’s disingenuous.)

Hope is on the horizon for React in the form of partial hydration. I sincerely hope that it will become the default way of balancing server-side rendering with just-in-time client-side interaction.

The situation we have now is the worst of both worlds: server-side rendering followed by a tsunami of hydration. It has a whiff of progressive enhancement to it (because there’s a cosmetic separation of concerns) but it has none of the user benefits.

Browser defaults

I’ve been thinking about some of the default behaviours that are built into web browsers.

First off, there’s the decision that a browser makes if you enter a web address without a protocol. Let’s say you type in example.com without specifying whether you’re looking for http://example.com or https://example.com.

Browsers default to HTTP rather than HTTPS. Given that HTTP is older than HTTPS, that makes sense. But given that there’s been such a push for TLS on the web, and the huge increase in sites served over HTTPS, I wonder if it’s time to reconsider that default?

Most websites that are served over HTTPS have an automatic redirect from HTTP to HTTPS (enforced with HSTS). There’s an ever so slight performance hit from that, at least for the very first visit. If, when no protocol is specified, browsers were to attempt to reach the HTTPS port first, we’d get a little bit of a speed improvement.

But would that break any existing behaviour? I don’t know. I guess there would be a bit of a performance hit in the other direction. That is, the browser would try HTTPS first, and when that doesn’t exist, go for HTTP. Sites served only over HTTP would suffer that little bit of lag.

Whatever the default behaviour, some sites are going to pay that performance penalty. Right now it’s being paid by sites that are served over HTTPS.

Here’s another browser default that Rob mentioned recently: the viewport meta tag:

I thought I might be able to get away with omitting meta name="viewport". Apparently not! Maybe someday.

This all goes back to the default behaviour of Mobile Safari when the iPhone was first released. Most sites wouldn’t display correctly if one pixel were treated as one pixel. That’s because most sites were built with the assumption that they would be viewed on monitors rather than phones. Only weirdos like me were building sites without that assumption.

So the default behaviour in Mobile Safari is to assume a page width of 1024 pixels, and then shrink that down to fit on the screen …unless the developer overrides that behaviour with a viewport meta tag. That default behaviour was adopted by other mobile browsers. I think it’s a universal default.

But the web has changed since the iPhone was released in 2007. Responsive design has swept the web. What would happen if mobile browsers were to assume width=device-width?

The viewport meta element always felt like a (proprietary) band-aid rather than a long-term solution—for one thing, it’s the kind of presentational information that belongs in CSS rather than HTML. It would be nice if we could bid it farewell.

Liveblogging An Event Apart 2019

I was at An Event Apart in San Francisco last week. It was the last one of the year, and also my last conference of the year.

I managed to do a bit of liveblogging during the event. Combined with the liveblogging I did during the other two Events Apart that I attended this year—Seattle and Chicago—that makes a grand total of seventeen liveblogged presentations!

  1. Slow Design for an Anxious World by Jeffrey Zeldman
  2. Designing for Trust in an Uncertain World by Margot Bloomstein
  3. Designing for Personalities by Sarah Parmenter
  4. Generation Style by Eric Meyer
  5. Making Things Better: Redefining the Technical Possibilities of CSS by Rachel Andrew
  6. Designing Intrinsic Layouts by Jen Simmons
  7. How to Think Like a Front-End Developer by Chris Coyier
  8. From Ideation to Iteration: Design Thinking for Work and for Life by Una Kravets
  9. Move Fast and Don’t Break Things by Scott Jehl
  10. Mobile Planet by Luke Wroblewski
  11. Unsolved Problems by Beth Dean
  12. Making Research Count by Cyd Harrell
  13. Voice User Interface Design by Cheryl Platz
  14. Web Forms: Now You See Them, Now You Don’t! by Jason Grigsby
  15. The Weight of the WWWorld is Up to Us by Patty Toland
  16. The Mythology of Design Systems by Mina Markham
  17. The Technical Side of Design Systems by Brad Frost

For my part, I gave my talk on Going Offline. Time to retire that talk now.

Here’s what I wrote when I first gave the talk back in March at An Event Apart Seattle:

I was quite nervous about this talk. It’s very different from my usual fare. Usually I have some big sweeping arc of history, and lots of pretentious ideas joined together into some kind of narrative arc. But this talk needed to be more straightforward and practical. I wasn’t sure how well I would manage that brief.

I’m happy with how it turned out. I had quite a few people come up to me to say how much they appreciated how I was explaining the code. That was very nice to hear—I really wanted this talk to be approachable for everyone, even though it included plenty of JavaScript.

The dates for next year’s Events Apart have been announced, and I’ll be speaking at three of them.

The question is, do I attempt to deliver another practical code-based talk or do I go back to giving a high-level talk about ideas and principles? Or, if I really want to challenge myself, can I combine the two into one talk without making a Frankenstein’s monster?

Come and see me at An Event Apart in 2020 to find out.

Accessibility on The Session revisited

Earlier this year, I wrote about an accessibility issue I was having on The Session. Specifically, it was an issue with Ajax and pagination. But I managed to sort it out, and the lesson was very clear:

As is so often the case, the issue was with me trying to be too clever with ARIA, and the solution was to ease up on adding so many ARIA attributes.

Well, fast forward to the past few weeks, when I was contacted by one of the screen-reader users on The Session. There was, once again, a problem with the Ajax pagination, specifically with VoiceOver on iOS. The first page of results was read out just fine, but subsequent pages were not only never announced, their content was completely unavailable. The first page of results would’ve been included in the initial HTML, but the subsequent pages of results are injected with JavaScript (if JavaScript is available—otherwise it’s regular full-page refreshes all the way).

This pagination pattern shows up all over the site: lists of what’s new, search results, and more. I turned on VoiceOver and I was able to reproduce the problem straight away.

I started pulling apart my JavaScript looking for the problem. Was it something to do with how I was handling focus? I just couldn’t figure it out. And other parts of the site that used Ajax didn’t seem to be having the same problem. I was mystified.

Finally, I tracked down the problem, and it wasn’t in the JavaScript at all.

Wherever the pagination pattern appears, there are “previous” and “next” links, marked up with the appropriate rel="prev" and rel="next" attributes. Well, apparently past me thought it would be clever to add some ARIA attributes in there too. My thinking must’ve been something like this:

  • Those links control the area of the page with the search results.
  • That area of the page has an ID of “results”.
  • I should add aria-controls="results" to those links.

That was the problem …which is kind of weird, because VoiceOver isn’t supposed to have any support for aria-controls. Anyway, once I removed that attribute from the links, everything worked just fine.

Just as the solution last time was to remove the aria-atomic attribute on the updated area, the solution this time was to remove the aria-controls attribute on the links that trigger the update. Maybe this time I’ll learn my lesson: don’t mess with ARIA attributes you don’t understand.

Mental models

I’ve found that the older I get, the less I care about looking stupid. This is remarkably freeing. I no longer have any hesitancy about raising my hand in a meeting to ask “What’s that acronym you just mentioned?” This sometimes has the added benefit of clarifying something for others in the room who might have been too shy to ask.

I remember a few years back being really confused about npm. Fortunately, someone who was working at npm at the time came to Brighton for FFConf, so I asked them to explain it to me.

As I understood it, npm was intended to be used for managing packages of code for Node. Wasn’t it actually called “Node Package Manager” at one point, or did I imagine that?

Anyway, the mental model I had of npm was: npm is to Node as PEAR is to PHP. A central repository of open source code projects that you could easily add to your codebase …for your server-side code.

But then I saw people talking about using npm to manage client-side JavaScript. That really confused me. That’s why I was asking for clarification.

It turns out that my confusion was somewhat warranted. The npm project had indeed started life as a repo for server-side code but had since expanded to encompass client-side code too.

I understand how it happened, but it confirmed a worrying trend I had noticed. Developers were writing front-end code as though it were back-end code.

On the one hand, that makes total sense when you consider that the code is literally in the same programming language: JavaScript.

On the other hand, it makes no sense at all! If your code’s run-time is on the server, then the size of the codebase doesn’t matter that much. Whether it’s hundreds or thousands of lines of code, the execution happens more or less independently of the network. But that’s not how front-end development works. Every byte matters. The more code you write that needs to be executed on the user’s device, the worse the experience is for that user. You need to limit how much you’re using the network. That means leaning on what the browser gives you by default (that’s your run-time environment) and keeping your code as lean as possible.

Dave echoes my concerns in his end-of-the-year piece called The Kind of Development I Like:

I now think about npm and wonder if it’s somewhat responsible for some of the pain points of modern web development today. Fact is, npm is a server-side technology that we’ve co-opted on the client and I think we’re feeling those repercussions in the browser.

Writing back-end and writing front-end code require very different approaches, in my opinion. But those differences have been erased in “modern” JavaScript.

The Unix Philosophy encourages us to write small micro libraries that do one thing and do it well. The Node.js Ecosystem did this in spades. This works great on the server where importing a small file has a very small cost. On the client, however, this has enormous costs.

In a funny way, this situation reminds me of something I saw happening over twenty years ago. Print designers were starting to do web design. They had a wealth of experience and knowledge around colour theory, typography, hierarchy and contrast. That was all very valuable to bring to the world of the web. But the web also has fundamental differences to print design. In print, you can use as many typefaces as you want, whereas on the web, to this day, you need to be judicious in the range of fonts you use. But in print, you might have to limit your colour palette for cost reasons (depending on the printing process), whereas on the web, colours are basically free. And then there’s the biggest difference of all: working within known dimensions of a fixed page in print compared to working within the unknowable dimensions of flexible viewports on the web.

Fast forward to today and we’ve got a lot of Computer Science graduates moving into front-end development. They’re bringing with them a treasure trove of experience in writing robust scalable code. But web browsers aren’t like web servers. If your back-end code is getting so big that it’s starting to run noticeably slowly, you can throw more computing power at it by scaling up your server. That’s not an option on the front-end where you don’t really have one run-time environment—your end users have their own run-time environment with its own constraints around computing power and network connectivity.

That’s a very, very challenging world to get your head around. The safer option is to stick to the mental model you’re familiar with, whether you’re a print designer or a Computer Science graduate. But that does a disservice to end users who are relying on you to deliver a good experience on the World Wide Web.

CSS for all

There have been some great new CSS properties and values shipping in Firefox recently.

Miriam Suzanne explains the difference between the newer revert value and the older inherit, initial and unset values in a video on the Mozilla Developer channel:

display: revert;

In another video, Jen describes some new properties for styling underlines (on links, for example):

text-decoration-thickness: 0.1em;
text-decoration-color: red;
text-underline-offset: 0.2em;
text-decoration-skip-ink: auto;

Great stuff!

As far as I can tell, all of these properties are available to you regardless of whether you are serving your website over HTTP or over HTTPS. That may seem like an odd observation to make, but I invite you to cast your mind back to January 2018. That’s when the Mozilla Security Blog posted about moving to secure contexts everywhere:

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR.

(emphasis mine)

Buzz Lightyear says to Woody: Secure contexts …secure contexts everywhere!

Despite that “effective immediately” clause, I haven’t observed any of the new CSS properties added in the past two years to be restricted to HTTPS. I’m glad about that. I wrote about this announcement at the time:

I am in total agreement that we should be encouraging everyone to switch to HTTPS. But requiring HTTPS in order to use CSS? The ends don’t justify the means.

If there were valid security reasons for making HTTPS a requirement, I would be all for enforcing this. But these are two totally separate areas. Enforcing HTTPS by withholding CSS support is no different to enforcing AMP by withholding search placement.

There’s no official word from the Mozilla Security Blog about any change to their two-year old “effective immediately” policy, and the original blog post hasn’t been updated. Maybe we can all just pretend it never happened.

FF Conf 2019

Friday was FF Conf day here in Brighton. This was the eleventh(!) time that Remy and Julie have put on the event. It was, as ever, excellent.

It’s a conference that ticks all the boxes for me. For starters, it’s a single-track event. The more I attend conferences, the more convinced I am that multi-track events are a terrible waste of time for attendees (and a financially bad model for organisers). I know that sounds like a sweeping broad generalisation, but ask me about it next time we meet and I’ll go into more detail. For now, I just want to talk about this mercifully single-track conference.

FF Conf has built up a rock-solid reputation over the years. I think that’s down to how Remy curates it. He thinks about what he wants to know and learn more about, and then thinks about who to invite to speak on those topics. So every year is like a snapshot of Remy’s brain. By happy coincidence, a snapshot of Remy’s brain right now looks a lot like my own.

You could tell that Remy had grouped the talks together in themes. There was a performance-themed chunk right after lunch. There was a people-themed chunk in the morning. There was a creative-coding chunk at the end of the day. Nice work, DJ.

I think it was quite telling what wasn’t on the line-up. There were no talks about specific libraries or frameworks. For me, that was a blessed relief. The only technology-specific talk was Alice’s excellent talk on Git—a tool that’s useful no matter what you’re coding.

One of the reasons why I enjoyed the framework-free nature of the day is that most talks—and conferences—that revolve around libraries and frameworks are invariably focused on the developer experience. Think about it: next time you’re watching a talk about a framework or library, ask yourself how it impacts user experience.

At FF Conf, the focus was firmly on people. In the case of Laura’s barnstorming presentation, those people are end users (I’m constantly impressed by how calm and measured Laura remains even when talking about blood-boilingly bad behaviour from the tech industry). In the case of Amina’s talk, the people are junior developers. And for Sharon’s presentation, the people are everyone.

One of the most useful talks of the day was from Anna who took us on a guided tour of dev tools to identify performance improvements. I found it inspiring in a very literal sense—if I had my laptop with me, I think I would’ve opened it up there and then and started tinkering with my websites.

Harry also talked about performance, but at Remy’s request, it was more business focused. Specifically, it was focused on Harry’s consultancy business. I think this would’ve been the perfect talk for more of an “industry” event, whereas FF Conf is very much a community event: Harry’s semi-serious jibes about keeping his performance secrets under wraps didn’t quite match the generous tone of the rest of the line-up.

The final two talks from Charlotte and Suz were a perfect double whammy.

When I saw Charlotte speak at Material in Iceland last year, I wrote this aside in my blog post summary:

(Oh, and Remy, when you start to put together the line-up for next year’s FF Conf, be sure to check out Charlotte Dann—her talk at Material was the perfect mix of code and creativity.)

I don’t think I can take credit for Charlotte being on the line-up, but I will take credit for saying she’d be the perfect fit.

And then Suz Hinton closed out the conference with this rallying cry that resonated perfectly with Laura’s talk:

Less mass-produced surveillance bullshit and more Harry Potter magic (please)!

I think that rallying cry could apply equally well to conferences, and I think FF Conf is a good example of that ethos in action.

Indy maps

Remember when I wrote about adding travel maps to my site at the recent Indie Web Camp Brighton? I must confess that the last line I wrote was an attempt to catch a fish from the river of the lazy web:

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

In the spirit of Cunningham’s Law, I was hoping that somebody was going to respond with “It’s totally possible to use Stamen’s watercolour tiles for static maps, dumbass—look!” (to which my response would have been “thank you very much!”).

Alas, no such response was forthcoming. The hoped-for schooling never forthcame.

Still, I couldn’t quite let go of the idea of using those lovely watercolour maps somewhere on my site. But I had decided that dynamic maps would have been overkill for my archive pages:

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles.

Then I had a thought. What if I keep the static maps on my archive pages, but make them clickable? Then, on the other end of that link, I can have the dynamic version. In other words, what if I had a separate URL just for the dynamic maps?

This seemed like a good plan to me, so while I was travelling by Eurostar—the only way to travel—back from the lovely city of Antwerp where I had been speaking at Full Stack Europe, I started hacking away on making the dynamic maps even more dynamic. After all, now that they were going to have their own pages, I could go all out with any fancy features I wanted.

I kept coming back to my original goal:

I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I found a plug-in for Leaflet.js that animates polylines—thanks, Iván! With a bit of wrangling, I was able to get it to animate between the lat/lon points of whichever archive section the map was in. Rather than have it play out automatically, I also added a control so that you can start and stop the animation. While I was at it, I decided to make that “play/pause” button do something else too. Ahem.
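
The wiring is roughly like this (a sketch from memory: as far as I recall, the plug-in adds a snakeIn method to Leaflet polylines, so check its documentation; the tile URL, co-ordinates, and button selector here are placeholders):

var map = L.map('map'); // assumes a div with an ID of "map"
L.tileLayer('https://tiles.example.com/{z}/{x}/{y}.png').addTo(map); // your tile layer of choice

// The lat/lon points of whichever archive section the map is in (made-up values)
var points = [[50.82, -0.14], [52.52, 13.40], [51.22, 4.40]];
var route = L.polyline(points, {snakingSpeed: 200}).addTo(map);
map.fitBounds(route.getBounds());

document.querySelector('.play').addEventListener('click', function () {
  route.snakeIn(); // animate the line from place to place
});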

If you’d like to see the maps in action, click the “play” button on any of these maps:

You get the idea. It’s all very silly really. It’s right up there with the time I made my sparklines playable. But that’s kind of the point. It’s my website so I can do whatever I want with it, no matter how silly.

First of all, the research department for adactio.com (that’s me) came up with the idea. Then that had to be sold in to upper management (that’s me too). A team was spun up to handle design and development (consisting of me and me). Finally, the finished result went live thanks to the tireless efforts of the adactio.com ops group (that would be me). Any feedback should be directed at the marketing department (no idea who that is).

Periodic background sync

Yesterday I wrote about how much I’d like to see silent push for the web:

I’d really like silent push for the web—the ability to update a cache with fresh content as soon as it’s published; that would be nifty! At the same time, I understand the concerns. It feels more powerful than other permission-based APIs like notifications.

Today, John Holt Ripley responded on Twitter:

hi there, just read your blog post about Silent Push for the web, and wondering if Periodic Background Sync would cover a few of those use cases?

Periodic background sync looks very interesting indeed!

It’s not the same as silent push. As the name suggests, this is about your service worker waking up periodically and potentially fetching (and caching) fresh content from the network. So the service worker is polling rather than receiving a push. But I’ll take it! It’s definitely close enough for the kind of use-cases I’ve been thinking about.
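
Going by the Chromium explainer, the registration looks something like this (the tag, interval, URL, and cache name are my own example, and the API is Chromium-only and gated behind a permission):

// In the page: ask to be woken up at most once a day
async function registerPeriodicSync() {
  const registration = await navigator.serviceWorker.ready;
  if ('periodicSync' in registration) {
    await registration.periodicSync.register('fresh-content', {
      minInterval: 24 * 60 * 60 * 1000
    });
  }
}

// In the service worker: fetch and cache fresh content when woken
self.addEventListener('periodicsync', (event) => {
  if (event.tag === 'fresh-content') {
    event.waitUntil(
      caches.open('content').then((cache) => cache.add('/latest.json'))
    );
  }
});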

Interestingly, periodic background sync also ties into the other part of what I was writing about: permissions. I mentioned that adding a site to the home screen could be interpreted as a signal to potentially allow more permissions (or at least allow prompts for more permissions).

Well, Chromium has a document outlining metrics for attempting to gauge site engagement. There’s some good thinking in there.

Silent push for the web

After Indie Web Camp in Berlin last year, I wrote about Seb’s nifty demo of push without notifications:

While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.

Phil Nash left a comment on the Medium copy of my post explaining that Seb’s demo of using the Push API without showing a notification wouldn’t work for long:

The browsers allow a certain number of mistakes(?) before they start to show a generic notification to say that your site sent a push notification without showing a notification. I believe that after ~10 or so notifications, and that’s different between browsers, they run out of patience.

He also provided me with the name to describe what I’m after:

You’re looking for “silent push” as are many others.

Silent push is something that is possible in native apps. It isn’t (yet?) available on the web, presumably because of security concerns.

It’s an API that would be ripe for abuse. I mean, just look at the mess we’ve made with APIs like notifications and geolocation. Sure, they require explicit user opt-in, but these opt-ins are seen so often that users are sick of seeing them. Silent push would be one more permission-based API to add to the stack of annoyances.

Still, I’d really like silent push for the web—the ability to update a cache with fresh content as soon as it’s published; that would be nifty! At the same time, I understand the concerns. It feels more powerful than other permission-based APIs like notifications.

Maybe there could be another layer of permissions. What if adding a site to your home screen was the first step? If a site is running on HTTPS, has a service worker, has a web app manifest, and has been added to the home screen, maybe then and only then should it be allowed to prompt for permission to do silent push.

In other words, what if certain very powerful APIs were only available to progressive web apps that have successfully been added to the home screen?

Frankly, I’d be happy if the same permissions model applied to web notifications too, but I guess that ship has sailed.

Anyway, all this is pure conjecture on my part. As far as I know, silent push isn’t on the roadmap for any of the browser vendors right now. That’s fair enough. Although it does annoy me that native apps have this capability that web sites don’t.

It used to be that there was a long list of features that only native apps could do, but that list has grown shorter and shorter. The web’s tortoise is catching up to native’s hare.

Indy web

It was Indie Web Camp Brighton on the weekend. After a day of thought-provoking discussions, I thoroughly enjoyed spending the second day tinkering on my website.

For a while now, I’ve wanted to add maps to my monthly archive pages (to accompany the calendar heatmaps I added at a previous Indie Web Camp). Whenever I post anything to my site—a blog post, a note, a link—it’s timestamped and geotagged. I thought it would be fun to expose that in a glanceable way. A map seems like the right medium for that, but I wanted to avoid the obvious route of dropping a load of pins on a map. Instead I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I talked to Aaron about this and his advice was that a client-side JavaScript embedded map would be the easiest option. But that seemed like overkill to me. This map didn’t need to be pannable or zoomable; just glanceable. So I decided to see how far I could get with a static map. I timeboxed two hours for it.

After two hours, I admitted defeat.

I was able to find the kind of static maps I wanted from Mapbox—I’m already using them for my check-ins. I could even add a polyline, which is exactly what I wanted. But instead of passing latitude and longitude co-ordinates for the points on the polyline, the docs explain that I needed to provide …cue ominous thunder and lightning… The Encoded Polyline Algorithm Format.

Go to that link. I’ll wait.

Did you read through the eleven steps of instructions? Did you also think it was a piss take?

  1. Take the initial signed value.
  2. Multiply it by 1e5.
  3. Convert that decimal value to binary.
  4. Left-shift the binary value one bit.
  5. If the original decimal value is negative, invert this encoding.
  6. Break the binary value out into 5-bit chunks.
  7. Place the 5-bit chunks into reverse order.
  8. OR each value with 0x20 if another bit chunk follows.
  9. Convert each value to decimal.
  10. Add 63 to each value.
  11. Convert each value to its ASCII equivalent.

This was way beyond my brain’s pay grade. But surely someone else had written the code I needed? I did some Duck Duck Going and found a piece of PHP code to do the encoding. It didn’t work. I Ducked Ducked and Went some more. I found a different piece of PHP code. That didn’t work either.

At this point, my allotted time was up. If I wanted to have something to demo by the end of the day, I needed to switch gears. So I did.

I used Leaflet.js to create the maps I wanted using client-side JavaScript. Here’s the JavaScript code I wrote.

It waits until the page has finished loading, then it searches for any instances of the h-geo microformat (a way of encoding latitude and longitude coordinates in HTML). If there are three or more, it generates a script element to pull in the Leaflet library, and a corresponding style element. Then it draws the map with the polyline on it. I ended up using Stamen’s beautiful watercolour map tiles.
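The real thing is a little longer, but a simplified sketch of the approach looks something like this (the selectors, the tile URL, and the container are stand-ins, not my actual values):

window.addEventListener('load', function () {
  const geos = document.querySelectorAll('.h-geo');
  if (geos.length < 3) {
    return;
  }
  // gather [latitude, longitude] pairs from the microformat properties
  const points = Array.from(geos, function (geo) {
    return [
      parseFloat(geo.querySelector('.p-latitude').textContent),
      parseFloat(geo.querySelector('.p-longitude').textContent)
    ];
  });
  // lazy-load the Leaflet style sheet and script
  const stylesheet = document.createElement('link');
  stylesheet.rel = 'stylesheet';
  stylesheet.href = 'https://unpkg.com/leaflet/dist/leaflet.css';
  document.head.appendChild(stylesheet);
  const script = document.createElement('script');
  script.src = 'https://unpkg.com/leaflet/dist/leaflet.js';
  script.onload = function () {
    // create a container, then draw the map, the tiles, and the polyline
    const container = document.createElement('div');
    container.style.height = '20em';
    document.querySelector('main').appendChild(container);
    const map = L.map(container).fitBounds(points);
    L.tileLayer('https://stamen-tiles.a.ssl.fastly.net/watercolor/{z}/{x}/{y}.jpg', {
      attribution: 'Map tiles by Stamen Design'
    }).addTo(map);
    L.polyline(points).addTo(map);
  };
  document.head.appendChild(script);
});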

Had some fun at Indie Web Camp Brighton on the weekend messing around with @Stamen’s lovely watercolour map tiles. (I was trying to create Indiana Jones style travel maps for my site …a different kind of Indy web.)

That’s what I demoed at the end of the day.

But I wasn’t happy with it.

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles. I made sure that it didn’t hold up the loading of the rest of the page, but it still felt wasteful.

So after Indie Web Camp, I went back to investigate static maps again. This time I did finally manage to find some PHP code for encoding lat/lon coordinates into a polyline that worked. Finally I was able to construct URLs for a static map image that connects multiple points with a line.
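If you’re curious, the algorithm turns out to be less scary in code than it reads in prose. Here’s a rough JavaScript equivalent of what that PHP is doing (a sketch of the standard encoding, not the exact code I’m using):

function encodePolyline(points) {
  // points is an array of [latitude, longitude] pairs
  let output = '';
  let previousLat = 0;
  let previousLon = 0;
  points.forEach( point => {
    // multiply by 1e5, round, and encode the difference from the previous point
    const lat = Math.round(point[0] * 1e5);
    const lon = Math.round(point[1] * 1e5);
    output += encodeNumber(lat - previousLat);
    output += encodeNumber(lon - previousLon);
    previousLat = lat;
    previousLon = lon;
  });
  return output;
}

function encodeNumber(number) {
  // left-shift one bit and invert the bits if the original value is negative
  let value = number << 1;
  if (number < 0) {
    value = ~value;
  }
  let encoded = '';
  // emit 5-bit chunks (low bits first), OR with 0x20 while more chunks follow,
  // add 63, and convert each value to its ASCII character
  while (value >= 0x20) {
    encoded += String.fromCharCode((0x20 | (value & 0x1f)) + 63);
    value >>= 5;
  }
  encoded += String.fromCharCode(value + 63);
  return encoded;
}

The resulting string, suitably URL-encoded, is what goes into the static map URL.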

I’ve put these maps on all the archive pages that also have calendar heatmaps. Some examples:

If you go back much further than that, the maps start to trail off. That’s because I wasn’t geotagging everything from the start.

I’m pretty happy with the final results. It’s certainly far more responsible from a performance point of view. Oh, and I’ve also got the maps inside a picture element so that I can swap out the tiles if you switch to dark mode.

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

The Web Share API in Safari on iOS

I implemented the Web Share API over on The Session back when it was first available in Chrome on Android. It’s a nifty and quite straightforward API that allows websites to make use of the “sharing drawer” that mobile operating systems provide from within a web browser.

I already had sharing buttons that popped open links to Twitter, Facebook, and email. You can see these sharing buttons on individual pages for tunes, recordings, sessions, and so on.

I was already intercepting clicks on those buttons. I didn’t have to add too much to also check for support for the Web Share API and trigger that instead:

if (navigator.share) {
  navigator.share(
    {
      title: document.querySelector('title').textContent,
      text: document.querySelector('meta[name="description"]').getAttribute('content'),
      url: document.querySelector('link[rel="canonical"]').getAttribute('href')
    }
  );
}

That worked a treat. As you can see, there are three fields you can pass to the share() method: title, text, and url. You don’t have to provide all three.
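In case it’s useful, here’s roughly how that fits together with the click interception (a simplified sketch; the a.share selector is a placeholder rather than my actual markup):

document.querySelectorAll('a.share').forEach( button => {
  button.addEventListener('click', event => {
    if (navigator.share) {
      event.preventDefault(); // skip the fallback pop-up links
      navigator.share({
        title: document.querySelector('title').textContent,
        text: document.querySelector('meta[name="description"]').getAttribute('content'),
        url: document.querySelector('link[rel="canonical"]').getAttribute('href')
      }).catch( () => {
        // the user probably dismissed the sharing drawer—nothing to do
      });
    }
  });
});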

Earlier this year, Safari on iOS shipped support for the Web Share API. I didn’t need to do anything. ‘Cause that’s how standards work. You can make use of APIs before every browser supports them, and then your website gets better and better as more and more browsers add support.

But I recently discovered something interesting about the iOS implementation.

When the share() method is triggered, iOS provides multiple ways of sharing: Messages, AirDrop, email, and so on. But the simplest option is the one labelled “copy”, which copies to the clipboard.

Here’s the thing: if you’ve provided a text parameter to the share() method then that’s what’s going to get copied to the clipboard—not the URL.

That’s a shame. Personally, I think the url field should take precedence. But I don’t think this is a bug, per se. There’s nothing in the spec to say how operating systems should handle the data sent via the Web Share API. Still, I think it’s a bit counterintuitive. If I’m looking at a web page, and I opt to share it, then surely the URL is the most important piece of data?

I’m not even sure where to direct this feedback. I guess it’s under the purview of the Safari team, but it also touches on OS-level interactions. Either way, I hope that somebody at Apple will consider changing the current behaviour for copying Web Share data to the clipboard.

In the meantime, I’ve decided to update my code to remove the text parameter:

if (navigator.share) {
  navigator.share(
    {
      title: document.querySelector('title').textContent,
      url: document.querySelector('link[rel="canonical"]').getAttribute('href')
    }
  );
}

If the behaviour of Safari on iOS changes, I’ll reinstate the missing field.

By the way, if you’re making progressive web apps that have display: standalone in the web app manifest, please consider using the Web Share API. When you remove the browser chrome, you’re removing the ability for users to easily share URLs. The Web Share API gives you a way to reinstate that functionality.

Dark mode

I had a very productive time at Indie Web Camp Amsterdam. The format really lends itself to getting the most of a weekend—one day of discussions followed by one day of hands-on making and doing. You should definitely come along to Indie Web Camp Brighton on October 19th and 20th to experience it for yourself.

By the end of the “doing” day, I had something fun to demo—a dark mode for my website.

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.

Applying the dark mode styles is pretty straightforward in theory. You put the styles inside this media query:

@media (prefers-color-scheme: dark) {
...
}

Rather than over-riding every instance of a colour in my style sheet, I decided I’d do a little bit of refactoring first and switch to using CSS custom properties (or variables, if you will).

:root {
  --background-color: #fff;
  --text-color: #333;
  --link-color: #b52;
}
body {
  background-color: var(--background-color);
  color: var(--text-color);
}
a {
  color: var(--link-color);
}

Then I can over-ride the custom properties without having to touch the already-declared styles:

@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #111416;
    --text-color: #ccc;
    --link-color: #f96;
  }
}

All in all, I have about a dozen custom properties for colours—variations for text, backgrounds, and interface elements like links and buttons.

By using custom properties and the prefers-color-scheme media query, I was 90% of the way there. But the devil is in the details.

I have SVGs of sparklines on my homepage. The SVG has a hard-coded colour value in the stroke attribute of the path element that draws the sparkline. Fortunately, this can be over-ridden in the style sheet:

svg.activity-sparkline path {
  stroke: var(--text-color);
}

The real challenge came with the images I use in the headers of my pages. They’re JPEGs with white corners on one side and white gradients on the other.


I could make them PNGs to get transparency, but the file size would shoot up—they’re photographic images (with a little bit of scan-line treatment) so JPEGs (or WEBPs) are the better format. Then I realised I could use CSS to recreate the two effects:

  1. For the cut-out triangle in the top corner, there’s clip-path.
  2. For the gradient, there’s …gradients!
background-image: linear-gradient(
  to right,
  transparent 50%,
  var(--background-color) 100%
);

Oh, and I noticed that when I applied the clip-path for the corners, it had no effect in Safari. It turns out that after half a decade of support, it still only exists with the -webkit- prefix. That’s just ridiculous. At this point we should be burning vendor prefixes with fire. I can’t believe that Apple still ships standardised CSS properties that only work with a prefix.

In order to apply the CSS clip-path and gradient, I needed to save out the images again, this time without the effects baked in. I found the original Photoshop file I used to export the images. But I don’t have a copy of Photoshop any more. I haven’t had a copy of Photoshop since Adobe switched to their Mafia model of pricing. A quick bit of searching turned up Photopea, which is pretty much an entire recreation of Photoshop in the browser. I was able to open my old PSD file and re-export my images.


Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop. I could probably do those raster scan lines with CSS if I were smart enough.


This is what I demo’d at the end of Indie Web Camp Amsterdam, and I was pleased with the results. But fate had an extra bit of good timing in store for me.

The very next day at the View Source conference, Melanie Richards gave a fantastic talk called The Tailored Web: Effectively Honoring Visual Preferences (seriously, conference organisers, you want this talk on your line-up). It was packed with great insights and advice on implementing dark mode, like this little gem for adjusting images:

@media (prefers-color-scheme: dark) {
  img {
    filter: brightness(.8) contrast(1.2);
  }
}

Melanie also pointed out that you can indicate the presence of dark mode styles to browsers, although the mechanism is yet to shake out. You can do it in CSS:

:root {
  color-scheme: light dark;
}

But you can also do it in HTML with a meta element:

<meta name="color-scheme" content="light dark">

That allows browsers to swap out replaced content: interface elements like form fields and dropdowns.

Oh, and one other thing I added after the fact was swapping out map imagery by using the picture element to point to darker map tiles:

<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.mapbox.com/styles/v1/mapbox/dark-v10/static...">
<img src="https://api.mapbox.com/styles/v1/mapbox/outdoors-v10/static..." alt="map">
</picture>


So now I’ve got a dark mode for my website. Admittedly, it’s for just one of the eight style sheets. I’ve decided that, while I’ll update my default styles at every opportunity, I’m going to preserve the other skins as they are, like the historical museum pieces they are.

If you’re on the latest version of iOS, go ahead and toggle the light and dark options in your system preferences to flip between this site’s colour schemes.

Going offline with microformats

For the offline page on my website, I’ve been using a mixture of the Cache API and the localStorage API. My service worker script uses the Cache API to store copies of pages for offline retrieval. But I used the localStorage API to store metadata about the page—title, description, and so on. Then, my offline page would rifle through the pages stored in a cache, and retrieve the corresponding metadata from localStorage.

It all worked fine, but as soon as I read Remy’s post about the forehead-slappingly brilliant technique he’s using, I knew I’d be switching my code over. Instead of using localStorage—or any other browser API—to store and retrieve metadata, he uses the pages themselves! Using the Cache API, you can examine the contents of the pages you’ve stored, and get at whatever information you need:

I realised I didn’t need to store anything. HTML is the API.

Refactoring the code for my offline page felt good for a couple of reasons. First of all, I was able to remove a dependency—localStorage—and simplify the JavaScript. That always feels good. But the other reason for the warm fuzzies is that I was able to use data instead of metadata.

Many years ago, Cory Doctorow wrote a piece called Metacrap. In it, he enumerates the many issues with metadata—data about data. The source of many problems is when the metadata is stored separately from the data it describes. The data may get updated, without a corresponding update happening to the metadata. Metadata tends to rot because it’s invisible—out of sight and out of mind.

In fact, that’s always been at the heart of one of the core principles behind microformats. Instead of duplicating information—once as data and again as metadata—repurpose the visible data; mark it up so its meta-information is directly attached to the information itself.

So if you have a person’s contact details on a web page, rather than repeating that information somewhere else—in the head of the document, say—you could instead attach some kind of marker to indicate which bits of the visible information are contact details. In the case of microformats, that’s done with class attributes. You can mark up a page that already has your contact information with classes from the h-card microformat.

Here on my website, I’ve marked up my blog posts, articles, and links using the h-entry microformat. These classes explicitly mark up the content to say “this is the title”, “this is the content”, and so on. This makes it easier for other people to repurpose my content. If, for example, I reply to a post on someone else’s website, and ping them with a webmention, they can retrieve my post and know which bit is the title, which bit is the content, and so on.

When I read Remy’s post about using the Cache API to retrieve information directly from cached pages, I knew I wouldn’t have to do much work. Because all of my posts are already marked up with h-entry classes, I could use those hooks to create a nice offline page.

The markup for my offline page looks like this:

<h1>Offline</h1>
<p>Sorry. It looks like the network connection isn’t working right now.</p>
<div id="history">
</div>

I’ll populate that “history” div with information from a cache called “pages” that I’ve created using the Cache API in my service worker.

I’m going to use async/await to do this because there are lots of steps that rely on the completion of the step before. “Open this cache, then get the keys of that cache, then loop through the pages, then…” All of those thens would lead to some serious indentation without async/await.

Async functions don’t have to have a name, but giving this one a name makes it easier to talk about. I’m calling it listPages, just like Remy is doing. I’m making the listPages function execute immediately:

(async function listPages() {
...
})();

Now for the code to go inside that immediately-invoked function.

I create an array called browsingHistory that I’ll populate with the data I’ll use for that “history” div.

const browsingHistory = [];

I’m going to be parsing web pages later on, so I’m going to need a DOM parser. I give it the imaginative name of …parser.

const parser = new DOMParser();

Time to open up my “pages” cache. This is the first await statement. When the cache is opened, this promise will resolve and I’ll have access to this cache using the variable …cache (again with the imaginative naming).

const cache = await caches.open('pages');

Now I get the keys of the cache—that’s a list of all the page requests in there. This is the second await. Once the keys have been retrieved, I’ll have a variable that’s got a list of all those pages. You’ll never guess what I’m calling the variable that stores the keys of the cache. That’s right …keys!

const keys = await cache.keys();

Time to get looping. I’m getting each request in the list of keys using a for/of loop:

for (const request of keys) {
...
}

Inside the loop, I pull the page out of the cache using the match() method of the Cache API. I’ll store what I get back in a variable called response. As with everything involving the Cache API, this is asynchronous so I need to use the await keyword here.

const response = await cache.match(request);

I’m not interested in the headers of the response. I’m specifically looking for the HTML itself. I can get at that using the text() method. Again, it’s asynchronous and I want this promise to resolve before doing anything else, so I use the await keyword. When the promise resolves, I’ll have a variable called html that contains the body of the response.

const html = await response.text();

Now I can use that DOM parser I created earlier. I’ve got a string of text in the html variable. I can generate a Document Object Model from that string using the parseFromString() method. This isn’t asynchronous so there’s no need for the await keyword.

const dom = parser.parseFromString(html, 'text/html');

Now I’ve got a DOM, which I have creatively stored in a variable called …dom.

I can poke at it using DOM methods like querySelector. I can test to see if this particular page has an h-entry on it by looking for an element with a class attribute containing the value “h-entry”:

if (dom.querySelector('.h-entry h1.p-name')) {
...
}

In this particular case, I’m also checking to see if the h1 element of the page is the title of the h-entry. That’s so that index pages (like my home page) won’t get past this if statement.

Inside the if statement, I’m going to store the data I retrieve from the DOM. I’ll save the data into an object called …data!

const data = new Object;

Well, the first piece of data isn’t actually in the markup: it’s the URL of the page. I can get that from the request variable in my for loop.

data.url = request.url;

I’m going to store the timestamp for this h-entry. I can get that from the datetime attribute of the time element marked up with a class of dt-published.

data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));

While I’m at it, I’m going to grab the human-readable date from the innerText property of that same time.dt-published element.

data.published = dom.querySelector('.h-entry .dt-published').innerText;

The title of the h-entry is in the innerText of the element with a class of p-name.

data.title = dom.querySelector('.h-entry .p-name').innerText;

At this point, I am actually going to use some metacrap instead of the visible h-entry content. I don’t output a description of the post anywhere in the body of the page, but I do put it in the head in a meta element. I’ll grab that now.

data.description = dom.querySelector('meta[name="description"]').getAttribute('content');

Alright. I’ve got a URL, a timestamp, a publication date, a title, and a description, all retrieved from the HTML. I’ll stick all of that data into my browsingHistory array.

browsingHistory.push(data);

My if statement and my for/of loop are finished at this point. Here’s how the whole loop looks:

for (const request of keys) {
  const response = await cache.match(request);
  const html = await response.text();
  const dom = parser.parseFromString(html, 'text/html');
  if (dom.querySelector('.h-entry h1.p-name')) {
    const data = new Object;
    data.url = request.url;
    data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
    data.published = dom.querySelector('.h-entry .dt-published').innerText;
    data.title = dom.querySelector('.h-entry .p-name').innerText;
    data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
    browsingHistory.push(data);
  }
}

That’s the data collection part of the code. Now I’m going to take all that yummy information and output it onto the page.

First of all, I want to make sure that the browsingHistory array isn’t empty. There’s no point going any further if it is.

if (browsingHistory.length) {
...
}

Within this if statement, I can do what I want with the data I’ve put into the browsingHistory array.

I’m going to arrange the data by date published. I’m not sure if this is the right thing to do. Maybe it makes more sense to show the pages in the order in which you last visited them. I may end up removing this at some point, but for now, here’s how I sort the browsingHistory array according to the timestamp property of each item within it:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I’m going to concatenate some strings. This is the string of HTML text that will eventually be put into the “history” div. I’m storing the markup in a string called …markup (my imagination knows no bounds).

let markup = '<p>But you still have something to read:</p>';

I’m going to add a chunk of markup for each item of data.

browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

With my markup assembled, I can now insert it into the “history” part of my offline page. I’m using the handy insertAdjacentHTML() method to do this.

document.getElementById('history').insertAdjacentHTML('beforeend', markup);

Here’s what my finished JavaScript looks like:

<script>
(async function listPages() {
  const browsingHistory = [];
  const parser = new DOMParser();
  const cache = await caches.open('pages');
  const keys = await cache.keys();
  for (const request of keys) {
    const response = await cache.match(request);
    const html = await response.text();
    const dom = parser.parseFromString(html, 'text/html');
    if (dom.querySelector('.h-entry h1.p-name')) {
      const data = new Object;
      data.url = request.url;
      data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
      data.published = dom.querySelector('.h-entry .dt-published').innerText;
      data.title = dom.querySelector('.h-entry .p-name').innerText;
      data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
      browsingHistory.push(data);
    }
  }
  if (browsingHistory.length) {
    browsingHistory.sort( (a,b) => {
      return b.timestamp - a.timestamp;
    });
    let markup = '<p>But you still have something to read:</p>';
    browsingHistory.forEach( data => {
      markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
    });
    document.getElementById('history').insertAdjacentHTML('beforeend', markup);
  }
})();
</script>

I’m pretty happy with that. It’s not too long but it’s still quite readable (I hope). It shows that the Cache API and the h-entry microformat are a match made in heaven.

If you’ve got an offline strategy for your website, and you’re using h-entry to mark up your content, feel free to use that code.

If you don’t have an offline strategy for your website, there’s a book for that.

Request mapping

The Request Map Generator is a terrific tool. It’s made by Simon Hearne and uses the WebPageTest API.

You pop in a URL, it fetches the page and maps out all the subsequent requests in a nifty interactive diagram of circles, showing how many requests third-party scripts are themselves generating. I’ve found it to be a very effective way of showing the impact of third-party scripts to people who aren’t interested in looking at waterfall diagrams.

I was wondering… Wouldn’t it be great if this were built into browsers?

We already have a “Network” tab in our developer tools. The purpose of this tab is to show requests coming in. The browser already has all the information it needs to make a diagram of requests in the same way that the request map generator does.

In Firefox, there’s a little clock icon in the bottom left corner of the “Network” tab. Clicking that shows a pie-chart view of requests. That’s useful, but I’d love it if there were the option to also see the connected circles that the request map generator shows.

Just a thought.