Journal

Wednesday, December 19th, 2018

Vienna

Back in December 1997, when Jessica and I were living in Freiburg, Dan came to visit. Together, we boarded a train east to Vienna. There we would ring in the new year to the sounds of the Salonorchester Alhambra, the band that Dan’s brother Andrew was playing in (and the band that would later be my first paying client when I made their website—I’ve still got the files lying around somewhere).

That was a fun New Year’s ball …although I remember my mortification when we went for goulash beforehand and I got a drop on the pristine tux that I had borrowed from Andrew.

My other memory of that trip was going to the Kunsthistorisches Museum to see the amazing Bruegel collection. It’s hard to imagine that ever being topped, but then this year, they put together a “once in a lifetime” collection, gathering even more Bruegel masterpieces together in Vienna.

Jessica got the crazy idea in her head that we could go there. In a day.

Looking at the flights, it turned out to be not such a crazy idea after all. Sure, it meant an early start, but it was doable. We booked our museum tickets, and then we booked plane tickets.

That’s how we ended up going to Vienna for the day this past Monday. It was maybe more time than I’d normally like to spend in airports in a 24-hour period, but it was fun. We landed, went into town for a wiener schnitzel, and then it was off to the museum for an afternoon of medieval masterpieces. Hunters in the Snow, the Tower of Babel, and a newly restored Triumph of Death sent from the Prado were just some of the highlights.

There’s a website to accompany the exhibition called Inside Bruegel. You can zoom on each painting to see the incredible detail. You can even compare the infrared and x-ray views. Dive in and explore the world of Pieter Bruegel the Elder.

The Battle between Carnival and Lent

Sunday, December 16th, 2018

Browsers

Microsoft’s Edge browser is going to switch its rendering engine over to Chromium.

I am deflated and disappointed.

There’s just no sugar-coating this. I’m sure the decision makes sound business sense for Microsoft, but it’s not good for the health of the web.

Very soon, the vast majority of browsers will have an engine that’s either Blink or its cousin, WebKit. That may seem like good news for developers when it comes to testing, but trust me, it’s a sucky situation for innovation and agreement. Instead of a diverse browser ecosystem, we’re going to end up with incest and inbreeding.

There’s one shining exception though. Firefox. That browser was originally created to combat the seemingly unstoppable monopolistic power of Internet Explorer. Now that Microsoft are no longer in the rendering engine game, Firefox is once again the only thing standing in the way of a complete monopoly.

I’ve been using Firefox as my main browser for a while now, and I can heartily recommend it. You should try it (and maybe talk to your relatives about it at Christmas). At this point, which browser you use no longer feels like it’s just about personal choice—it feels part of something bigger; it’s about the shape of the web we want.

Jeffrey wrote that browser diversity starts with us:

The health of Firefox is critical now that Chromium will be the web’s de facto rendering engine.

Even if you love Chrome, adore Gmail, and live in Google Docs or Analytics, no single company, let alone a user-tracking advertising giant, should control the internet.

Andy Bell also writes about browser diversity:

I’ll say it bluntly: we must support Firefox. We can’t, as a community, allow this browser engine monopoly. We must use Firefox as our main dev browsers; we must encourage our friends and families to use it, too.

Yes, it’s not perfect, nor are Mozilla, but we can help them to develop and grow by using Firefox and reporting issues that we find. If we just use and build for Chromium, which is looking likely (cough Internet Explorer monopoly cough), then Firefox will fall away and we will then have just one major engine left. I don’t ever want to see that.

Uncle Dave says:

If the idea of a Google-driven Web is of concern to you, then I’d encourage you to use Firefox. And don’t be a passive consumer; blog, tweet, and speak about its killer features. I’ll start: Firefox’s CSS Grid, Flexbox, and Variable Font tools are the best in the business.

Mozilla themselves came out all guns blazing when they said Goodbye, EdgeHTML:

Microsoft is officially giving up on an independent shared platform for the internet. By adopting Chromium, Microsoft hands over control of even more of online life to Google.

Tim describes the situation as risking a homogeneous web:

I don’t think Microsoft using Chromium is the end of the world, but it is another step down a slippery slope. It’s one more way of bolstering the influence Google currently has on the web.

We need Google to keep pushing the web forward. But it’s critical that we have other voices, with different viewpoints, to maintain some sense of balance. Monocultures don’t benefit anyone.

Andre Alves Garzia writes that while we Blink, we lose the web:

Losing engines is like losing languages. People may wish that everyone spoke the same language, they may claim it leads to easier understanding, but what people fail to consider is that this leads to losing all the culture and way of thought that that language produced. If you are a Web developer smiling and happy that Microsoft might be adopting Chrome, and this will make your work easier because it will be one less browser to test, don’t be! You’re trading convenience for diversity.

I like that analogy with language death. If you prefer biological analogies, it’s worth revisiting this fantastic post by Rachel back in August—before any of us knew about Microsoft’s decision—all about the ecological impact of browser diversity:

Let me be clear: an Internet that runs only on Chrome’s engine, Blink, and its offspring, is not the paradise we like to imagine it to be.

That post is a great history lesson, documenting how things can change, and how decisions can have far-reaching unintended consequences.

So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.

If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.

And it’s not likely to be replaced.

Monday, December 3rd, 2018

Programming CSS

There’s a worrying tendency for “real” programmers to look down their noses at CSS. It’s just a declarative language, they point out, not a fully-featured programming language. Heck, it isn’t even a scripting language.

That may be true, but that doesn’t mean that CSS isn’t powerful. It’s just powerful in different ways to traditional languages.

Take CSS selectors, for example. At the most basic level, they work like conditional statements. Here’s a standard if statement:

if (condition) {
  // code here
}

The condition needs to evaluate to true in order for the code in the curly braces to be executed. Sound familiar?

condition {
  /* styles here */
}

That’s a very simple mapping, but what if the conditional statement is more complicated?

if (condition1 && condition2) {
  // code here
}

Well, that’s what the descendant selector does:

condition1 condition2 {
  /* styles here */
}

In fact, we can get even more specific than that by using the child combinator, the sibling combinator, and the adjacent sibling combinator:

  • condition1 > condition2
  • condition1 ~ condition2
  • condition1 + condition2

AND is just one part of Boolean logic. There’s also OR:

if (condition1 || condition2) {
  // code here
}

In CSS, we use commas:

condition1, condition2 {
  /* styles here */
}

We’ve even got the :not() pseudo-class to complete the set of Boolean possibilities. Once you add quantity queries into the mix, made possible by :nth-child and its ilk, CSS starts to look Turing complete. I’ve seen people build state machines using the adjacent sibling combinator and the :checked pseudo-class.
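
As a small, generic illustration (these selectors aren’t from any real project), here’s :not() alongside a quantity query that only kicks in when a list contains six or more items:

li:not(.active) {
  /* every list item except the active one */
  opacity: 0.5;
}

li:nth-last-child(n+6),
li:nth-last-child(n+6) ~ li {
  /* every list item, but only when there are six or more of them */
  font-size: 0.8em;
}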

Anyway, my point here is that CSS selectors are really powerful. And yet, quite often we deliberately choose not to use that power. The entire raison d’être for OOCSS, BEM, and Smacss is to deliberately limit the power of selectors, restricting them to class selectors only.

On the face of it, this might seem like an odd choice. After all, we wouldn’t deliberately limit ourselves to a subset of a programming language, would we?

We would and we do. That’s what templating languages are for. Whether it’s PHP’s Smarty or Twig, or JavaScript’s Mustache, Nunjucks, or Handlebars, they all work by providing a deliberately small subset of features. Some pride themselves on being logic-less. If you find yourself trying to do something that the templating language doesn’t provide, that’s a good sign that you shouldn’t be trying to do it in the template at all; it should be in the controller.

So templating languages exist to enforce simplicity and ensure that the complexity happens somewhere else. It’s a similar story with BEM et al. If you find you can’t select something in the CSS, that’s a sign that you probably need to add another class name to the HTML. The complexity is confined to the markup in order to keep the CSS more straightforward, modular, and maintainable.
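
As a rough sketch (the class names here are invented), the trade-off looks something like this:

/* Selector power: concise, but coupled to the structure of the markup */
.card h2 a {
  color: darkred;
}

/* The BEM-ish alternative: an extra class in the HTML keeps the selector flat */
.card__title-link {
  color: darkred;
}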

But let’s not forget that that’s a choice. It’s not that CSS is inherently incapable of executing complex conditions. Quite the opposite. It’s precisely because CSS selectors (and the cascade) are so powerful that we choose to put guard rails in place.

Monday, November 26th, 2018

Prototypes and production

When we do front-end development at Clearleft, we’re usually delivering production code, often in the form of a component library. That means our priorities are performance, accessibility, robustness, and other markers of quality when it comes to web development.

But every so often, we use the materials of front-end development—HTML, CSS, and JavaScript—to produce something that isn’t intended for production. I’m talking about prototyping.

There are plenty of non-code prototyping tools out there, and our designers often reach for them to communicate subtleties like motion design. But when it comes to testing a prototype with real users, it’s hard to beat the flexibility of HTML, CSS, and JavaScript. Load it up in a browser and away you go.

We do a lot of design sprints, where time is of the essence. The prototype we produce on the penultimate day of the sprint definitely won’t be production quality, but it will be good enough to test.

What’s interesting is that—when it comes to prototyping—our usual front-end priorities can and should go out the window. The priority now is speed. If that means sacrificing semantics or performance, then so be it. If I’m building a prototype and I find myself thinking “now, what’s the right class name for this component?”, then I know I’m in the wrong mindset. That question might be valid for production code, but it’s a waste of time for prototypes.

So these two kinds of work require very different attitudes. For production work, quality is key. For prototyping, making something quickly is what matters.

Whereas I would think long and hard about the performance impacts of third-party libraries and frameworks on a public project, I won’t give it a second thought when it comes to a prototype. Throw all the JavaScript frameworks and CSS libraries you want at it (although I would argue that in-browser technologies like CSS Grid have made CSS libraries like Bootstrap less necessary, even for prototyping).

Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.

When a prototype is successful, works great, and tests well, there’s a real temptation to use the prototype code as the basis for the final product. Don’t do this! I’ve made that mistake in the past and it always ends badly. I ended up spending far more time trying to wrangle prototype code to a production level than if I had just started from a clean slate.

Build prototypes to test ideas, designs, interactions, and interfaces …and then throw the code away. The value of a prototype is in answering questions and testing hypotheses. Don’t fall for the sunk cost fallacy when it’s time to switch over into production mode.

Of course it should go without saying that you should never, ever release prototype code into production.

And yet…

More and more live sites seem to be built with a prototyping mindset. Weighty JavaScript frameworks are used regardless of appropriateness. Accessibility, if it’s even considered at all, is relegated to an afterthought. Fragile architectures are employed that rely on first loading and then executing JavaScript in order to render basic content. Developer experience is prioritised over user experience.

Heydon recently highlighted an article that offered this tip for aspiring web developers:

As for HTML, there’s not much to learn right away and you can kind of learn as you go, but before making your first templates, know the difference between in-line elements like span and how they differ from block ones like div.

That’s perfectly reasonable advice …if you’re building a prototype. But if you’re building something for public consumption, you have a duty of care to the end users.

Sunday, November 25th, 2018

Food and music

Going from Iceland to Greece in a day gave me a mild bit of currency exchange culture shock. Iceland is crazy expensive, especially given the self-immolation of the pound right now. Greece is remarkably cheap. You can eat like a king for unreasonably reasonable prices.

For me, food is one of the great pleasures in life. Trying new kinds of food is one of my primary motivators for travelling. It’s fascinating to me to see the differences—and similarities—across cultures. In many ways, food is like a universal language, but a language that we all speak in different dialects.

Herring. A feast of lamb.

It’s a similar story with music. There’s a fundamental universality in music across cultures, but there’s also a vast gulf of differences.

On my first night in Reykjavik, I wound up at an Irish music session. I know, I know—I sound like such a cliché, going to a foreign country and immediately seeking out something familiar. But I had been invited along by a kind soul who got in touch through The Session after I posted my travel plans there. Luckily for me, there was a brand new session starting that very evening. I didn’t have an instrument, but someone very kindly lent me their banjo and I had a thoroughly enjoyable time playing along with the jigs and reels.

As an added bonus—and you really don’t get to hear this at most trad sessions—there was even a bit of Icelandic singing courtesy of Bára Grímsdóttir. I snatched a little sample of it.

A few nights later I was in a quiet, somewhat smokey tavern in Thessaloniki. There was no Irish music to be found, but the rembetika music played on gorgeous bouzoukis and baglamas was in full flow.

Saturday, November 24th, 2018

Conferencing

I just wrapped up my last speaking gig of the year. It came at the end of a streak of attending European conferences without speaking at any of them—quite a nice feeling!

I already mentioned that I was in Berlin for the (excellent) Indie Web Camp. That was immediately followed by a one-day Accessibility Club conference. It was really, really good.

I have to say, I was initially apprehensive when I saw the sheer number of speakers on the schedule. I was worried that my attention couldn’t handle it all. But the talks were a mixture of shorter 20-minute presentations, and a few longer 40-minute presentations. That worked really well—the day fairly zipped by. And just in case you think it would be hard to have an entire day devoted to accessibility, the breadth of talks was remarkably diverse. Hats off to a well-organised and well-executed event!

The next day was Beyond Tellerrand. This has my favourite conference format: two days; one track; curated; a mix of design and development (see also An Event Apart and Smashing Conference). Marc’s love and care shines through every pore of the event. I thoroughly enjoyed the talks, and the hanging out with lovely people.

Alas, I had to miss the final afternoon of Beyond Tellerrand to head home to Brighton. I needed to get back for FF Conf. It was excellent, as always. Remy and Julie really give it their all. Remy even stepped in to give a (great) talk himself this year, when a speaker couldn’t make it.

A week later, I went to Iceland for Material. I really enjoyed last year’s inaugural event, and if anything, this year’s topped it. I just love how eclectic and different the talks are, and yet it all weirdly hangs together in a thoughtfully curated way. (Oh, and Remy, when you start to put together the line-up for next year’s FF Conf, be sure to check out Charlotte Dann—her talk at Material was the perfect mix of code and creativity.)

As well as sharing an organiser with Accessibility Club, Material had a similar format—keynote talks from invited presenters, interspersed with shorter talks by locals. The mix was great. I won’t even try to describe the range of topics. I’m not sure I could explain how a conference podium morphed into a bar at the end of one of the talks. I think the best description of Material would be to say it’s like the inside of Brian’s head. In a good way.

I was supposed to be back in Brighton for one night after Material, but the stormy weather kept myself and Jessica in Reykjavik for an extra night. Thanks to Brian’s hospitality, we had a bed for the night.

There followed a long travel day as we made our way from Reykjavik to Gatwick, and then straight on to Thessaloniki, where we spent five days even though we only had the clothes we packed for the brief trip to Iceland. (Yes, we went shopping.)

I was there to speak at Voxxed Days. These events happen in various locations around the world, and just a few weeks ago, I spoke at the one in Bristol. It was …different.

After experiencing so many lovingly crafted events—Accessibility Club, Beyond Tellerrand, FF Conf, and Material—I’m afraid that Voxxed Days Thessaloniki was quite a comedown. It’s not that it was corporate per se—I believe it’s organised by developers for developers—but it felt like it was for people who worked in corporate environments. There were multiple tracks (I’m really not a fan of that), and some great speakers on the line-up like Stephanie and Simona, but the atmosphere felt kind of grim in a David Brentian sort of way. It probably wasn’t helped by the cheeky chappie of an MC who referred to one of the speakers as “darling.”

Anyway, I spoke first thing on the first day and I didn’t end up sticking around long. Normally I don’t speak and run, but I didn’t fancy the vibe of the exhibitor hall with its booth-babesque sales teams. Voxxed Days doesn’t pay its speakers so I didn’t feel any great obligation to hang around. The magnificent food and rembetika music of Thessaloniki was calling.

I just got back from Greece, and that wraps up my conference attending (and speaking) for 2018. I’ve already got a couple of events lined up for 2019. I’m delighted to be speaking at the return of Colly’s New Adventures conference. I’m less delighted about preparing a brand new talk I promised—I’m really feeling the pressure to deliver the goods at such an auspicious event with an intimidatingly superb line-up of speakers.

I’m also going to be preparing a different all-new talk for An Event Apart Seattle in March. For once, I’m going to try to make it somewhat practical and talk about service workers. If you know of any other events that might want a presentation like that in 2019, drop me a line.

Perhaps I will see you in Nottingham or in Seattle. If you’re planning on going to New Adventures, use the discount code ADACTIO10 to get 10% off the price of the conference or workshop ticket. If you’re planning on going to An Event Apart, use the discount code AEAKEITH for $100 off.

Tuesday, November 13th, 2018

Optimise without a face

I’ve been playing around with the newly-released Squoosh, the spiritual successor to Jake’s SVGOMG. You can drag images into the browser window, and eyeball the changes that any optimisations might make.

On a project that Cassie is working on, it worked really well for optimising some JPEGs. But there were a few images that would require a bit more fine-grained control of the optimisations. Specifically, pictures with human faces in them.

I’ve written about this before. If there’s a human face in an image, I open that image in a graphics editing tool like Photoshop, select everything but the face, and add a bit of blur. Because humans are hard-wired to focus on faces, we’ll notice any jaggy artifacts on a face, but we’re far less likely to notice jagginess in background imagery: walls, materials, clothing, etc.

On the face of it (hah!), a browser-based tool like Squoosh wouldn’t be able to optimise for faces, but then Cassie pointed out something really interesting…

When we were both at FFConf on Friday, there was a great talk by Eleanor Haproff on machine learning with JavaScript. It turns out there are plenty of smart toolkits out there, and one of them is facial recognition. So I wonder if it’s possible to build an in-browser tool with this workflow:

  • Drag or upload an image into the browser window,
  • A facial recognition algorithm finds any faces in the image,
  • Those portions of the image remain crisp,
  • The rest of the image gets a slight blur,
  • Download the optimised image.

Maybe the selecting/blurring part would need canvas? I don’t know.
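
Just to make the idea concrete, here’s a rough canvas-based sketch of that workflow. The detectFaces function is entirely hypothetical, standing in for whichever machine learning library handles the face detection, and is assumed to return an array of bounding boxes:

async function blurAroundFaces(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const context = canvas.getContext('2d');
  // Draw the whole image with a slight blur...
  context.filter = 'blur(2px)';
  context.drawImage(img, 0, 0);
  // ...then redraw each detected face region without the blur.
  context.filter = 'none';
  const faces = await detectFaces(img); // hypothetical face detection step
  for (const {x, y, width, height} of faces) {
    context.drawImage(img, x, y, width, height, x, y, width, height);
  }
  // The crisp-faces, blurry-background result could then be compressed and downloaded.
  return canvas.toDataURL('image/jpeg', 0.8);
}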

Anyway, I thought this was a brilliant bit of synthesis from Cassie, and now I’ve got two questions:

  1. Does this exist yet? And, if not,
  2. Does anyone want to try building it?

Sunday, November 11th, 2018

Push without notifications

On the first day of Indie Web Camp Berlin, I led a session on going offline with service workers. This covered all the usual use-cases: pre-caching; custom offline pages; saving pages for offline reading.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

The Push API is what makes push notifications possible on the web. There are a lot of moving parts—browser, server, service worker—and, frankly, it’s way over my head. But I’m familiar with the general gist of how it works. Here’s a typical flow:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker creates a notification linking to the URL, interrupting the user, and generally adding to the weight of information overload.

Here’s what Sebastiaan wanted to investigate: what if that last step weren’t so intrusive? Here’s the alternate flow he wanted to test:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker fetches the contents of the URL provided in the push message and caches the page. Silently.

It worked.
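
To make that concrete, here’s a minimal sketch of what such a push handler in a service worker might look like. This isn’t Sebastiaan’s actual code, and it assumes the push payload is JSON containing a url property:

addEventListener('push', pushEvent => {
  const data = pushEvent.data.json();
  pushEvent.waitUntil(
    // Quietly fetch and cache the new URL; no notification is shown.
    caches.open('pushed-content').then(cache => cache.add(data.url))
  );
});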

I think this could be a real game-changer. I don’t know about you, but I’m very, very wary of granting websites the ability to send me push notifications. In fact, I don’t think I’ve ever given a website permission to interrupt me with push notifications.

You’ve seen the annoying permission dialogues, right?

In Firefox, it looks like this:

Will you allow name-of-website to send notifications?

[Not Now] [Allow Notifications]

In Chrome, it’s:

name-of-website wants to

Show notifications

[Block] [Allow]

But in actual fact, these dialogues are asking for permission to do two things:

  1. Receive messages pushed from the server.
  2. Display notifications based on those messages.

There’s no way to ask for permission just to do the first part. That’s a shame. While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.

Think of the use cases:

  • I grant push permission to a magazine. When the magazine publishes a new article, it’s cached on my device.
  • I grant push permission to a podcast. Whenever a new episode is published, it’s cached on my device.
  • I grant push permission to a blog. When there’s a new blog post, it’s cached on my device.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

There’s plenty of opportunity for abuse—the cache could get filled with content. But websites can already do that, and they don’t need to be granted any permissions to do so; just by visiting a website, it can add multiple files to a cache.

So it seems that the reason for the permissions dialogue is all about displaying notifications …not so much about receiving push messages from the server.

I wish there were a way to implement this background-caching pattern without requiring the user to grant permission to a dialogue that contains the word “notification.”

I wonder if the act of adding a site to the home screen could implicitly grant permission to allow use of the Push API without notifications?

In the meantime, the proposal for periodic synchronisation (using background sync) could achieve similar results, but in a less elegant way; periodically polling for new content instead of receiving a push message when new content is published. Also, it requires permission. But at least in this case, the permission dialogue should be more specific, and wouldn’t include the word “notification” anywhere.
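
For illustration, the periodic polling version might look something like the sketch below. The API shape here follows the Chromium proposal and should be treated as an assumption, and the /latest/ URL is made up:

// In the page: request a sync roughly once a day.
navigator.serviceWorker.ready.then(registration => {
  return registration.periodicSync.register('get-latest-content', {
    minInterval: 24 * 60 * 60 * 1000 // milliseconds
  });
});

// In the service worker: poll for new content and cache it silently.
addEventListener('periodicsync', syncEvent => {
  if (syncEvent.tag === 'get-latest-content') {
    syncEvent.waitUntil(
      caches.open('periodic-content').then(cache => cache.add('/latest/'))
    );
  }
});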

Saturday, November 10th, 2018

Webmentions at Indie Web Camp Berlin

I was in Berlin for most of last week, and every day was packed with activity.

By the time I got back to Brighton, my brain was full …just in time for FF Conf.

All of the events were very different, but equally enjoyable. It was also quite nice to just attend events without speaking at them.

Indie Web Camp Berlin was terrific. There was an excellent turnout, and once again, I found that the format was just right: a day of discussions (BarCamp style) followed by a day of doing (coding, designing, hacking). I got very inspired on the first day, so I was raring to go on the second.

What I like to do on the second day is try to complete two tasks; one that’s fairly straightforward, and one that’s a bit tougher. That way, when it comes time to demo at the end of the day, even if I haven’t managed to complete the tougher one, I’ll still be able to demo the simpler one.

In this case, the tougher one was also tricky to demo. It involved a lot of invisible behind-the-scenes plumbing. I was tweaking my webmention endpoint (stop sniggering—tweaking your endpoint is no laughing matter).

Up until now, I could handle straightforward webmentions, and I could handle updates (if I receive more than one webmention from the same link, I check it each time). But I needed to also handle deletions.

The spec is quite clear on this. A 404 isn’t enough to trigger a deletion—that might be a temporary state. But a status of 410 Gone indicates that a resource was once here but has since been deliberately removed. In that situation, any stored webmentions for that link should also be removed.
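
Here’s a minimal sketch of that rule (not my actual endpoint code; removeStoredWebmentions is a hypothetical storage helper):

async function recheckWebmentionSource(source) {
  const response = await fetch(source, { redirect: 'follow' });
  if (response.status === 410) {
    // 410 Gone: the source has been deliberately removed,
    // so any stored webmentions for it should be removed too.
    await removeStoredWebmentions(source);
    return;
  }
  // A 404 might only be temporary, so it doesn't trigger a deletion.
}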

Anyway, I think I got it working, but it’s tricky to test and even trickier to demo. “Not to worry”, I thought, “I’ve always got my simpler task.”

For that, I chose to add a little map to my homepage showing the last location I published something from. I’ve been geotagging all my content for years (journal entries, notes, links, articles), but not really doing anything with that data. This is a first step to doing something interesting with many years of location data.

I’ve got it working now, but the demo gods really weren’t with me at Indie Web Camp. Both of my demos failed. The webmention demo failed quite embarrassingly.

As well as handling deletions, I also wanted to handle updates where a URL that once linked to a post of mine no longer does. Just to be clear, the URL still exists—it’s not 404 or 410—but it has been updated to remove the original link back to one of my posts. I know this sounds like another very theoretical situation, but I’ve actually got an example of it on my very first webmention test post from five years ago. Believe it or not, there’s an escort agency in Nottingham that’s using webmention as a vector for spam. They post something that does link to my test post, send a webmention, and then remove the link to my test post. I almost admire their dedication.

Still, I wanted to foil this particular situation, so I thought I had updated my code to handle it. Alas, when it came time to demo this, I was using someone else’s computer, and in my attempt to right-click and copy the URL of the spam link …I accidentally triggered it. In front of a room full of people. It was mildly NSFW, but more worryingly, a potential Code Of Conduct violation. I’m very sorry about that.

Apart from the humiliating demo, I thoroughly enjoyed Indie Web Camp, and I’m going to keep adjusting my webmention endpoint. There was a terrific discussion around the ethical implications of storing webmentions, led by Sebastian, based on his epic post from earlier this year.

We established early in the discussion that we weren’t going to try to solve legal questions—like GDPR “compliance”, which varies depending on which lawyer you talk to—but rather try to figure out what the right thing to do is.

Earlier that day, during the introductions, I quite happily showed webmentions in action on my site. I pointed out that my last blog post had received a response from another site, and because that response was marked up as an h-entry, I displayed it in full on my site. I thought this was all hunky-dory, but now this discussion around privacy made me question some inferences I was making:

  1. By receiving a webmention in the first place, I was inferring a willingness for the link to be made public. That’s not necessarily true, as someone pointed out: a CMS could be automatically sending webmentions, which the author might be unaware of.
  2. If the linking post is marked up in h-entry, I was inferring a willingness for the content to be republished. Again, not necessarily true.

That second inference of mine—that publishing in a particular format somehow grants permissions—actually has an interesting precedent: Google AMP. Simply by including the Google AMP script on a web page, you are implicitly giving Google permission to store a complete copy of that page and serve it from their servers instead of sending people to your site. No terms and conditions. No checkbox ticked. No “I agree” button pressed.

Just sayin’.

Anyway, when it comes to my own processing of webmentions, I’m going to take some of the suggestions from the discussion on board. There are certain signals I could be looking for in the linking post:

  • Does it include a link to a licence?
  • Is there a restrictive robots.txt file?
  • Are there meta declarations that say noindex?

Each one of these could help to infer whether or not I should be publishing a webmention. I quickly realised that what we’re talking about here is an algorithm.

Despite its current usage to mean “magic”, an algorithm is a recipe. It’s a series of steps that contribute to a decision point. The problem is that, in the case of silos like Facebook or Instagram, the algorithms are secret (which probably contributes to their aura of magical thinking). If I’m going to write an algorithm that handles other people’s information, I don’t want to make that mistake. Whatever steps I end up codifying in my webmention endpoint, I’ll be sure to document them publicly.
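
To give a flavour of what those documented steps might look like (a hypothetical sketch, not my actual endpoint logic; the helper functions are made up):

function shouldDisplayWebmention(linkingPost) {
  // linkingPost is assumed to hold details already fetched and parsed
  // from the linking page.
  if (linkingPost.robotsDisallowed) {
    return false; // a restrictive robots.txt reads as "please don't republish"
  }
  if (linkingPost.metaNoindex) {
    return false; // an explicit noindex is an even clearer signal
  }
  if (linkingPost.licence && isRestrictive(linkingPost.licence)) {
    return false; // a restrictive licence rules out republishing in full
  }
  return true;
}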

Friday, October 26th, 2018

Service workers and videos in Safari

Alright, so I’ve already talked about some gotchas when debugging service worker issues. But what if you don’t even realise the problem has anything to do with your service worker?

This is not a hypothetical situation. I encountered this very thing myself. Gather ‘round the campfire, children…

One of the latest case studies on the Clearleft site is a nice write-up by Luke of designing a mobile app for Virgin Holidays. The case study includes a lovely video that demonstrates the log-in flow. I implemented that using a video element (with a poster image). Nice and straightforward. Super easy. All good.

But I hadn’t done my due diligence in browser testing (I guess I didn’t even think of it in this case). Hana informed me that the video wasn’t working at all in Safari. The poster image appeared just fine, but when you clicked on it, the video didn’t load.

I ducked, ducked, and went, uncovering what appeared to be the root of the problem. It seems that Safari is fussy about having servers support something called “byte-range requests”.

I had put the video in question on an Amazon S3 server. I came to the conclusion that S3 mustn’t support these kinds of headers correctly, or something.

Now I had a diagnosis. The next step was figuring out a solution. I thought I might have to move the video off of S3 and onto a server that I could configure a bit more.

Luckily, I never got ‘round to even starting that process. That’s good. Because it turns out that my diagnosis was completely wrong.

I came across a recent post by Phil Nash called Service workers: beware Safari’s range request. The title immediately grabbed my attention. Safari: yes! Video: yes! But service workers …wait a minute!

There’s a section in Phil’s post entitled “Diagnosing the problem”, in which he says:

I first thought it could have something to do with the CDN I’m using. There were some false positives regarding streaming video through a CDN that resulted in some extra research that was ultimately fruitless.

That described my situation exactly. Except Phil went further and nailed down the real cause of the problem:

Nginx was serving correct responses to Range requests. So was the CDN. The only other problem? The service worker. And this broke the video in Safari.

Doh! I hadn’t even thought about service workers!

Phil came up with a solution, and he has kindly shared his code.

I decided to go for a dumber solution:

if ( request.url.match(/\.(mp4)$/) ) {
  return;
}

That tells the service worker to just step out of the way when it comes to video requests. Now the video plays just fine in Safari. It’s a bit of a shame, because I’m kind of penalising all browsers for Safari’s bug, but the Clearleft site isn’t using much video at all, and in any case, it might be good not to fill up the cache with large video files.
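
For context, that early return sits inside the service worker’s fetch handler, something like this (a sketch rather than the actual Clearleft code):

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if ( request.url.match(/\.(mp4)$/) ) {
    // By not calling respondWith, the service worker steps aside and the
    // browser handles the video request (and its range requests) itself.
    return;
  }
  // ...the usual caching strategies follow here...
});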

But what’s more important than any particular solution is correctly identifying the problem. I’m quite sure I never would’ve been able to fix this issue if Phil hadn’t gone to the trouble of sharing his experience. I’m very, very grateful that he did.

That’s the bigger lesson here: if you solve a problem—even if you think it’s hardly worth mentioning—please, please share your solution. It could make all the difference for someone out there.