News from WWDC22: WebKit Features in Safari 16 Beta
Good news and bad news…
The good news is that web notifications are coming to iOS—my number one wish!
The bad news is that it won’t happen until next year sometime.
I’ve mentioned before that I don’t enable notifications on my phone. Text messages are the only exception. I don’t want to get notified if a new email arrives (I avoid email on my phone completely) and I certainly don’t want some social media app telling me somebody liked or faved something.
But the number one feature I’d like to see in Safari on iOS is web notifications.
It’s not for me personally, see. It’s because it’s the number one reason why people are choosing not to go all in on progressive web apps.
Safari on iOS is the last holdout. But that equates to enough market share that many companies feel they can’t treat notifications as a progressive enhancement. While I may not agree with that decision myself, I get it.
When I’m evangelising the benefits of building on the open web instead of making separate iOS and Android apps, I inevitably get asked about notifications. As long as mobile Safari doesn’t support them—even though desktop Safari does—I’m somewhat stumped. There’s no polyfill for this feature other than building an entire native app, which is a bit extreme as polyfills go.
And of course, unlike on your Mac, you don’t have the option of using a different browser on your iPhone. As long as mobile Safari doesn’t support web notifications, nothing on iOS can support web notifications.
I’ve got progressive web apps on the home screen of my phone that match their native equivalents feature-for-feature. Twitter. Instagram. They’re really good. In some ways they’re superior to the native apps; the Twitter website is much calmer, and the Instagram website has no advertising. But if I wanted to get notifications from any of those sites, I’d have to keep the native apps installed just for that one feature.
So in the spirit of complaining about web browsers in a productive way, I just want to throw this plea out there: Apple, please support web notifications in mobile Safari!
The good news is that web notifications on iOS might be on their way. Huzzah!
Alas, we’re reliant on Maximiliano’s detective work to even get a glimpse of a future feature like this. Apple has no public roadmap for Safari. There’s this status page on the WebKit blog but it’s incomplete—web notifications don’t appear at all. In any case, WebKit and Safari aren’t the same thing. The only way of knowing if a feature might be implemented in Safari is if it shows up in Safari Technology Preview, at which point it’s already pretty far along.
So while my number one feature request for mobile Safari is web notifications, a close second would be a public roadmap.
It only seems fair. If Apple devrels are asking us developers what features we’d like to see implemented—as they should!—then shouldn’t those same developers also be treated with enough respect to share a roadmap with them? There’s not much point in us asking for features if, unbeknownst to us, that feature is already being worked on.
But, like I said, my number one request remains: web notifications on iOS …please!
The title is somewhat misleading—currently it’s about native lazy-loading for Chrome, which is not (yet) the web.
I’ve just been adding loading="lazy" to most of the iframes and many of the images on adactio.com, and it’s working a treat …in Chrome.
Less than 24 hours after I put the call out for a solution to this gnarly service worker challenge, Trys has come up with a solution.
At Codebar the other night, I was doing an intro chat with some beginners. At one point I touched on DNS. This explanation is great for detailing what’s going on under the hood.
Tim takes a closer look at this Google Lite thing.
My first reaction to this was nervousness. Of all the companies to trust with intercepting and rerouting page requests, Google aren’t exactly squeaky clean, what with that whole surveillance business model of theirs.
Still, this ultimately seems to be a move to improve the end user experience, and I’m glad to see this clarification:
Lite pages are only triggered for extremely slow sites, so we encourage developers to measure how well their pages are currently performing over slow networks.
I can imagine Lite pages becoming a badge of shame (much like AMP in my eyes).
Harry breaks down cache-control headers into steps that even I can understand. I’ll be using this as a reference for sure.
I quite enjoy a good bug hunt. Just yesterday, myself and Cassie were doing some bugfixing together. As always, the first step was to try to reproduce the problem and then isolate it. Which reminds me…
There’ve been a few occasions when I’ve been trying to debug service worker issues. The problem is rarely in reproducing the issue—it’s isolating the cause that can be frustrating. I try changing a bit of code here, and a bit of code there, in an attempt to zero in on the problem, but with no luck. Before long, I’m tearing my hair out staring at code that appears to have nothing wrong with it.
And that’s when I remember: browser extensions.
I’m currently using Firefox as my browser, and I have extensions installed to stop tracking and surveillance (these technologies are usually referred to as “ad blockers”, but that’s a bit of a misnomer—the issue isn’t with the ads; it’s with the invasive tracking).
If you think about how a service worker does its magic, it’s as if it’s sitting in the browser, waiting to intercept any requests to a particular domain. It’s like the service worker is the first port of call for any requests the browser makes. But then you add a browser extension. The browser extension is also waiting to intercept certain network requests. Now the extension is the first port of call, and the service worker is relegated to be next in line.
This, apparently, can cause issues (presumably depending on how the browser extension has been coded). In some situations, network requests that should work just fine start to fail, executing the catch clauses of fetch statements in your service worker.
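To make that concrete, here’s a rough sketch of the kind of fetch handler I mean: a network-first pattern where the catch clause falls back to the cache (the /offline fallback page is just a placeholder for illustration).
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  fetchEvent.respondWith(
    fetch(request)
    .then(responseFromFetch => {
      // All being well, pass the network response along.
      return responseFromFetch;
    })
    .catch(fetchError => {
      // This is the catch clause that a meddling browser extension
      // can trigger, even when your code and the network are fine.
      return caches.match(request)
      .then(responseFromCache => {
        return responseFromCache || caches.match('/offline');
      });
    })
  );
});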
So if you’ve been trying to debug a service worker issue, and you can’t seem to figure out what the problem might be, it’s not necessarily an issue with your code, or even an issue with the browser.
From now on when I’m troubleshooting service worker quirks, I’m going to introduce a step zero, before I even start reproducing or isolating the bug. I’m going to ask myself, “Are there any browser extensions installed?”
I realise that sounds as basic as asking “Are you sure the computer is switched on?” but there’s nothing wrong with having a checklist of basic questions to ask before moving on to the more complicated task of debugging.
I’m going to make a checklist. Then I’m going to use it …every time.
I remember Jason telling me about this weird service worker caching behaviour a little while back. This piece is a great bit of sleuthing in tracking down the root causes of this strange issue, followed up with a sensible solution.
There are some handy performance tips from Ben in this slide deck.
In this talk we’ll study how browsers determine which requests should be made, in what order, and what prevents the browser from rendering content quickly.
Jake’s blow-by-blow account of uncovering a serious browser vulnerability is fascinating. But if you don’t care for the technical details, skip ahead to how different browser makers handled the issue—it’s very enlightening. (And if you do care for the technical details, make sure you click on the link to the PDF version of this post.)
In Going Offline, I dive into the many different ways you can use a service worker to handle requests. You can filter by the URL, for example, treating requests for pages under /blog or /articles differently from other requests. Or you can filter by file type. That way, you can treat requests for, say, images very differently to requests for HTML pages.
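As a rough sketch of what that URL-based filtering might look like (the paths here are just examples):
addEventListener('fetch', fetchEvent => {
  const url = new URL(fetchEvent.request.url);
  // Example paths only: swap in whatever sections your site actually has.
  if (url.pathname.startsWith('/blog') || url.pathname.startsWith('/articles')) {
    // Handle requests for blog posts and articles here.
  } else {
    // Handle everything else here.
  }
});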
One of the ways to check what kind of request you’re dealing with is to see what’s in the accept header. Here’s how I show the test for HTML pages:
if (request.headers.get('Accept').includes('text/html')) {
// Handle your page requests here.
}
So, logically enough, I show the same technique for detecting image requests:
if (request.headers.get('Accept').includes('image')) {
// Handle your image requests here.
}
That should catch any files that have image in the request’s accept header, like image/png or image/jpeg or image/svg+xml and so on.
But there’s a problem. Both Safari and Firefox now use a much broader accept header: */*
My if statement evaluates to false in those browsers. Sebastian Eberlein wrote about his workaround for this issue, which involves looking at file extensions instead:
if (request.url.match(/\.(jpe?g|png|gif|svg)$/)) {
// Handle your image requests here.
}
So consider this post a patch for chapter five of Going Offline (page 68 specifically). Wherever you see:
if (request.headers.get('Accept').includes('image'))
Swap it out for:
if (request.url.match(/\.(jpe?g|png|gif|svg)$/))
And feel free to add any other image file extensions (like webp) in there too.
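For example, with webp added to the list, the test would look like this:
if (request.url.match(/\.(jpe?g|png|gif|svg|webp)$/)) {
// Handle your image requests here.
}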
A thorough explanation of the history and inner workings of Cross-Origin Resource Sharing.
Like tales of a mythical sea beast, every developer has a story to tell about the day CORS seized upon one of their web requests, dragging it down into the inexorable depths, never to be seen again.
This is a really good use-case for cancelling fetch requests: making API calls while autocompleting in search.
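If you haven’t tried it, cancelling a fetch request revolves around AbortController. Here’s a minimal sketch of that autocomplete pattern (the /search endpoint is made up for illustration):
let controller;

async function autocomplete(query) {
  // Cancel any in-flight request before starting a new one.
  if (controller) {
    controller.abort();
  }
  controller = new AbortController();
  try {
    const response = await fetch('/search?q=' + encodeURIComponent(query), {
      signal: controller.signal
    });
    return await response.json();
  } catch (error) {
    if (error.name === 'AbortError') {
      // The request was cancelled; nothing more to do.
      return;
    }
    throw error;
  }
}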
Amber and I often have meta conversations about the nature of learning and teaching. We swap books and share ideas and experiences whenever we’re trying to learn something or trying to teach something. A topic that comes up again and again is the idea of “the curse of knowledge”—it’s the focus of Steven Pinker’s book The Sense Of Style. That’s when the author/teacher can’t remember what it’s like not to know something, which makes for a frustrating reading/learning experience.
This is one of the reasons why I encourage people to blog about stuff as they’re learning it; not when they’ve internalised it. The perspective that comes with being in the moment of figuring something out is invaluable to others. I honestly think that most explanatory books shouldn’t be written by experts—the “curse of knowledge” can become almost insurmountable.
I often think about this when I’m reading through the installation instructions for frameworks, libraries, and other web technologies. I find myself put off by documentation that assumes I’ve got a certain level of pre-existing knowledge. But now instead of letting it get me down, I use it as an opportunity to try and bridge that gap.
The brilliant Safia Abdalla wrote a post a while back called How do I get started contributing to open source?. I definitely don’t have the programming chops to contribute much to a codebase, but I thoroughly agree with Safia’s observation:
If you’re interested in contributing to open source to improve your communication and empathy skills, you’re definitely making the right call. A lot of open source tools could definitely benefit from improvements in the documentation, accessibility, and evangelism departments.
What really jumps out at me is when instructions use words like “simply” or “just”. I’m with Brad:
“Just” makes me feel like an idiot. “Just” presumes I come from a specific background, studied certain courses in university, am fluent in certain technologies, and have read all the right books, articles, and resources. “Just” is a dangerous word.
But rather than letting that feeling overwhelm me, I now try to fix the text. Here are a few examples of changes I’ve suggested, usually via pull requests on GitHub repos:
They all have different codebases in different programming languages, but they’re all intended for humans, so having clear and kind documentation is a shared goal.
I like suggesting these kinds of changes. That initial feeling of frustration I get from reading the documentation gets turned into a warm fuzzy feeling from lending a helping hand.
This is a smart way to queue up POST submissions for later if the user is offline. It’s not as powerful as background sync (because it requires the user to revisit your site) but it’s a good fallback for browsers that support service workers but don’t yet support background sync.
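The queueing half of the idea might look something like this rough sketch, where saveToQueue is an imaginary helper that writes to IndexedDB, not a real API:
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  // Only intercept form submissions.
  if (request.method !== 'POST') {
    return;
  }
  fetchEvent.respondWith(
    fetch(request.clone())
    .catch(() => {
      // Offline (or the request failed): stash the submission for later.
      return request.clone().text()
      .then(body => saveToQueue({url: request.url, body: body})) // made-up IndexedDB helper
      .then(() => new Response('Saved for later', {status: 202}));
    })
  );
});
Then, whenever the user next visits, whatever is sitting in that queue can be read back out and re-sent.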
Ben takes us on a journey inside the mind of a browser (Chrome in this case). It’s all about priorities when it comes to the critical path.
This is a great explanatory piece from James Bridle in conjunction with Mozilla’s Webmaker. It’s intended for a younger audience, but its clear description of how web requests are resolved is a pitch-perfect primer for anyone.
The web isn’t magic. It’s not some faraway place we just ‘connect’ to, but a vast and complex system of computers, connected by actual wires under the ground and the oceans. Every time you open a website, you’re visiting a place where that data is stored.
A terrific quiz about browser performance from Jake. I had the pleasure of watching him present this in a bar in Amsterdam—he was like a circus carny hoodwinking the assembled geeks.
I guarantee you won’t get all of this right, and that’s a good thing: you’ll learn something. If you do get them all right, either you are Jake or you are very, very sad.