News from WWDC22: WebKit Features in Safari 16 Beta | WebKit
Good news and bad news…
The good news is that web notifications are coming to iOS—my number one wish!
The bad news is that it won’t happen until next year sometime.
I’ve mentioned before that I don’t enable notifications on my phone. Text messages are the only exception. I don’t want to get notified if a new email arrives (I avoid email on my phone completely) and I certainly don’t want some social media app telling me somebody liked or faved something.
But the number one feature I’d like to see in Safari on iOS is web notifications.
It’s not for me personally, see. It’s because it’s the number one reason why people are choosing not to go all in on progressive web apps.
Safari on iOS is the last holdout. But that equates to enough market share that many companies feel they can’t treat notifications as a progressive enhancement. While I may not agree with that decision myself, I get it.
When I’m evangelising the benefits of building on the open web instead of making separate iOS and Android apps, I inevitably get asked about notifications. As long as mobile Safari doesn’t support them—even though desktop Safari does—I’m somewhat stumped. There’s no polyfill for this feature other than building an entire native app, which is a bit extreme as polyfills go.
And of course, unlike on your Mac, you don’t have the option of using a different browser on your iPhone. As long as mobile Safari doesn’t support web notifications, nothing on iOS can support web notifications.
I’ve got progressive web apps on the home screen of my phone that match their native equivalents feature-for-feature. Twitter. Instagram. They’re really good. In some ways they’re superior to the native apps; the Twitter website is much calmer, and the Instagram website has no advertising. But if I wanted to get notifications from any of those sites, I’d have to keep the native apps installed just for that one feature.
So in the spirit of complaining about web browsers in a productive way, I just want to throw this plea out there: Apple, please support web notifications in mobile Safari!
The good news is that web notifications on iOS might be on their way. Huzzah!
Alas, we’re reliant on Maximiliano’s detective work to even get a glimpse of a future feature like this. Apple has no public roadmap for Safari. There’s this status page on the WebKit blog but it’s incomplete—web notifications don’t appear at all. In any case, WebKit and Safari aren’t the same thing. The only way of knowing if a feature might be implemented in Safari is if it shows up in Safari Technology Preview, at which point it’s already pretty far along.
So while my number one feature request for mobile Safari is web notifications, a close second would be a public roadmap.
It only seems fair. If Apple devrels are asking us developers what features we’d like to see implemented—as they should!—then shouldn’t those same developers also be treated with enough respect to share a roadmap with them? There’s not much point in us asking for features if, unbeknownst to us, that feature is already being worked on.
But, like I said, my number one request remains: web notifications on iOS …please!
At Clearleft, we’ve worked with quite a few clients on site redesigns. It’s always a fascinating process, particularly in the discovery phase. There’s that excitement of figuring out what’s currently working, what’s not working, and what’s missing completely.
The bulk of this early research phase is spent diving into the current offering. But it’s also the perfect time to do some competitor analysis—especially if we want some answers to the “what’s missing?” question.
It’s not all about missing features though. Execution is equally important. Our clients want to know how their users’ experience shapes up compared to the competition. And when it comes to user experience, performance is a huge factor. As Andy says, performance is a UX problem.
There’s no shortage of great tools out there for measuring (and monitoring) performance metrics, but they’re mostly aimed at developers. Quite rightly. Developers are the ones who can solve most performance issues. But that does make the tools somewhat impenetrable if you don’t speak the language of “time to first byte” and “first contentful paint”.
When we’re trying to show our clients the performance of their site—or their competitors—we need to tell a story.
Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.
You can go to webpagetest.org/easy if you don’t need to tweak settings much beyond the typical site visit (slow 3G on mobile). Pop in your client’s URL and, when the test is done, you get a valuable but impenetrable waterfall chart. It’s not exactly the kind of thing I’d want to present to a client.
Fortunately there’s an attention-grabbing output from each test: video. Download the video of your client’s site loading. Then repeat the test with the URL of a competitor. Download that video too. Repeat for as many competitor URLs as you think appropriate.
Now take those videos and play them side by side. Presentation software like Keynote is perfect for showing multiple videos like this.
This is so much more effective than showing a table of numbers! Clients get to really feel the performance difference between their site and their competitors.
Running all those tests can take time though. But there are some other tools out there that can give a quick dose of performance information.
SpeedCurve recently unveiled Page Speed Benchmarks. You can compare the performance of sites within a particular sector like travel, retail, or finance. By default, you’ll get a filmstrip view of all the sites loading side by side. Click through on each one and you can get the video too. It might take a little while to gather all those videos, but it’s quicker than using Web Page Test directly. And it might be that the filmstrip view is impactful enough for telling your performance story.
If, during your discovery phase, you find that performance is being badly affected by third-party scripts, you’ll need some way to communicate that. Request Map Generator is fantastic for telling that story in a striking visual way. Pop the URL in there and then take a screenshot of the resulting visualisation.
The beginning of a redesign project is also the time to take stock of current performance metrics so that you can compare the numbers after your redesign launches. Crux.run is really great for tracking performance over time. You won’t get any videos but you will get some very appealing charts and graphs.
Web Page Test, Page Speed Benchmarks, and Request Map Generator are great for telling the story of what’s happening with performance right now—Crux.run balances that with the story of performance over time.
Measuring performance is important. Communicating the story of performance is equally important.
The Request Map Generator is a terrific tool. It’s made by Simon Hearne and uses the WebPageTest API.
You pop in a URL, it fetches the page and maps out all the subsequent requests in a nifty interactive diagram of circles, showing how many requests third-party scripts are themselves generating. I’ve found it to be a very effective way of showing the impact of third-party scripts to people who aren’t interested in looking at waterfall diagrams.
I was wondering… Wouldn’t it be great if this were built into browsers?
We already have a “Network” tab in our developer tools. The purpose of this tab is to show requests coming in. The browser already has all the information it needs to make a diagram of requests in the same way that the Request Map Generator does.
In Firefox, there’s a little clock icon in the bottom left corner of the “Network” tab. Clicking that shows a pie-chart view of requests. That’s useful, but I’d love it if there were the option to also see the connected circles that the request map generator shows.
Just a thought.
The title is somewhat misleading—currently it’s about native lazy-loading for Chrome, which is not (yet) the web.
I’ve just been adding loading="lazy" to most of the iframes and many of the images on adactio.com, and it’s working a treat …in Chrome.
Less than 24 hours after I put the call out for a solution to this gnarly service worker challenge, Trys has come up with a solution.
At Codebar the other night, I was doing an intro chat with some beginners. At one point I touched on DNS. This explanation is great for detailing what’s going on under the hood.
Tim takes a closer look at this Google Lite thing.
My first reaction to this was nervousness. Of all the companies to trust with intercepting and rerouting page requests, Google aren’t exactly squeaky clean, what with that whole surveillance business model of theirs.
Still, this ultimately seems to be a move to improve the end user experience, and I’m glad to see this clarification:
Lite pages are only triggered for extremely slow sites, so we encourage developers to measure how well their pages are currently performing over slow networks.
Lite pages as a badge of shame (much like AMP in my eyes).
This is a really clever technique from Scott that he unveiled at An Event Apart in Seattle. It uses a header sent by a service worker to distinguish between returning and new visitors—much neater than relying on a cookie. I’ve updated my service worker on The Session to use this technique now.
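As a rough sketch of the idea (the header name here is my own invention, not necessarily the one Scott uses), the service worker adds a custom header to outgoing page requests. Since the service worker only exists after a first visit, the server can treat the header’s presence as meaning “returning visitor”:

self.addEventListener('fetch', event => {
  const request = event.request;
  // Only decorate page navigations.
  if (request.mode === 'navigate') {
    const headers = new Headers(request.headers);
    headers.set('X-Returning-Visitor', 'true');
    event.respondWith(fetch(request.url, { headers: headers }));
  }
});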
Harry breaks down cache-control headers into steps that even I can understand. I’ll be using this as a reference for sure.
I quite enjoy a good bug hunt. Just yesterday, myself and Cassie were doing some bugfixing together. As always, the first step was to try to reproduce the problem and then isolate it. Which reminds me…
There’ve been a few occasions when I’ve been trying to debug service worker issues. The problem is rarely in reproducing the issue—it’s isolating the cause that can be frustrating. I try changing a bit of code here, and a bit of code there, in an attempt to zero in on the problem, but with no luck. Before long, I’m tearing my hair out staring at code that appears to have nothing wrong with it.
And that’s when I remember: browser extensions.
I’m currently using Firefox as my browser, and I have extensions installed to stop tracking and surveillance (these technologies are usually referred to as “ad blockers”, but that’s a bit of a misnomer—the issue isn’t with the ads; it’s with the invasive tracking).
If you think about how a service worker does its magic, it’s as if it’s sitting in the browser, waiting to intercept any requests to a particular domain. It’s like the service worker is the first port of call for any requests the browser makes. But then you add a browser extension. The browser extension is also waiting to intercept certain network requests. Now the extension is the first port of call, and the service worker is relegated to be next in line.
This, apparently, can cause issues (presumably depending on how the browser extension has been coded). In some situations, network requests that should work just fine start to fail, executing the catch clauses of fetch statements in your service worker.
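To put that in concrete terms, a fetch handler shaped something like this (a minimal sketch) can end up executing its catch clause even though the code itself is fine:

self.addEventListener('fetch', event => {
  event.respondWith(
    fetch(event.request)
    .catch(error => {
      // If an extension blocks the request, execution lands here,
      // even though there's nothing wrong with this code.
      return caches.match(event.request);
    })
  );
});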
So if you’ve been trying to debug a service worker issue, and you can’t seem to figure out what the problem might be, it’s not necessarily an issue with your code, or even an issue with the browser.
From now on when I’m troubleshooting service worker quirks, I’m going to introduce a step zero, before I even start reproducing or isolating the bug. I’m going to ask myself, “Are there any browser extensions installed?”
I realise that sounds as basic as asking “Are you sure the computer is switched on?” but there’s nothing wrong with having a checklist of basic questions to ask before moving on to the more complicated task of debugging.
I’m going to make a checklist. Then I’m going to use it …every time.
I remember Jason telling me about this weird service worker caching behaviour a little while back. This piece is a great bit of sleuthing in tracking down the root causes of this strange issue, followed up with a sensible solution.
There are some handy performance tips from Ben in this slide deck.
In this talk we’ll study how browsers determine which requests should be made, in what order, and what prevents the browser from rendering content quickly.
Jake’s blow-by-blow account of uncovering a serious browser vulnerability is fascinating. But if you don’t care for the technical details, skip ahead to how different browser makers handled the issue—it’s very enlightening. (And if you do care for the technical details, make sure you click on the link to the PDF version of this post.)
In Going Offline, I dive into the many different ways you can use a service worker to handle requests. You can filter by the URL, for example; treating requests for pages under /blog or /articles differently from other requests. Or you can filter by file type. That way, you can treat requests for, say, images very differently to requests for HTML pages.
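Filtering by URL can be as simple as checking the pathname in your fetch handler. Here’s a minimal sketch (the network-first strategy is just one possible way of treating those pages differently):

self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);
  // Give blog and article pages their own strategy:
  // try the network first, fall back to the cache.
  if (url.pathname.startsWith('/blog') || url.pathname.startsWith('/articles')) {
    event.respondWith(
      fetch(event.request)
      .catch(() => caches.match(event.request))
    );
  }
});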
One of the ways to check what kind of request you’re dealing with is to see what’s in the accept header. Here’s how I show the test for HTML pages:
if (request.headers.get('Accept').includes('text/html')) {
// Handle your page requests here.
}
So, logically enough, I show the same technique for detecting image requests:
if (request.headers.get('Accept').includes('image')) {
// Handle your image requests here.
}
That should catch any files that have image in the request’s accept header, like image/png or image/jpeg or image/svg+xml and so on.

But there’s a problem. Both Safari and Firefox now use a much broader accept header: */*
My if statement evaluates to false in those browsers. Sebastian Eberlein wrote about his workaround for this issue, which involves looking at file extensions instead:
if (request.url.match(/\.(jpe?g|png|gif|svg)$/)) {
// Handle your image requests here.
}
So consider this post a patch for chapter five of Going Offline (page 68 specifically). Wherever you see:
if (request.headers.get('Accept').includes('image'))
Swap it out for:
if (request.url.match(/\.(jpe?g|png|gif|svg)$/))
And feel free to add any other image file extensions (like webp) in there too.
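With webp included, that line becomes:

if (request.url.match(/\.(jpe?g|png|gif|svg|webp)$/))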
A thorough explanation of the history and inner workings of Cross-Origin Resource Sharing.
Like tales of a mythical sea beast, every developer has a story to tell about the day CORS seized upon one of their web requests, dragging it down into the inexorable depths, never to be seen again.
Harry describes the process he uses for auditing the effects of third-party scripts. He uses the excellent Request Map which was mentioned multiple times at the Delta V conference.
The focus here is on performance, but these tools are equally useful for shining a light on just how bad the situation is with online surveillance and tracking.
This is a really good use-case for cancelling fetch requests: making API calls while autocompleting in search.
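The pattern looks something like this (a minimal sketch; the /search endpoint is made up): each new keystroke aborts the previous in-flight request before starting its own.

let controller;

async function search(query) {
  // Cancel any in-flight request before making a new one.
  if (controller) {
    controller.abort();
  }
  controller = new AbortController();
  try {
    const response = await fetch('/search?q=' + encodeURIComponent(query), {
      signal: controller.signal
    });
    return await response.json();
  } catch (error) {
    // An AbortError just means a newer keystroke superseded this request.
    if (error.name !== 'AbortError') {
      throw error;
    }
  }
}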
Everything old is new again—sometimes the age-old technique of using a 1x1 pixel image to log requests is still the only way to get certain metrics.
While tracking pixels are far from a new idea, there are creative ways in which we can use them to collect data useful to developers. Once the data is gathered, we can begin to make much more informed decisions about how we work.
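The technique itself is tiny. Something like this (a sketch; /pixel.gif is a placeholder endpoint) is enough to get a data point into your server logs:

function logMetric(name, value) {
  // Requesting a 1x1 image is all it takes to record the data point.
  new Image().src = '/pixel.gif?metric=' + encodeURIComponent(name) +
    '&value=' + encodeURIComponent(value);
}

// For example: does this visitor's browser support service workers?
logMetric('serviceworker', 'serviceWorker' in navigator);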