Stories on the Road - UK 23 | Storyblok
I’ll be speaking at this free early evening event with Arisa Fukusaki and Cassie in Brighton on Monday, February 27th. Grab a ticket and come along for some pizza and nerdiness.
I made the website for this year’s UX London by hand.
Well, that’s not entirely true. There’s exactly one build tool involved. I’m using Sergey to include global elements—the header and footer—something that’s still not possible in HTML.
So it’s minimum viable static site generation rather than actual static files. It’s still very hands-on though and I enjoy that a lot; editing HTML and CSS directly without intermediary tools.
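A Sergey include is just a custom element in the HTML that gets swapped out for the contents of a partial at build time. Roughly like this (a sketch from memory, with made-up file names; check Sergey’s documentation for the exact syntax):
<!-- index.html: the header and footer partials live in an _imports folder
     and get inlined when Sergey builds the site. -->
<!DOCTYPE html>
<html lang="en">
  <body>
    <sergey-import src="header" />
    <main>
      <h1>UX London</h1>
    </main>
    <sergey-import src="footer" />
  </body>
</html>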
When I update the site, it’s usually to add a new speaker to the line-up (well, not any more now that the line-up is complete). That involves marking up their bio and talk description. I also create a couple of different sized versions of their headshot to use with srcset. And of course I write an alt attribute to accompany that image.
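So the markup for each speaker ends up looking something like this (a simplified sketch; the file names and sizes here are made up for illustration):
<img src="/images/speakers/videha-small.jpg"
     srcset="/images/speakers/videha-small.jpg 300w,
             /images/speakers/videha-large.jpg 600w"
     sizes="(min-width: 40em) 300px, 50vw"
     alt="The beaming bearded face of Videha standing in front of the beautiful landscape of a riverbank.">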
By the way, Jake has an excellent article on writing alt text that uses the specific example of a conference site. It raises some very thought-provoking questions.
I enjoy writing alt text. I recently described how I updated my posting interface here on my own site to put a textarea for alt text front and centre for my notes with photos. Since then I’ve been enjoying the creative challenge of writing useful—but also evocative—alt text.
Some recent examples:
A close-up of a microphone in a practice room. In the background, a guitar player tunes up and a bass player waits to start.
People sitting around in the dappled sunshine on the green grass in a park with the distinctive Indian-inspired architecture of the Brighton Pavilion in the background, all under a clear blue sky.
Looking down on the crispy browned duck leg contrasting with the white beans, all with pieces of green fried herbs scattered throughout.
But when I was writing the alt text for the headshots on the UX London site, I started to feel a little disheartened. The more speakers were added to the line-up, the more I felt like I was repeating myself with the alt text. After a while they all seemed to be some variation on “This person looking at the camera, smiling” with maybe some detail on their hair or clothing.
The beaming bearded face of Videha standing in front of the beautiful landscape of a riverbank.
Candi working on her laptop, looking at the camera with a smile.
Emma smiling against a yellow background. She’s wearing glasses and has long straight hair.
A monochrome portrait of John with a wry smile on his face, wearing a black turtleneck in the clichéd design tradition.
Laura smiling, wearing a chartreuse coloured top.
A profile shot of Adekunle wearing a jacket and baseball cap standing outside.
The more speakers were added to the line-up, the harder I found it not to repeat myself. I wondered if this was all going to sound very same-y to anyone hearing them read aloud.
But then I realised, “Wait …these are kind of same-y images.”
By the very nature of the images—headshots of speakers—there wasn’t ever going to be that much visual variation. The experience of a sighted person looking at a page full of speakers is that after a while the images kind of blend together. So if the alt text also starts to sound a bit repetitive after a while, maybe that’s not such a bad thing. A screen reader user would be getting an equivalent experience.
That doesn’t mean it’s okay to have the same alt text for each image—they are all still different. But after I had that realisation I stopped being too hard on myself if I couldn’t come up with a completely new and original way to write the alt text.
And, I remind myself, writing alt text is like any other kind of writing. The more you do it, the better you get.
A personal site, or a blog, is more than just a collection of writing. It’s a kind of place - something that feels like home among the streams. Home is a very strong mental model.
Goodreads lost my entire account last week. Nine years as a user, some 600 books and 250 carefully written reviews all deleted and unrecoverable. Their support has not been helpful. In 35 years of being online I’ve never encountered a company with such callous disregard for their users’ data.
Ouch! Lesson learned:
My plan now is to host my own blog-like collection of all my reading notes like Tom does.
The format of a Wikipedia page is used as the chilling delivery mechanism for this piece of speculative fiction. The distancing effect heightens the horror.
This is a good round-up of APIs you can use to decide if and how much JavaScript to load. I might look into using storage.estimate() in service workers to figure out how much gets pre-cached.
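Something along these lines, perhaps (just a sketch; the asset list and the 50MB threshold are made up):
// During install, check how much storage is available (where supported)
// and only pre-cache the heavier extras if there's plenty of headroom.
addEventListener('install', installEvent => {
  installEvent.waitUntil((async () => {
    let assets = ['/', '/offline', '/styles.css']; // hypothetical core assets
    if (navigator.storage && navigator.storage.estimate) {
      const {usage, quota} = await navigator.storage.estimate();
      if (quota - usage > 50 * 1024 * 1024) {
        assets = assets.concat(['/fonts/body.woff2', '/images/hero.jpg']); // hypothetical extras
      }
    }
    const staticCache = await caches.open('static');
    await staticCache.addAll(assets);
  })());
});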
There’s no browser support yet but that doesn’t mean we can’t start adding prefers-reduced-data to our media queries today. I like the idea of switching between web fonts and system fonts.
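Something like this, for example (a sketch with placeholder font names; browsers that don’t understand the media feature will simply never apply the override):
body {
  font-family: "Some Web Font", Georgia, serif;
}

/* Fall back to system fonts when the user has asked for reduced data. */
@media (prefers-reduced-data: reduce) {
  body {
    font-family: Georgia, serif;
  }
}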
I started getting some emails recently from people having issues using The Session. The issues sounded similar—an interactive component that wasn’t, well …interacting.
When I asked what device or browser they were using, the answer came back the same: Safari on iPad. But not a new iPad. These were older iPads running older operating systems.
Now, remember, even if I wanted to recommend that they use a different browser, that’s not an option:
Safari is the only browser on iOS devices.
I don’t mean it’s the only browser that ships with iOS devices. I mean it’s the only browser that can be installed on iOS devices.
You can install something called Chrome. You can install something called Firefox. Those aren’t different web browsers. Under the hood they’re using Safari’s rendering engine. They have to.
It gets worse. Not only is there no choice when it comes to rendering engines on iOS, but the rendering engine is also tied to the operating system.
If you’re on an old Apple laptop, you can at least install an up-to-date version of Firefox or Chrome. But you can’t install an up-to-date version of Safari. An up-to-date version of Safari requires an up-to-date version of the operating system.
It’s the same on iOS devices—you can’t install a newer version of Safari without installing a newer version of iOS. But unlike the laptop scenario, you can’t install any version of Firefox or Chrome.
It’s disgraceful.
It’s particularly frustrating when an older device can’t upgrade its operating system. Operating system upgrades generally have some hardware requirements. If your device doesn’t meet those requirements, you can’t upgrade your operating system. That wouldn’t matter so much except for the Safari issue. Without an upgraded operating system, your web browsing experience stagnates unnecessarily.
Apple doesn’t allow other browsers to be installed on iOS devices so people have to buy new devices if they want to use the web. Handy for Apple. Bad for users. Really bad for the planet.
It’s particularly galling when it comes to iPads. Those are exactly the kind of casual-use devices that shouldn’t need to be caught in the wasteful cycle of being used for a while before getting thrown away. I mean, I get why you might want to have a relatively modern phone—a device that’s constantly with you that you use all the time—but an iPad is the perfect device to just have lying around. You shouldn’t feel pressured to have the latest model if the older version still does the job:
An older tablet makes a great tableside companion in your living room, an effective e-book reader, or a light-duty device for reading mail or checking your favorite websites.
Hang on, though. There’s another angle to this. Why should a website demand an up-to-date browser? If the website has been built using the tried and tested approach of progressive enhancement, then everyone should be able to achieve their goals regardless of what browser or device or operating system they’re using.
On The Session, I’m using progressive enhancement and feature detection everywhere I can. If, for example, I’ve got some JavaScript that’s going to use querySelectorAll and addEventListener, I’ll first test that those methods are available.
if (!document.querySelectorAll || !window.addEventListener) {
  // doesn't cut the mustard.
  return;
}
I try not to assume that anything is supported. So why was I getting emails from people with older iPads describing an interaction that wasn’t working? A JavaScript error was being thrown somewhere and—because of JavaScript’s brittle error-handling—that was causing all the subsequent JavaScript to fail.
I tracked the problem down to a function that was using some DOM methods—matches and closest—as well as the relatively recent JavaScript forEach method. But I had polyfills in place for all of those. Here’s the polyfill I’m using for matches and closest. And here’s the polyfill I’m using for forEach.
Then I spotted the problem. I was using forEach to loop through the results of querySelectorAll. But the polyfill works on arrays. Technically, the output of querySelectorAll isn’t an array. It looks like an array, it quacks like an array, but it’s actually a node list.
So I added this polyfill from Chris Ferdinandi.
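For reference, that kind of polyfill only takes a few lines. Something like this (a sketch of the standard approach rather than the exact code linked above):
// Give node lists a forEach method in browsers that lack it,
// so looping over the results of querySelectorAll works.
if (window.NodeList && !NodeList.prototype.forEach) {
  NodeList.prototype.forEach = function (callback, thisArg) {
    thisArg = thisArg || window;
    for (var i = 0; i < this.length; i++) {
      callback.call(thisArg, this[i], i, this);
    }
  };
}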
That did the trick. I checked with the people with those older iPads and everything is now working just fine.
For the record, here’s the small collection of polyfills I’m using. Polyfills are supposed to be temporary. At some stage, as everyone upgrades their browsers, I should be able to remove them. But as long as some people are stuck with using an older browser, I have to keep those polyfills around.
I wish that Apple would allow other rendering engines to be installed on iOS devices. But if that’s a hell-freezing-over prospect, I wish that Safari updates weren’t tied to operating system updates.
Apple may argue that their browser rendering engine and their operating system are deeply intertwingled. That line of defence worked out great for Microsoft in the ‘90s.
Making the Clearleft podcast is a lot of fun. Making the website for the Clearleft podcast was also fun.
Design-wise, it’s a riff on the main Clearleft site in terms of typography and general layout. On the development side, it was an opportunity to try out an exciting tech stack. The workflow goes something like this:
Comparing this to other workflows I’ve used in the past, this is definitely the most productive way of working. Some stats:
I have some files. Some images, three font files, a few pages of HTML, one RSS feed, one style sheet, and one minimal service worker script. I don’t need a web server to do anything more than serve up those files. No need for any dynamic server-side processing.
I guess this is JAMstack. Though, given that the J stands for JavaScript, the A stands for APIs, and I’m not using either, technically it’s Mstack.
Netlify suits my hosting needs nicely. It also provides the added benefit that, should I need to update my CSS, I don’t need to add a query string or anything to the link elements in the HTML that point to the style sheet: Netlify does cache invalidation for you!
The mp3 files of the actual podcast episodes are stored on S3. I link to those mp3 files from enclosure elements in the RSS feed, which is what makes it a podcast. I also point to the mp3 files from audio elements on the individual episode pages—just above the transcript of each episode. Here’s the page for the most recent episode.
I also want people to be able to download the mp3 file directly if they want (or if they want to huffduff an episode). So I provide a link to the mp3 file with a good ol’-fashioned a element with an href attribute.
I throw in one more attribute on that link. The download attribute tells the browser that the URL in the href attribute should be downloaded instead of visited. If you give a value for the download attribute, it will override the file name:
<a href="/files/ugly-file-name.xyz" download="nice-file-name.xyz">download</a>
Or you can use it as a Boolean attribute without any value if you’re happy with the file name:
<a href="/files/nice-file-name.xyz" download>download</a>
There’s one catch though. The download attribute only works for files on the same origin. That’s an issue for me. My site is podcast.clearleft.com but my audio files are hosted on clearleft-audio.s3.amazonaws.com—the download attribute will be ignored and the mp3 files will play in the browser instead of downloading.
Trys pointed me to the solution. It turns out that Netlify can do some server-side processing. It can do redirects.
I added a file called _redirects to the root of my project. It contains one line:
/download/* https://clearleft-audio.s3.amazonaws.com/podcast/:splat 200
That says that any URLs beginning with /download/ should redirect to clearleft-audio.s3.amazonaws.com/podcast/. Everything after the closing slash is captured with that wild card asterisk. That’s then passed along to the redirect URL as :splat. That’s a new one on me. I hadn’t come across that terminology, but as someone who can never remember the syntax of regular expressions, it works for me.
Oh, and the 200 at the end is the status code: okay.
Now I can use this /download/ path in my link:
<a href="/download/season01episode06.mp3" download>Download mp3</a>
Because this URL is on the same origin, the download attribute works just fine.
I’ve been using Duck Duck Go for ages so I didn’t realise quite how much of a walled garden Google search has become.
41% of the first page of Google search results is taken up by Google products.
This is some excellent reporting. The data and methodology are entirely falsifiable so feel free to grab the code and replicate the results.
Note the fear with which publishers talk about Google (anonymously). It’s the same fear that app developers exhibit when talking about Apple (anonymously).
Ain’t centralisation something?
On AMP:
Google could have approached the “be better on mobile” problem, search optimization and revenue sharing any number of ways, obviously, but the one they’ve chosen and built out is the one that guarantees that either you let them middleman all of your traffic or they cut off your oxygen.
There’s also this observation, which is spot-on:
Google has managed to structure this surveillance-and-value-extraction machine entirely out of people who are convinced that they, personally, are doing good for the world. The stuff they’re working on isn’t that bad – we’ve got such beautiful intentions!
I had the great pleasure of visiting the Museum Plantin-Moretus in Antwerp last October. Their vast collection of woodblocks is available to download in high resolution (and they’re in the public domain).
14,000 examples of true craftmanship, drawings masterly cut in wood. We are supplying this impressive collection of woodcuts in high resolution. Feel free to browse as long as you like, get inspired and use your creativity.
Some solid research here. Turns out that using input type="text" inputmode="numeric" pattern="[0-9]*" is probably a better bet than using input type="number".
I can’t decide if this is industrial sabotage or political protest. Either way, I like it.
99 second hand smartphones are transported in a handcart to generate virtual traffic jam in Google Maps. Through this activity, it is possible to turn a green street red which has an impact in the physical world by navigating cars on another route to avoid being stuck in traffic.
A lovely little bit of urban cartography.
Books in the public domain, lovingly designed and typeset, available in multiple formats for free. Great works of fiction from Austen, Conrad, Stevenson, Wells, Hardy, Doyle, and Dickens, along with classics of non-fiction like Darwin’s The Origin of Species and Shackleton’s South!
There’s a feature in service workers called navigation preloads. It’s relatively recent, so it isn’t supported in every browser, but it’s still well worth using.
Here’s the problem it solves…
If someone makes a return visit to your site, and the service worker you installed on their machine isn’t active yet, the service worker boots up, and then executes its instructions. If those instructions say “fetch the page from the network”, then you’re basically telling the browser to do what it would’ve done anyway if there were no service worker installed. The only difference is that there’s been a slight delay because the service worker had to boot up first.
It’s not a massive performance hit, but it’s still a bit annoying. It would be better if the service worker could boot up and still be requesting the page at the same time, like it would do if no service worker were present. That’s where navigation preloads come in.
Navigation preloads—like the name suggests—are only initiated when someone navigates to a URL on your site, either by following a link, or a bookmark, or by typing a URL directly into a browser. Navigation preloads don’t apply to requests made by a web page for things like images, style sheets, and scripts. By the time a request is made for one of those, the service worker is already up and running.
To enable navigation preloads, call the enable() method on registration.navigationPreload during the activate event in your service worker script. But first do a little feature detection to make sure registration.navigationPreload exists in this browser:
if (registration.navigationPreload) {
  addEventListener('activate', activateEvent => {
    activateEvent.waitUntil(
      registration.navigationPreload.enable()
    );
  });
}
If you’ve already got event listeners on the activate event, that’s absolutely fine: addEventListener isn’t exclusive—you can use it to assign multiple tasks to the same event.
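In other words, something like this is perfectly fine (a contrived sketch; deleteOldCaches stands in for whatever housekeeping you might already be doing):
// An existing listener, doing its own thing.
addEventListener('activate', activateEvent => {
  activateEvent.waitUntil(deleteOldCaches());
});

// A second listener, added separately: both will run.
if (registration.navigationPreload) {
  addEventListener('activate', activateEvent => {
    activateEvent.waitUntil(registration.navigationPreload.enable());
  });
}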
Now you need to make use of navigation preloads when you’re responding to fetch events. So if your strategy is to look in the cache first, there’s probably no point enabling navigation preloads. But if your default strategy is to fetch a page from the network, this will help.
Let’s say your current strategy for handling page requests looks like this:
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
      fetch(request)
      .then( responseFromFetch => {
        // maybe cache this response for later here.
        return responseFromFetch;
      })
      .catch( fetchError => {
        return caches.match(request)
        .then( responseFromCache => {
          return responseFromCache || caches.match('/offline');
        });
      })
    );
  }
});
That’s a fairly standard strategy: try the network first; if that doesn’t work, try the cache; as a last resort, show an offline page.
It’s that first step (“try the network first”) that can benefit from navigation preloads. If a preload request is already in flight, you’ll want to use that instead of firing off a new fetch request. Otherwise you’re making two requests for the same file.
To find out if a preload request is underway, you can check for the existence of the preloadResponse promise, which will be made available as a property of the fetch event you’re handling:
fetchEvent.preloadResponse
If that exists, you’ll want to use it instead of fetch(request).
if (fetchEvent.preloadResponse) {
  // do something with fetchEvent.preloadResponse
} else {
  // do something with fetch(request)
}
You could structure your code like this:
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    if (fetchEvent.preloadResponse) {
      fetchEvent.respondWith(
        fetchEvent.preloadResponse
        .then( responseFromPreload => {
          // maybe cache this response for later here.
          return responseFromPreload;
        })
        .catch( preloadError => {
          return caches.match(request)
          .then( responseFromCache => {
            return responseFromCache || caches.match('/offline');
          });
        })
      );
    } else {
      fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
          // maybe cache this response for later here.
          return responseFromFetch;
        })
        .catch( fetchError => {
          return caches.match(request)
          .then( responseFromCache => {
            return responseFromCache || caches.match('/offline');
          });
        })
      );
    }
  }
});
But that’s not very DRY. Your logic is identical, regardless of whether the response is coming from fetch(request) or from fetchEvent.preloadResponse. It would be better if you could minimise the amount of duplication.
One way of doing that is to abstract away the promise you’re going to use into a variable. Let’s call it retrieve. If a preload is underway, we’ll assign it to that variable:
let retrieve;
if (fetchEvent.preloadResponse) {
  retrieve = fetchEvent.preloadResponse;
}
If there is no preload happening (or this browser doesn’t support it), assign a regular fetch request to the retrieve variable:
let retrieve;
if (fetchEvent.preloadResponse) {
  retrieve = fetchEvent.preloadResponse;
} else {
  retrieve = fetch(request);
}
If you like, you can squash that into a ternary operator:
const retrieve = fetchEvent.preloadResponse ? fetchEvent.preloadResponse : fetch(request);
Use whichever syntax you find more readable.
Now you can apply the same logic, regardless of whether retrieve is a preload navigation or a fetch request:
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    const retrieve = fetchEvent.preloadResponse ? fetchEvent.preloadResponse : fetch(request);
    fetchEvent.respondWith(
      retrieve
      .then( responseFromRetrieve => {
        // maybe cache this response for later here.
        return responseFromRetrieve;
      })
      .catch( fetchError => {
        return caches.match(request)
        .then( responseFromCache => {
          return responseFromCache || caches.match('/offline');
        });
      })
    );
  }
});
I think that’s the least invasive way to update your existing service worker script to take advantage of navigation preloads.
Like I said, navigation preloads can give a bit of a performance boost if you’re using a network-first strategy. That’s what I’m doing here on adactio.com and on thesession.org so I’ve updated their service workers to take advantage of navigation preloads. But on Resilient Web Design, which uses a cache-first strategy, there wouldn’t be much point enabling navigation preloads.
Jeff Posnick made this point in his write-up of bringing service workers to Google search:
Adding a service worker to your web app means inserting an additional piece of JavaScript that needs to be loaded and executed before your web app gets responses to its requests. If those responses end up coming from a local cache rather than from the network, then the overhead of running the service worker is usually negligible in comparison to the performance win from going cache-first. But if you know that your service worker always has to consult the network when handling navigation requests, using navigation preload is a crucial performance win.
Oh, and those browsers that don’t yet support navigation preloads? No problem. It’s a progressive enhancement. Everything still works just like it did before. And having a service worker on your site in the first place is itself a progressive enhancement. So enabling navigation preloads is like a progressive enhancement within a progressive enhancement. It’s progressive enhancements all the way down!
By the way, if all of this service worker stuff sounds like gibberish, but you wish you understood it, I think my book, Going Offline, will prove quite valuable.
I’m telling you this stuff is often too important and worthy to be owned by an algorithm and lost in the stream.
I really like getting Paul’s insights into building his Bradshaw’s Guide project. Here he shares his process for typography, images and geolocation.
At the 14 minute mark I had to deal with an obstreperous member of the audience. He wasn’t heckling exactly …he just had a very bad experience with web components, and I think my talk was triggering for him.