I enjoyed this documentary on legendary sound designer and editor Walter Murch. Kinda makes me want to rewatch The Conversation and The Godfather.
Thursday, April 2nd, 2020
Wednesday, April 1st, 2020
Here’s a BBC adaptation of that J.G. Ballard short story I recorded. It certainly feels like a story for our time.
Tuesday, March 24th, 2020
I wrote yesterday about how messing about on your own website can be a welcome distraction. I did some tinkering with adactio.com on the weekend that you might be interested in.
Let me set the scene…
I’ve started recording and publishing a tune a day. I grab my mandolin, open up QuickTime and make a movie of me playing a jig, a reel, or some other type of Irish tune. I include a link to that tune on The Session and a screenshot of the sheet music for anyone who wants to play along. And I embed the short movie clip that I’ve uploaded to YouTube.
The default YouTube embed code is bad for performance: it delivers a chunky iframe up front, whether or not anyone presses play. If a visitor never plays the video, that iframe has been delivered for nothing.

Meanwhile over on The Session, I’ve got a strategy for embedding YouTube videos that’s better for performance. Whenever somebody posts a link to a video on YouTube, the thumbnail of the video is embedded. Only when you click the thumbnail does that image get swapped out for the iframe with the video.
That’s what I needed to do here on adactio.com.
First I check to see if the URL is from a service that provides an oEmbed endpoint. A what-Embed? oEmbed! It’s a standardised way of asking a service for an embeddable representation of one of its URLs.

You make a request like this:

http://example.com/oembed?url=…

In this case http://example.com/oembed is the endpoint and url is the value of a URL from that provider. Here’s a real life example from YouTube: https://www.youtube.com/oembed is the endpoint and url is the address of any video on YouTube.
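So a request for one of my tunes looks something like this:

https://www.youtube.com/oembed?url=https://www.youtube.com/watch?v=-eiqhVmSPcs&format=json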
You get back some JSON with a pre-defined list of values. The html payload is the markup for your embed code.
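The response looks something like this (abbreviated; I’m only showing the values this post cares about):

{
  "title": "The Banks Of Lough Gowna (jig) on mandolin",
  "thumbnail_url": "https://i.ytimg.com/vi/-eiqhVmSPcs/hqdefault.jpg",
  "html": "<iframe …></iframe>"
}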
By default, YouTube sends back markup like this:

<iframe width="480" height="270"
src="https://www.youtube.com/embed/-eiqhVmSPcs?feature=oembed"
frameborder="0"
allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen></iframe>
But now I want to use an img instead of an iframe. One of the other values returned is thumbnail_url. That’s the URL of a thumbnail image that looks something like this:

https://i.ytimg.com/vi/-eiqhVmSPcs/hqdefault.jpg
In fact, once you know the ID of a YouTube video (the ?v= bit in a YouTube URL), you can figure out the path to multiple images of different sizes:

- 120 × 90: https://i.ytimg.com/vi/-eiqhVmSPcs/default.jpg
- 320 × 180: https://i.ytimg.com/vi/-eiqhVmSPcs/mqdefault.jpg
- 480 × 360: https://i.ytimg.com/vi/-eiqhVmSPcs/hqdefault.jpg
- 1280 × 720: https://i.ytimg.com/vi/-eiqhVmSPcs/maxresdefault.jpg

(Although that last one—maxresdefault.jpg—might not work for older videos.)
Okay, so I need to extract the ID from the YouTube URL. Here’s the PHP I use to do that:

// pull the query string out of the URL into an array
parse_str(parse_url($url, PHP_URL_QUERY), $arguments);
// the ID is the value of the v parameter
$id = $arguments['v'];
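One wrinkle if you’re borrowing this: short youtu.be links don’t have a ?v= query, so a fallback might look like this (a sketch, not necessarily what I’m running):

// use the path for youtu.be/-eiqhVmSPcs style links
$id = $arguments['v'] ?? basename(parse_url($url, PHP_URL_PATH));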
Then I can put together some HTML like this (with $title holding the title that comes back in the oEmbed response):

<a class="videoimglink" href="'.$url.'">
<img width="100%" loading="lazy"
src="https://i.ytimg.com/vi/'.$id.'/hqdefault.jpg"
alt="'.$title.'">
</a>
Over on The Session, I’m using addEventListener but here on adactio.com I’m going to be dirty and listen for the event directly in the markup using the onclick attribute.
When the link is clicked, I nuke the link and the image using innerHTML. This injects an iframe where the link used to be (by updating the innerHTML value of the link’s parentNode).
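Here’s roughly what that attribute ends up as (a sketch; the iframe’s exact attributes are negotiable):

onclick="event.preventDefault();this.parentNode.innerHTML='<iframe src=\'https://www.youtube-nocookie.com/embed/-eiqhVmSPcs?autoplay=1\' allow=\'autoplay\' allowfullscreen></iframe>'"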
But notice that I’m not using the default YouTube URL for the iframe. That would be:

https://www.youtube.com/embed/-eiqhVmSPcs

Instead I’m swapping out the domain for youtube-nocookie.com:

https://www.youtube-nocookie.com/embed/-eiqhVmSPcs
I can’t remember where I first came across this undocumented parallel version of YouTube that has, yes, you guessed it, no cookies. It turns out that, not only is the default YouTube embed code bad for performance, it is—unsurprisingly—bad for privacy too. So the youtube-nocookie.com domain can protect your site’s visitors from intrusive tracking. Pass it on.
Anyway, I’ve got the markup I want now:
<a class="videoimglink" href="https://www.youtube.com/watch?v=-eiqhVmSPcs"
<img width="100%" loading="lazy"
alt="The Banks Of Lough Gowna (jig) on mandolin"
The functionality is all there. But I want to style the embedded images to look more like playable videos. Time to break out some CSS (this is why I added the videoimglink class to the YouTube link).

I’m going to use generated content to create a play button icon. Because I can’t use generated content on an img element, I’m applying these styles to the containing a element.
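Something like this (a sketch):

.videoimglink {
	display: block;
	position: relative;
}
.videoimglink::before {
	/* the play icon, as generated content */
	content: '▶';
}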
I was going to make an SVG but then I realised I could just be lazy and use the unicode character instead.
Right. Time to draw the rest of the fucking owl:

.videoimglink::before {
	position: absolute;
	width: 10vmax;
	height: 10vmax;
	top: calc(50% - 5vmax);
	left: calc(50% - 5vmax);
	opacity: 0.5;
}
That’s a bunch of instructions for sizing and positioning. I’d explain it, but that would require me to understand it and frankly, I’m not entirely sure I do. But it works. I think.
With a translucent play icon positioned over the thumbnail, all that’s left is to add a :hover style to adjust the opacity. Wheresoever thou useth :hover, thou shalt also useth :focus.
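Something like this does the trick (a sketch):

.videoimglink:hover::before,
.videoimglink:focus::before {
	opacity: 1;
}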
Okay. It’s good enough. Ship it!
If you embed YouTube videos on your site, and you’d like to make them more performant, check out this custom element that Paul made: Lite YouTube Embed. And here’s a clever technique that uses the srcdoc attribute to get a similar result (but don’t forget to use the youtube-nocookie.com domain).
Friday, March 20th, 2020
I have to admit, I don’t think I even knew of the existence of the playsinline attribute on the video element. Here, Chris runs through all the attributes you can put in there.
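For example (the attribute values here are just illustrative; playsinline is the one that lets iOS play a video in place instead of going fullscreen):

<video src="/videos/example.mp4" controls muted playsinline></video>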
Wednesday, March 11th, 2020
A 1992 paper by Tim Berners-Lee, Robert Cailliau, and Jean-François Groff.
The W3 project is not a research project, but a practical plan to implement a global information system.
A curl in every port
A few years back, Zach Bloom wrote The History of the URL: Path, Fragment, Query, and Auth. He recently expanded on it and republished it on the Cloudflare blog as The History of the URL. It’s well worth the time to read the whole thing. It’s packed full of fascinating tidbits.
In the section on ports, Zach says:
The timeline of Gopher and HTTP can be evidenced by their default port numbers. Gopher is 70, HTTP 80. The HTTP port was assigned (likely by Jon Postel at the IANA) at the request of Tim Berners-Lee sometime between 1990 and 1992.
Kimberly was spelunking down the original source code when she came across this line:
#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */
We showed this to Jean-François Groff, who worked on the original web technologies like libwww, the forerunner to libcurl. He remembers that day. It felt like they had “made it”, receiving the official blessing of Jon Postel (in the same RFC, incidentally, that gave port 70 to Gopher).
Then he told us something interesting about the next line of code:
#define OLD_TCP_PORT 2784 /* Try the old one if no answer on 80 */
Port 2784? That seems like an odd choice. Most of us would choose something easy to remember.
Well, it turns out that 2784 is easy to remember if you’re Tim Berners-Lee.
Those were the last four digits of his parents’ phone number.
This is a wonderful deep dive into all the parts of a URL. There’s a lot of great DNS stuff in there too:
Root DNS servers operate in safes, inside locked cages. A clock sits on the safe to ensure the camera feed hasn’t been looped. Particularly given how slow DNSSEC implementation has been, an attack on one of those servers could allow an attacker to redirect all of the Internet traffic for a portion of Internet users. This, of course, makes for the most fantastic heist movie to have never been made.
Tuesday, March 3rd, 2020
Telling the story of performance
At Clearleft, we’ve worked with quite a few clients on site redesigns. It’s always a fascinating process, particularly in the discovery phase. There’s that excitement of figuring out what’s currently working, what’s not working, and what’s missing completely.
The bulk of this early research phase is spent diving into the current offering. But it’s also the perfect time to do some competitor analysis—especially if we want some answers to the “what’s missing?” question.
It’s not all about missing features though. Execution is equally important. Our clients want to know how their users’ experience shapes up compared to the competition. And when it comes to user experience, performance is a huge factor. As Andy says, performance is a UX problem.
There’s no shortage of great tools out there for measuring (and monitoring) performance metrics, but they’re mostly aimed at developers. Quite rightly. Developers are the ones who can solve most performance issues. But that does make the tools somewhat impenetrable if you don’t speak the language of “time to first byte” and “first contentful paint”.
When we’re trying to show our clients the performance of their site—or their competitors—we need to tell a story.
Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.
You can go to webpagetest.org/easy if you don’t need to tweak settings much beyond the typical site visit (slow 3G on mobile). Pop in your client’s URL and, when the test is done, you get a valuable but impenetrable waterfall chart. It’s not exactly the kind of thing I’d want to present to a client.
Fortunately there’s an attention-grabbing output from each test: video. Download the video of your client’s site loading. Then repeat the test with the URL of a competitor. Download that video too. Repeat for as many competitor URLs as you think appropriate.
Now take those videos and play them side by side. Presentation software like Keynote is perfect for showing multiple videos like this.
This is so much more effective than showing a table of numbers! Clients get to really feel the performance difference between their site and their competitors.
Running all those tests can take time though. But there are some other tools out there that can give a quick dose of performance information.
SpeedCurve recently unveiled Page Speed Benchmarks. You can compare the performance of sites within a particular sector like travel, retail, or finance. By default, you’ll get a filmstrip view of all the sites loading side by side. Click through on each one and you can get the video too. It might take a little while to gather all those videos, but it’s quicker than using Web Page Test directly. And it might be that the filmstrip view is impactful enough for telling your performance story.
If, during your discovery phase, you find that performance is being badly affected by third-party scripts, you’ll need some way to communicate that. Request Map Generator is fantastic for telling that story in a striking visual way. Pop the URL in there and then take a screenshot of the resulting visualisation.
The beginning of a redesign project is also the time to take stock of current performance metrics so that you can compare the numbers after your redesign launches. Crux.run is really great for tracking performance over time. You won’t get any videos but you will get some very appealing charts and graphs.
Web Page Test, Page Speed Benchmarks, and Request Map Generator are great for telling the story of what’s happening with performance right now—Crux.run balances that with the story of performance over time.
Measuring performance is important. Communicating the story of performance is equally important.
Monday, February 24th, 2020
Guidebooks to countries that no longer exist.
The first book will be on the Republic of Venice. There’ll be maps, infographics, and I suspect there’ll be an appearance by Aldus Manutius.
Our first guidebook tells the story of the Republic of Venice, la Serenissima, a 1000-year old state that disappeared in 1797.
Wednesday, February 19th, 2020
Have fun with this little machine, tweaking the parameters for generating a Joy Division/Jocelyn Bell Burnell data visualisation.
The interface is quite delightful!
Thursday, January 23rd, 2020
You know that this online course from Scott is going to be excellent—get in there!
Wednesday, January 22nd, 2020
Tuesday, December 31st, 2019
We should think of our code, even our designs, as running for decades, and alter our work to match.
Tuesday, December 24th, 2019
Wednesday, December 11th, 2019
It was such a pleasure and an honour to watch Saron at work—she did an amazing job!
Tuesday, November 19th, 2019
I’ve found that the older I get, the less I care about looking stupid. This is remarkably freeing. I no longer have any hesitancy about raising my hand in a meeting to ask “What’s that acronym you just mentioned?” This sometimes has the added benefit of clarifying something for others in the room who might have been too shy to ask.
I remember a few years back being really confused about npm. Fortunately, someone who was working at npm at the time came to Brighton for FFConf, so I asked them to explain it to me.
As I understood it, npm was intended to be used for managing packages of code for Node. Wasn’t it actually called “Node Package Manager” at one point, or did I imagine that?

Anyway, the mental model I had was: npm is to Node as PEAR is to PHP. A central repository of open source code projects that you could easily add to your codebase …for your server-side code.
But then I saw people talking about using npm for client-side code in the browser.

It turns out that my confusion was somewhat warranted. The npm project had indeed started life as a repo for server-side code but had since expanded to encompass client-side code too.
I understand how it happened, but it confirmed a worrying trend I had noticed. Developers were writing front-end code as though it were back-end code.
In some ways, that makes sense: code is code. But in other ways, it makes no sense at all! If your code’s run-time is on the server, then the size of the codebase doesn’t matter that much. Whether it’s hundreds or thousands of lines of code, the execution happens more or less independently of the network. But that’s not how front-end development works. Every byte matters. The more code you write that needs to be executed on the user’s device, the worse the experience is for that user. You need to limit how much you’re using the network. That means leaning on what the browser gives you by default (that’s your run-time environment) and keeping your code as lean as possible.
Dave echoes my concerns in his end-of-the-year piece called The Kind of Development I Like:
I now think about npm and wonder if it’s somewhat responsible for some of the pain points of modern web development today. Fact is, npm is a server-side technology that we’ve co-opted on the client and I think we’re feeling those repercussions in the browser.
The Unix Philosophy encourages us to write small micro libraries that do one thing and do it well. The Node.js Ecosystem did this in spades. This works great on the server where importing a small file has a very small cost. On the client, however, this has enormous costs.
In a funny way, this situation reminds me of something I saw happening over twenty years ago. Print designers were starting to do web design. They had a wealth of experience and knowledge around colour theory, typography, hierarchy and contrast. That was all very valuable to bring to the world of the web. But the web also has fundamental differences to print design. In print, you can use as many typefaces as you want, whereas on the web, to this day, you need to be judicious in the range of fonts you use. But in print, you might have to limit your colour palette for cost reasons (depending on the printing process), whereas on the web, colours are basically free. And then there’s the biggest difference of all: working within known dimensions of a fixed page in print compared to working within the unknowable dimensions of flexible viewports on the web.
Fast forward to today and we’ve got a lot of Computer Science graduates moving into front-end development. They’re bringing with them a treasure trove of experience in writing robust scalable code. But web browsers aren’t like web servers. If your back-end code is getting so big that it’s starting to run noticeably slowly, you can throw more computing power at it by scaling up your server. That’s not an option on the front-end where you don’t really have one run-time environment—your end users have their own run-time environment with its own constraints around computing power and network connectivity.
That’s a very, very challenging world to get your head around. The safer option is to stick to the mental model you’re familiar with, whether you’re a print designer or a Computer Science graduate. But that does a disservice to end users who are relying on you to deliver a good experience on the World Wide Web.
Saturday, November 16th, 2019
Here are the slides from my opening keynote at Beyond Tellerrand on Thursday. They don’t make much sense out of context.
I’m really pleased with how this turned out. I wasn’t sure if anybody was going to be interested in the deep dive into history that I took for the first 15 or 20 minutes, but lots of people told me that they really enjoyed that part, so that makes me happy.
Monday, November 11th, 2019
The slides from Laura’s excellent talk at FF Conf on Friday.
Thursday, November 7th, 2019
Timelines of people, interfaces, technologies and more:
30 years of facts about the World Wide Web.