Traditional blogs might have swung out of favor, as we all discovered the benefits of social media and aggregating platforms, but we think they’re about to swing back in style, as we all discover the real costs and problems brought by such centralization.
Wednesday, January 16th, 2019
Friday, January 11th, 2019
How did I miss this great post from 2016 by one of my favourite people‽ It’s even more relevant today.
A fellow URL fetishist!
I love me a well-designed URL scheme—here’s four interesting approaches.
URLs are consumed by machines, but they should be designed for humans. If your URL thinking stops at “uniquely identifies a page” and “good for SEO”, you’re missing out.
Wednesday, January 9th, 2019
When you stop to consider all the implications of poor performance, it’s hard not to come to the conclusion that poor performance is an ethical issue.
Monday, January 7th, 2019
In which Matthew dissects a multiple choice quiz that uses CSS to do some clever logic, using the :checked pseudo-class.
The start of a new year is the traditional time for making resolutions. I’ve done it in the past. Now I’m not sure it’s such a good idea.
Think about it. It’s January. The middle of winter. It’s cold outside. The days are short. The only seasonal foods available are root vegetables and brassicas. Considering this lack of sunlight and fruit, it seems inadvisable to try to also deny yourself the intake of sugar, alcohol, meat, carbohydrates or gluten. You’re playing with a stacked deck. And then when inevitably, in the depths of winter, you cave in and pour yourself a glass of wine or indulge in a piece of cake, you now have the added weight of guilt on your shoulders to carry through the neverending winter nights.
Of course not all resolutions involve the abnegation of material pleasures. Many a new year’s promise involves a renewed commitment to work, exercise, or culture-vulching. But again, is this really the best time of year to do that? Given the weather, are you really in the best frame of mind to tackle such a tall order?
No, I don’t think I’ll be making any new year’s resolutions. If anything, this is the time of year when I won’t feel bad about having a pint of ale or a comforting stew. It’s also the time of year when I’m going to cut myself more slack if I’m not exercising diligently or working hard. Let’s face it, just making it through these months intact should be achievement enough.
If I were to make a resolution, it would only be that, come summertime, I’ll take stock and maybe make a commitment to cut down on some guilty pleasure or increase some noble activity then. A midsummer’s resolution, if you will.
Until then, I’ll be cosying up and indulging in any bodily comforts I crave. My resolve to do that is strong.
Friday, December 28th, 2018
Well, this is an interesting format experiment—the latest Black Mirror just dropped, and it’s a PDF.
Wednesday, December 19th, 2018
Friday, December 14th, 2018
GitHub - GoogleChromeLabs/quicklink: ⚡️Faster subsequent page-loads by prefetching in-viewport links during idle time
This looks like a very handy little performance-enhancing script: it attempts to prefetch some links, but in a responsible way. It won’t do any prefetching on slow connections or where data saving is enabled, and it only prefetches when the browser is idle.
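For anyone curious what that combination of checks looks like in practice, here’s a rough hand-rolled sketch of the same idea (to be clear, this is not quicklink’s actual API), using IntersectionObserver, the Network Information API where it’s available, and requestIdleCallback:

```javascript
// A hand-rolled sketch of the same idea (not quicklink's actual API):
// prefetch in-viewport links, but only on decent connections,
// only when data saving is off, and only when the browser is idle.
const connection = navigator.connection || {};
const okToPrefetch = !connection.saveData && !/2g/.test(connection.effectiveType || '');

const whenIdle = window.requestIdleCallback
  ? (fn) => window.requestIdleCallback(fn)
  : (fn) => setTimeout(fn, 0);

function prefetch(url) {
  const link = document.createElement('link');
  link.rel = 'prefetch';
  link.href = url;
  document.head.appendChild(link);
}

if (okToPrefetch && 'IntersectionObserver' in window) {
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        observer.unobserve(entry.target);
        whenIdle(() => prefetch(entry.target.href));
      }
    });
  });
  // Only watch same-site links.
  document.querySelectorAll('a[href^="/"]').forEach((a) => observer.observe(a));
}
```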
Monday, December 3rd, 2018
Here’s the talk I gave recently about indie web building blocks.
There’s fifteen minutes of Q&A starting around the 35 minute mark. People asked some great questions!
Monday, November 26th, 2018
Prototypes and production
When we do front-end development at Clearleft, we’re usually delivering production code, often in the form of a component library. That means our priorities are performance, accessibility, robustness, and other markers of quality when it comes to web development.
We do a lot of design sprints, where time is of the essence. The prototype we produce on the penultimate day of the sprint definitely won’t be production quality, but it will be good enough to test.
What’s interesting is that—when it comes to prototyping—our usual front-end priorities can and should go out the window. The priority now is speed. If that means sacrificing semantics or performance, then so be it. If I’m building a prototype and I find myself thinking “now, what’s the right class name for this component?”, then I know I’m in the wrong mindset. That question might be valid for production code, but it’s a waste of time for prototypes.
So these two kinds of work require very different attitudes. For production work, quality is key. For prototyping, making something quickly is what matters.
Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.
When a prototype is successful, works great, and tests well, there’s a real temptation to use the prototype code as the basis for the final product. Don’t do this! I’ve made that mistake in the past and it always ends badly. I ended up spending far more time trying to wrangle prototype code to a production level than if I had just started from a clean slate.
Build prototypes to test ideas, designs, interactions, and interfaces …and then throw the code away. The value of a prototype is in answering questions and testing hypotheses. Don’t fall for the sunk cost fallacy when it’s time to switch over into production mode.
Of course it should go without saying that you should never, ever release prototype code into production.
Heydon recently highlighted an article that offered this tip for aspiring web developers:
As for HTML, there’s not much to learn right away and you can kind of learn as you go, but before making your first templates, know the difference between in-line elements like span and how they differ from block ones like div.
That’s perfectly reasonable advice …if you’re building a prototype. But if you’re building something for public consumption, you have a duty of care to the end users.
Thursday, November 22nd, 2018
I just binge-listened to the six episodes of the first season of this podcast from Stephen Fry—it’s excellent!
It covers the history of communication from the emergence of language to the modern day. At first I was worried that it was going to rehash some of the more questionable ideas in the risible Sapiens, but it turned out to be far more like James Gleick’s The Information or Tom Standage’s The Victorian Internet (two of my favourite books on the history of technology).
There are no annoying sponsorship interruptions and the whole series feels more like an audiobook than a podcast—an audiobook researched, written and read by Stephen Fry!
Tuesday, November 13th, 2018
Optimise without a face
I’ve been playing around with the newly-released Squoosh, the spiritual successor to Jake’s SVGOMG. You can drag images into the browser window, and eyeball the changes that any optimisations might make.
On a project that Cassie is working on, it worked really well for optimising some JPEGs. But there were a few images that would require a bit more fine-grained control of the optimisations. Specifically, pictures with human faces in them.
I’ve written about this before. If there’s a human face in an image, I open that image in a graphics editing tool like Photoshop, select everything but the face, and add a bit of blur. Because humans are hard-wired to focus on faces, we’ll notice any jaggy artifacts on a face, but we’re far less likely to notice jagginess in background imagery: walls, materials, clothing, etc.
On the face of it (hah!), a browser-based tool like Squoosh wouldn’t be able to optimise for faces, but then Cassie pointed out something really interesting…
- Drag or upload an image into the browser window,
- A facial recognition algorithm finds any faces in the image,
- Those portions of the image remain crisp,
- The rest of the image gets a slight blur,
- Download the optimised image.
Maybe the selecting/blurring part would need canvas? I don’t know.
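For what it’s worth, here’s a rough sketch of how the selecting and blurring steps might work in the browser. It assumes the experimental FaceDetector API (currently behind a flag in Chrome) and leans on a canvas filter for the blur:

```javascript
// A sketch, assuming the experimental FaceDetector API is available
// and that a slight canvas blur is good enough for the background.
async function blurAroundFaces(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext('2d');

  // Draw a slightly blurred version of the whole image first.
  ctx.filter = 'blur(2px)';
  ctx.drawImage(img, 0, 0);

  // Then redraw each detected face region without the blur.
  const faces = await new FaceDetector().detect(img);
  ctx.filter = 'none';
  faces.forEach(({ boundingBox: box }) => {
    ctx.drawImage(
      img,
      box.x, box.y, box.width, box.height, // source rectangle (the face)
      box.x, box.y, box.width, box.height  // same rectangle on the canvas
    );
  });

  // Hand the result off for compression and download.
  return new Promise((resolve) => canvas.toBlob(resolve, 'image/jpeg', 0.8));
}
```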
Anyway, I thought this was a brilliant bit of synthesis from Cassie, and now I’ve got two questions:
- Does this exist yet? And, if not,
- Does anyone want to try building it?
This just blew my mind! A fiendishly clever pattern that allows you to inline resources (like critical CSS) and cache that same content for later retrieval by a service worker.
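If I’ve understood the pattern correctly, the gist goes something like this: inline the CSS in a style element, then put that same text into a cache under the stylesheet’s URL so that a service worker can answer for it on subsequent visits. Here’s a very rough sketch, where the element ID and the file name are just placeholders:

```javascript
// In the page (an inline script): copy the inlined critical CSS into the
// cache so that later requests for /css/site.css can be served by the
// service worker. "inline-css" and "/css/site.css" are placeholder names.
if ('caches' in window) {
  const inlined = document.getElementById('inline-css');
  if (inlined) {
    caches.open('static-v1').then((cache) => {
      cache.put(
        '/css/site.css',
        new Response(inlined.textContent, {
          headers: { 'Content-Type': 'text/css' }
        })
      );
    });
  }
}

// In the service worker: serve the stylesheet from the cache when we have it.
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (url.pathname === '/css/site.css') {
    event.respondWith(
      caches.match(event.request).then((cached) => cached || fetch(event.request))
    );
  }
});
```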
Monday, November 12th, 2018
A handy in-browser image compression tool. Drag, drop, tweak, and export.
Saturday, November 10th, 2018
Webmentions at Indie Web Camp Berlin
I was in Berlin for most of last week, and every day was packed with activity:
- Indie Web Camp on Saturday and Sunday,
- Accessibility Club on Monday,
- Beyond Tellerrand on Tuesday and Wednesday.
By the time I got back to Brighton, my brain was full …just in time for FF Conf.
All of the events were very different, but equally enjoyable. It was also quite nice to just attend events without speaking at them.
Indie Web Camp Berlin was terrific. There was an excellent turnout, and once again, I found that the format was just right: a day of discussions (BarCamp style) followed by a day of doing (coding, designing, hacking). I got very inspired on the first day, so I was raring to go on the second.
What I like to do on the second day is try to complete two tasks; one that’s fairly straightforward, and one that’s a bit tougher. That way, when it comes time to demo at the end of the day, even if I haven’t managed to complete the tougher one, I’ll still be able to demo the simpler one.
In this case, the tougher one was also tricky to demo. It involved a lot of invisible behind-the-scenes plumbing. I was tweaking my webmention endpoint (stop sniggering—tweaking your endpoint is no laughing matter).
Up until now, I could handle straightforward webmentions, and I could handle updates (if I receive more than one webmention from the same link, I check it each time). But I needed to also handle deletions.
The spec is quite clear on this. A 404 isn’t enough to trigger a deletion—that might be a temporary state. But a status of 410 Gone indicates that a resource was once here but has since been deliberately removed. In that situation, any stored webmentions for that link should also be removed.
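The shape of that logic is something like this. It’s a sketch rather than my actual endpoint code, with a hypothetical deleteWebmentionsFor() standing in for the storage layer:

```javascript
// A sketch of the deletion logic; deleteWebmentionsFor() is a
// hypothetical helper for whatever storage is being used.
async function verifySource(source, target) {
  const response = await fetch(source);

  if (response.status === 410) {
    // 410 Gone: the source was deliberately removed,
    // so remove any stored webmentions for it.
    await deleteWebmentionsFor(source, target);
    return;
  }

  if (response.status === 404) {
    // A 404 might be a temporary state, so leave stored webmentions alone.
    return;
  }

  // Otherwise, carry on and verify the webmention as usual.
}
```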
Anyway, I think I got it working, but it’s tricky to test and even trickier to demo. “Not to worry”, I thought, “I’ve always got my simpler task.”
For that, I chose to add a little map to my homepage showing the last location I published something from. I’ve been geotagging all my content for years (journal entries, notes, links, articles), but not really doing anything with that data. This is a first step to doing something interesting with many years of location data.
I’ve got it working now, but the demo gods really weren’t with me at Indie Web Camp. Both of my demos failed. The webmention demo failed quite embarrassingly.
As well as handling deletions, I also wanted to handle updates where a URL that once linked to a post of mine no longer does. Just to be clear, the URL still exists—it’s not 404 or 410—but it has been updated to remove the original link back to one of my posts. I know this sounds like another very theoretical situation, but I’ve actually got an example of it on my very first webmention test post from five years ago. Believe it or not, there’s an escort agency in Nottingham that’s using webmention as a vector for spam. They post something that does link to my test post, send a webmention, and then remove the link to my test post. I almost admire their dedication.
Still, I wanted to foil this particular situation so I thought I had updated my code to handle it. Alas, when it came time to demo this, I was using someone else’s computer, and in my attempt to right-click and copy the URL of the spam link …I accidentally triggered it. In front of a room full of people. It was mildly NSFW, but more worryingly, a potential Code of Conduct violation. I’m very sorry about that.
Apart from the humiliating demo, I thoroughly enjoyed Indie Web Camp, and I’m going to keep adjusting my webmention endpoint. There was a terrific discussion around the ethical implications of storing webmentions, led by Sebastian, based on his epic post from earlier this year.
We established early in the discussion that we weren’t going to try to solve legal questions—like GDPR “compliance”, which varies depending on which lawyer you talk to—but rather try to figure out what the right thing to do is.
Earlier that day, during the introductions, I quite happily showed webmentions in action on my site. I pointed out that my last blog post had received a response from another site, and because that response was marked up as an h-entry, I displayed it in full on my site. I thought this was all hunky-dory, but now this discussion around privacy made me question some inferences I was making:
- By receiving a webmention in the first place, I was inferring a willingness for the link to be made public. That’s not necessarily true, as someone pointed out: a CMS could be automatically sending webmentions, which the author might be unaware of.
- If the linking post is marked up in h-entry, I was inferring a willingness for the content to be republished. Again, not necessarily true.
That second inference of mine—that publishing in a particular format somehow grants permissions—actually has an interesting precedent: Google AMP. Simply by including the Google AMP script on a web page, you are implicitly giving Google permission to store a complete copy of that page and serve it from their servers instead of sending people to your site. No terms and conditions. No checkbox ticked. No “I agree” button pressed.
Anyway, when it comes to my own processing of webmentions, I’m going to take some of the suggestions from the discussion on board. There are certain signals I could be looking for in the linking post:
- Does it include a link to a licence?
- Is there a restrictive robots.txt?
- Are there meta declarations that say noindex?
Each one of these could help to infer whether or not I should be publishing a webmention. I quickly realised that what we’re talking about here is an algorithm.
Despite its current usage to mean “magic”, an algorithm is a recipe. It’s a series of steps that contribute to a decision point. The problem is that, in the case of silos like Facebook or Instagram, the algorithms are secret (which probably contributes to their aura of magical thinking). If I’m going to write an algorithm that handles other people’s information, I don’t want to make that mistake. Whatever steps I end up codifying in my webmention endpoint, I’ll be sure to document them publicly.
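To give a flavour of what those codified steps might look like, here’s a sketch that takes a parsed copy of the linking post (from DOMParser, say) and decides whether to display the webmention. The specific signals and the decision rule here are assumptions rather than a finished policy:

```javascript
// One possible shape for the algorithm; the signals and the final
// decision rule are assumptions, not a finished policy.
function shouldDisplayWebmention(linkingDocument) {
  // Signal: does the post link to a licence?
  const hasLicence = Boolean(
    linkingDocument.querySelector('a[rel~="license"]')
  );

  // Signal: do meta robots declarations ask not to be indexed?
  const robots = linkingDocument.querySelector('meta[name="robots"]');
  const noindex = Boolean(
    robots && /noindex/i.test(robots.getAttribute('content') || '')
  );

  // Signal: is the post marked up as an h-entry at all?
  const isHEntry = Boolean(linkingDocument.querySelector('.h-entry'));

  // Combine the signals into a decision, and (crucially) document
  // whatever rule ends up here.
  return isHEntry && hasLicence && !noindex;
}
```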
Harry takes a look at the performance implications of loading CSS. To be clear, this is not about the performance of CSS selectors or ordering (which really doesn’t make any difference at this point), but rather it’s about the different ways of getting rid of as much render-blocking CSS as possible.
…a good rule of thumb to remember is that your page will only render as quickly as your slowest stylesheet.
Thursday, November 8th, 2018
Ethan ponders what the web might be like if the kind of legal sticks that exist for accessibility in some countries also existed for performance.