Tuesday, April 30th, 2019
Jason describes the next big thing in web typography: streaming fonts!
…to enable the ability for only the required part of the font to be downloaded on any given page, and for subsequent requests for that font to dynamically ‘patch’ the original download with additional sets of glyphs as required on successive page views—even if they occur on separate sites.
Friday, January 11th, 2019
A fellow URL fetishist!
I love me a well-designed URL scheme—here’s four interesting approaches.
URLs are consumed by machines, but they should be designed for humans. If your URL thinking stops at “uniquely identifies a page” and “good for SEO”, you’re missing out.
Monday, December 3rd, 2018
Here’s the talk I gave recently about indie web building blocks.
There’s fifteen minutes of Q&A starting around the 35 minute mark. People asked some great questions!
Tuesday, November 13th, 2018
Optimise without a face
I’ve been playing around with the newly-released Squoosh, the spiritual successor to Jake’s SVGOMG. You can drag images into the browser window, and eyeball the changes that any optimisations might make.
On a project that Cassie is working on, it worked really well for optimising some JPEGs. But there were a few images that would require a bit more fine-grained control of the optimisations. Specifically, pictures with human faces in them.
I’ve written about this before. If there’s a human face in an image, I open that image in a graphics editing tool like Photoshop, select everything but the face, and add a bit of blur. Because humans are hard-wired to focus on faces, we’ll notice any jaggy artifacts on a face, but we’re far less likely to notice jagginess in background imagery: walls, materials, clothing, etc.
On the face of it (hah!), a browser-based tool like Squoosh wouldn’t be able to optimise for faces, but then Cassie pointed out something really interesting…
- Drag or upload an image into the browser window,
- A facial recognition algorithm finds any faces in the image,
- Those portions of the image remain crisp,
- The rest of the image gets a slight blur,
- Download the optimised image.
Maybe the selecting/blurring part would need canvas? I don’t know.
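As it happens, canvas would probably do the trick. Here’s a rough sketch of the whole flow in TypeScript, assuming the experimental FaceDetector interface from the Shape Detection API (only available behind a flag in Chromium at the time of writing) and a hard-coded amount of blur:

```typescript
// The Shape Detection API isn't in the standard TypeScript lib typings yet.
declare class FaceDetector {
  detect(image: HTMLImageElement): Promise<Array<{ boundingBox: DOMRectReadOnly }>>;
}

// Blur everything in an image except any detected faces,
// then export a JPEG ready for further optimisation.
async function blurAllButFaces(img: HTMLImageElement): Promise<Blob> {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext('2d')!;

  // Find any faces in the image.
  const faces = await new FaceDetector().detect(img);

  // Draw the whole image with a slight blur…
  ctx.filter = 'blur(2px)';
  ctx.drawImage(img, 0, 0);

  // …then redraw each face region crisply on top.
  ctx.filter = 'none';
  for (const { boundingBox: b } of faces) {
    ctx.drawImage(img, b.x, b.y, b.width, b.height, b.x, b.y, b.width, b.height);
  }

  // Export for download.
  return new Promise((resolve) =>
    canvas.toBlob((blob) => resolve(blob!), 'image/jpeg', 0.8)
  );
}
```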
Anyway, I thought this was a brilliant bit of synthesis from Cassie, and now I’ve got two questions:
- Does this exist yet? And, if not,
- Does anyone want to try building it?
Monday, November 12th, 2018
A handy in-browser image compression tool. Drag, drop, tweak, and export.
Saturday, November 10th, 2018
Webmentions at Indie Web Camp Berlin
I was in Berlin for most of last week, and every day was packed with activity:
- Indie Web Camp on Saturday and Sunday,
- Accessibility Club on Monday,
- Beyond Tellerrand on Tuesday and Wednesday.
By the time I got back to Brighton, my brain was full …just in time for FF Conf.
All of the events were very different, but equally enjoyable. It was also quite nice to just attend events without speaking at them.
Indie Web Camp Berlin was terrific. There was an excellent turnout, and once again, I found that the format was just right: a day of discussions (BarCamp style) followed by a day of doing (coding, designing, hacking). I got very inspired on the first day, so I was raring to go on the second.
What I like to do on the second day is try to complete two tasks: one that’s fairly straightforward, and one that’s a bit tougher. That way, when it comes time to demo at the end of the day, even if I haven’t managed to complete the tougher one, I’ll still be able to demo the simpler one.
In this case, the tougher one was also tricky to demo. It involved a lot of invisible behind-the-scenes plumbing. I was tweaking my webmention endpoint (stop sniggering—tweaking your endpoint is no laughing matter).
Up until now, I could handle straightforward webmentions, and I could handle updates (if I receive more than one webmention from the same link, I check it each time). But I needed to also handle deletions.
The spec is quite clear on this. A 404 isn’t enough to trigger a deletion—that might be a temporary state. But a status of 410 Gone indicates that a resource was once here but has since been deliberately removed. In that situation, any stored webmentions for that link should also be removed.
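The heart of that rule is mercifully small. Here’s a minimal sketch in TypeScript, with a hypothetical storage helper standing in for however the endpoint actually stores mentions:

```typescript
// Hypothetical storage helper; the real endpoint's storage will differ.
declare function deleteStoredWebmentions(source: string): Promise<void>;

// Re-check the source URL of a previously-received webmention.
async function handlePossibleDeletion(source: string): Promise<void> {
  const response = await fetch(source);
  if (response.status === 410) {
    // 410 Gone: the page was deliberately removed,
    // so remove any webmentions stored for it.
    await deleteStoredWebmentions(source);
  }
  // A 404 might be a temporary state, so it triggers no deletion.
}
```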
Anyway, I think I got it working, but it’s tricky to test and even trickier to demo. “Not to worry”, I thought, “I’ve always got my simpler task.”
For that, I chose to add a little map to my homepage showing the last location I published something from. I’ve been geotagging all my content for years (journal entries, notes, links, articles), but not really doing anything with that data. This is a first step to doing something interesting with many years of location data.
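Something along these lines does the job, sketched here with Leaflet purely as an example mapping library (not necessarily what’s running on my homepage), given the coordinates of the most recent post:

```typescript
import * as L from 'leaflet';

// Hypothetical shape for the most recent geotagged post.
interface LatestPost {
  latitude: number;
  longitude: number;
}

// Render a small map, centred and marked on the last publishing location.
// Assumes an element with id="last-location" exists on the page.
function showLastLocation(post: LatestPost): void {
  const map = L.map('last-location').setView([post.latitude, post.longitude], 11);
  L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
    attribution: '© OpenStreetMap contributors',
  }).addTo(map);
  L.marker([post.latitude, post.longitude]).addTo(map);
}
```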
I’ve got it working now, but the demo gods really weren’t with me at Indie Web Camp. Both of my demos failed. The webmention demo failed quite embarrassingly.
As well as handling deletions, I also wanted to handle updates where a URL that once linked to a post of mine no longer does. Just to be clear, the URL still exists—it’s not 404 or 410—but it has been updated to remove the original link back to one of my posts. I know this sounds like another very theoretical situation, but I’ve actually got an example of it on my very first webmention test post from five years ago. Believe it or not, there’s an escort agency in Nottingham that’s using webmention as a vector for spam. They post something that does link to my test post, send a webmention, and then remove the link to my test post. I almost admire their dedication.
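That check is simple enough to sketch: fetch the source again and confirm that the link to the target is still present. The storage helper here is hypothetical, and a real implementation would parse the document for links rather than naively substring-matching:

```typescript
// Hypothetical storage helper for withdrawing a single stored mention.
declare function deleteStoredWebmention(source: string, target: string): Promise<void>;

// On receiving a repeat webmention, confirm the source still links to us.
async function reverifyWebmention(source: string, target: string): Promise<void> {
  const response = await fetch(source);
  if (!response.ok) {
    return; // 404s and 410s are handled by the deletion logic above
  }
  const html = await response.text();
  if (!html.includes(target)) {
    // The page still exists but no longer links to the target,
    // so the stored webmention should be withdrawn.
    await deleteStoredWebmention(source, target);
  }
}
```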
Still, I wanted to foil this particular situation so I thought I had updated my code to handle it. Alas, when it came time to demo this, I was using someone else’s computer, and in my attempt to right-click and copy the URL of the spam link …I accidentally triggered it. In front of a room full of people. It was mildly NSFW, but more worryingly, a potential Code Of Conduct violation. I’m very sorry about that.
Apart from the humiliating demo, I thoroughly enjoyed Indie Web Camp, and I’m going to keep adjusting my webmention endpoint. There was a terrific discussion around the ethical implications of storing webmentions, led by Sebastian, based on his epic post from earlier this year.
We established early in the discussion that we weren’t going to try to solve legal questions—like GDPR “compliance”, which varies depending on which lawyer you talk to—but rather try to figure out what the right thing to do is.
Earlier that day, during the introductions, I quite happily showed webmentions in action on my site. I pointed out that my last blog post had received a response from another site, and because that response was marked up as an h-entry, I displayed it in full on my site. I thought this was all hunky-dory, but now this discussion around privacy made me question some inferences I was making:
- By receiving a webmention in the first place, I was inferring a willingness for the link to be made public. That’s not necessarily true, as someone pointed out: a CMS could be automatically sending webmentions, which the author might be unaware of.
- If the linking post is marked up in h-entry, I was inferring a willingness for the content to be republished. Again, not necessarily true.
That second inference of mine—that publishing in a particular format somehow grants permissions—actually has an interesting precedent: Google AMP. Simply by including the Google AMP script on a web page, you are implicitly giving Google permission to store a complete copy of that page and serve it from their servers instead of sending people to your site. No terms and conditions. No checkbox ticked. No “I agree” button pressed.
Anyway, when it comes to my own processing of webmentions, I’m going to take some of the suggestions from the discussion on board. There are certain signals I could be looking for in the linking post:
- Does it include a link to a licence?
- Is there a restrictive robots.txt file?
- Are there meta declarations that say noindex?
Each one of these could help to infer whether or not I should be publishing a webmention. I quickly realised that what we’re talking about here is an algorithm.
Despite its current usage to mean “magic”, an algorithm is a recipe. It’s a series of steps that contribute to a decision point. The problem is that, in the case of silos like Facebook or Instagram, the algorithms are secret (which probably contributes to their aura of magical thinking). If I’m going to write an algorithm that handles other people’s information, I don’t want to make that mistake. Whatever steps I end up codifying in my webmention endpoint, I’ll be sure to document them publicly.
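As a starting point, the recipe might look something like this sketch, with the signals from the list above boiled down to booleans (the parsing that produces them is elided):

```typescript
// Signals gathered from the linking post; how they're parsed is elided.
interface LinkingPostSignals {
  licenceUrl?: string;       // a rel="license" link, if present
  robotsRestricted: boolean; // disallowed by the site's robots.txt
  noindex: boolean;          // a meta robots declaration saying noindex
}

type Display = 'full' | 'link-only';

// Each signal is one step in the recipe contributing to the decision.
function decideWebmentionDisplay(post: LinkingPostSignals): Display {
  // Any restrictive signal vetoes republishing the content in full.
  if (post.robotsRestricted || post.noindex) {
    return 'link-only';
  }
  // An explicit licence is a positive signal for showing the full content.
  if (post.licenceUrl) {
    return 'full';
  }
  // With no signals either way, err on the side of caution.
  return 'link-only';
}
```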
Saturday, November 3rd, 2018
Day one of Indie Web Camp Berlin is done, and it was great! Here’s Charlie’s recap of the sessions she attended.
Monday, August 20th, 2018
A good half-hour presentation by Stephen Rushe on the building blocks of the indie web. You can watch the video or look through the slides.
I’ve recently been exploring the world of the IndieWeb, and owning my own content rather than being reliant on the continued existence of “silos” to maintain it. This has led me to discover the varied eco-system of IndieWeb, such as IndieAuth, Microformats, Micropub, Webmentions, Microsub, POSSE, and PESOS.
Tuesday, July 31st, 2018
Aw, this is so nice! Chris points to the way that The Session generates sheet music from abc text:
Here’s another way of thinking of it: I was contacted by a blind user of The Session who hadn’t come across abc notation before. Once they realised how it worked, they said it was like having alt text for sheet music! 🤯
Sunday, July 1st, 2018
A nice little tutorial from Aaron.
Friday, March 30th, 2018
A run-down of digital preservation technologies for very, very long-term storage …in space.
Tuesday, March 27th, 2018
Tim explains why that neat trick of making a really big JPEG with quality set to 0% is no longer necessary, and how the savings you make in bandwidth with that technique are nullified by the expense of the memory footprint needed.
Saturday, February 24th, 2018
Luke Stevens is trying to untangle the very mixed signals being sent from different parts of Google around AMP’s goals. The response he got—before getting shut down—is very telling in its hubris and arrogance.
I believe the people working on the AMP format are well-intentioned, but I also believe they have conflated the best interests of Google with the best interests of the web.
Wednesday, February 14th, 2018
AMP pages aren’t fast because of the AMP format. AMP pages are fast when you visit one via Google search …because of Google’s monopoly on preloading:
Technically, a clever trick. It’s hard to argue with that. Yet I consider it cheating and anti-competitive behavior.
Preloading is exclusive to AMP. Google does not preload non-AMP pages. If Google had a genuine interest in speeding up the whole web on mobile, it could simply preload resources of non-AMP pages as well. Not doing this is a strong hint that another agenda is at work, to say the least.
As of this moment, the power dynamics are skewed pretty severely in favor of Google’s proprietary AMP standard, and against those of us who’d ask this question:
What can I do about AMP?
So, to recap, the web community has stated over and over again that we’re not comfortable with Google incentivizing the use of AMP with search engine carrots. In response, Google has provided yet another search engine carrot for AMP.
This wouldn’t bother me if AMP was open about what it is: a tool for folks to optimize their search engine placement. But of course, that’s not the claim. The claim is that AMP is “for the open web.”
Spot on, Tim. Spot on.
If AMP is truly for the open web, de-couple it from Google search entirely. It has no business there.
Look, AMP, you’re either a tool for the open web, or you’re a tool for Google search. I don’t mind if you’re the latter, but please stop pretending you’re something else.
Tuesday, January 9th, 2018
I signed this open letter.
We are a community of individuals who have a significant interest in the development and health of the World Wide Web (“the Web”), and we are deeply concerned about Accelerated Mobile Pages (“AMP”), a Google project that purportedly seeks to improve the user experience of the Web.
Saturday, November 18th, 2017
Here’s a clever technique to improve the perceived performance of image loading with a polygonal SVG placeholder.