
Wednesday, January 16th, 2019

Signal v Noise exits Medium – Signal v. Noise

Traditional blogs might have swung out of favor, as we all discovered the benefits of social media and aggregating platforms, but we think they’re about to swing back in style, as we all discover the real costs and problems brought by such centralization.

Friday, January 11th, 2019

It’s What You Make, Not How You Make It.

How did I miss this great post from 2016 by one of my favourite people‽ It’s even more relevant today.

To me it doesn’t matter whether you write your HTML and CSS by hand or use JavaScript to generate it for you. What matters is the output, how it is structured, and how it is served to the client. When we allow our tools to take precedent over the quality of our output the entire web suffers. Sites are likely to be less accessible, less performant, and suffer from poor semantics.

Four Cool URLs - Alex Pounds’ Blog

A fellow URL fetishist!

I love me a well-designed URL scheme—here are four interesting approaches.

URLs are consumed by machines, but they should be designed for humans. If your URL thinking stops at “uniquely identifies a page” and “good for SEO”, you’re missing out.

Wednesday, January 9th, 2019

The Ethics of Performance - TimKadlec.com

When you stop to consider all the implications of poor performance, it’s hard not to come to the conclusion that poor performance is an ethical issue.

Monday, January 7th, 2019

CSS-only multiple choice quizzing - Matthew Somerville

In which Matthew dissects a multiple choice quiz that uses CSS to do some clever logic, using the :checked pseudo-class and counter-increment.

Oh, and this is how he realised it wasn’t using JavaScript:

I have JavaScript disabled on my phone because a) it cuts out most of the ads, b) it cuts out lots of bandwidth and I have a limited data plan, and c) my battery lasts longer because it’s not processing tons of code to show me some text (cough, Medium).

Resolution

The start of a new year is the traditional time for making resolutions. I’ve done it in the past. Now I’m not sure it’s such a good idea.

Think about it. It’s January. The middle of winter. It’s cold outside. The days are short. The only seasonal foods available are root vegetables and brassicas. Considering this lack of sunlight and fruit, it seems inadvisable to try to also deny yourself the intake of sugar, alcohol, meat, carbohydrates or gluten. You’re playing with a stacked deck. And then when inevitably, in the depths of winter, you cave in and pour yourself a glass of wine or indulge in a piece of cake, you now have the added weight of guilt on your shoulders to carry through the neverending winter nights.

Of course not all resolutions involve the abnegation of material pleasures. Many a new year’s promise involves a renewed commitment to work, exercise, or culture-vulching. But again, is this really the best time of year to do that? Given the weather, are you really in the best frame of mind to tackle such a tall order?

No, I don’t think I’ll be making any new year’s resolutions. If anything, this is the time of year when I won’t feel bad about having a pint of ale or a comforting stew. It’s also the time of year when I’m going to cut myself more slack if I’m not exercising diligently or working hard. Let’s face it, just making it through these months intact should be achievement enough.

If I were to make a resolution, it would only be that, come summertime, I’ll take stock and maybe make a commitment to cut down on some guilty pleasure or increase some noble activity then. A midsummer’s resolution, if you will.

Until then, I’ll be cosying up and indulging in any bodily comforts I crave. My resolve to do that is strong.

Friday, December 28th, 2018

Malicious AI Report

Well, this is an interesting format experiment—the latest Black Mirror just dropped, and it’s a PDF.

Wednesday, December 19th, 2018

Performance Calendar » All about prefetching

A good roundup of techniques for responsible prefetching from Katie Hempenius.

ANDI - Accessibility Testing Tool - Install

Another bookmarklet for checking accessibility—kind of like tota11y—that lets you preview how screen readers will handle images, focusable elements, and more.

Friday, December 14th, 2018

GitHub - GoogleChromeLabs/quicklink: ⚡️Faster subsequent page-loads by prefetching in-viewport links during idle time

This looks like a very handy little performance-enhancing script: it attempts to prefetch some links, but in a responsible way. It won’t do any prefetching on slow connections or where data saving is enabled, and it only prefetches when the browser is idle.
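
The underlying approach is roughly this (a sketch of the general idea, not quicklink’s actual source; the feature checks and thresholds are simplified):

const connection = navigator.connection;
const saveData = connection && connection.saveData;
const slowConnection = connection && /2g/.test(connection.effectiveType || '');

if (!saveData && !slowConnection && 'IntersectionObserver' in window) {
  const prefetch = (url) => {
    // A <link rel="prefetch"> hints the browser to fetch the page at low priority.
    const link = document.createElement('link');
    link.rel = 'prefetch';
    link.href = url;
    document.head.appendChild(link);
  };

  // Watch for links entering the viewport...
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        observer.unobserve(entry.target);
        // ...and only fetch them once the browser has nothing better to do.
        requestIdleCallback(() => prefetch(entry.target.href));
      }
    });
  });

  document.querySelectorAll('a[href]').forEach((anchor) => observer.observe(anchor));
}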

Monday, December 3rd, 2018

Voxxed Thessaloniki 2018 - Opening Keynote - Taking Back The Web - YouTube

Here’s the talk I gave recently about indie web building blocks.

There’s fifteen minutes of Q&A starting around the 35 minute mark. People asked some great questions!

Monday, November 26th, 2018

Prototypes and production

When we do front-end development at Clearleft, we’re usually delivering production code, often in the form of a component library. That means our priorities are performance, accessibility, robustness, and other markers of quality when it comes to web development.

But every so often, we use the materials of front-end development—HTML, CSS, and JavaScript—to produce something that isn’t intended for production. I’m talking about prototyping.

There are plenty of non-code prototyping tools out there, and our designers often reach for them to communicate subtleties like motion design. But when it comes to testing a prototype with real users, it’s hard to beat the flexibility of HTML, CSS, and JavaScript. Load it up in a browser and away you go.

We do a lot of design sprints, where time is of the essence. The prototype we produce on the penultimate day of the sprint definitely won’t be production quality, but it will be good enough to test.

What’s interesting is that—when it comes to prototyping—our usual front-end priorities can and should go out the window. The priority now is speed. If that means sacrificing semantics or performance, then so be it. If I’m building a prototype and I find myself thinking “now, what’s the right class name for this component?”, then I know I’m in the wrong mindset. That question might be valid for production code, but it’s a waste of time for prototypes.

So these two kinds of work require very different attitudes. For production work, quality is key. For prototyping, making something quickly is what matters.

Whereas I would think long and hard about the performance impacts of third-party libraries and frameworks on a public project, I won’t give it a second thought when it comes to a prototype. Throw all the JavaScript frameworks and CSS libraries you want at it (although I would argue that in-browser technologies like CSS Grid have made CSS libraries like Bootstrap less necessary, even for prototyping).

Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.

When a prototype is successful, works great, and tests well, there’s a real temptation to use the prototype code as the basis for the final product. Don’t do this! I’ve made that mistake in the past and it always ends badly. I ended up spending far more time trying to wrangle prototype code to a production level than if I had just started from a clean slate.

Build prototypes to test ideas, designs, interactions, and interfaces …and then throw the code away. The value of a prototype is in answering questions and testing hypotheses. Don’t fall for the sunk cost fallacy when it’s time to switch over into production mode.

Of course it should go without saying that you should never, ever release prototype code into production.

And yet…

More and more live sites seem to be built with a prototyping mindset. Weighty JavaScript frameworks are used regardless of appropriateness. Accessibility, if it’s even considered at all, is relegated to an afterthought. Fragile architectures are employed that rely on first loading and then executing JavaScript in order to render basic content. Developer experience is prioritised over user experience.

Heydon recently highlighted an article that offered this tip for aspiring web developers:

As for HTML, there’s not much to learn right away and you can kind of learn as you go, but before making your first templates, know the difference between in-line elements like span and how they differ from block ones like div.

That’s perfectly reasonable advice …if you’re building a prototype. But if you’re building something for public consumption, you have a duty of care to the end users.

Thursday, November 22nd, 2018

Great Leap Years - Official site of Stephen Fry

I just binge-listened to the six episodes of the first season of this podcast from Stephen Fry—it’s excellent!

It covers the history of communication from the emergence of language to the modern day. At first I was worried that it was going to rehash some of the more questionable ideas in the risible Sapiens, but it turned out to be far more like James Gleick’s The Information or Tom Standage’s The Victorian Internet (two of my favourite books on the history of technology).

There’s no annoying sponsorship interruptions and the whole series feels more like an audiobook than a podcast—an audiobook researched, written and read by Stephen Fry!

Tuesday, November 13th, 2018

Optimise without a face

I’ve been playing around with the newly-released Squoosh, the spiritual successor to Jake’s SVGOMG. You can drag images into the browser window, and eyeball the changes that any optimisations might make.

On a project that Cassie is working on, it worked really well for optimising some JPEGs. But there were a few images that would require a bit more fine-grained control of the optimisations. Specifically, pictures with human faces in them.

I’ve written about this before. If there’s a human face in an image, I open that image in a graphics editing tool like Photoshop, select everything but the face, and add a bit of blur. Because humans are hard-wired to focus on faces, we’ll notice any jaggy artifacts on a face, but we’re far less likely to notice jagginess in background imagery: walls, materials, clothing, etc.

On the face of it (hah!), a browser-based tool like Squoosh wouldn’t be able to optimise for faces, but then Cassie pointed out something really interesting…

When we were both at FFConf on Friday, there was a great talk by Eleanor Haproff on machine learning with JavaScript. It turns out there are plenty of smart toolkits out there, and one of them is facial recognition. So I wonder if it’s possible to build an in-browser tool with this workflow:

  • Drag or upload an image into the browser window,
  • A facial recognition algorithm finds any faces in the image,
  • Those portions of the image remain crisp,
  • The rest of the image gets a slight blur,
  • Download the optimised image.

Maybe the selecting/blurring part would need canvas? I don’t know.
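
Something like this, perhaps: a very rough sketch that assumes the experimental FaceDetector API (behind a flag in Chrome at the time of writing; a library like face-api.js could stand in for it) and leans on canvas for the blurring:

async function blurEverythingButFaces(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const context = canvas.getContext('2d');

  // Draw a slightly blurred version of the whole image as the base layer.
  context.filter = 'blur(2px)';
  context.drawImage(img, 0, 0);
  context.filter = 'none';

  // Find any faces and re-draw those regions crisply on top.
  const faces = await new FaceDetector().detect(img);
  for (const face of faces) {
    const { x, y, width, height } = face.boundingBox;
    context.drawImage(img, x, y, width, height, x, y, width, height);
  }

  // Hand back a blob, ready for downloading or further compression.
  return new Promise((resolve) => canvas.toBlob(resolve, 'image/jpeg', 0.8));
}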

Anyway, I thought this was a brilliant bit of synthesis from Cassie, and now I’ve got two questions:

  1. Does this exist yet? And, if not,
  2. Does anyone want to try building it?

Inlining or Caching? Both Please! | Filament Group, Inc., Boston, MA

This just blew my mind! A fiendishly clever pattern that allows you to inline resources (like critical CSS) and cache that same content for later retrieval by a service worker.

Crazy clever!
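
If I understand it correctly, the gist is something like this (a paraphrase rather than the article’s actual code; the element ID, cache name, and stylesheet URL are all placeholders):

if ('caches' in window) {
  // The page has inlined its critical CSS in <style id="critical-css">.
  const inlined = document.getElementById('critical-css');
  if (inlined) {
    // Copy that same text into the cache under the stylesheet's URL, so later
    // visits can reference the external file and get a cache hit via the
    // service worker.
    caches.open('site-static').then((cache) => {
      const response = new Response(inlined.textContent, {
        headers: { 'Content-Type': 'text/css' }
      });
      cache.put('/css/site.css', response);
    });
  }
}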

Monday, November 12th, 2018

Squoosh

A handy in-browser image compression tool. Drag, drop, tweak, and export.

Home  |  web.dev

I guess this domain name is why our local development environments stopped working.

Anyway, it’s a web interface onto Lighthouse (note that it has the same bugs as the version of Lighthouse in Chrome). Kind of like webhint.io.

Saturday, November 10th, 2018

Webmentions at Indie Web Camp Berlin

I was in Berlin for most of last week, and every day was packed with activity.

By the time I got back to Brighton, my brain was full …just in time for FF Conf.

All of the events were very different, but equally enjoyable. It was also quite nice to just attend events without speaking at them.

Indie Web Camp Berlin was terrific. There was an excellent turnout, and once again, I found that the format was just right: a day of discussions (BarCamp style) followed by a day of doing (coding, designing, hacking). I got very inspired on the first day, so I was raring to go on the second.

What I like to do on the second day is try to complete two tasks; one that’s fairly straightforward, and one that’s a bit tougher. That way, when it comes time to demo at the end of the day, even if I haven’t managed to complete the tougher one, I’ll still be able to demo the simpler one.

In this case, the tougher one was also tricky to demo. It involved a lot of invisible behind-the-scenes plumbing. I was tweaking my webmention endpoint (stop sniggering—tweaking your endpoint is no laughing matter).

Up until now, I could handle straightforward webmentions, and I could handle updates (if I receive more than one webmention from the same link, I check it each time). But I needed to also handle deletions.

The spec is quite clear on this. A 404 isn’t enough to trigger a deletion—that might be a temporary state. But a status of 410 Gone indicates that a resource was once here but has since been deliberately removed. In that situation, any stored webmentions for that link should also be removed.
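
In other words, the logic I’m after looks roughly like this (the storage helpers here are stand-ins for whatever an endpoint actually uses):

async function checkWebmentionSource(source, target) {
  const response = await fetch(source);

  if (response.status === 410) {
    // 410 Gone: the resource was deliberately removed,
    // so delete any webmentions stored for this link.
    await deleteStoredWebmentions(source, target);
  } else if (!response.ok) {
    // A 404 (or other error) might only be temporary: leave stored webmentions alone.
  } else {
    // Otherwise re-verify the link and store or update the webmention as usual.
    const html = await response.text();
    await updateStoredWebmentions(source, target, html);
  }
}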

Anyway, I think I got it working, but it’s tricky to test and even trickier to demo. “Not to worry”, I thought, “I’ve always got my simpler task.”

For that, I chose to add a little map to my homepage showing the last location I published something from. I’ve been geotagging all my content for years (journal entries, notes, links, articles), but not really doing anything with that data. This is a first step to doing something interesting with many years of location data.

I’ve got it working now, but the demo gods really weren’t with me at Indie Web Camp. Both of my demos failed. The webmention demo failed quite embarrassingly.

As well as handling deletions, I also wanted to handle updates where a URL that once linked to a post of mine no longer does. Just to be clear, the URL still exists—it’s not 404 or 410—but it has been updated to remove the original link back to one of my posts. I know this sounds like another very theoretical situation, but I’ve actually got an example of it on my very first webmention test post from five years ago. Believe it or not, there’s an escort agency in Nottingham that’s using webmention as a vector for spam. They post something that does link to my test post, send a webmention, and then remove the link to my test post. I almost admire their dedication.

Still, I wanted to foil this particular situation so I thought I had updated my code to handle it. Alas, when it came time to demo this, I was using someone else’s computer, and in my attempt to right-click and copy the URL of the spam link …I accidentally triggered it. In front of a room full of people. It was mildly NSFW, but more worryingly, a potential Code Of Conduct violation. I’m very sorry about that.

Apart from the humiliating demo, I thoroughly enjoyed Indie Web Camp, and I’m going to keep adjusting my webmention endpoint. There was a terrific discussion around the ethical implications of storing webmentions, led by Sebastian, based on his epic post from earlier this year.

We established early in the discussion that we weren’t going to try to solve legal questions—like GDPR “compliance”, which varies depending on which lawyer you talk to—but rather try to figure out what the right thing to do is.

Earlier that day, during the introductions, I quite happily showed webmentions in action on my site. I pointed out that my last blog post had received a response from another site, and because that response was marked up as an h-entry, I displayed it in full on my site. I thought this was all hunky-dory, but now this discussion around privacy made me question some inferences I was making:

  1. By receiving a webmention in the first place, I was inferring a willingness for the link to be made public. That’s not necessarily true, as someone pointed out: a CMS could be automatically sending webmentions, which the author might be unaware of.
  2. If the linking post is marked up in h-entry, I was inferring a willingness for the content to be republished. Again, not necessarily true.

That second inference of mine—that publishing in a particular format somehow grants permissions—actually has an interesting precedent: Google AMP. Simply by including the Google AMP script on a web page, you are implicitly giving Google permission to store a complete copy of that page and serve it from their servers instead of sending people to your site. No terms and conditions. No checkbox ticked. No “I agree” button pressed.

Just sayin’.

Anyway, when it comes to my own processing of webmentions, I’m going to take some of the suggestions from the discussion on board. There are certain signals I could be looking for in the linking post:

  • Does it include a link to a licence?
  • Is there a restrictive robots.txt file?
  • Are there meta declarations that say noindex?

Each one of these could help to infer whether or not I should be publishing a webmention. I quickly realised that what we’re talking about here is an algorithm.

Despite its current usage to mean “magic”, an algorithm is a recipe. It’s a series of steps that contribute to a decision point. The problem is that, in the case of silos like Facebook or Instagram, the algorithms are secret (which probably contributes to their aura of magical thinking). If I’m going to write an algorithm that handles other people’s information, I don’t want to make that mistake. Whatever steps I end up codifying in my webmention endpoint, I’ll be sure to document them publicly.
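
To make that concrete, the recipe might look something like this. It’s very much a sketch, and none of these signal-checking functions exist yet:

async function decideWebmentionHandling(sourceUrl, sourceHtml) {
  // Is there a restrictive robots.txt that disallows this URL?
  if (await disallowedByRobotsTxt(sourceUrl)) {
    return 'do not publish';
  }
  // Are there meta declarations that say noindex?
  if (declaresNoindex(sourceHtml)) {
    return 'do not publish';
  }
  // Does the linking post include a link to a licence?
  if (linksToLicence(sourceHtml)) {
    return 'publish in full';
  }
  // Default: publish a bare link rather than republishing the full content.
  return 'publish link only';
}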

CSS and Network Performance – CSS Wizardry

Harry takes a look at the performance implications of loading CSS. To be clear, this is not about the performance of CSS selectors or ordering (which really doesn’t make any difference at this point), but rather it’s about the different ways of getting rid of as much render-blocking CSS as possible.

…a good rule of thumb to remember is that your page will only render as quickly as your slowest stylesheet.

Thursday, November 8th, 2018

Designing, laws, and attitudes. — Ethan Marcotte

Ethan ponders what the web might be like if the kind of legal sticks that exist for accessibility in some countries also existed for performance.