Journal

Wednesday, May 13th, 2020

Sass and clamp

CSS got some pretty nifty features recently. There are the min() and max() functions. If you use them for, say, width, you can use one rule where previously you would’ve needed to use two (a width declaration followed by either min-width or max-width). But they can also be applied to font-size! That’s very nifty—we’ve never had min-font-size or max-font-size properties.
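
For example, a couple of illustrative rules (the selectors and values here are made up, just to show the idea):

.sidebar {
  width: min(30em, 100%); /* the smaller value wins: like width: 30em with max-width: 100% */
}

h1 {
  font-size: max(2rem, 4vw); /* the larger value wins: like a min-font-size of 2rem */
}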

There’s also the clamp() function. That allows you to set a minimum size, a default size, and a maximum size. Again, it can be used for lengths, like width, or for font-size.

Over on thesession.org, I’ve had some media queries in place for a while now that would increase the font-size for larger screens. It’s nothing crucial, just a nice-to-have so that on wide screens, the font is bumped up accordingly. I realised I could replace all those media queries with one clamp() statement, thanks to the vw (viewport width) unit:

font-size: clamp(1rem, 1.333vw, 1.5rem);

By default, the font-size is 1.333vw (1.333% of the viewport width), but it will never get smaller than 1rem and it will never get larger than 1.5rem.

That works, but there’s a bit of an issue with using raw vw units like that. If someone is on a wide screen and they try to adjust the font size, nothing will happen. The viewport width doesn’t change when you bump the font size up or down.

The solution is to mix in some kind of unit that does respond to the font size being bumped up or down (like, say, the rem unit). Handily, clamp() allows you to combine units, just like calc(). So I can do this:

font-size: clamp(1rem, 0.5rem + 0.666vw, 1.5rem);

The result is much the same as my previous rule, but now—thanks to the presence of that 0.5rem value—the font size responds to being adjusted by the user.

You could use a full 1rem in that default value:

font-size: clamp(1rem, 1rem + 0.333vw, 1.5rem);

…but if you do that, the minimum size (1rem) will never be reached—the default value will always be larger. So in effect it’s no different than saying:

font-size: min(1rem + 0.333vw, 1.5rem);

I mentioned this to Chris just the other day.

Anyway, I got the result I wanted. I wanted the font size to stay at the browser default size (usually 16 pixels) until the screen was larger than around 1200 pixels. From there, the font size gets gradually bigger, until it hits one and a half times the browser default (which would be 24 pixels if the default size started at 16). I decided to apply it to the :root element (which is html) using percentages:

:root {
  font-size: clamp(100%, 50% + 0.666vw, 150%);
}

(My thinking goes like this: if we take a screen width of 1200 pixels, then 1vw would be 12 pixels: 1200 divided by 100. So for a font size of 16 pixels, that would be 1.333vw. But because I’m combining it with half of the default font size—50% of 16 pixels = 8 pixels—I need to cut the vw value in half as well: 50% of 1.333vw = 0.666vw.)
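
A quick check of that arithmetic at both ends of the range (assuming the usual 16 pixel default): at 1200 pixels wide, 0.666vw is 8 pixels, so 50% + 0.666vw works out to 8px + 8px = 16px, which is the 100% minimum. The 150% maximum (24 pixels) only kicks in once 0.666vw reaches 16 pixels, which happens at a viewport roughly 2400 pixels wide. In between those two widths, the font size scales smoothly from 16 pixels up to 24.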

So I’ve got the CSS rule I want. I dropped it into the top of my file and…

I got an error.

There was nothing wrong with my CSS. The problem was that I was dropping it into a Sass file (.scss).

Perhaps I am showing my age. Do people even use Sass any more? I hear that post-processors usurped Sass’s dominance (although no-one’s ever been able to explain to me why they’re different to pre-processors like Sass; they both process something you’ve written into something else). Or maybe everyone’s just writing their CSS in JS now. I hear that’s a thing.

The Session is a looooong-term project so I’m very hesitant to use any technology that won’t stand the test of time. When I added Sass into the mix, back in—I think—2012 or so, I wasn’t sure whether it was the right thing to do, from a long-term perspective. But it did offer some useful functionality so I went ahead and used it.

Now, eight years later, it was having a hard time dealing with the new clamp() function. Specifically, it didn’t like the values being calculated through the addition of multiple units. I think it was clashing with Sass’s in-built ability to add units together.

I started to ask myself whether I should still be using Sass. I looked at which features I was using…

Variables. Well, now we’ve got CSS custom properties, which are even more powerful than Sass variables because they can be updated in real time. Sass variables are like const. CSS custom properties are like let.

Mixins. These can be very useful, but now there’s a lot that you can do just in CSS with calc(). The built-in darken() and lighten() functions are handy though when it comes to colours.

Nesting. I’ve never been a fan. I know it can make the source files look tidier, but I find it can sometimes obfuscate what your final selectors are going to look like. So this wasn’t something I was using much anyway.

Multiple files. Ah! This is the thing I would miss most. Having separate .scss files for separate interface elements is very handy!

But globbing a bunch of separate .scss files into one .css file isn’t really a Sass task. That’s what build tools are for. In fact, that’s what I was already doing with my JavaScript files; I write them as individual .js files that then get concatenated into one .js file using Grunt.

(Yes, this project uses Grunt. I told you I was showing my age. But, you know what? It works. Though seeing as I’m mostly using it for concatenation, I could probably replace it with a makefile. If I’m going to use old technology, I might as well go all the way.)

I swapped out Sass variables for CSS custom properties, mixins for calc(), and removed what little nesting I was doing. Then I stripped the Sass parts out of my Grunt file and replaced them with some concatenation and minification tasks. All of this makes no difference to the actual website, but it means I’ve got one less dependency …and I can use clamp()!
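
For what it’s worth, the swap looks something like this (a simplified, made-up example rather than the actual code from The Session):

/* Before, in Sass: a compile-time constant */
$spacing: 1rem;
.card {
  padding: $spacing * 2;
}

/* After, in plain CSS: a custom property plus calc() */
:root {
  --spacing: 1rem;
}
.card {
  padding: calc(var(--spacing) * 2);
}
@media (min-width: 60em) {
  :root {
    --spacing: 1.5rem; /* unlike the Sass variable, this can change at runtime */
  }
}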

Remember a little while back when I was making a dark mode for my site? I made this observation:

Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop.

It feels like something similar has happened with tools like Sass. Sass was the hare. CSS is the tortoise. Sass blazed the trail, but now native CSS can achieve much the same result.

It’s like when we used to need something like jQuery to do DOM Scripting succinctly using CSS selectors. Then we got things like querySelector() in JavaScript so we no longer needed the trailblazer.

I’ve said it before and I’ll say it again: the goal of any good library should be to become so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.

You could argue that this is what happened with Flash. It certainly happened with jQuery and Sass. I’m pretty sure we’ll see the same cycle play out with frameworks like React.

Monday, May 4th, 2020

A decade apart

Today marks ten years since the publication of HTML5 For Web Designers, the very first book from A Book Apart.

I’m so proud of that book, and so honoured that I was the first author published by the web’s finest purveyors of brief books. I mean, just look at the calibre of their output since my stumbling start!

Here’s what I wrote ten years ago.

Here’s what Jason wrote ten years ago.

Here’s what Mandy wrote ten years ago.

Here’s what Jeffrey wrote ten years ago.

They started something magnificent. Ten years on, with Katel at the helm, it’s going from strength to strength.

Happy birthday, little book! And happy birthday, A Book Apart! Here’s to another decade!

A Book Apart authors, 1-6

Sunday, May 3rd, 2020

Television

What a time, as they say, to be alive. The Situation is awful in so many ways, and yet…

In this crisis, there is also opportunity—the opportunity to sit on the sofa, binge-watch television and feel good about it! I mean, just think about it: when in the history of our culture has there been a time when the choice between running a marathon or going to the gym or staying at home watching TV could be resolved with such certitude? Stay at home and watch TV, of course! It’s the only morally correct choice. Protect the NHS! Save lives! Gorge on box sets!

What you end up watching doesn’t really matter. If you want to binge on Love Island or Tiger King, go for it. At this moment in time, it’s all good.

I had an ancient Apple TV device that served me well for years. At the beginning of The Situation, I decided to finally upgrade to a more modern model so I could get to more streaming services. Once I figured out how to turn off the unbelievably annoying sounds and animations, I got it set up with some subscription services. Should it be of any interest, here’s what I’ve been watching in order to save lives and protect the NHS…

Watchmen, Now TV

Superb! I suspect you’ll want to have read Alan Moore’s classic book to fully enjoy this series, set in a parallel present extrapolated from that book’s ‘80s setting. Like that book, what appears to be a story about masked vigilantes packs much, much deeper themes. I have a hunch that if Moore himself were forced to watch it, he might even offer some grudging approval.

Devs, BBC iPlayer

Ex Machina meets The Social Network in Alex Garland’s first TV show. I was reading David Deutsch while I was watching this, which felt like getting an extra bit of world-building. I think this might have worked better in the snappier context of a film, but it makes for an enjoyable saunter as a series. Style outweighs substance, but the style is strong enough to carry it.

Breeders, Now TV

Genuinely hilarious. Watch the first episode and see how many times you laugh guiltily. It gets a bit more sentimental later on, but there’s a wonderfully mean streak throughout that keeps the laughter flowing. If you are a parent of small children though, this may feel like being in a rock band watching Spinal Tap—all too real.

The Mandalorian, Disney Plus

I cannot objectively evaluate this. I absolutely love it, but that’s no surprise. It’s like it was made for me. The execution of each episode is, in my biased opinion, terrific. Read what Nat wrote about it. I agree with everything they said.

Westworld, Now TV

The third series is wrapping up soon. I’m enjoying this series immensely. It’s got a real cyberpunk sensibility; not in a stupid Altered Carbon kind of way, but in a real Gibsonian bit of noirish fun. Like Devs, it’s not as clever as it thinks it is, but it’s thoroughly entertaining all the same.

Tales From The Loop, Amazon Prime

The languid pacing means this isn’t exactly a series of cliffhangers, but it will reward you for staying with it. It avoids the negativity of Black Mirror and instead maintains a more neutral viewpoint on the unexpected effects of technology. At its best, it feels like an updated take on Ray Bradbury’s stories of smalltown America (like the episode directed by Jodie Foster featuring a cameo by Shane Carruth—the time traveller’s time traveller).

Years and Years, BBC iPlayer

A near-future family and political drama by Russell T Davies. Subtlety has never been his strong point and the polemic aspects of this are far too on-the-nose to take seriously. Characters will monologue for minutes while practically waving a finger at you out of the television set. But it’s worth watching for Emma Thompson’s performance as an all-too-believable populist politician. Apart from a feelgood final episode, it’s not light viewing, so maybe not the best quarantine fodder.

For All Mankind, Apple TV+

An ahistorical space race that’s a lot like Mary Robinette Kowal’s Lady Astronaut books. The initial premise—that Alexei Leonov beats Neil Armstrong to a moon landing—is interesting enough, but it really picks up from episode three. Alas, that momentum isn’t really kept up for the whole series; it reverts to a more standard kind of drama from about halfway through. Still worth seeing though. It’s probably the best show on Apple TV+, but that says more about the paucity of the selection on there than it does about the quality of this series.

Avenue 5, Now TV

When it’s good, this space-based comedy is chucklesome, but it kind of feels like Armando Iannucci lite.

Picard, Amazon Prime

It’s fine. Michael Chabon takes the world of Star Trek in some interesting directions, but it never feels like it’s allowed to veer too far away from the established order.

The Outsider, Now TV

A tense and creepy Stephen King adaptation. I enjoyed the mystery of the first few episodes more than the later ones. Once the supernatural rules are established, it’s not quite as interesting. There are some good performances here, but the series gives off a vibe of believing it’s more important than it really is.

Better Call Saul, Netflix

The latest series (four? I’ve lost count) just wrapped up. It’s all good stuff, even knowing how some of the pieces need to slot into place for Breaking Bad.

Normal People, BBC iPlayer

I heard this was good so I went to the BBC iPlayer app and hit play. “Pretty good stuff”, I thought after watching that episode. Then I noticed that it said Episode Twelve. I had watched the final episode first. Doh! But, y’know, watching from the start, the foreknowledge of how things turn out isn’t detracting from the pleasure at all. In fact, I think you could probably watch the whole series completely out of order. It’s more of a tone poem than a plot-driven series. The characters themselves matter more than what happens to them.

Hunters, Amazon Prime

A silly 70s-set jewsploitation series with Al Pacino. The enjoyment comes from the wish fulfillment of killing Nazis, which would be fine except for the way that the Holocaust is used for character development. The comic-book tone of the show clashes very uncomfortably with that subject matter. The Shoah is not a plot device. This series feels like what we would get if Tarantino made television (and not in a good way).

Thursday, April 30th, 2020

User agents

I was on the podcast A Question Of Code recently. It was fun! The podcast is aimed at people who are making a career change into web development, so it’s right up my alley.

I sometimes get asked about what a new starter should learn. On the podcast, I mentioned a post I wrote a while back with links to some great resources and tutorials. As I said then:

For web development, start with HTML, then CSS, then JavaScript (and don’t move on to JavaScript too quickly—really get to grips with HTML and CSS first).

That’s assuming you want to be a good well-rounded web developer. But it might be that you need to get a job as quickly as possible. In that case, my advice would be very different. I would advise you to learn React.

Believe me, I take no pleasure in giving that advice. But given the reality of what recruiters are looking for, knowing React is going to increase your chances of getting a job (something that’s reflected in the curricula of coding schools). And it’s always possible to work backwards from React to the more fundamental web technologies of HTML, CSS, and JavaScript. I hope.

Regardless of your initial route, what’s the next step? How do you go from starting out in web development to being a top-notch web developer?

I don’t consider myself to be a top-notch web developer (far from it), but I am very fortunate in that I’ve had the opportunity to work alongside some tippety-top-notch developers at Clearleft: Trys, Cassie, Danielle, Mark, Graham, Charlotte, Andy, and Natalie.

They—and other top-notch developers I’m fortunate to know—have something in common. They prioritise users. Sure, they’ll all have their favourite technologies and specialised areas, but they don’t lose sight of who they’re building for.

When you think about it, there’s quite a power imbalance between users and developers on the web. Users can—ideally—choose which web browser to use, and maybe make some preference changes if they know where to look, but that’s about it. Developers dictate everything else—the technology that a website will use, the sheer amount of code shipped over the network to the user, whether the site will be built in a fragile or a resilient way. Users are dependent on developers, but developers don’t always act in the best interests of users. It’s a classic example of the principal-agent problem:

The principal–agent problem, in political science and economics (also known as agency dilemma or the agency problem) occurs when one person or entity (the “agent”), is able to make decisions and/or take actions on behalf of, or that impact, another person or entity: the “principal”. This dilemma exists in circumstances where agents are motivated to act in their own best interests, which are contrary to those of their principals, and is an example of moral hazard.

A top-notch developer never forgets that they are an agent, and that the user is the principal.

But is it realistic to expect web developers to be so focused on user needs? After all, there’s a whole separate field of user experience design that specialises in this focus. It hardly seems practical to suggest that a top-notch developer needs to first become a good UX designer. There’s already plenty to focus on when it comes to just the technology side of front-end development.

So maybe this is too simplistic a way of defining the principal-agent relationship between users and developers:

user :: developer

There’s something that sits in between, mediating that relationship. It’s a piece of software that in the world of web standards is even referred to as a “user agent”: the web browser.

user :: web browser :: developer

So if making the leap to understanding users seems too much of a stretch, there’s an intermediate step. Get to know how web browsers work. As a web developer, if you know what web browsers “like” and “dislike”, you’re well on the way to making great user experiences. If you understand the pain points for browsers when they’re parsing and rendering your code, you’ve got a pretty good proxy for understanding the pain points that your users are experiencing.

Tuesday, April 28th, 2020

Modified machete

The Rise Of Skywalker arrives on Disney Plus on the fourth of May (a date often referred to as Star Wars Day, even though May 25th is and always will be the real Star Wars Day). Time to begin a Star Wars movie marathon. But in which order?

Back when there were a mere two trilogies, this was already a vexing problem if someone were watching the films for the first time. You could watch the six films in episode order:

  1. The Phantom Menace
  2. Attack Of The Clones
  3. Revenge Of The Sith
  4. A New Hope
  5. The Empire Strikes Back
  6. Return Of The Jedi

But then you’re spoiling the grand reveal in episode five.

Alright then, how about release order?

  1. A New Hope
  2. The Empire Strikes Back
  3. Return Of The Jedi
  4. The Phantom Menace
  5. Attack Of The Clones
  6. Revenge Of The Sith

But then you’re front-loading the big pay-off, and you’re finishing with a big set-up.

This conundrum was solved with the machete order. It suggests omitting The Phantom Menace, not because it’s crap, but because nothing happens in it that isn’t covered in the first five minutes of Attack Of The Clones. The machete order is:

  1. A New Hope
  2. The Empire Strikes Back
  3. Attack Of The Clones
  4. Revenge Of The Sith
  5. Return Of The Jedi

It’s kind of brilliant. You get to keep the big reveal in The Empire Strikes Back, and then through flashback, you see how this came to be. Best of all, the pay-off in Return Of The Jedi has even more resonance because you’ve just seen Anakin’s downfall in Revenge Of The Sith.

With the release of the new sequel trilogy, an adjusted machete order is a pretty straightforward way to see the whole saga:

  1. A New Hope
  2. The Empire Strikes Back
  3. The Phantom Menace (optional)
  4. Attack Of The Clones
  5. Revenge Of The Sith
  6. Return Of The Jedi
  7. The Force Awakens
  8. The Last Jedi
  9. The Rise Of Skywalker

Done. But …what if you want to include the standalone films too?

If you slot them in, in release order, they break up the flow:

  1. A New Hope
  2. The Empire Strikes Back
  3. The Phantom Menace (optional)
  4. Attack Of The Clones
  5. Revenge Of The Sith
  6. Return Of The Jedi
  7. The Force Awakens
  8. Rogue One
  9. The Last Jedi
  10. Solo
  11. The Rise Of Skywalker

I’m planning to watch all eleven films. This was my initial plan:

  1. Rogue One
  2. A New Hope
  3. The Empire Strikes Back
  4. The Phantom Menace
  5. Attack Of The Clones
  6. Revenge Of The Sith
  7. Solo
  8. Return Of The Jedi
  9. The Force Awakens
  10. The Last Jedi
  11. The Rise Of Skywalker

I definitely want to have Rogue One lead straight into A New Hope. The problem is where to put Solo. I don’t want to interrupt the Sith/Jedi setup/payoff.

So here’s my current plan, which I have already begun:

  1. Solo
  2. Rogue One
  3. A New Hope
  4. The Empire Strikes Back
  5. The Phantom Menace
  6. Attack Of The Clones
  7. Revenge Of The Sith
  8. Return Of The Jedi
  9. The Force Awakens
  10. The Last Jedi
  11. The Rise Of Skywalker

This way, the two standalone films work as world-building for the saga and don’t interrupt the flow once the main story is underway.

I think this works pretty well. Neither Solo nor Rogue One require any prior knowledge to be enjoyed.

And just in case you’re thinking that perhaps I’m overthinking it a bit and maybe I’ve got too much time on my hands …the world has too much time on its hands right now! Thanks to The Situation, I can not only take the time to plan and execute the viewing order for a Star Wars movie marathon, I can feel good about it. Stay home, they said. Literally saving lives, they said. Happy to oblige!

Monday, April 27th, 2020

Principles and priorities

I think about design principles a lot. I’m such a nerd for design principles, I even have a collection. I’m not saying all of the design principles in the collection are good—far from it! I collect them without judgement.

As for what makes a good design principle, I’ve written about that before. One aspect that everyone seems to agree on is that a design principle shouldn’t be an obvious truism. Take this as an example:

Make it usable.

Who’s going to disagree with that? It’s so agreeable that it’s practically worthless as a design principle. But now take this statement:

Usability is more important than profitability.

Ooh, now we’re talking! That’s controversial. That’s bound to surface some disagreement, which is a good thing. It’s now passing the reversibility test—it’s not hard to imagine an endeavour driven by the opposite:

Profitability is more important than usability.

In either formulation, what makes these statements better than the bland, toothless, agreeable statements—“Usability is good!”, “Profitability is good!”—is that they introduce the element of prioritisation.

I like design principles that can be formulated as:

X, even over Y.

It’s not saying that Y is unimportant, just that X is more important:

Usability, even over profitability.

Or:

Profitability, even over usability.

Design principles formulated this way help to crystallise priorities. Chris has written about the importance of establishing—and revisiting—priorities on any project:

Prioritisation isn’t and shouldn’t be a one-off exercise. The changing needs of your customers, the business environment and new opportunities from technology mean prioritisation is best done as a regular activity.

I’ve said it many times, but one of my favourite design principles comes from the HTML design principles. The priority of constituencies (it’s got “priorities” right there in the name!):

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

Or put another way:

  • Users, even over authors.
  • Authors, even over implementors.
  • Implementors, even over specifiers.
  • Specifiers, even over theoretical purity.

When it comes to evaluating technology for the web, I think there are a number of factors at play.

First and foremost, there’s the end user. If a technology choice harms the end user, avoid it. I’m thinking here of the kind of performance tax that a user has to pay when developers choose to use megabytes of JavaScript.

Mind you, some technologies have no direct effect on the end user: build tools, version control, toolchains …all the stuff that sits on your computer and never directly interacts with users. In that situation, the wants and needs of developers can absolutely take priority.

But as a general principle, I think this works:

User experience, even over developer experience.

Sadly, I think the current state of “modern” web development reverses that principle. Developer efficiency is prized above all else. Like I said, that would be absolutely fine if we’re talking about technologies that only developers are exposed to, but as soon as we’re talking about shipping those technologies over the network to end users, it’s negligent to continue to prioritise the developer experience.

I feel like personal websites are an exception here. What you do on your own website is completely up to you. But once you’re taking a paycheck to make websites that will be used by other people, it’s incumbent on you to realise that it’s not about you.

I’ve been talking about developers here, but this is something that applies just as much to designers. But I feel like designers go through that priority shift fairly early in their career. At the outset, they’re eager to make their mark and prove themselves. As they grow and realise that it’s not about them, they understand that the most appropriate solution for the user is what matters, even if that’s a “boring” tried-and-tested pattern that isn’t going to wow any fellow designers.

I’d like to think that developers would follow a similar progression, and I’m sure that some do. But I’ve seen many senior developers who have grown more enamoured with technologies instead of homing in on the most appropriate technology for end users. Maybe that’s because in many organisations, developers are positioned further away from the end users (whereas designers are ideally being confronted with their creations being used by actual people). If a lead developer is focused on the productivity, efficiency, and happiness of the dev team, it’s no wonder that their priorities end up overtaking the user experience.

I realise I’m talking in very binary terms here: developer experience versus user experience. I know it’s not always that simple. Other priorities also come into play, like business needs. Sometimes business needs are in direct conflict with user needs. If an online business makes its money through invasive tracking and surveillance, then there’s no point in having a design principle that claims to prioritise user needs above all else. That would be a hollow claim, and the design principle would become worthless.

Because that’s the point with design principles. They’re there to be used. They’re not a nice fluffy exercise in feeling good about your work. The priority of constituencies begins “in case of conflict”, and that’s exactly when a design principle matters—when it’s tested.

Suppose someone with a lot of clout in your organisation makes a decision, but that decision conflicts with your organisation’s design principles. Instead of having an opinion-based argument about who’s right or wrong, the previously agreed-upon design principles allow you to take ego out of the equation.

Prioritisation isn’t easy, and it gets harder the more factors come into play: user needs, business needs, technical constraints. But it’s worth investing the time to get agreement on the priority of your constituencies. And then formulate that agreement into design principles.

Saturday, April 25th, 2020

Reading

At the beginning of the year, Remy wrote about extracting Goodreads metadata so he could create his end-of-year reading list. More recently, Mark Llobrera wrote about how he created a visualisation of his reading history. In his case, he’s using JSON to store the information.

This kind of JSON storage is exactly what Tom Critchlow proposes in his post, Library JSON - A Proposal for a Decentralized Goodreads:

Thinking through building some kind of “web of books” I realized that we could use something similar to RSS to build a kind of decentralized GoodReads powered by indie sites and an underlying easy to parse format.

His proposal looks kind of similar to what Mark came up with. There’s a title, an author, an image, and some kind of date for when you started and/or finished reading the book.

Matt then points out that RSS gets close to the data format being suggested and asks: how about using RSS?

Rather than inventing a new format, my suggestion is that this is RSS plus an extension to deal with books. This is analogous to how the podcast feeds are specified: they are RSS plus custom tags.

Like Matt, I’m in favour of re-using existing wheels rather than inventing new ones, mostly to avoid a 927 situation.

But all of these proposals—whether JSON or RSS—involve the creation of a separate file, and yet the information is originally published in HTML. Along the lines of Matt’s idea, I could imagine extending the h-entry collection of class names to allow for books (or films, or other media). It already handles images (with u-photo). I think the missing fields are the date-related ones: when you start and finish reading. Those fields are present in a different microformat, h-event, in the form of dt-start and dt-end. Maybe they could be combined:


<article class="h-entry h-event h-review">
<h1 class="p-name p-item">Book title</h1>
<img class="u-photo" src="image.jpg" alt="Book cover.">
<p class="p-summary h-card">Book author</p>
<time class="dt-start" datetime="YYYY-MM-DD">Start date</time>
<time class="dt-end" datetime="YYYY-MM-DD">End date</time>
<div class="e-content">Remarks</div>
<data class="p-rating" value="5">★★★★★</data>
<time class="dt-published" datetime="YYYY-MM-DDThh:mm">Date of this post</time>
</article>

That markup is simultaneously a post (h-entry) and an event (h-event) and you can even throw in h-card for the book author (as well as h-review if you like to rate the books you read). It can be converted to RSS and also converted to .ics for calendars—those parsers are already out there. It’s ready for aggregation and it’s ready for visualisation.

I publish very minimal reading posts here on adactio.com. What little data is there isn’t very structured—I don’t even separate the book title from the author. But maybe I’ll have a little play around with turning these h-entries into combined h-entry/event posts.

Thursday, April 23rd, 2020

Lightweight

It’s been fascinating to see how television programmes have adapted to The Situation. It’s like there’s been a weird inversion with the YouTube aesthetic. Instead of YouTubers doing their utmost to emulate the look of professional television, now everyone on professional television looks like a YouTuber.

No more lighting or audio technicians. No more studio audiences. Heck, no more studios.

There are some kinds of TV programmes that are showing the strain. A lot of comedy formats just fall flat without the usual production values. But a lot of programmes work just fine. In fact, some of them might be better. Watching Mary Beard present Front Row Late from her house is an absolute delight. It feels more direct and honest without the artifice of a television studio. It kind of makes you wonder whether high production costs are really necessary when what you really care about is the content.

All of this is one big belaboured metaphor for websites.

In times of crisis, informational websites sometimes offer a “lite” version. Max has even made an emergency website kit:

The site contains only the bare minimum - no webfonts, no tracking, no unnecessary images. The entire thing should fit in a single HTTP request. It’s basically just a small, ultra-lean blog focused on maximum resilience and accessibility. The Service Worker takes it a step further from there so if you’ve visited the site once, the information is still accessible even if you lose network coverage.

Eric emphasises the importance of performance in his post Get Static:

I’m thinking here of sites for places like health departments (and pretty much all government services), hospitals and clinics, utility services, food delivery and ordering, and I’m sure there are more that haven’t occurred to me.  As much as you possibly can, get it down to static HTML and CSS and maybe a tiny bit of enhancing JS, and pare away every byte you can.

Tom Loosemore offers this advice to teams building new coronavirus services:

  1. Get a 4 year-old Android phone, and use it as your test/demo device.
  2. https://design-system.service.gov.uk is your friend.
  3. Full React isn’t your friend if it makes your service slow & inaccessible

Remember: This is for everyone.

Indeed, Gov.uk are usually a paragon of best practices in just about any situation. But they dropped the ball recently, as Matthew attests:

coronavirus.data.gov.uk is a static site, fetching and displaying remote data. It is also a 100% client-side JavaScript React site.

http://dracos.co.uk/made/coronavirus.data.gov.uk/ is 238K vs 770K (basics) on load. I’ve removed about 550K of JavaScript. It seems to work the same.

As Tom says:

One sign that your website isn’t meeting the needs of all your users is when Matthew Somerville gets sufficiently grumpy about it to do a proper version himself.

It’s true enough that Matthew excels at creating lightweight, accessible versions of services that are too bloated or buggy to use. His accessible Odeon project from back in the day is legendary. And I use his slimline version of the National Rail website all the time: traintimes.org.uk—it’s a terrifically performant progressive web app.

It’s thankless work though. It flies in the face of everything considered “modern” web development. (If you want to know the cost of “modern” framework-driven JavaScript-first web development, Tim has the numbers.) But Matthew is kind of a hero to me. I wish more developers would follow his example.

Maybe now, with this rush to make lightweight versions of valuable services, we might stop and reflect on whether we ever really needed all those added extras in the first place.

Hope springs eternal.

Update: Matthew has written about his process in Looking at coronavirus.data.gov.uk.

Monday, April 20th, 2020

Overlay gap

I think a lot about Danielle’s talk at Patterns Day last year.

Around about the six minute mark she starts talking about gaps and overlaps.

Gaps are where hidden complexity lives. If we don’t have a category to cover it, in effect it becomes invisible. But that doesn’t mean it’s not there. Unidentified gaps cause inconsistency and confusion.

Overlaps occur when two separate categories encompass some of the same areas of responsibility. They cause conflict, duplication of effort, and unnecessary friction.

This is the bit I keep thinking about. It’s such an insightful lens to view things through. On just about any project, tensions are almost always due to either gaps (“I thought someone else was doing that”) or overlaps (“Oh, you’re doing that? I thought we were doing that”).

When I was talking to Gerry on his new podcast recently, we were trying to figure out why web performance is in such a woeful state. I mused that there may be a gap. Perhaps designers think it’s a technical problem and developers think it’s a design problem. I guess you could try to bridge this gap by having someone whose job is to focus entirely on performance. But I suspect the better—but harder—solution is to create a shared culture of performance, of the kind Lara wrote about in her book:

Performance is truly everyone’s responsibility. Anyone who affects the user experience of a site has a relationship to how it performs. While it’s possible for you to single-handedly build and maintain an incredibly fast experience, you’d be constantly fighting an uphill battle when other contributors touch the site and make changes, or as the Web continues to evolve.

I suspect there’s a similar ownership gap at play when it comes to the ubiquitous obtrusive overlays that are plastered on so many websites these days.

Kirill Grouchnikov recently published a gallery of screenshots showcasing the beauty of modern mobile websites:

There are two things common between the websites in these screenshots that I took yesterday.

  1. They are beautifully designed, with great typography, clear branding, all optimized for readability.
  2. I had to install Firefox, Adblock Plus and uBlock Origin, as well as manually select and remove additional elements such as subscription overlays.

The web can be beautiful. Except it’s not right now.

How is this dissonance possible? How can designers and developers who clearly care about the user experience be responsible for unleashing such user-hostile interfaces?

PM/Legal/Marketing made me do it

I get that. But surely the solution can’t be to shrug our shoulders, pass the buck, and say “not my job.” Somebody designed each one of those obtrusive overlays. Somebody coded up each one and pushed them into production.

It’s clear that this is a problem of communication and understanding, rather than a technical problem. As always. We like to talk about how hard and complex our technical work is, but frankly, it’s a lot easier to get a computer to do what you want than to convince a human. Not least because you also need to understand what that other human wants. As Danielle says:

Recognising the gaps and overlaps is only half the battle. If we apply tools to a people problem, we will only end up moving the problem somewhere else.

Some issues can be solved with better tools or better processes. In most of our workplaces, we tend to reach for tools and processes by default, because they feel easier to implement. But as often as not, it’s not a technology problem. It’s a people problem. And the solution actually involves communication skills, or effective dialogue.

So let’s say it is someone in the marketing department who is pushing to have an obtrusive newsletter sign-up form get shoved in the user’s face. Talk to them. Figure out what their goals are—what outcome they’re hoping to get to. If they don’t seem to understand the user-experience implications, talk to them about that. But it needs to be a two-way conversation. You need to understand what they need before you start telling them what you want.

I realise that makes it sound patronisingly simple, and I know that in actuality it’s a Sisyphean task. It may be that genuine understanding between people is the wickedest of design problems. But even if this problem seems insurmountable, at least you’d be tackling the right problem.

Because the web can’t survive like this.

Friday, April 17th, 2020

Future Sync 2020

I was supposed to be in Plymouth yesterday, giving the opening talk at this year’s Future Sync conference. Obviously, that train journey never happened, but the conference did.

The organisers gave us speakers the option of pre-recording our talks, which I jumped on. It meant that I wouldn’t be reliant on a good internet connection at the crucial moment. It also meant that I was available to provide additional context—mostly in the form of a deluge of hyperlinks—in the chat window that accompanied the livestream.

The whole thing went very smoothly indeed. Here’s the video of my talk. It was The Layers Of The Web, which I’ve only given once before, at Beyond Tellerrand Berlin last November (in the Before Times).

As well as answering questions in the chat room, people were also asking questions in Sli.do. But rather than answering those questions there, I was supposed to respond in a social medium of my choosing. I chose my own website, with copies syndicated to Twitter.

Here are those questions and answers…

The first few questions were about last year’s CERN project, which opens the talk:

Based on what you now know from the CERN 2019 WorldWideWeb Rebuild project—what would you have done differently if you had been part of the original 1989 Team?

I responded:

Actually, I think the original WWW project got things mostly right. If anything, I’d correct what came later: cookies and JavaScript—those two technologies (which didn’t exist on the web originally) are the source of tracking & surveillance.

The one thing I wish had been done differently is I wish that JavaScript were a same-origin technology from day one:

https://adactio.com/journal/16099

Next question:

How excited were you when you initially got the call for such an amazing project?

My predictable response:

It was an unbelievable privilege! I was so excited the whole time—I still can hardly believe it really happened!

https://adactio.com/journal/14803

https://adactio.com/journal/14821

Later in the presentation, I talked about service workers and progressive web apps. I got a technical question about that:

Is there a limit to the amount of local storage a PWA can use?

I answered:

Great question! Yes, there are limits, but we’re generally talking megabytes here. It varies from browser to browser and depends on the available space on the device.

But files stored using the Cache API are less likely to be deleted than files stored in the browser cache.

More worrying is the announcement from Apple to only store files for a week of browser use:

https://adactio.com/journal/16619

Finally, there was a question about the over-arching theme of the talk…

Great talk, Jeremy. Do you encounter push-back when using the term “Progressive Enhancement”?

My response:

Yes! …And that’s why I never once used the phrase “progressive enhancement” in my talk. 🙂

There’s a lot of misunderstanding of the term. Rather than correct it, I now avoid it:

https://adactio.com/journal/9195

Instead of using the phrase “progressive enhancement”, I now talk about the benefits and effects of the technique: resilience, universality, etc.

Future Sync Distributed 2020