Journal tags: story

Browser history

I woke up today to a very annoying new bug in Firefox. The browser shits the bed in an unpredictable fashion when rounding up single pixel line widths in SVG. That’s quite a problem on The Session where all the sheet music is rendered in SVG. Those thin lines in sheet music are kind of important.

Browser bugs like these are very frustrating. There’s nothing you can do from your side other than filing a bug. The locus of control is very much with the developers of the browser.

Still, the occasional regression in a browser is a price I’m willing to pay for a plurality of rendering engines. Call me old-fashioned but I still value the ecological impact of browser diversity.

That said, I understand the argument for converging on a single rendering engine. I don’t agree with it but I understand it. It’s like this…

Back in the bad old days of the original browser wars, the browser companies just made shit up. That made life a misery for web developers. The Web Standards Project knocked some heads together. Netscape and Microsoft would agree to support standards.

So that’s where the bar was set: browsers agreed to work to the same standards, but competed by having different rendering engines.

There’s an argument to be made for raising that bar: browsers agree to work to the same standards, and have the same shared rendering engine, but compete by innovating in all other areas—the browser chrome, personalisation, privacy, and so on.

Like I said, I understand the argument. I just don’t agree with it.

One reason for zeroing in on a single rendering engine is that it’s just too damned hard to create or maintain an entirely different rendering engine now that web standards are incredibly powerful and complex. Only a very large company with very deep pockets can hope to be a rendering engine player. Google. Apple. Heck, even Microsoft threw in the towel and abandoned their rendering engine in favour of Blink and V8.

And yet. Andreas Kling recently wrote about the Ladybird browser. How we’re building a browser when it’s supposed to be impossible:

The ECMAScript, HTML, and CSS specifications today are (for the most part) stellar technical documents whose algorithms can be implemented with considerably less effort and guesswork than in the past.

I’ll be watching that project with interest. Not because I plan to use the browser. I’d just like to see some evidence against the complexity argument.

Meanwhile most other browser projects are building on the raised bar of a shared browser engine. Blisk, Brave, and Arc all use Chromium under the hood.

Arc is the most interesting one. Built by the wonderfully named Browser Company of New York, it’s attempting to inject some fresh thinking into everything outside of the rendering engine.

Experiments like Arc feel like they could have more in common with tools-for-thought software like Obsidian and Roam Research. Those tools build knowledge graphs of connected nodes. A kind of hypertext of ideas. But we’ve already got hypertext tools we use every day: web browsers. It’s just that they don’t do much with the accumulated knowledge of our web browsing. Our browsing history is a boring reverse chronological list instead of a cool-looking knowledge graph or timeline.

For inspiration we can go all the way back to Vannevar Bush’s genuinely seminal 1945 article, As We May Think. The device Bush imagined, the Memex, was a direct inspiration on Douglas Engelbart, Ted Nelson, and Tim Berners-Lee.

The article describes a kind of hypertext machine that worked with microfilm. Thanks to Tim Berners-Lee’s World Wide Web, we now have a global digital hypertext system that we access every day through our browsers.

But the article also described the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.

Our browsing histories are a kind of associative trail. They’re as unique as fingerprints. Even if everyone in the world started on the same URL, our browsing histories would quickly diverge.

Bush imagined that these associative trails could be shared:

The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Heck, making a useful browsing history could be a real skill:

There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record.

Taking something personal and making it public isn’t a new idea. It was what drove the wave of web 2.0 startups. Before Flickr, your photos were private. Before Delicious, your bookmarks were private. Before Last.fm, what music you were listening to was private.

I’m not saying that we should all make our browsing histories public. That would be a security nightmare. But I am saying there’s a lot of untapped potential in our browsing histories.

Let’s say we keep our browsing histories private, but make better use of them.

From what I’ve seen of large language model tools, the people getting the most use out of them are training them on a specific corpus. Like, “take this book and then answer my questions about the characters and plot” or “take this codebase and then answer my questions about the code.” If you treat these chatbots as calculators for words, they can be useful for some tasks.

Large language model tools are getting smaller and more portable. It’s not hard to imagine one getting bundled into a web browser. It feeds on your browsing history. The bigger your browsing history, the more useful it can be.

Except, y’know, for the times when it just makes shit up.

Vannevar Bush didn’t predict a Memex that would hallucinate bits of microfilm that didn’t exist.

Steam

Picture someone tediously going through a spreadsheet that someone else has filled in by hand and finding yet another error.

“I wish to God these calculations had been executed by steam!” they cry.

The year was 1821 and technically the spreadsheet was a book of logarithmic tables. The frustrated cry came from Charles Babbage, who channeled his frustration into a scheme to create the world’s first computer.

His difference engine didn’t work out. Neither did his analytical engine. He’d spend his later years taking his frustrations out on street musicians, which—as a former busker myself—earns him a hairy eyeball from me.

But we’ve all been there, right? Some tedious task that feels soul-destroying in its monotony. Surely this is exactly what machines should be doing?

I have a hunch that this is where machine learning and large language models might turn out to be most useful. Not in creating breathtaking works of creativity, but in menial tasks that nobody enjoys.

Someone was telling me earlier today about how they took a bunch of haphazard notes in a client meeting. When the meeting was done, they needed to organise those notes into a coherent summary. Boring! But ChatGPT handled it just fine.

I don’t think that use-case is going to appear on the cover of Wired magazine anytime soon but it might be a truer glimpse of the future than any of the breathless claims being eagerly bandied about in Silicon Valley.

You know the way we no longer remember phone numbers, because, well, why would we now that we have machines to remember them for us? I’d be quite happy if machines did that for the annoying little repetitive tasks that nobody enjoys.

I’ll give you an example based on my own experience.

Regular expressions are my kryptonite. I’m rubbish at them. Any time I have to figure one out, the knowledge seeps out of my brain before long. I think that’s because I kind of resent having to internalise that knowledge. It doesn’t feel like something a human should have to know. “I wish to God these regular expressions had been calculated by steam!”

Now I can get a chatbot with a large language model to write the regular expression for me. I still need to describe what I want, so I need to write the instructions clearly. But all the gobbledygook that I’m writing for a machine now gets written by a machine. That seems fair.

Mind you, I wouldn’t blindly trust the output. I’d take that regular expression and run it through a chatbot, maybe a different chatbot running on a different large language model. “Explain what this regular expression does,” would be my prompt. If my input into the first chatbot matches the output of the second, I’d have some confidence in using the regular expression.

A friend of mine told me about using a large language model to help write SQL statements. He described his database structure to the chatbot, and then described what he wanted to select.

Again, I wouldn’t use that output without checking it first. But again, I might use another chatbot to do that checking. “Explain what this SQL statement does.”

Playing chatbots off against each other like this is kinda how machine learning works under the hood: generative adversarial networks.

Of course, the task of having to validate the output of a chatbot by checking it with another chatbot could get quite tedious. “I wish to God these large language model outputs had been validated by steam!”

Sounds like a job for machines.

The past is a foreign country

I tried watching a classic Western this weekend, How The West Was Won. I did not make it far. Let’s just say that in the first few minutes, the Spencer Tracy voiceover that accompanies the sweeping vistas sets out an attitude toward the indigenous population that would not fly today.

It’s one thing to be repulsed by a film from another era, but it’s even more uncomfortable to revisit the films from your own teenage years.

Tim Carmody has written about the real hero of Top Gun:

Iceman’s concern for Maverick and the safety of his fighter unit is totally understandable. He tries, however awkwardly, to discuss Goose’s death with Maverick. There’s no discussion of blame. And when they’re assigned to fly into combat together, Iceman briefly and discreetly raises the issue of Maverick’s fitness to fly with his superior officer and withdraws his concern once a decision is made.

I know someone who didn’t watch Ferris Bueller’s Day Off until they were well into adulthood. Their sympathies lay squarely with Dean Rooney.

And I think we can all agree in hindsight that Walter Peck was completely correct in his assessment of the dangers in Ghostbusters.

Oh, and The Karate Kid was the real bully.

This week, George wrote I’ve fallen out of love with Indiana Jones. Indy’s attitude of “it belongs in a museum” is the same worldview that got the Parthenon Marbles into the British Museum (instead of, y’know, the Parthenon where they belong).

Adrian Hon invites us to imagine what it would be like if the tables were turned. He wrote a short piece of speculative fiction called The Taking of Stonehenge:

We selected these archaeological sites based on their importance to our collective understanding of human and galactic history, and their immediate risk of irreparable harm from pollution, climate change, neglect, and looting. We are sympathetic to claims that preserving these sites in their “original” context is important, but our duty of care outweighs such emotional considerations.

Work ethics

If you’re travelling around Ireland, you may come across some odd pieces of 19th century architecture—walls, bridges, buildings and roads that serve no purpose. They date back to The Great Hunger of the 1840s. These “famine follies” were the result of a public works scheme.

The thinking went something like this: people are starving so we should feed them but we can’t just give people food for nothing so let’s make people do pointless work in exchange for feeding them (kind of like an early iteration of proof of work for cryptobollocks on blockchains …except with a blockchain, you don’t even get a wall or a road, just ridiculous amounts of wasted energy).

This kind of thinking seems reprehensible from today’s perspective. But I still see its echo in the work ethic espoused by otherwise smart people.

Here’s the thing: there’s good work and there’s working hard. What matters is doing good work. Often, to do good work you need to work hard. And so people naturally conflate the two, thinking that what matters is working hard. But whether you work hard or not isn’t actually what’s important. What’s important is that you do good work.

If you can do good work without working hard, that’s not a bad thing. In fact, it’s great—you’ve managed to do good work and do it efficiently! But often this very efficiency is treated as laziness.

Sensible managers are rightly appalled by so-called productivity tracking because it measures exactly the wrong thing. Those instruments of workplace surveillance measure inputs, not outputs (and even measuring outputs is misguided when what really matters are outcomes).

They can attempt to measure how hard someone is working, but they don’t even attempt to measure whether someone is producing good work. If anything, they actively discourage good work; there’s plenty of evidence to show that more hours equates to less quality.

I used to think that there must be some validity to the belief that hard work has intrinsic value. It was a position that was espoused so often by those around me that it seemed a truism.

But after a few decades of experience, I see no evidence for hard work as an intrinsically valuable activity, much less a useful measurement. If anything, I’ve seen the real harm that can be caused by tying your self-worth to how much you’re working. That way lies burnout.

We no longer make people build famine walls or famine roads. But I wonder how many of us are constructing little monuments in our inboxes and calendars, filling those spaces with work to be done in an attempt to chase the rewards we’ve been told will result from hard graft.

I’d rather spend my time pursuing the opposite: the least work for the most people.

Speaking at CSS Day 2022

I’m very excited about speaking at CSS Day this year. My talk is called In And Out Of Style:

It’s an exciting time for CSS! It feels like new features are being added every day. And yet, through it all, CSS has managed to remain an accessible language for anyone making websites. Is this an inevitable part of the design of CSS? Or has CSS been formed by chance? Let’s take a look at the history—and some alternative histories—of the World Wide Web to better understand where we are today. And then, let’s cast our gaze to the future!

Technically, CSS Day won’t be the first outing for this talk but it will be the in-person debut. I had the chance to give the talk online last week at An Event Apart. Giving a talk online isn’t quite the same as speaking on stage, but I got enough feedback from the attendees that I’m feeling confident about giving the talk in Amsterdam. It went down well with the audience at An Event Apart.

If the description has you intrigued, come along to CSS Day to hear the talk in person. And if you like the subject matter, I’ve put together these links to go with the talk…

Blog posts

Presentations

Proposals (email)

Papers (PDF)

People (Wikipedia)

Ten episodes of the Web History podcast

For over a year now I’ve been recording the audio versions of Jay Hoffman’s excellent Web History series on CSS Tricks.

We’re up to ten chapters now. The audio version of each chapter is between 30 and 40 minutes long. That’s around 400 minutes of my voice: a good six and a half hours of me narrating the history of the web. That’s like an audio book!

The story so far covers:

  1. Birth
  2. Browsers
  3. The Website
  4. Search
  5. Publishing
  6. Web Design
  7. Standards
  8. CSS
  9. Community
  10. Browser Wars

…with more to come.

The audio is available as a podcast. You can subscribe to the RSS feed. It’s also available from Apple and Spotify.

That’ll give you plenty to listen to while you wait for the next season of the Clearleft podcast.

Publishing The State Of The Web

Back in April I gave a talk at An Event Apart Spring Summit. The talk was called The State Of The Web, and I’ve published the transcript. I’ve also published the video.

I put a lot of work into this talk and I think it paid off. I wrote about preparing the talk. I also wrote about recording it. I also published links related to the talk. It was an intense process, but a rewarding one.

I almost called the talk The Overview Effect. My main goal with the talk was to instil a sense of perspective. Hence the references to the famous Earthrise photograph.

On the one hand, I wanted the audience to grasp just how far the web has come. So the first half of the talk is a bit of a trip down memory lane, with a constant return to just how much we can accomplish in browsers today. It’s all very positive and upbeat.

Then I twist the knife. Having shown just how far we’ve progressed technically, I switch gears the moment I say:

The biggest challenges facing the World Wide Web today are not technical challenges.

Then I dive into those challenges, as I see them. It turns out that technical challenges would be preferable to the seemingly intractable problems of today’s web.

I almost wish I could’ve flipped the order: talk about the negative stuff first but then finish with the positive. I worry that the talk could be a bit of a downer. Still, I tried to finish on an optimistic note.

I won’t spoil it any more for you. Watch the video or have a read of The State Of The Web.

Stakeholders of styling

When I wrote about the new accent-color property in CSS, I pondered how much control a web developer should have over styling form controls:

Who are we to make that decision? Shouldn’t the user’s choice take primacy over our choices?

But then again, where do we draw the line? We’re allowed over-ride link colours. We’re allowed over-ride font choices.

Ultimately, I came down on the side of granting authors more control:

If developers don’t get a standardised way to customise native form controls, they’ll just recreate their own over-engineered versions.
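
The property itself couldn’t be much simpler. As a minimal sketch (the selectors and the colour value here are just placeholders, not a recommendation):

input[type="checkbox"],
input[type="radio"] {
  accent-color: #b52741;
}

The browser still draws the control; the author just gets a say in its hue.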

This question of “who gets to decide?” used to be much more prevalent in the early days of the web. One way to think about this is that there are three stakeholders involved in the presentation of a web page:

  1. The author of the page. “Author” is spec-speak for designer or developer.
  2. The user.
  3. The browser, or user agent: a piece of software that tries to balance the needs of both author and user. But, as the name implies, the user takes precedence.

These days we tend to think of web design as a single-stakeholder undertaking. The author decides how something should be presented and then executes that decision using CSS.

But as Eric once said, every line of CSS you write is a suggestion to the browser. That’s not how we think about CSS though. We think of CSS like a series of instructions rather than suggestions. Never mind respecting the user’s preferences; one of the first things we do is reset all the user agent’s styles.

In the early days of the web, more consideration was given to the idea of style suggestions rather than instructions. Heck, users could always over-ride any of your suggestions with their own user stylesheet. These days, users would need to install a browser extension to do the same thing.
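
For anyone who never used one, a user stylesheet was just ordinary CSS applied from the user’s side. A rough sketch of the kind of rule one might have contained (the values are made up):

body {
  font-size: 1.25em !important;
  background-color: #fffff0 !important;
}

Crucially, in the cascade an !important declaration from the user beats an !important declaration from the author: one of the few places where the user still formally outranks us.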

The first proposal for CSS had a concept called “influence”:

h2.font.size = 20pt 40%

Here, the requested influence is reduced to 40%. If a style sheet later in the cascade also requests influence over h2.font.size, up to 60% can be granted. When the document is rendered, a weighted average of the two requests is calculated, and the final font size is determined.

I think the only remnant of “influence” left in CSS is accidental. It’s in the specificity of selectors …and the !important declaration.

I think it’s a shame that user stylesheets are no longer a thing. But I get why they were dropped from browsers. They date from a time when it was mostly nerds using the web, before “regular folks” came on board. I understand why it became a little-used feature, suitable for being dropped. But the principle of it still rankles slightly.

But in recent years there has been a slight return to the multi-stakeholder concept of styling websites. Thanks to prefers-reduced-motion and prefers-color-scheme, a responsible author can choose to bow to the wishes of the user.
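
Honouring those preferences can be as simple as a couple of media queries. A rough sketch (the selectors and values are placeholders):

@media (prefers-color-scheme: dark) {
  html {
    background-color: #111;
    color: #eee;
  }
}

@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.01ms !important;
    transition-duration: 0.01ms !important;
  }
}

The user states a preference at the operating system level; the author decides how to respond to it.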

I was reminded of this when I added a dark mode to my website:

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.

Broad Band

I like to alternate between reading fiction and non-fiction. The fiction is often of the science variety. Actually, so is the non-fiction.

There was a non-fiction book I had queued up for a while and I finally got around to reading. Broad Band by Claire L. Evans. Now I’m kicking myself that I didn’t read it earlier. I think I might’ve been remembering how I found Mar Hicks’s Programmed Inequality to be a bit of a slog—a fascinating topic, but written in a fairly academic style. Broad Band covers some similar ground, but wow, is the writing style in a class of its own!

This book is pretty much the perfect mix. The topic is completely compelling—a history of women in computing. The stories are riveting—even when I thought I knew the history, this showed me how little I knew. And the voice of the book is pure poetry.

It’s not often that I read a book that I recommend wholeheartedly to everyone. I prefer to tailor my recommendations to individual situations. But in the case of Broad Band, I honestly think that anyone would enjoy it.

I absolutely loved it. So did Cory Doctorow:

Because she is a brilliant and lyrical writer she brings these women to life, turns them into fully formed characters, makes you see and feel their life stories, frustrations and triumphs.

Even the most celebrated women of tech history – Ada Lovelace, Grace Hopper – leap off the page as people, not merely historical personages or pioneers. Again, these are stories I thought I knew, and realized I didn’t.

Yes! That!

Read it for yourself and see what you think.

Of the web

I’m subscribed to a lot of blogs in my RSS reader. I follow some people because what they write about is very different to what I know about. But I also follow lots of people who have similar interests and ideas to me. So I’m not exactly in an echo chamber, but I do have the reverb turned up pretty high.

Sometimes these people post thoughts that are eerily similar to what I’ve been thinking about. Ethan has been known to do this. Get out of my head, Marcotte!

But even if Ethan wasn’t some sort of telepath, he’d still be in my RSS reader. We’re friends. Lots of the people in my RSS reader are my friends. When I read their words, I can hear their voices.

Then there are the people I’ve never met. Like Desirée García, Piper Haywood, or Jim Nielsen. Never met them, don’t know them, but damn, do I enjoy reading their blogs. Last year alone, I ended up linking to Jim’s posts ten different times.

Or Baldur Bjarnason. I can’t remember when I first came across his writing, but it really, really resonates with me. I probably owe him royalties for the amount of times I’ve cited his post Over-engineering is under-engineering.

His latest post is positively Marcottian in how it exposes what’s been fermenting in my own mind. But because he writes clearly, it really helps clarify my own thinking. It’s often been said that you should write to figure out what you think, and I can absolutely relate to that. But here’s a case where somebody else’s writing really helps to solidify my own thoughts.

Which type of novelty-seeking web developer are you?

It starts with some existentialist stock-taking. I can relate, what with the whole five decades thing. But then it turns the existential questioning to the World Wide Web itself, or rather, the people building the web.

In a way, it’s like taking the question of the great divide (front of the front end and back of the front end), and then turning it 45 degrees to reveal an entirely hidden dimension.

In examining the nature of the web, he hits on the litmus test of how you view encapsulation:

I mention this first as it’s the aspect of the web that modern web developers hate the most without even giving it a label. Single-Page-Apps and GraphQL are both efforts to eradicate the encapsulation that’s baked into the foundation of every layer of the web.

Most modern devs are trying to get rid of it but it’s one of the web’s most strategic advantages.

I hadn’t thought of this before.

By default, if you don’t go against the grain of the web, each HTTP endpoint is encapsulated from each other.

Moreover, all of this can happen really fast if you aren’t going overboard with your CSS and JS.

He finishes with a look at another of the web’s most powerful features: distribution. In between are the things that make the web webby: hypertext and flexibility (The Dao of the Web).

It’s the idea that the web isn’t a single fixed thing but a fluid multitude whose shape is dictated by its surroundings.

This resonates with me because it highlights two different ways of viewing the web.

On the one hand, you can see the web purely as a distribution channel. In the past you might have been distributing a Flash movie. These days you might be distributing a single page app. Either way, the web is there as a low-friction way of getting your creation in front of other people.

The other way of building for the web is to go with the web’s grain, embracing flexibility and playing to the strengths of the medium through progressive enhancement. This is the distinction I was getting at when I talked about something being not just on the web, but of the web.

With that mindset, Baldur then takes us through some of the technologies that he’s excited about, like SvelteKit and Hotwire. I think it’s the same mindset that got me excited about service workers. As Baldur says:

They are helping the web become better at being its own thing.

That’s my tagline right there.

Reading resonances

In today’s world of algorithmic recommendation engines, it’s nice to experience some serendipity every now and then. I remember how nice it was when two books I read in sequence had a wonderful echo in their descriptions of fermentation:

There’s a lovely resonance in reading @RobinSloan’s Sourdough back to back with @EdYong209’s I Contain Multitudes. One’s fiction, one’s non-fiction, but they’re both microbepunk.

Robin agreed:

OMG I’m so glad these books presented themselves to you together—I think it’s a great pairing, too. And certainly, some of Ed’s writing about microbes was in my head as I was writing the novel!

I experienced another resonant echo when I finished reading Rebecca Solnit’s A Paradise Built in Hell and then started reading Rutger Bregman’s Humankind. Both books share a common theme—that human beings are fundamentally decent—but the first chapter of Humankind mentions the exact same events that are chronicled in A Paradise Built in Hell: the Blitz, September 11th, Katrina, and more. Then he cites from that book directly. The two books were published a decade apart, and it was just happenstance that I ended up reading them in quick succession.

I recommend both books. Humankind is thoroughly enjoyable, but it has one maddeningly frustrating flaw. A Paradise Built in Hell isn’t the only work that influenced Bregman—he also cites Yuval Noah Harari’s Sapiens. Here’s what I thought of Sapiens:

Yuval Noah Harari has fixated on some ideas that make a mess of the narrative arc of Sapiens. In particular, he believes that the agricultural revolution was, as he describes it, “history’s biggest fraud.” In the absence of any recorded evidence for this, he instead provides idyllic descriptions of the hunter-gatherer lifestyle that have as much foundation in reality as the paleo diet.

Humankind echoes this fabrication. Again, the giveaway is that the footnotes dry up when the author is describing the idyllic pre-historical nomadic lifestyle. Compare it with, for instance, this description of the founding of Jericho—possibly the world’s oldest city—where researchers are at pains to point out that we can’t possibly know what life was like before written records.

I worry that Yuval Noah Harari’s imaginings are being treated as “truthy” by Rutger Bregman. It’s not a trend I like.

Still, apart from that annoying detour, Humankind is a great read. So is A Paradise Built in Hell. Try them together.

The Correct Material

I’ve been watching The Right Stuff on Disney Plus. It’s a modern remake of the ’80s film of the ’70s Tom Wolfe book of ’60s events.

It’s okay. The main challenge, as a viewer, is keeping track of which of the seven homogenous white guys is which. It’s like Merry, Pippin, Ant, Dec, and then some.

It’s kind of fun watching it after watching For All Mankind which has some of the same characters following a different counterfactual history.

The story being told is interesting enough (although Tom has pointed out that removing the Chuck Yeager angle really diminishes the narrative). But ultimately the tension is manufactured around a single event—the launch of Freedom 7—that was very much in the shadow of Gagarin’s historic Vostok 1 flight.

There are juicier stories to be told, but those stories come from Russia.

Some of these stories have been told in film. The Spacewalker told the amazing story of Alexei Leonov’s mission, though it messes with the truth about what happened with the landing and recovery—a real shame, considering that the true story is remarkable enough.

Imagine an alternative to The Right Stuff that relayed the drama of Soyuz 1—it’s got everything: friendship, rivalries, politics, tragedy…

I’d watch the heck out of that.

The Web History podcast

From day one, I’ve been a fan of Jay Hoffman’s project The History Of The Web—both the newsletter and the evolving timeline.

Recently Jay started publishing essays on web history over on CSS Tricks:

  1. Birth
  2. Browsers
  3. The Website
  4. Search

Round about that time, Chris floated the idea of having people record themselves reading blog posts. I immediately volunteered my services for the web history essays.

So now you can listen to me reading Jay’s words:

  1. Birth
  2. Browsers
  3. The Website
  4. Search

Each chapter is round about half an hour long so that’s a solid two hours or so of me yapping.

Should you wish to take the audio with you wherever you go, I’ve made a podcast feed for you. Pop that in your podcatching software of choice. Here it is on Apple Podcasts. Here it is on Spotify.

And if you just can’t get enough of my voice, there’s always the Clearleft podcast …although that’s mostly other people talking, thank goodness.

Influence

Hidde gave a great talk recently called On the origin of cascades (by means of natural selectors):

It’s been 25 years since the first people proposed a language to style the web. Since the late nineties, CSS lived through years of platform evolution.

It’s a lovely history lesson that reminded me of that great post by Zach Bloom a while back called The Languages Which Almost Became CSS.

The TL;DR timeline of CSS goes something like this:

Håkon and Bert joined forces and that’s what led to the Cascading Style Sheets language we use today.

Hidde looks at how the concept of the cascade evolved from those early days. But there’s another idea in Håkon’s proposal that fascinates me:

While the author (or publisher) often wants to give the documents a distinct look and feel, the user will set preferences to make all documents appear more similar. Designing a style sheet notation that fill both groups’ needs is a challenge.

The proposed solution is referred to as “influence”.

The user supplies the initial sheet which may request total control of the presentation, but — more likely — hands most of the influence over to the style sheets referenced in the incoming document.

So an author could try demanding that their lovely styles are to be implemented without question by specifying an influence of 100%. The proposed syntax looked like this:

h1.font.size = 24pt 100%

More reasonably, the author could specify, say, 40% influence:

h2.font.size = 20pt 40%

Here, the requested influence is reduced to 40%. If a style sheet later in the cascade also requests influence over h2.font.size, up to 60% can be granted. When the document is rendered, a weighted average of the two requests is calculated, and the final font size is determined.

Okay, that sounds pretty convoluted but then again, so is specificity.

This idea of influence in CSS reminds me of Cap’s post about The Sliding Scale of Giving a Fuck:

Hold on a second. I’m like a two-out-of-ten on this. How strongly do you feel?

I’m probably a six-out-of-ten, I replied after a couple moments of consideration.

Cool, then let’s do it your way.

In the end, the concept of influence in CSS died out, but user style sheets survived …for a while. Now they too are as dead as a dodo. Most people today aren’t aware that browsers used to provide a mechanism for applying your own visual preferences for browsing the web (kind of like Neopets or MySpace but for literally every single web page …just think of how empowering that was!).

Even if you don’t mourn the death of user style sheets—you can dismiss them as a power-user feature—I think it’s such a shame that the concept of shared influence has fallen by the wayside. Web design today is dictatorial. Designers and developers issue their ultimata in the form of CSS, even though technically every line of CSS you write is a suggestion to a web browser—not a demand.

I wish that web design were more of a two-way street, more of a conversation between designer and end user.

There are occasional glimpses of this mindset. Like I said when I added a dark mode to my website:

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.

Wildlife Photographer Of The Year on the Clearleft podcast

Episode three of the Clearleft podcast is here!

This one is a bit different. Whereas previous episodes focused on specific topics—design systems, service design—this one is a case study. And, wow, what a case study! The whole time I was putting the episode together, I kept thinking “The team really did some excellent work here.”

I’m not sure what makes more sense: listen to the podcast episode first and then visit the site in question …or the other way around? Maybe the other way around. In which case, be sure to visit the website for Wildlife Photographer Of The Year.

That’s right—Clearleft got to work with London’s Natural History Museum! A real treat.

Myself and @dhuntrods really enjoyed our visit to the digitisation department in the Natural History Museum. Thanks, Jen, Josh, Robin, Phaedra, and @scuff_el!

This episode of the podcast ended up being half an hour long. It should probably be shorter but I just couldn’t bring myself to cut any of the insights that Helen, James, Chris, and Trys were sharing. I’m probably too close to the subject matter to be objective about it. I’m hoping that others will find it equally fascinating to hear about the process of the project. Research! Design! Dev! This has got it all.

I had a lot of fun with the opening of the episode. I wanted to create a montage effect like the scene-setting opening of a film that has overlapping news reports. I probably spent far too long doing it but I’m really happy with the final result.

And with this episode, we’re halfway through the first season of the podcast already! I figured a nice short run of six episodes is enough to cover a fair bit of ground and give a taste of what the podcast is aiming for, without it turning into an overwhelming number of episodes in a backlog for you to catch up with. Three down and three to go. Seems manageable, right?

Anyway, enough of the backstory. If you haven’t already subscribed to the Clearleft podcast, you should do that. Then do these three things in whichever order you think works best:

Modified machete

The Rise Of Skywalker arrives on Disney Plus on the fourth of May (a date often referred to as Star Wars Day, even though May 25th is and always will be the real Star Wars Day). Time to begin a Star Wars movie marathon. But in which order?

Back when there were a mere two trilogies, this was already a vexing problem if someone were watching the films for the first time. You could watch the six films in episode order:

  1. The Phantom Menace
  2. Attack Of The Clones
  3. Revenge Of The Sith
  4. A New Hope
  5. The Empire Strikes Back
  6. Return Of The Jedi

But then you’re spoiling the grand reveal in episode five.

Alright then, how about release order?

  1. A New Hope
  2. The Empire Strikes Back
  3. Return Of The Jedi
  4. The Phantom Menace
  5. Attack Of The Clones
  6. Revenge Of The Sith

But then you’re front-loading the big pay-off, and you’re finishing with a big set-up.

This conundrum was solved with the machete order. It suggests omitting The Phantom Menace, not because it’s crap, but because nothing happens in it that isn’t covered in the first five minutes of Attack Of The Clones. The machete order is:

  1. A New Hope
  2. The Empire Strikes Back
  3. Attack Of The Clones
  4. Revenge Of The Sith
  5. Return Of The Jedi

It’s kind of brilliant. You get to keep the big reveal in The Empire Strikes Back, and then through flashback, you see how this came to be. Best of all, the pay-off in Return Of The Jedi has even more resonance because you’ve just seen Anakin’s downfall in Revenge Of The Sith.

With the release of the new sequel trilogy, an adjusted machete order is a pretty straightforward way to see the whole saga:

  1. A New Hope
  2. The Empire Strikes Back
  3. The Phantom Menace (optional)
  4. Attack Of The Clones
  5. Revenge Of The Sith
  6. Return Of The Jedi
  7. The Force Awakens
  8. The Last Jedi
  9. The Rise Of Skywalker

Done. But …what if you want to include the standalone films too?

If you slot them in in release order, they break up the flow:

  1. A New Hope
  2. The Empire Strikes Back
  3. The Phantom Menace (optional)
  4. Attack Of The Clones
  5. Revenge Of The Sith
  6. Return Of The Jedi
  7. The Force Awakens
  8. Rogue One
  9. The Last Jedi
  10. Solo
  11. The Rise Of Skywalker

I’m planning to watch all eleven films. This was my initial plan:

  1. Rogue One
  2. A New Hope
  3. The Empire Strikes Back
  4. The Phantom Menace
  5. Attack Of The Clones
  6. Revenge Of The Sith
  7. Solo
  8. Return Of The Jedi
  9. The Force Awakens
  10. The Last Jedi
  11. The Rise Of Skywalker

I definitely want to have Rogue One lead straight into A New Hope. The problem is where to put Solo. I don’t want to interrupt the Sith/Jedi setup/payoff.

So here’s my current plan, which I have already begun:

  1. Solo
  2. Rogue One
  3. A New Hope
  4. The Empire Strikes Back
  5. The Phantom Menace
  6. Attack Of The Clones
  7. Revenge Of The Sith
  8. Return Of The Jedi
  9. The Force Awakens
  10. The Last Jedi
  11. The Rise Of Skywalker

This way, the two standalone films work as world-building for the saga and don’t interrupt the flow once the main story is underway.

I think this works pretty well. Neither Solo nor Rogue One require any prior knowledge to be enjoyed.

And just in case you’re thinking that perhaps I’m overthinking it a bit and maybe I’ve got too much time on my hands …the world has too much time on its hands right now! Thanks to The Situation, I can not only take the time to plan and execute the viewing order for a Star Wars movie marathon, I can feel good about it. Stay home, they said. Literally saving lives, they said. Happy to oblige!

A reading of The Enormous Space by J.G. Ballard

Staying at home triggered a memory for me. I remembered reading a short story many years ago. It was by J.G. Ballard, and it described a man who makes the decision not to leave the house.

Being a J.G. Ballard story, it doesn’t end there. Over the course of the story, the house grows and grows in size, forcing the protagonist into ever-smaller refuges within his own home. It really stuck with me.

I tried tracking it down with some Duck Duck Going. Searching for “j.g. ballard weird short story” doesn’t exactly narrow things down, but eventually I spotted the book that I had read the story in. It was called War Fever. I think I read it back when I was living in Germany, so that would’ve been in the ’90s. I certainly don’t have a copy of the book any more.

But I was able to look up a table of contents and find a title for the story that was stuck in my head. It’s called The Enormous Space.

Alas, I couldn’t find any downloadable versions—War Fever doesn’t seem to be available for the Kindle.

Then I remembered the recent announcement from the Internet Archive that it was opening up the National Emergency Library. The usual limits on “checking out” books online are being waived while physical libraries remain closed.

I found The Complete Stories of J.G. Ballard and borrowed it just long enough to re-read The Enormous Space.

If anything, it’s creepier and weirder than I remembered. But it’s laced with more black comedy than I remembered.

I thought you might like to hear this story, so I made a recording of myself reading The Enormous Space.

Living Through The Future

You can listen to an audio version of Living Through The Future.

Usually when we talk about “living in the future”, it’s something to do with technology: smartphones, satellites, jet packs… But I’ve never felt more like I’m living in the future than during The Situation.

On the one hand, there’s nothing particularly futuristic about living through a pandemic. They’ve occurred throughout history and this one could’ve happened at any time. We just happen to have drawn the short straw in 2020. Really, this should feel like living in the past: an outbreak of a disease that disrupts everyone’s daily life? Nothing new about that.

But there’s something dizzyingly disconcerting about the dominance of technology. This is the internet’s time to shine. Think you’re going crazy now? Imagine what it would’ve been like before we had our network-connected devices to keep us company. We can use our screens to get instant updates about technologies of world-shaping importance …like beds and face masks. At the same time as we’re starting to worry about getting hold of fresh vegetables, we can still make sure that whatever meals we end up making, we can share them instantaneously with the entire planet. I think that, despite William Gibson’s famous invocation, I always figured that the future would feel pretty futuristic all ‘round—not lumpy with old school matters rubbing shoulders with technology so advanced that it’s indistinguishable from magic.

When I talk about feeling like I’m living in the future, I guess what I mean is that I feel like I’m living at a time that will become History with a capital H. I start to wonder what we’ll settle on calling this time period. The Covid Point? The Corona Pause? 2020-P?

At some point we settled on “9/11” for the attacks of September 11th, 2001 (being a fan of ISO-8601, I would’ve preferred 2001-09-11, but I’ll concede that it’s a bit of a mouthful). That was another event that, even at the time, clearly felt like part of History with a capital H. People immediately gravitated to using historical comparisons. In the USA, the comparison was Pearl Harbour. Outside of the USA, the comparison was the Cuban missile crisis.

Another comparison between 2001-09-11 and what we’re experiencing now is how our points of reference come from fiction. Multiple eyewitnesses in New York described the September 11th attacks as being “like something out of a movie.” For years afterwards, the climactic showdowns in superhero movies that demolished skyscrapers no longer felt like pure escapism.

For The Situation, there’s no shortage of prior art to draw upon for comparison. If anything, our points of reference should be tales of isolation like Robinson Crusoe. The mundane everyday tedium of The Situation can’t really stand up to comparison with the epic scale of science-fictional scenarios, but that’s our natural inclination. You can go straight to plague novels like Stephen King’s The Stand or Emily St. John Mandel’s Station Eleven. Or you can get really grim and cite Cormac McCarthy’s The Road. But you can go the other direction too and compare The Situation with the cozy catastrophes of John Wyndham like Day Of The Triffids (or just be lazy and compare it to any of the multitude of zombie apocalypses—an entirely separate kind of viral dystopia).

In years to come there will be novels set during The Situation. Technically they will be literary fiction—or even historical fiction—but they’ll feel like science fiction.

I remember the Chernobyl disaster having the same feeling. It was really happening, it was on the news, but it felt like scene-setting for a near-future dystopian apocalypse. Years later, I was struck when reading Wolves Eat Dogs by Martin Cruz Smith. In 2006, I wrote:

Halfway through reading the book, I figured out what it was: Wolves Eat Dogs is a Cyberpunk novel. It happens to be set in present-day reality but the plot reads like a science-fiction story. For the most part, the book is set in the post-apocalyptic landscape of Prypiat, near Chernobyl. This post-apocalyptic scenario just happens to be real.

The protagonist, Arkady Renko, is sent to this frightening hellish place following a somewhat far-fetched murder in Moscow. Killing someone with a minute dose of a highly radioactive material just didn’t seem like a very realistic assassination to me.

Then I saw the news about Alexander Litvinenko, the former Russian spy who died this week, quite probably murdered with a dose of polonium-210.

I’ve got the same tingling feeling about The Situation. Fact and fiction are blurring together. Past, present, and future aren’t so easy to differentiate.

I really felt it last week standing in the back garden, looking up at the International Space Station passing overhead on a beautifully clear and crisp evening. I try to go out and see the ISS whenever its flight path intersects with southern England. Usually I’d look up and try to imagine what life must be like for the astronauts and cosmonauts on board, confined to that habitat with nowhere to go. Now I look up and feel a certain kinship. We’re all experiencing a little dose of what that kind of isolation must feel like. Though, as the always-excellent Marina Koren points out:

The more experts I spoke with for this story, the clearer it became that, actually, we have it worse than the astronauts. Spending months cooped up on the ISS is a childhood dream come true. Self-isolating for an indefinite period of time because of a fast-spreading disease is a nightmare.

Whenever I look up at the ISS passing overhead I feel a great sense of perspective. “Look what we can do!”, I think to myself. “There are people living in space!”

Last week that feeling was still there but it was tempered with humility. Yes, we can put people in space, but here we are with our entire way of life put on pause by something so small and simple that it’s technically not even a form of life. It’s like we’re the Martians in H.G. Wells’s War Of The Worlds: all-conquering and formidable, but brought low by a dose of dramatic irony, a Virus Ex Machina.

A curl in every port

A few years back, Zach Bloom wrote The History of the URL: Path, Fragment, Query, and Auth. He recently expanded on it and republished it on the Cloudflare blog as The History of the URL. It’s well worth the time to read the whole thing. It’s packed full of fascinating tidbits.

In the section on ports, Zach says:

The timeline of Gopher and HTTP can be evidenced by their default port numbers. Gopher is 70, HTTP 80. The HTTP port was assigned (likely by Jon Postel at the IANA) at the request of Tim Berners-Lee sometime between 1990 and 1992.

Ooh, I can give you an exact date! It was January 24th, 1992. I know this because of the hack week at CERN last year to recreate the first ever web browser.

Kimberly was spelunking down the original source code, when she came across this line in the HTUtils.h file:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

We showed this to Jean-François Groff, who worked on the original web technologies like libwww, the forerunner to libcurl. He remembers that day. It felt like they had “made it”, receiving the official blessing of Jon Postel (in the same RFC, incidentally, that gave port 70 to Gopher).

Then he told us something interesting about the next line of code:

#define OLD_TCP_PORT 2784 /* Try the old one if no answer on 80 */

Port 2784? That seems like an odd choice. Most of us would choose something easy to remember.

Well, it turns out that 2784 is easy to remember if you’re Tim Berners-Lee.

Those were the last four digits of his parents’ phone number.

Telling the story of performance

At Clearleft, we’ve worked with quite a few clients on site redesigns. It’s always a fascinating process, particularly in the discovery phase. There’s that excitement of figuring out what’s currently working, what’s not working, and what’s missing completely.

The bulk of this early research phase is spent diving into the current offering. But it’s also the perfect time to do some competitor analysis—especially if we want some answers to the “what’s missing?” question.

It’s not all about missing features though. Execution is equally important. Our clients want to know how their users’ experience shapes up compared to the competition. And when it comes to user experience, performance is a huge factor. As Andy says, performance is a UX problem.

There’s no shortage of great tools out there for measuring (and monitoring) performance metrics, but they’re mostly aimed at developers. Quite rightly. Developers are the ones who can solve most performance issues. But that does make the tools somewhat impenetrable if you don’t speak the language of “time to first byte” and “first contentful paint”.

When we’re trying to show our clients the performance of their site—or their competitors—we need to tell a story.

Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.

You can go to webpagetest.org/easy if you don’t need to tweak settings much beyond the typical site visit (slow 3G on mobile). Pop in your client’s URL and, when the test is done, you get a valuable but impenetrable waterfall chart. It’s not exactly the kind of thing I’d want to present to a client.

Fortunately there’s an attention-grabbing output from each test: video. Download the video of your client’s site loading. Then repeat the test with the URL of a competitor. Download that video too. Repeat for as many competitor URLs as you think appropriate.

Now take those videos and play them side by side. Presentation software like Keynote is perfect for showing multiple videos like this.

This is so much more effective than showing a table of numbers! Clients get to really feel the performance difference between their site and their competitors.

Running all those tests can take time though. But there are some other tools out there that can give a quick dose of performance information.

SpeedCurve recently unveiled Page Speed Benchmarks. You can compare the performance of sites within a particular sector like travel, retail, or finance. By default, you’ll get a filmstrip view of all the sites loading side by side. Click through on each one and you can get the video too. It might take a little while to gather all those videos, but it’s quicker than using Web Page Test directly. And it might be that the filmstrip view is impactful enough for telling your performance story.

If, during your discovery phase, you find that performance is being badly affected by third-party scripts, you’ll need some way to communicate that. Request Map Generator is fantastic for telling that story in a striking visual way. Pop the URL in there and then take a screenshot of the resulting visualisation.

The beginning of a redesign project is also the time to take stock of current performance metrics so that you can compare the numbers after your redesign launches. Crux.run is really great for tracking performance over time. You won’t get any videos but you will get some very appealing charts and graphs.

Web Page Test, Page Speed Benchmarks, and Request Map Generator are great for telling the story of what’s happening with performance right now. Crux.run balances that with the story of performance over time.

Measuring performance is important. Communicating the story of performance is equally important.