Journal tags: ia

Lovers in a dangerous time

Being in Croatia last week got me thinking about the country’s history.

I remember the break-up of Yugoslavia, but I was quite out of touch with the news for a while back in 1991. That’s because I was hitch-hiking and busking around Europe with my musical partner Polly from Cornwall. I had my mandolin, she had her fiddle.

We went from Ireland to England to France to Germany to Czechoslovakia (still a single country back then), to Austria to Italy, back to France, and back to England. A loop around Europe.

We set off on August 21st, 1991. The only reason I know the date is because I remember we had been to a gig in Cork the night before.

Sonic Youth were playing in Sir Henry’s (a great venue that no longer exists). The support band was a group from Seattle called Nirvana. I remember that some of my friends decided to skip the support band to stay in the pub next door until Sonic Youth came on because the pints were cheaper there.

By the time Polly and I got back from our travels, Nirvana were the biggest band on the planet. It all happened very quickly.

The same could be said for the situation in Yugoslavia.

I remember when we were stuck for a day at a petrol station in the Alps trying to get from Austria to Italy. There was a bureau de change listing currency exchange rates. This was before the euro came in so there were lots of different currencies: pounds, francs, lira, deutsche marks. Then there was the listing for the Yugoslav dinar. It read:

  • We buy: 00.00
  • We sell: 00.00

That really struck me, seeing the situation summarised so clinically.

But what really got to me was an encounter in Vienna.

Polly and I did well in that city. On our first evening of busking, not only did we make some good money, but we also met a local folk singer. This young man very generously took us in and put us up in his flat.

At some point during our stay, we were on one of the city’s trams. That’s when we met another young couple who were on the road. Somehow there was always a connection between fellow travellers. I can’t remember who spoke to who first, but we bonded straight away.

It soon became clear that our situations were only superficially similar. This was a young couple deeply in love. One of them was Serbian. The other was Croatian. It wasn’t safe for either of them back where they used to call home.

I could return home at any point. I always knew that when I was sleeping rough, or struggling to make enough money to eat.

They couldn’t return. All they wanted was to be together somewhere safe. They started asking us about Ireland and England. “Do you think they’d give us asylum?” they asked with so much hope. It broke my heart to see their desperation, the pleading look in their eyes.

I felt so useless. I wished there was something I could’ve done for them.

I think about them a lot.

Travels

He drew a deep breath. ‘Well, I’m back,’ he said.

I know how you feel, Samwise Gamgee.

I have returned from my travels—a week aboard the Queen Mary 2 crossing the Atlantic, followed by a weekend in New York, finishing with a week in Saint Augustine, Florida.

The Atlantic crossing was just as much fun as last time. In fact it was better because this time Jessica and I got to share the experience with our dear friends Dan and Sue.

There was dressing up! There was precarious ballet! There were waves! There were even some dolphins!

The truth is that this kind of Atlantic crossing is a bit like cosplaying a former age of travel. You get out of it what you put into it. If you’re into LARPing as an Edwardian-era traveller, you’re going to have a good time.

We got very into it. Dressing up for dinner. Putting on a tux for the gala night. Donning masks for the masquerade evening.

[Photo: Me and Jessica all dressed up wearing eye masks. Dan and Sue in wild outfits wearing eye masks.]

It’s actually quite a practical way of travelling if you don’t mind being cut off from all digital communication for a week (this is a feature, not a bug). You adjust your clock by one hour most nights so that by the time you show up in New York, you’re on the right timezone with zero jetlag.

That was just as well because we had a packed weekend of activities in New York. By pure coincidence, two separate groups of friends were also in town from far away. We all met up and had a grand old time. Brunch in Tribeca; a John Cale concert in Prospect Park; the farmer’s market in Union Square; walking the High Line …good times with good friends.

[Photo: A brunch table with me and eight friends all smiling.]

New York was hot, but not as hot as what followed in Florida. A week lazing about on Saint Augustine beach. I ate shrimp every single day. I regret nothing.

[Photo: A sandy beach with gentle waves crashing under a blue sky with wisps of cloud.]

We timed our exit just right. We flew out of Florida before the tropical storm hit. Then we landed in Gatwick right before the air-traffic control chaos erupted.

I had one day of rest before going back to work.

Well, I say “work”, but the first item in my calendar was speaking at Web Summer Camp in Croatia. Back to the airport.

The talk went well, and I got to attend a performance workshop by Harry. But best of all was the location. Opatija is an idyllic paradise. Imagine crossing a web conference with White Lotus, but in a good way. It felt like a continuation of Florida, but with more placid clear waters.

[Photo: A beautiful old town interspersed with lush greenery sweeps down to a tranquil bay with blue/green water.]

But now I’m really back. And fortunately the English weather is playing along by being unseasonably warm. It’s as if the warm temperatures are following me around. I like it.

The syndicate

Social networks come and social networks go.

Right now, there’s a whole bunch of social networks coming (Blewski, Freds, Mastication) and one big one going, thanks to Elongate.

Me? I watch all of this unfold like Doctor Manhattan on Mars. I have no great connection to any of these places. They’re all just syndication endpoints to me.

I used to have a checkbox in my posting interface that said “Twitter”. If I wanted to add a copy of one of my notes to Twitter, I’d enable that toggle.

I have, of course, now removed that checkbox. Twitter is dead to me (and it should be dead to you too).

I used to have another checkbox next to that one that said “Flickr”. If I was adding a photo to one of my notes, I could toggle that to send a copy to my Flickr account.

Alas, that no longer works. Flickr only allows you to post 1000 photos before requiring a pro account. Fair enough. I’ve actually posted 20 times that amount since 2005, but I let my pro membership lapse a while back.

So now I’ve removed the “Flickr” checkbox too.

Instead I’ve now got a checkbox labelled “Mastodon” that sends a copy of a note to my Mastodon account.

When I publish a blog post like the one you’re reading now here on my journal, there’s yet another checkbox that says “Medium”. Toggling that checkbox sends a copy of my post to my page on Ev’s blog.

At least it used to. At some point that stopped working too. I was going to start debugging my code, but when I went to the documentation for the Medium API, I saw this:

This repository has been archived by the owner on Mar 2, 2023. It is now read-only.

I guess I missed the memo. I guess Medium also missed the memo, because developers.medium.com is still live. It proudly proclaims:

Medium’s Publishing API makes it easy for you to plug into the Medium network, create your content on Medium from anywhere you write, and expand your audience and your influence.

Not a word of that is accurate.

That page also has a link to the Medium engineering blog. Surely the announcement of the API deprecation would be published there?

Crickets.

Moving on…

I have an account on Bluesky. I don’t know why.

I was idly wondering about sending copies of my notes there when I came across a straightforward solution: micro.blog.

That’s yet another place where I have an account. They make syndication very straightforward. You can go to your account and point to a feed from your own website.

That’s it. Syndication enabled.

It gets better. Micro.blog can also cross-post to other services. One of those services is Bluesky. I gave permission to micro.blog to syndicate to Bluesky so now my notes show up there too.

It’s like dominoes falling: I post something on my website which updates my RSS feed which gets picked up by micro.blog which passes it on to Bluesky.

I noticed that one of the other services that micro.blog can post to is Medium. Hmmm …would that still work given the abandonment of the API?

I gave permission to micro.blog to cross-post to Medium when my feed of blog posts is updated. It seems to have worked!

We’ll see how long it lasts. We’ll see how long any of them last. Today’s social media darlings are tomorrow’s Friendster and MySpace.

When the current crop of services wither and die, my own website will still remain in full bloom.

Remote

Before The Situation, I used to work in the Clearleft studio quite a bit. Maybe I’d do a bit of work at home for an hour or two before heading in, but I’d spend most of my working day with my colleagues.

That all changed three years ago:

Clearleft is a remote-working company right now. I mean, that’s hardly surprising—just about everyone I know is working from home.

Clearleft has remained remote-first. We’ve still got our studio space, though we’ve cut back to just having one floor. But most of the time people are working from home. I still occasionally pop into the studio—I’m actually writing this in the studio right now—but mostly I work out of my own house.

It’s funny how the old ways of thinking have been flipped. If I want to get work done, I stay home. If I want to socialise, I go into the studio.

For a lot of the work I do—writing, podcasting, some video calls, maybe some coding—my home environment works better than the studio. In the Before Times I’d have to put on headphones to block out the distractions of a humming workplace. Of course I miss the serendipitous chats with my co-workers but that’s why it’s nice to still have the option of popping into the studio.

Jessica has always worked from home. Our flat isn’t very big but we’ve got our own separate spaces so we don’t disturb one another too much.

For a while now we’ve been thinking that we could just as easily work from another country. I was inspired by a (video) chat I had with Luke when he casually mentioned that he was in Cyprus. Why not? As long as the internet connection is good, the location doesn’t make any difference to the work.

So Jessica and I spent the last week working in Ortygia, Sicily.

It was pretty much the perfect choice. It’s not a huge bustling city. In fact it was really quiet. But there was still plenty to explore—winding alleyways, beautiful old buildings, and of course plenty of amazing food.

The time difference was just one hour. We used the extra hour in the morning to go to the market to get some of the magnificent local fruits and vegetables to make some excellent lunches.

We made sure that we found an AirBnB place with a good internet connection and separate workspaces. All in all, it worked out great. And because we were there for a week, we didn’t feel the pressure to run around to try to see everything.

We spent the days working and the evenings having a nice sundowner aperitivo followed by some pasta or seafood.

It was simultaneously productive and relaxing.

Browser history

I woke up today to a very annoying new bug in Firefox. The browser shits the bed in an unpredictable fashion when rounding up single pixel line widths in SVG. That’s quite a problem on The Session where all the sheet music is rendered in SVG. Those thin lines in sheet music are kind of important.

Browser bugs like these are very frustrating. There’s nothing you can do from your side other than filing a bug. The locus of control is very much with the developers of the browser.

Still, the occasional regression in a browser is a price I’m willing to pay for a plurality of rendering engines. Call me old-fashioned but I still value the ecological impact of browser diversity.

That said, I understand the argument for converging on a single rendering engine. I don’t agree with it but I understand it. It’s like this…

Back in the bad old days of the original browser wars, the browser companies just made shit up. That made life a misery for web developers. The Web Standards Project knocked some heads together. Netscape and Microsoft would agree to support standards.

So that’s where the bar was set: browsers agreed to work to the same standards, but competed by having different rendering engines.

There’s an argument to be made for raising that bar: browsers agree to work to the same standards, and have the same shared rendering engine, but compete by innovating in all other areas—the browser chrome, personalisation, privacy, and so on.

Like I said, I understand the argument. I just don’t agree with it.

One reason for zeroing in on a single rendering engine is that it’s just too damned hard to create or maintain an entirely different rendering engine now that web standards are incredibly powerful and complex. Only a very large company with very deep pockets can hope to be a rendering engine player. Google. Apple. Heck, even Microsoft threw in the towel and abandoned their rendering engine in favour of Blink and V8.

And yet. Andreas Kling recently wrote about the Ladybird browser. How we’re building a browser when it’s supposed to be impossible:

The ECMAScript, HTML, and CSS specifications today are (for the most part) stellar technical documents whose algorithms can be implemented with considerably less effort and guesswork than in the past.

I’ll be watching that project with interest. Not because I plan to use the browser. I’d just like to see some evidence against the complexity argument.

Meanwhile most other browser projects are building on the raised bar of a shared browser engine. Blisk, Brave, and Arc all use Chromium under the hood.

Arc is the most interesting one. Built by the wonderfully named Browser Company of New York, it’s attempting to inject some fresh thinking into everything outside of the rendering engine.

Experiments like Arc feel like they could have more in common with tools-for-thought software like Obsidian and Roam Research. Those tools build knowledge graphs of connected nodes. A kind of hypertext of ideas. But we’ve already got hypertext tools we use every day: web browsers. It’s just that they don’t do much with the accumulated knowledge of our web browsing. Our browsing history is a boring reverse chronological list instead of a cool-looking knowledge graph or timeline.

For inspiration we can go all the way back to Vannevar Bush’s genuinely seminal 1945 article, As We May Think. The device Bush imagined, the Memex, was a direct inspiration for Douglas Engelbart, Ted Nelson, and Tim Berners-Lee.

The article describes a kind of hypertext machine that worked with microfilm. Thanks to Tim Berners-Lee’s World Wide Web, we now have a global digital hypertext system that we access every day through our browsers.

But the article also described the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.

Our browsing histories are a kind of associative trail. They’re as unique as fingerprints. Even if everyone in the world started on the same URL, our browsing histories would quickly diverge.

Bush imagined that these associative trails could be shared:

The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Heck, making a useful browsing history could be a real skill:

There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record.

Taking something personal and making it public isn’t a new idea. It was what drove the wave of web 2.0 startups. Before Flickr, your photos were private. Before Delicious, your bookmarks were private. Before Last.fm, what music you were listening to was private.

I’m not saying that we should all make our browsing histories public. That would be a security nightmare. But I am saying there’s a lot of untapped potential in our browsing histories.

Let’s say we keep our browsing histories private, but make better use of them.

From what I’ve seen of large language model tools, the people getting the most use out of them are training them on a specific corpus. Like, “take this book and then answer my questions about the characters and plot” or “take this codebase and then answer my questions about the code.” If you treat these chatbots as calculators for words they can be useful for some tasks.

Large language model tools are getting smaller and more portable. It’s not hard to imagine one getting bundled into a web browser. It feeds on your browsing history. The bigger your browsing history, the more useful it can be.

Except, y’know, for the times when it just makes shit up.

Vannevar Bush didn’t predict a Memex that would hallucinate bits of microfilm that didn’t exist.

You can call me AI

I’ve mentioned before that I’m not a fan of initialisms and acronyms. They can be exclusionary.

It bothers me doubly when everyone is talking about AI.

First of all, the term is so vague as to be meaningless. Sometimes—though rarely—AI refers to general artificial intelligence. Sometimes AI refers to machine learning. Sometimes AI refers to large language models. Sometimes AI refers to a series of if/else statements. That’s quite a spectrum of meaning.

Secondly, there’s the assumption that everyone understands the abbreviation. I guess that’s generally a safe assumption, but sometimes AI could refer to something other than artificial intelligence.

In countries with plenty of pastoral agriculture, if someone works in AI, it usually means they’re going from farm to farm either extracting or injecting animal semen. AI stands for artificial insemination.

I think that abbreviation might work better for the kind of things currently described as using AI.

We were discussing this hot topic at work recently. Is AI coming for our jobs? The consensus was maybe, but only the parts of our jobs that we’re more than happy to have automated. Like summarising some findings. Or perhaps as a kind of lorem ipsum generator. Or for just getting the ball rolling with a design direction. As Terence puts it:

Midjourney is great for a first draft. If, like me, you struggle to give shape to your ideas then it is nothing short of magic. It gets you through the first 90% of the hard work. It’s then up to you to refine things.

That’s pretty much the conclusion we came to in our discussion at Clearleft. There’s no way that we’d use this technology to generate outputs for clients, but we certainly might use it to generate inputs. It’s like how we’d do a quick round of sketching to get a bunch of different ideas out into the open. Terence is spot on when he says:

Midjourney lets me quickly be wrong in an interesting direction.

To put it another way, using a large language model could be a way of artificially injecting some seeds of ideas. Artificial insemination.

So now when I hear people talk about using AI to create images or articles, I don’t get frustrated. Instead I think, “Using artificial insemination to create images or articles? Yes, that sounds about right.”

One morning in the future

I had a video call this morning with someone who was in India. The call went great, except for a few moments when the video stalled.

“Sorry about that”, said the person I was talking to. “It’s the monkeys. They like messing with the cable.”

There’s something charming about an intercontinental internet-enabled meeting being slightly disrupted by some fellow primates being unruly.

It also made me stop and think about how amazing it was that we were having the call in the first place. I remembered Arthur C. Clarke’s predictions from 1964:

I’m thinking of the incredible breakthrough which has been possible by developments in communications, particularly the transistor and, above all, the communications satellite.

These things will make possible a world in which we can be in instant contact with each other wherever we may be, where we can contact our friends anywhere on Earth even if we don’t know their actual physical location.

It will be possible in that age—perhaps only 50 years from now—for a man to conduct his business from Tahiti or Bali just as well as he could from London.

The casual sexism of assuming that it would be a “man” conducting business hasn’t aged well. And it’s not the communications satellite that enabled my video call, but old-fashioned undersea cables, many in the same locations as their telegraphic antecedents. But still; not bad, Arthur.

After my call, I caught up on some email. There was a new newsletter from Ariel who’s currently in Antarctica.

Just thinking about the fact that I know someone who’s in Antarctica—who sent me a postcard from Antarctica—gave me another rush of feeling like I was living in the future. As I started to read the contents of the latest newsletter, that feeling became even more specific. Doesn’t this sound exactly like something straight out of a late ’80s/early ’90s cyberpunk novel?

Four of my teammates head off hiking towards the mountains to dig holes in the soil in hopes of finding microscopic animals contained within them. I hang back near the survival bags with the remaining teammate and begin unfolding my drone to get a closer look at the glaciers. After filming the textures of the land and ice from multiple angles for 90 minutes, my batteries are spent, my hands are cold and my stomach is growling. I land the drone, fold it up into my bright yellow Pelican case, and pull out an expired granola bar to keep my hunger pangs at bay.

Tweaking navigation labelling

I’ve always liked the idea that your website can be your API. Like, you’ve already got URLs to identify resources, so why not make that URL structure predictable and those resources parsable?

That’s why the (read-only) API for The Session doesn’t live at a separate subdomain. It uses the same URL structure as the regular site, but you can request the resources in an alternative format: JSON, XML, RSS.
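
To illustrate the idea, here’s a sketch of what that kind of routing could look like. This isn’t the actual code behind The Session; the format query parameter and the helper functions are hypothetical:

    <?php
    // Hypothetical sketch: one URL, several representations.
    // e.g. /tunes/123, /tunes/123?format=json, /tunes/123?format=xml
    // $id comes from the router (hypothetical).
    $format = $_GET['format'] ?? 'html';
    $tune = getTune($id); // hypothetical lookup returning an associative array

    switch ($format) {
        case 'json':
            header('Content-Type: application/json');
            echo json_encode($tune);
            break;
        case 'xml':
            header('Content-Type: application/xml');
            echo arrayToXml($tune); // hypothetical helper
            break;
        default:
            renderTemplate('tune', $tune); // hypothetical HTML renderer
    }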

This works out pretty well, mostly because I put a lot of thought into the URL structure of the site. I’m something of a URL fetishist, but I think that taking a URL-first approach to information architecture can be a good exercise.

Most of the resources on The Session involve nouns like tunes, events, discussions, and so on. There’s a consistent and predictable structure to the URLs for those sections:

  • /things
  • /things/new
  • /things/search

And then an individual item can be found at:

  • /things/ID

That’s all nice and predictable and the naming of the URLs matches what you’d expect to find:

Tunes, events, discussions, sessions. Those are all fine. But there’s one section of the site that has this root URL:

/recordings

When I was coming up with the URL structure twenty years ago, it was clear what you’d find there: track listings for albums of music. No one would’ve expected to find actual recordings of music available to listen to on-demand. The bandwidth constraints and technical limitations of the time made that clear.

Two decades on, the situation has changed. Now someone new to the site might well expect to hit a link called “recordings” and expect to hear actual recordings of music.

So I should probably change the label on the link. I don’t think “albums” is quite right—what even is an album any more? The word “discography” is probably the most appropriate label.

Here’s my dilemma: if I update the label, should I also update the URL structure?

Right now, the section of the site with /tunes URLs is labelled “tunes”. The section of the site with /events URLs is labelled “events”. Currently the section of the site with /recordings URLs is labelled “recordings”, but may soon be labelled “discography”.

If you click on “tunes”, you end up at /tunes. But if you click on “discography”, you end up at /recordings.

Is that okay? Am I the only one that would be bothered by that?

I could update the URLs to match the labelling (with redirects for the old URLs, of course), but I’m not so keen on this URL structure:

  • /discography
  • /discography/new
  • /discography/search
  • /discography/ID

It doesn’t seem as tidy as:

  • /recordings
  • /recordings/new
  • /recordings/search
  • /recordings/ID

But if I don’t update the URLs to match the label, then I’m just going to have to live with the mismatch.
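
For what it’s worth, the redirects for the old URLs would be the easy part. A minimal sketch, assuming a straight /recordings to /discography mapping (this is just an illustration, not code I’ve actually written):

    <?php
    // Hypothetical sketch: permanently redirect old /recordings URLs
    // to their /discography equivalents, keeping the rest of the path.
    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

    if (preg_match('#^/recordings(/.*)?$#', $path, $matches)) {
        header('Location: /discography' . ($matches[1] ?? ''), true, 301);
        exit;
    }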

I’m just thinking out loud here. I think I should definitely update the label. I just won’t make any decision on changing URLs for a while yet.

Portability

Exactly sixteen years ago on this day, I wrote about Twitter, a service I had been using for a few weeks. I documented how confusing yet compelling it was.

Twitter grew and grew after that. But at some point, it began to feel more like it was shrinking, shrivelling into a husk of its former self.

Just over ten years ago, there was a battle for the soul of Twitter from within. One camp wanted it to become an interoperable protocol, like email. The other camp wanted it to be a content farm, monetised by advertisers. That’s the vision that won. They declared war on the third-party developers who had helped grow Twitter in the first place, and cracked down on anything that didn’t foster e N g A g E m E n T.

The muskofication of Twitter is the nail in the coffin. In the tradition of all scandals since Watergate, I propose we refer to the shocking recent events at Twitter as Elongate.

Post-Elongate Twitter will limp on, I’m sure, but it can never be the fun place it once was. The incentives just aren’t there. As Bastian wrote:

Twitter was once an amplifier for brilliant ideas, for positivity, for change, for a better future. Many didn’t understand the power it had as a communication platform. But that power turned against the exact same people who needed this platform so urgently. It’s now a waste of time and energy at best and a threat to progress and society at worst.

I don’t foresee myself syndicating my notes to Twitter any more. I’ve removed the site from my browser’s bookmarks. I’ve removed it from my phone’s home screen too.

As someone who’s been verified on Twitter for years, with over 140,000 followers, it should probably feel like a bigger deal than it does. I echo Robin’s observation:

The speed with which Twitter recedes in your mind will shock you. Like a demon from a folktale, the kind that only gains power when you invite it into your home, the platform melts like mist when that invitation is rescinded.

Meanwhile, Mastodon is proving to be thoroughly enjoyable. Some parts are still rough around the edges, but compared to Twitter in 2006, it’s positively polished.

Interestingly, the biggest complaint that I and my friends had about Twitter all those years ago wasn’t about Twitter per se, but about lock-in:

Twitter is yet another social network where we have to go and manually add all the same friends from every other social network.

That’s the very thing that sets the fediverse apart: the ability to move from one service to another and bring your social network with you. Now Matt is promising to add ActivityPub to Tumblr. That future we wanted sixteen years ago might finally be arriving.

That fediverse feeling

Right now, Twitter feels like Dunkirk beach in May 1940. And look, here comes a plucky armada of web servers running Mastodon instances!

Others have written some guides to getting started on Mastodon.

There are also tools like Twitodon to help you migrate from Twitter to Mastodon.

Getting on board isn’t completely frictionless. Understanding how Mastodon works can be confusing. But then again, so was Twitter fifteen years ago.

Right now, many Mastodon instances are struggling with the influx of new sign-ups. But this is temporary. And actually, it’s also very reminiscent of the early unreliable days of Twitter.

I don’t want to go into the technical details of Mastodon and the fediverse—even though those details are fascinating and impressive. What I’m really struck by is the vibe.

In a nutshell, I’m loving it! It feels …nice.

I was fully expecting Mastodon to be full of meta-discussions about Mastodon, but in the past few weeks I’ve enjoyed people posting about stone circles, astronomy, and—obviously—cats and dogs.

The process of finding people to follow has been slow, but in a good way. I’ve enjoyed seeking people out. It’s been easier to find the techy folks, but I’ve also been finding scientists, journalists, and artists.

On the one hand, the niceness of the experience isn’t down to technical architecture; it’s all about the social norms. On the other hand, those social norms are very much directed by technical decisions. The folks working on the fediverse for the past few years have made very thoughtful design decisions to amplify niceness and discourage nastiness. It’s all very gratifying to experience!

Personally, I’m posting to Mastodon via my own website. As much as I’m really enjoying Mastodon, I still firmly believe that nothing beats having control of your own content on your domain.

But I also totally get that not everyone has the same set of priorities as me. And frankly, it’s unrealistic to expect everyone to have their own domain name.

It’s like there’s a spectrum of ownership. On one end, there’s publishing on your own website. On the other end, there’s publishing on silos like Twitter, Facebook, Medium, Instagram, and MySpace.

Publishing on Mastodon feels much closer to the website end of the spectrum than it does to the silo end of the spectrum. If something bad happens to the Mastodon instance you’re on, you can up and move to a different instance, taking your social graph with you.

In a way, it’s like delegating domain ownership to someone you trust. If you don’t have the time, energy, resources, or interest in having your own domain, but you trust someone who’s running a Mastodon instance, it’s the next best thing to publishing on your own website.

Simon described it well when he said Mastodon is just blogs:

A Mastodon server (often called an instance) is just a shared blog host. Kind of like putting your personal blog in a folder on a domain on shared hosting with some of your friends.

Want to go it alone? You can do that: run your own dedicated Mastodon instance on your own domain.

And rather than compare Mastodon to Twitter, Simon makes a comparison with RSS:

Do you still miss Google Reader, almost a decade after it was shut down? It’s back!

A Mastodon server is a feed reader, shared by everyone who uses that server.

Lots of other folks are feeling the same excitement in the air that I’m getting:

Bastian wrote:

Real conversations. Real people. Interesting content. A feeling of a warm welcoming group. No algorithm to mess around with our timelines. No troll army to destroy every tiny bit of peace. Yes, Mastodon is rough around the edges. Many parts are not intuitive. But this roughness somehow added to the positive experience for me.

This could really work!

Brent Simmons wrote:

The web is wide open again, for the first time in what feels like forever.

I concur! Though, like Paul, I love not being beholden to either Twitter or Mastodon:

I love not feeling bound to any particular social network. This website, my website, is the one true home for all the stuff I’ve felt compelled to write down or point a camera at over the years. When a social network disappears, goes out of fashion or becomes inhospitable, I can happily move on with little anguish.

But like I said, I don’t expect everyone to have the time, means, or inclination to do that. Mastodon definitely feels like it shares the same indie web spirit though.

Personally, I recommend experiencing Mastodon through the website rather than a native app. Mastodon instances are progressive web apps so you can add them to your phone’s home screen.

You can find me on Mastodon as @adactio@mastodon.social

I’m not too bothered about what instance I’m on. It really only makes a difference to my local timeline. And if I do end up finding an instance I prefer, then I know that migrating will be quite straightforward, by design. Perhaps I should be on an instance with a focus on front-end development or the indie web. I still haven’t found much of an Irish traditional music community on the fediverse. I’m wondering if maybe I should start a Mastodon instance for that.

While I’m a citizen of mastodon.social, I’m doing my bit by chipping in some money to support it: sponsorship levels on Patreon start at just $1 a month. And while I can’t offer much technical assistance, I opened my first Mastodon pull request with a suggested improvement for the documentation.

I’m really impressed with the quality of the software. It isn’t perfect but considering that it’s an open source project, it’s better than most VC-backed services with more and better-paid staff. As Giles said, comparing it to Twitter:

I’m using Mastodon now and it’s not the same, but it’s not shit either. It’s different. It takes a bit of adjustment. And I’m enjoying it.

Most of all, I love, love, love that Mastodon demonstrates that things can be different. For too long we’ve been told that behavioural advertising was an intrinsic part of being online, that social networks must inevitably be monolithic centralised beasts, that we have to relinquish control to corporations in order to be online. The fediverse is showing us a better way. And this isn’t just a proof of concept either. It’s here now. It’s here to stay, if you want it.

Syndicating to Mastodon

I’ve been contemplating a checkbox. The label for this checkbox reads:

This is a bot account

Let me back up…

In what seems like decades ago, but was in fact just a few weeks, Elon Musk bought Twitter and began burning it to the ground. His admirers insist he’s playing some form of four-dimensional chess, but to the rest of us, his actions are indistinguishable from a spoilt rich kid not understanding what a social network is.

It wasn’t giving me much cause for anguish personally. For the past eight years, I’ve only used Twitter as a syndication endpoint for my own notes. But I understand that’s a very privileged position to be in. Most people on Twitter don’t have the same luxury of independence. It’s genuinely maddening and saddening to see their years of sharing destroyed by one cruel idiot.

Lots of people started moving to Mastodon. I figured I should do the same for my syndicated notes.

At first, I signed up for an account on mastodon.cloud. No particular reason. But that’s where I saw this very insightful post from Anil Dash:

When it came time to reckon with social media’s failings, nobody ran to the “web3” platforms. Nobody asked “can I get paid per message”? Nobody asked about the blockchain. The community of people who’ve been quietly doing this work for years (decades!) ended up being the ones who welcomed everyone over, as always.

I was getting my account all set up and beginning to follow some other folks, when I realised that I actually already had an existing account over on mastodon.social. Doh! Turns out that I signed up back in 2017 to kick the tyres, but never did much else because there weren’t many other people around back then. Oh, how times have changed!

Anyway, I thought I had really screwed up by having two accounts but this turned out to be an opportunity to experience some of the thoughtfulness in Mastodon’s design. The process of migrating from one Mastodon account to another—on a completely different instance—was very smooth! It was clear that this wasn’t an afterthought. This is an essential part of the fediverse and the design of the migration flow reflects that.

This gives me enormous peace of mind. If I ever want to switch to a different instance and still keep my network intact, I know it won’t be a problem. Mastodon is like the opposite of the roach-motel mentality that permeates most VC-backed so-called social networks.

As I played around some more—reading, following, exploring—my feelings of fondness only grew stronger. I like this place a lot!

I definitely wanted to syndicate my notes to Mastodon. At first, I implemented a straightforward RSS-to-Mastodon syndication using IFTTT (IF This, Then That), thanks to Matthias’s excellent tutorial.

But that didn’t feel quite right. When I syndicate to Twitter, I make a conscious choice each time. There’s a “Twitter” toggle that I can enable or disable in my posting interface. Mastodon deserved the same level of thoughtfulness.

So I switched off the IFTTT recipe and started exploring the Mastodon API. It’s going to sound like a humblebrag when I tell you that I got cross-posting working in almost no time at all, but that’s not a testament to my coding prowess (I’m really not very good), but rather a testament to the Mastodon API, which was a joy to work with.

  1. On your Mastodon instance, go to /settings/applications.
  2. Click on New Application.
  3. Fill in the details about your website and select write:statuses (and probably write:media) from the Scopes list.
  4. Copy Your access token to use in API calls.
  5. Write some sloppy code (in my case, PHP that uses CURL).
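
That last step might look something like this. It’s just a sketch rather than the exact code I’m running; the instance domain and access token are placeholders, and any real version would want some error handling:

    <?php
    // Sketch of step 5: post a status to Mastodon using PHP and cURL.
    // YOUR_INSTANCE and YOUR_ACCESS_TOKEN are placeholders.
    $ch = curl_init('https://YOUR_INSTANCE/api/v1/statuses');
    curl_setopt_array($ch, [
        CURLOPT_POST => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER => ['Authorization: Bearer YOUR_ACCESS_TOKEN'],
        CURLOPT_POSTFIELDS => ['status' => 'Hello from my own website!'],
    ]);
    $response = curl_exec($ch);
    curl_close($ch);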

I did hit a wall when it came to posting images. That took me a while to get working, and I couldn’t figure out why. Was it something at Mastodon’s end while it was struggling under the influx of new users? As it turns out, no. It was entirely down to me being an idiot. (You know that situation where you’re working on a problem for ages and you’ve become convinced it’s an extremely gnarly rocket-science problem, but then it turns out to be something stupid like a typo? Yeah. That.)

Then there’s the whole question of how to receive replies, likes, and reboosts from Mastodon here on my own site. Luckily, that was super easy, thanks to Brid.gy. One click and I was done. I love Brid.gy!

Take this note, for example. There’s a version on Twitter and a version on Mastodon. The original version on my own site gets responses from both places.

If I’m replying to a response on Twitter, I do not syndicate that to Mastodon.

Likewise, if I’m replying to a response on Mastodon, I do not syndicate that to Twitter.

Oh, one thing worth mentioning: if you’re sending a reply to something on Mastodon using the API, there’s an in_reply_to_id field for you to provide. But you should also include the full @username@instance of the person you’re replying to at the beginning of the message to ensure that it’s displayed as a reply rather than showing up as a regular post. Note the difference between this note on my site and its syndicated version on Mastodon.
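
Putting those two details together, a reply ends up being built something like this (again just a sketch; the username and status ID are made up for illustration):

    <?php
    // Sketch of a reply: two extra details compared to a regular status.
    $fields = [
        // Mention the person you're replying to at the start of the message...
        'status' => '@example@mastodon.social Thanks for this!',
        // ...and reference the status you're replying to.
        'in_reply_to_id' => '109299123456789012',
    ];
    // These fields get POSTed to /api/v1/statuses just like in the earlier sketch.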

Anyway, now I’m posting to Mastodon, but I’m doing it through the interface of my own website. Which brings me to that checkbox in Mastodon’s profile settings:

This is a bot account

The help text reads:

Signal to others that the account mainly performs automated actions and might not be monitored

If I were doing the automatic cross-posting from RSS, I’d definitely tick that box. But as I’m making a conscious decision whenever I syndicate to Mastodon, I think I’m going to leave that checkbox unticked.

My cross-posting is not automated and I’m very much monitoring my Mastodon account …because I’m enjoying my Mastodon experience more than I’ve enjoyed anything online for quite some time. Highly recommended!

Overloading buttons

It’s been almost two years since I added audio playback on The Session. The interface is quite straightforward. For any tune setting, there’s a button that says “play audio”. When you press that button, audio plays and the button’s text changes to “pause audio.”

By updating the button’s text like this, I’m updating the button’s accessible name. In other situations, where the button text doesn’t change, you can indicate whether a button is active or not by toggling the aria-pressed attribute. I’ve been doing that on the “share” buttons that act as the interface for a progressive disclosure. The label on the button—“share”—doesn’t change when the button is pressed. For that kind of progressive disclosure pattern, the button also has an aria-controls and aria-expanded attribute.

From all the advice I’ve read about button states, you should either update the accessible name or change the aria-pressed attribute, but not both. Doing both would lead to the confusing situation of a button labelled “pause audio” having a state of “pressed” when in fact the audio is playing.

That was all fine until I recently added some more functionality to The Session. As well as being able to play back audio, you can now adjust the tempo of the playback speed. The interface element for this is a slider, input type="range".

But this means that the “play audio” button now does two things. It plays the audio, but it also acts as a progressive disclosure control, revealing the tempo slider. The button is simultaneously a push button for playing and pausing music, and a toggle button for showing and hiding another interface element.
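
Concretely, the initial markup for one tune setting ends up something like this. It’s a rough hypothetical sketch rather than the exact markup on The Session; the ID and the tempo range are made up:

    <?php
    // Hypothetical sketch of the initial server-rendered markup for one tune setting.
    // The button's accessible name is its text ("play audio"); aria-controls and
    // aria-expanded describe its relationship with the hidden tempo slider.
    $settingID = 'setting-12345'; // made-up ID for illustration
    echo '<button aria-controls="tempo-' . $settingID . '" aria-expanded="false">play audio</button>';
    echo '<div id="tempo-' . $settingID . '" hidden>';
    echo '  <label>Tempo <input type="range" min="60" max="220" value="120"></label>';
    echo '</div>';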

So should I be toggling the aria-pressed attribute now, even though the accessible name is changing? Or is it enough to have the relationship defined by aria-controls and the state defined by aria-expanded?

Based on past experience, my gut feeling is that I’m probably using too much ARIA. Maybe it’s an anti-pattern to use both aria-expanded and aria-pressed on a progressive disclosure control.

I’m kind of rubber-ducking here, and now that I’ve written down what I’m thinking, I’m pretty sure I’m going to remove the toggling of aria-pressed in any situation where I’m already toggling aria-expanded.

What I really need to do is enlist the help of actual screen reader users. There are a number of members of The Session who use screen readers. I should get in touch and see if the new functionality makes sense to them.

In person

I’ve had the opportunity to gather with my peers a few times over the past couple of months.

There was dConstruct, which I hosted. That was just lovely.

Then a few weeks ago, in spite of train strikes and travel snags, I went to Bristol to give a talk at Web Dev Conf, a really nice gathering.

This past weekend I was in London for State Of The Browser, this time as neither host nor speaker, but as an attendee. It was really good!

I noticed something rather lovely. There was enough cross-over in the audiences for these events that I got to see some people more than once. That’s something that used to happen all the time but became very rare over the past two years because of The Situation.

None of the organisers of these events were pretending that Covid has gone away. Each event had different processes in place to mitigate risk. I wrote about the steps I took for dConstruct. For some people, those measures might seem to go too far. For other people, they don’t go far enough. This is a challenge that every in-person event is facing and from what I’ve seen, they’re all doing their level best.

None of these events were particularly large. Attendance was maybe somewhere between 100 and 200 people at each one. I know that there’s still a risk in any kind of indoor gathering but these events feel safer than the really big tech gatherings (like the one in Berlin where I got the ’rona—that was literally tens of thousands of people).

Anyway, all three events were thoroughly enjoyable. Partly that’s because the talks were good, but also because the socialising was really, really nice—all the nicer for being in relatively safe environments.

It’s not exactly an earth-shattering observation to point out that the social side of conferences is just as valuable as the content. But now that so many of us are working remotely, I feel like that aspect of in-person events has become even more important.

Or maybe I’m just appreciating that aspect of in-person events after spending such a long time with screen-mediated interactions only.

Directory enquiries

I was talking to someone recently about a forgotten battle in the history of the early web. It was a battle between search engines and directories.

These days, when the history of the web is told, a whole bunch of services get lumped into the category of “competitors who lost to Google search”: Altavista, Lycos, Ask Jeeves, Yahoo.

But Yahoo wasn’t a search engine, at least not in the same way that Google was. Yahoo was a directory with a search interface on top. You could find what you were looking for by typing or you could zero in on what you were looking for by drilling down through a directory structure.

Yahoo wasn’t the only directory. DMOZ was an open-source competitor. You can still experience it at DMOZlive.com:

The official DMOZ.com site was closed by AOL on February 17th 2017. DMOZ Live is committed to continuing to make the DMOZ Internet Directory available on the Internet.

Search engines put their money on computation, or to use today’s parlance, algorithms (or if you’re really shameless, AI). Directories put their money on humans. Good ol’ information architecture.

It turned out that computation scaled faster than humans. Search won out over directories.

Now an entire generation has been raised in the aftermath of this battle. Monica Chin wrote about how this generation views the world of information:

Catherine Garland, an astrophysicist, started seeing the problem in 2017. She was teaching an engineering course, and her students were using simulation software to model turbines for jet engines. She’d laid out the assignment clearly, but student after student was calling her over for help. They were all getting the same error message: The program couldn’t find their files.

Garland thought it would be an easy fix. She asked each student where they’d saved their project. Could they be on the desktop? Perhaps in the shared drive? But over and over, she was met with confusion. “What are you talking about?” multiple students inquired. Not only did they not know where their files were saved — they didn’t understand the question.

Gradually, Garland came to the same realization that many of her fellow educators have reached in the past four years: the concept of file folders and directories, essential to previous generations’ understanding of computers, is gibberish to many modern students.

Dr. Saavik Ford confirms:

We are finding a persistent issue with getting (undergrad, new to research) students to understand that a file/directory structure exists, and how it works. After a debrief meeting today we realized it’s at least partly generational.

We live in a world ordered only by search:

While some are quite adept at using labels, tags, and folders to manage their emails, others will claim that there’s no need to do because you can easily search for whatever you happen to need. Save it all and search for what you want to find. This is, roughly speaking, the hot mess approach to information management. And it appears to arise both because search makes it a good-enough approach to take and because the scale of information we’re trying to manage makes it feel impossible to do otherwise. Who’s got the time or patience?

There are still hold-outs. You can prise files from Scott Jenson’s cold dead hands.

More recently, Linus Lee points out what we’ve lost by giving up on directory structures:

Humans are much better at choosing between a few options than conjuring an answer from scratch. We’re also much better at incrementally approaching the right answer by pointing towards the right direction than nailing the right search term from the beginning. When it’s possible to take a “type in a query” kind of interface and make it more incrementally explorable, I think it’s almost always going to produce a more intuitive and powerful interface.

Directory structures still make sense to me (because I’m old) but I don’t have a problem with search. I do have a problem with systems that try to force me to search when I want to drill down into folders.

I have no idea what Google Drive and Dropbox are doing but I don’t like it. They make me feel like the opposite of a power user. Trying to find a file using their interfaces makes me feel like I’m trying to get a printer to work. Randomly press things until something happens.

Anyway. Enough fist-shaking from me. I’m going to ponder Linus’s closing words. Maybe defaulting to a search interface is a cop-out:

Text search boxes are easy to design and easy to add to apps. But I think their ease on developers may be leading us to ignore potential interface ideas that could let us discover better ideas, faster.

Control

In two of my recent talks—In And Out Of Style and Design Principles For The Web—I finish by looking at three different components:

  1. a button,
  2. a dropdown, and
  3. a datepicker.

In each case you could use native HTML elements:

  1. button,
  2. select, and
  3. input type="date".

Or you could use divs with a whole bunch of JavaScript and ARIA.

In the case of a datepicker, I totally understand why you’d go for writing your own JavaScript and ARIA. The native HTML element is quite restricted, especially when it comes to styling.

In the case of a dropdown, it’s less clear-cut. Personally, I’d use a select element. While it’s currently impossible to style the open state of a select element, you can style the closed state with relative ease. That’s good enough for me.

Still, I can understand why that wouldn’t be good enough for some cases. If pixel-perfect consistency across platforms is a priority, then you’re going to have to break out the JavaScript and ARIA.

Personally, I think chasing pixel-perfect consistency across platforms isn’t even desirable, but I get it. I too would like to have more control over styling select elements. That’s one of the reasons why the work being done by the Open UI group is so important.

But there’s one more component: a button.

Again, you could use the native button element, or you could use a div or a span and add your own JavaScript and ARIA.

Now, in this case, I must admit that I just don’t get it. Why wouldn’t you just use the native button element? It has no styling issues and the browser gives you all the interactivity and accessibility out of the box.

I’ve been trying to understand the mindset of a developer who wouldn’t use a native button element. The easy answer would be that they’re just bad people, and dismiss them. But that would probably be lazy and inaccurate. Nobody sets out to make a website with poor performance or poor accessibility. And yet, by choosing not to use the native HTML element, that’s what’s likely to happen.

I think I might have finally figured out what might be going on in the mind of such a developer. I think the issue is one of control.

When I hear that there’s a native HTML element—like button or select—that comes with built-in behaviours around interaction and accessibility, I think “Great! That’s less work for me. I can just let the browser deal with it.” In other words, I relinquish control to the browser (though not entirely—I still want the styling to be under my control as much as possible).

But I now understand that someone else might hear that there’s a native HTML element—like button or select—that comes with built-in behaviours around interaction and accessibility, and think “Uh-oh! What if there are unexpected side-effects of these built-in behaviours that might bite me on the ass?” In other words, they don’t trust the browsers enough to relinquish control.

I get it. I don’t agree. But I get it.

If your background is in computer science, then the ability to precisely predict how a programme will behave is a virtue. Any potential side-effects that aren’t within your control are undesirable. The only way to ensure that an interface will behave exactly as you want is to write it entirely from scratch, even if that means using more JavaScript and ARIA than is necessary.

But I don’t think it’s a great mindset for the web. The web is filled with uncertainties—browsers, devices, networks. You can’t possibly account for all of the possible variations. On the web, you have to relinquish some control.

Still, I’m glad that I now have a bit more insight into why someone would choose to attempt to retain control by using div, JavaScript and ARIA. It’s not what I would do, but I think I understand the motivation a bit better now.

Re-evaluating technology

There’s a lot of emphasis put on decision-making: making sure you’re making the right decision; evaluating all the right factors before making a decision. But we rarely talk about revisiting decisions.

I think perhaps there’s a human tendency to treat past decisions as fixed. That’s certainly true when it comes to evaluating technology.

I’ve been guilty of this. I remember once chatting with Mark about something written in PHP—probably something I had written—and I made some remark to the effect of “I know PHP isn’t a great language…” Mark rightly called me on that. The language wasn’t great in the past but it has come on in leaps and bounds. My perception of the language, however, had not updated accordingly.

I try to keep that lesson in mind whenever I’m thinking about languages, tools and frameworks that I’ve investigated in the past but haven’t revisited in a while.

Andy talks about this as the tech tool carousel:

The carousel is like one of those on a game show that shows the prizes that can be won. The tool will sit on there until I think it’s gone through enough maturing to actually be a viable tool for me, the team I’m working with and the clients I’m working for.

Crucially a carousel is circular: tools and technologies come back around for re-evaluation. It’s all too easy to treat technologies as being on a one-way conveyor belt—once they’ve passed in front of your eyes and you’ve weighed them up, that’s it; you never return to re-evaluate your decision.

This doesn’t need to be a never-ending process. At some point it becomes clear that some technologies really aren’t worth returning to:

It’s a really useful strategy because some tools stay on the carousel and then I take them off because they did in fact, turn out to be useless after all.

See, for example, anything related to cryptobollocks. It’s been well over a decade and blockchains remain a solution in search of problems. As Molly White put it, it’s not still the early days:

How long can it possibly be “early days”? How long do we need to wait before someone comes up with an actual application of blockchain technologies that isn’t a transparent attempt to retroactively justify a technology that is inefficient in every sense of the word? How much pollution must we justify pumping into our atmosphere while we wait to get out of the “early days” of proof-of-work blockchains?

Back to the web (the actual un-numbered World Wide Web)…

Nolan Lawson wrote an insightful article recently about how he senses that the balance has shifted away from single page apps. I’ve been sensing the same shift in the zeitgeist. That said, both Nolan and I keep an eye on how browsers are evolving and getting better all the time. If you weren’t aware of changes over the past few years, it would be easy to still think that single page apps offer some unique advantages that in fact no longer hold true. As Nolan wrote in a follow-up post:

My main point was: if the only reason you’re using an SPA is because “it makes navigations faster,” then maybe it’s time to re-evaluate that.

For another example, see this recent XKCD cartoon:

“You look around one day and realize the things you assumed were immutable constants of the universe have changed. The foundations of our reality are shifting beneath our feet. We live in a house built on sand.”

The day I discovered that Apple Maps is kind of good now

Perhaps the best example of a technology that warrants regular re-evaluation is the World Wide Web itself. Over the course of its existence it has been seemingly bettered by other more proprietary technologies.

Flash was better than the web. It had vector graphics, smooth animations, and streaming video when the web had nothing like it. But over time, the web caught up. Flash was the hare. The World Wide Web was the tortoise.

In more recent memory, the role of the hare has been played by native apps.

I remember talking to someone on the Twitter design team who was designing and building for multiple platforms. They were frustrated by the web. It just didn’t feel as fully-featured as iOS or Android. Their frustration was entirely justified …at the time. I wonder if they’ve revisited their judgement since then though.

In recent years in particular it feels like the web has come on in leaps and bounds: service workers, native JavaScript APIs, and an astonishing boost in what you can do with CSS. Most important of all, the interoperability between browsers is getting better and better. Universal support for new web standards arrives at a faster rate than ever before.

But developers remain suspicious, still preferring to trust third-party libraries over native browser features. They made a decision about those libraries in the past. They evaluated the state of browser support in the past. I wish they would re-evaluate those decisions.
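
To pick one small illustration of my own (not tied to any particular project or library): the kind of snapping carousel that used to require a JavaScript plugin can now be built with a few lines of CSS. The class name here is hypothetical.

/* Hypothetical example: a horizontally scrolling strip of items
   that snaps into place, behaviour that once needed a script */
.carousel {
  display: flex;
  gap: 1rem;
  overflow-x: auto;
  scroll-snap-type: x mandatory;
}
.carousel > * {
  flex: 0 0 80%;
  scroll-snap-align: center;
}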

Alas, inertia is a very powerful force. Sticking with a past decision—even if it’s no longer the best choice—is easier than putting in the effort to re-evaluate everything again.

What’s the phrase? “Strong opinions, weakly held.” We’re very good at the first part and pretty bad at the second.

Just the other day I was chatting with one of my colleagues about an online service that’s available on the web and also as a native app. He was showing me the native app on his phone and said it’s not a great app.

“Why don’t you add the website to your phone?” I asked.

“You know,” he said. “The website’s going to be slow.”

He hadn’t tested this. But years of dealing with crappy websites on his phone in the past had trained him to think of the web as being inherently worse than native apps (even though there was nothing this particular service was doing that required any native functionality).

It has become a truism now. Native apps are better than the web.

And you know what? Once upon a time, that would’ve been true. But it hasn’t been true for quite some time …at least from a technical perspective.

But even if the technologies in browsers have reached parity with native apps, that won’t matter unless we can convince people to revisit their previously-formed beliefs.

The technologies are the easy bit. Getting people to re-evaluate their opinions about technologies? That’s the hard part.

Situational awareness

There was a week recently where I was out and about nearly every night.

One night, Jessica and I went to the cinema. There was a double bill of Alien and Aliens in the beautiful Duke of York’s picture house. We booked one of the comfy sofas on the balcony.

The next night we were out at the session in The Jolly Brewer, playing trad Irish tunes all evening. Bliss!

Then on the third night, we went to see Low playing in a church. Rich and Ben were there too.

It really felt like The Before Times. Of course in reality it wasn’t quite like old times. There’s always an awareness of relative risk. How crowded is the cinema likely to be? Will they have the doors open at The Jolly Brewer to improve the airflow? Will people at the Low gig comply with the band’s request to wear masks?

Still, in each case, I weighed the risk and decided the evening was worth it. If I caught Covid because of that cinematic double bill, or that tune-filled gathering, or that excellent gig, that price would be acceptable.

Mind you, I say that without having experienced the horribleness of having a nasty bout of coronavirus. And the prospect of long Covid is genuinely scary.

But there’s no doubt that the vaccines have changed the equation. There’s still plenty of risk but it’s on a different scale. The Situation isn’t over, but it has ratcheted down a notch to something more manageable.

Now with the weather starting to get nice, there’ll be more opportunities for safer outdoor gatherings. I’m here for it.

Actually, I’m not going to literally be here for all of it. I’m making travel plans to go and speak at European events—another positive signal of the changing situation. Soon I’ll be boarding the Eurostar to head to Amsterdam, and not long after I’ll be on the Eurostar again for a trip to Lille. And then of course there’s UX London at the end of June. With each gathering, there’s an inevitable sense of calculated risk, but there’s also a welcome sense of normality seeping back in.

Declarative design

I feel like in the past few years there have been a number of web design approaches that share a similar mindset. Intrinsic web design by Jen; Every Layout by Andy and Heydon; Utopia by Trys and James.

To some extent, their strengths lie in technological advances in CSS: flexbox, grid, calc, and so on. But more importantly, they share an approach. They all focus on creating the right inputs rather than trying to control every possible output. Leave the final calculations for those outputs to the browser—that’s what computers are good at.

As Andy puts it:

Be the browser’s mentor, not its micromanager.

Reflecting on Utopia’s approach, Jim Nielsen wrote:

We say CSS is “declarative”, but the more and more I write breakpoints to accommodate all the different ways a design can change across the viewport spectrum, the more I feel like I’m writing imperative code. At what quantity does a set of declarative rules begin to look like imperative instructions?

In contrast, one of the principles of Utopia is to be declarative and “describe what is to be done rather than command how to do it”. This approach declares a set of rules such that you could pick any viewport width and, using a formula, derive what the type size and spacing would be at that size.
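
By way of a rough sketch (the specific sizes and viewport range here are invented for illustration, not Utopia’s own output), the imperative approach commands a size at each breakpoint, while the declarative approach describes one relationship and lets the browser derive the size at any width in between:

/* Imperative: command a specific size at each breakpoint */
h1 {
  font-size: 1.5rem;
}
@media (min-width: 40em) {
  h1 {
    font-size: 2rem;
  }
}
@media (min-width: 80em) {
  h1 {
    font-size: 3rem;
  }
}

/* Declarative: describe the range once and let the browser
   work out the size at any viewport width in between */
h1 {
  font-size: clamp(1.5rem, 1rem + 2.5vw, 3rem);
}

The second rule covers every viewport width, including all the ones you never thought to write a breakpoint for.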

Declarative! Maybe that’s the word I’ve been looking for to describe the commonalities between Utopia, Every Layout, and intrinsic web design.

So if declarative design is a thing, does that mean imperative design is also a thing? And what might the tools and technologies for imperative design look like?

I think that Tailwind might be a good example of an imperative design tool. It’s only about the specific outputs. Systematic thinking is actively discouraged; instead you say exactly what you want the final pixels on the screen to be.

I’m not saying that declarative tools—like Utopia—are right and that imperative tools—like Tailwind—are wrong. As always, it depends. In this case, it depends on the mindset you have.

If you agree with this statement, you should probably use an imperative design tool:

CSS is broken and I want my tools to work around the way CSS has been designed.

But if you agree with this statement, you should probably use a declarative design tool:

CSS is awesome and I want my tools to amplify the way that CSS has been designed.

If you agree with the first statement but you then try using a declarative tool like Utopia or Every Layout, you will probably have a bad time. You’ll probably hate it. You may declare the tool to be “bad”.

Likewise if you agree with the second statement but you then try using an imperative tool like Tailwind, you will probably have a bad time. You’ll probably hate it. You may declare the tool to be “bad”.

It all depends on whether the philosophy behind the tool matches your own philosophy. If those philosophies match up, then using the tool will be productive and that tool will act as an amplifier—a bicycle for the mind. But if the philosophy of the tool doesn’t match your own philosophy, then you will be fighting the tool at every step—it will slow you down.

Knowing that this spectrum exists between declarative tools and imperative tools can help you when you’re evaluating technology. You can assess whether a web design tool is being marketed on the premise that CSS is broken or on the premise that CSS is awesome.

I wonder whether your path into web design and development might also factor into which end of the spectrum you’d identify with. Like, if your background is in declarative languages like HTML and CSS, maybe intrinsic web design really resonates. But if your background is in imperative languages like JavaScript, perhaps Tailwind makes more sense to you.

Again, there’s no right or wrong here. This is about matching the right tool to the right mindset.

Personally, the declarative design approach fits me like a glove. It feels like it’s in the tradition of John’s A Dao Of Web Design or Ethan’s Responsive Web Design—ways of working with the grain of the web.

69420

This is going to make me sound like an old man in his rocking chair on the front porch, but let me tell you about the early days of Twitter…

The first time I mentioned Twitter on here was back in November 2006:

I’ve been playing around with Twitter, a neat little service from the people who brought you Odeo. You send it little text updates via SMS, the website, or Jabber.

A few weeks later, I wrote about some of its emergent properties:

Overall, Twitter is full of trivial little messages that sometimes merge into a coherent conversation before disintegrating again. I like it. Instant messaging is too intrusive. Email takes too much effort. Twittering feels just right for the little things: where I am, what I’m doing, what I’m thinking.

That’s right; back then we didn’t have the verb “tweeting” yet.

In those early days, some of the now-ubiquitous interactions had yet to emerge. Chris hadn’t yet proposed hashtags. And if you wanted to address a message to a specific person—or reply to a tweet of theirs—the @ symbol hadn’t been repurposed for that. There were still few enough people on Twitter that you could just address someone by name and they’d probably see your message.

That’s what I was doing when I posted:

It takes years off you, Simon.

I’m assuming Simon Willison got a haircut or something.

In any case, it’s an innocuous and fairly pointless tweet. And yet, in the intervening years, that tweet has received many replies. Weirdly, most of the replies consisted of one word:

nice

Very puzzling.

Then a little while back, I realised what was happening. This is the URL for my tweet:

twitter.com/adactio/status/69420

69420.

69.

420.

Pesky kids with their stoner sexual-innuendo numerology!

Media queries with display-mode

It’s said that the best way to learn about something is to teach it. I certainly found that to be true when I was writing the web.dev course on responsive design.

I felt fairly confident about some of the topics, but I felt somewhat out of my depth when it came to some of the newer additions to browsers. The last few modules in particular were unexplored areas for me, with topics like screen configurations and media features. I learned a lot about those topics by writing about them.

Best of all, I got to put my new-found knowledge to use! Here’s how…

The Session is a progressive web app. If you add it to the home screen of your mobile device, then when you launch the site by tapping on its icon, it behaves just like a native app.

In the web app manifest file for The Session, the display property is set to “standalone.” That means it will launch without any browser chrome: no address bar and no back button. It’s up to me to provide the functionality that the browser usually takes care of.

So I added a back button in the navigation interface. It only appears on small screens.

Do you see the assumption I made?

I figured that the back button was most necessary in the situation where the site had been added to the home screen. That only happens on mobile devices, right?

Nope. If you’re using Chrome or Edge on a desktop device, you will be actively encouraged to “install” The Session. If you do that, then just as on mobile, the site will behave like a standalone native app and launch without any browser chrome.

So desktop users who install the progressive web app don’t get any back button (because in my CSS I declare that the back button in the interface should only appear on small screens).
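
Roughly speaking, the original CSS looked something like this (the exact breakpoint isn’t the point; the .goback class is the same one used in the fix below):

/* Original assumption: the back button is only needed on small screens */
.goback {
  display: none;
}
@media (max-width: 30em) {
  .goback {
    display: inline;
  }
}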

I was alerted to this issue on The Session:

It downloaded for me but there’s a bug, Jeremy - there doesn’t seem to be a way to go back.

Luckily, this happened as I was writing the module on media features. I knew exactly how to solve this problem because now I knew about the existence of the display-mode media feature. It allows you to write media queries that match the possible values of the display property in a web app manifest:

/* Hidden by default, when the site is viewed in a regular browser tab */
.goback {
  display: none;
}
/* Shown when the site is running as an installed, standalone app */
@media (display-mode: standalone) {
  .goback {
    display: inline;
  }
}

Now the back button shows up if you “install” The Session, regardless of whether that’s on mobile or desktop.

Previously I made the mistake of inferring whether or not to show the back button based on screen size. But the display-mode media feature allowed me to test the actual condition I cared about: is this user navigating in standalone mode?

If I hadn’t been writing about media features, I don’t think I would’ve been able to solve the problem. It’s a really good feeling when you’ve just learned something new, and then you immediately find exactly the right use case for it!