Journal tags: arc

Multi-page web apps

I received this email recently:

Subject: multi-page web apps

Hi Jeremy,

lately I’ve been following you through videos and texts and I’m curious as to why you advocate the use of multi-page web apps and not single-page ones.

Perhaps you can refer me to some sources where your position and reasoning is evident?

Here’s the response I sent…

Hi,

You can find a lot of my reasoning laid out in this (short and free) online book I wrote called Resilient Web Design:

https://resilientwebdesign.com/

The short answer to your question is this: user experience.

The slightly longer answer…

For most use cases, a website (or multi-page app if you prefer) is going to provide the most robust experience for the greatest number of users. That’s because a user’s web browser takes care of most of the heavy lifting.

Navigating from one page to another? That’s taken care of with links.

Gathering information from a user to process on a server? That’s taken care of with forms.

This frees me up to concentrate on the content and the design without having to reinvent the wheels of links and form fields.

These (let’s call them) multi-page apps are stateless, and for most use cases that’s absolutely fine.

There are some cases where you’d want state to persist across pages. Let’s say you’re playing a song, or a podcast episode. Ideally you’d want that player to continue seamlessly playing even as the user navigates around the site. In that situation, a single-page app would be a suitable architecture.

But that architecture comes at a cost. Now you’ve got to stop the browser doing what it would normally do with links and forms. It’s up to you to recreate that functionality. And you can’t do it with HTML, a robust fault-tolerant declarative language. You need to reimplement all that functionality in JavaScript, a less tolerant, more brittle language.
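
To give a flavour of what that reimplementation involves, here’s a minimal sketch of client-side routing. It’s a simplification, and the main element it swaps is an assumption about the page structure rather than anything definitive:

document.addEventListener('click', async (event) => {
  const link = event.target.closest('a');
  if (!link || link.origin !== location.origin) return; // leave external links to the browser
  event.preventDefault(); // stop the browser's built-in navigation
  const response = await fetch(link.href); // fetch the next page ourselves
  const html = await response.text();
  const next = new DOMParser().parseFromString(html, 'text/html');
  document.querySelector('main').replaceWith(next.querySelector('main')); // swap the content
  history.pushState({}, '', link.href); // keep the address bar in sync
  // ...and that's before error handling, scroll position, focus management,
  // loading states, form submissions, and the back button.
});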

Then you’ve got to ship all that code to the user before they can use your site. It might be JavaScript code you’ve written yourself or it might be a third-party library designed for building single-page apps. Either way, the user pays a download tax (and a parsing tax, and an execution tax). Whereas with links and forms, all of that functionality is pre-bundled into the user’s web browser.

So that’s my reasoning. At least nine times out of ten, a multi-page approach is leaner, more robust, and simpler.

Like I said, there are times when a single-page approach makes sense—it all comes down to whether state needs to be constantly preserved. But these use cases are the exceptions, not the rule.

That’s why I find the framing of your question a little concerning. It should be inverted. The default approach should be to assume a multi-page approach (which is the way the web works by default). Deciding to take a JavaScript-driven single-page approach should be the exception.

It’s kind of like when people ask, “Why don’t you have children?” Surely the decision to have a child should require deliberation and commitment, rather than the other way around.

When it comes to front-end development, I’m worried that we’ve reached a state where the more complex over-engineered approach is viewed as the default.

I may be committing a fundamental attribution error here, but I think that we’ve reached this point not because of any consideration for users, but rather because of how it makes us developers feel. Perhaps building an old-fashioned website that uses HTML for navigation feels too easy, like it’s beneath us. But building an “app” that requires JavaScript just to render text on a screen feels like real programming.

I hope I’m wrong. I hope that other developers will start to consider user experience first and foremost when making architectural decisions.

Anyway. That’s my answer. User experience.

Cheers,

Jeremy

Permission

Back when the web was young, it wasn’t yet clear what the rules were. Like, could you really just link to something without asking permission?

Then came some legal rulings to establish that, yes, on the web you can just link to anything without checking if it’s okay first.

What about search engines and directories? Technically they’re rifling through all the stuff we publish and reposting snippets of it. Is that okay?

Again, through some legal precedents—but mostly common agreement—everyone decided that on balance it was fine. After all, those snippets they publish are helping your site get traffic.

In short order, search came to rule the web. And Google came to rule search.

The mutually beneficial arrangement persisted uneasily. Despite Google’s search results pages getting worse and worse in recent years, the company’s huge market share of search means you generally want to be in their good books.

Google’s business model relies on us publishing web pages so that they can put ads around the search results linking to that content, and we rely on Google to send people to our websites by responding smartly to search queries.

That has now changed. Instead of responding to search queries by linking to the web pages we’ve made, Google is instead generating dodgy summaries rife with hallucina… lies (a psychic hotline, basically).

Google still benefits from us publishing web pages. We no longer benefit from Google slurping up those web pages.

With AI, tech has broken the web’s social contract:

Google has steadily been manoeuvring their search engine results to more and more replace the pages in the results.

As Chris puts it:

Me, I just think it’s fuckin’ rude.

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website.

Ben proposes an update to robots.txt that would allow us to specify licensing information:

Robots.txt needs an update for the 2020s. Instead of just saying what content can be indexed, it should also grant rights.

Like crawl my site only to provide search results not train your LLM.
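
Nothing like that exists in robots.txt today, but you can imagine what it might look like. To be clear, the directives in this sketch are entirely hypothetical—I’m just illustrating the idea:

User-agent: *
Allow: /
# Hypothetical rights-granting directives (not real robots.txt syntax):
Licence: search-results-only
Disallow-purpose: llm-training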

It’s a solid proposal. But Google has absolutely no incentive to implement it. They hold all the power.

Or do they?

There is still the nuclear option in robots.txt:

User-agent: Googlebot
Disallow: /

That’s what Vasilis is doing:

I have been looking for ways to not allow companies to use my stuff without asking, and so far I couldn’t find any. But since this policy change I realised that there is a simple one: block google’s bots from visiting your website.

The general consensus is that this is nuts. “If you don’t appear in Google’s results, you might as well not be on the web!” is the common cry.

I’m not so sure. At least when it comes to personal websites, search isn’t how people get to your site. They get to your site from RSS, newsletters, links shared on social media or on Slack.

And isn’t it an uncomfortable feeling to think that there’s a third party service that you absolutely must appease? It’s the same kind of justification used by people who are still on Twitter even though it’s now a right-wing transphobic cesspit. “If I’m not on Twitter, I might as well not be on the web!”

The situation with Google reminds me of what Robin said about Twitter:

The speed with which Twitter recedes in your mind will shock you. Like a demon from a folktale, the kind that only gains power when you invite it into your home, the platform melts like mist when that invitation is rescinded.

We can rescind our invitation to Google.

Browser history

I woke up today to a very annoying new bug in Firefox. The browser shits the bed in an unpredictable fashion when rounding up single pixel line widths in SVG. That’s quite a problem on The Session where all the sheet music is rendered in SVG. Those thin lines in sheet music are kind of important.

Browser bugs like these are very frustrating. There’s nothing you can do from your side other than filing a bug. The locus of control is very much with the developers of the browser.

Still, the occasional regression in a browser is a price I’m willing to pay for a plurality of rendering engines. Call me old-fashioned but I still value the ecological impact of browser diversity.

That said, I understand the argument for converging on a single rendering engine. I don’t agree with it but I understand it. It’s like this…

Back in the bad old days of the original browser wars, the browser companies just made shit up. That made life a misery for web developers. The Web Standards Project knocked some heads together. Netscape and Microsoft would agree to support standards.

So that’s where the bar was set: browsers agreed to work to the same standards, but competed by having different rendering engines.

There’s an argument to be made for raising that bar: browsers agree to work to the same standards, and have the same shared rendering engine, but compete by innovating in all other areas—the browser chrome, personalisation, privacy, and so on.

Like I said, I understand the argument. I just don’t agree with it.

One reason for zeroing in on a single rendering engine is that it’s just too damned hard to create or maintain an entirely different rendering engine now that web standards are incredibly powerful and complex. Only a very large company with very deep pockets can hope to be a rendering engine player. Google. Apple. Heck, even Microsoft threw in the towel and abandoned their rendering engine in favour of Blink and V8.

And yet. Andreas Kling recently wrote about the Ladybird browser. How we’re building a browser when it’s supposed to be impossible:

The ECMAScript, HTML, and CSS specifications today are (for the most part) stellar technical documents whose algorithms can be implemented with considerably less effort and guesswork than in the past.

I’ll be watching that project with interest. Not because I plan to use the browser. I’d just like to see some evidence against the complexity argument.

Meanwhile most other browser projects are building on the raised bar of a shared browser engine. Blisk, Brave, and Arc all use Chromium under the hood.

Arc is the most interesting one. Built by the wonderfully named Browser Company of New York, it’s attempting to inject some fresh thinking into everything outside of the rendering engine.

Experiments like Arc feel like they could have more in common with tools-for-thought software like Obsidian and Roam Research. Those tools build knowledge graphs of connected nodes. A kind of hypertext of ideas. But we’ve already got hypertext tools we use every day: web browsers. It’s just that they don’t do much with the accumulated knowledge of our web browsing. Our browsing history is a boring reverse chronological list instead of a cool-looking knowledge graph or timeline.

For inspiration we can go all the way back to Vannevar Bush’s genuinely seminal 1945 article, As We May Think. The device Bush imagined, the Memex, was a direct inspiration for Douglas Engelbart, Ted Nelson, and Tim Berners-Lee.

The article describes a kind of hypertext machine that worked with microfilm. Thanks to Tim Berners-Lee’s World Wide Web, we now have a global digital hypertext system that we access every day through our browsers.

But the article also described the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.

Our browsing histories are a kind of associative trail. They’re as unique as fingerprints. Even if everyone in the world started on the same URL, our browsing histories would quickly diverge.

Bush imagined that these associative trails could be shared:

The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Heck, making a useful browsing history could be a real skill:

There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record.

Taking something personal and making it public isn’t a new idea. It was what drove the wave of web 2.0 startups. Before Flickr, your photos were private. Before Delicious, your bookmarks were private. Before Last.fm, what music you were listening to was private.

I’m not saying that we should all make our browsing histories public. That would be a security nightmare. But I am saying there’s a lot of untapped potential in our browsing histories.

Let’s say we keep our browsing histories private, but make better use of them.

From what I’ve seen of large language model tools, the people getting the most use out of them are training them on a specific corpus. Like, “take this book and then answer my questions about the characters and plot” or “take this codebase and then answer my questions about the code.” If you treat these chatbots as calculators for words, they can be useful for some tasks.

Large language model tools are getting smaller and more portable. It’s not hard to imagine one getting bundled into a web browser. It feeds on your browsing history. The bigger your browsing history, the more useful it can be.

Except, y’know, for the times when it just makes shit up.

Vannevar Bush didn’t predict a Memex that would hallucinate bits of microfilm that didn’t exist.

The past is a foreign country

I tried watching a classic Western this weekend, How The West Was Won. I did not make it far. Let’s just say that in the first few minutes, the Spencer Tracy voiceover that accompanies the sweeping vistas sets out an attitude toward the indigenous population that would not fly today.

It’s one thing to be repulsed by a film from another era, but it’s even more uncomfortable to revisit the films from your own teenage years.

Tim Carmody has written about the real hero of Top Gun:

Iceman’s concern for Maverick and the safety of his fighter unit is totally understandable. He tries, however awkwardly, to discuss Goose’s death with Maverick. There’s no discussion of blame. And when they’re assigned to fly into combat together, Iceman briefly and discreetly raises the issue of Maverick’s fitness to fly with his superior officer and withdraws his concern once a decision is made.

I know someone who didn’t watch Ferris Bueller’s Day Off until they were well into adulthood. Their sympathies lay squarely with Dean Rooney.

And I think we can all agree in hindsight that Walter Peck was completely correct in his assessment of the dangers in Ghostbusters.

Oh, and The Karate Kid was the real bully.

This week, George wrote I’ve fallen out of love with Indiana Jones. Indy’s attitude of “it belongs in a museum” is the same worldview that got the Parthenon Marbles into the British Museum (instead of, y’know, the Parthenon where they belong).

Adrian Hon invites us to imagine what it would be like if the tables were turned. He wrote a short piece of speculative fiction called The Taking of Stonehenge:

We selected these archaeological sites based on their importance to our collective understanding of human and galactic history, and their immediate risk of irreparable harm from pollution, climate change, neglect, and looting. We are sympathetic to claims that preserving these sites in their “original” context is important, but our duty of care outweighs such emotional considerations.

These were my jams

This Is My Jam was a lovely website. Created by Hannah and Matt in 2011, it ran until 2015, at which point they had to shut it down. But they made sure to shut it down with care and consideration.

In many ways, This Is My Jam was the antithesis of the prevailing Silicon Valley mindset. Instead of valuing growth and scale above all else, it was deliberately thoughtful. Rather than “maximising engagement”, it asked you to slow down and just share one thing: what piece of music are you really into right now? It was up to you to decide whether “right now” meant this year, this month, this week, or this day.

I used to post songs there sporadically. Here’s a round-up of the twelve songs I posted in 2013. There was always some reason for posting a particular piece of music.

I was reminded of This Is My Jam recently when I logged into Spotify (not something I do that often). As part of the site’s shutdown, you could export all your jams into a Spotify playlist. Here’s mine.

Listening back to these 50 songs all these years later gave me the warm fuzzies.

One morning in the future

I had a video call this morning with someone who was in India. The call went great, except for a few moments when the video stalled.

“Sorry about that”, said the person I was talking to. “It’s the monkeys. They like messing with the cable.”

There’s something charming about an intercontinental internet-enabled meeting being slightly disrupted by some fellow primates being unruly.

It also made me stop and think about how amazing it was that we were having the call in the first place. I remembered Arthur C. Clarke’s predictions from 1964:

I’m thinking of the incredible breakthrough which has been possible by developments in communications, particularly the transistor and, above all, the communications satellite.

These things will make possible a world in which we can be in instant contact with each other wherever we may be, where we can contact our friends anywhere on Earth even if we don’t know their actual physical location.

It will be possible in that age—perhaps only 50 years from now—for a man to conduct his business from Tahiti or Bali just as well as he could from London.

The casual sexism of assuming that it would be a “man” conducting business hasn’t aged well. And it’s not the communications satellite that enabled my video call, but old-fashioned undersea cables, many in the same locations as their telegraphic antecedents. But still; not bad, Arthur.

After my call, I caught up on some email. There was a new newsletter from Ariel who’s currently in Antarctica.

Just thinking about the fact that I know someone who’s in Antarctica—who sent me a postcard from Antarctica—gave me another rush of feeling like I was living in the future. As I started to read the contents of the latest newsletter, that feeling became even more specific. Doesn’t this sound exactly like something straight out of a late ’80s/early ’90s cyberpunk novel?

Four of my teammates head off hiking towards the mountains to dig holes in the soil in hopes of finding microscopic animals contained within them. I hang back near the survival bags with the remaining teammate and begin unfolding my drone to get a closer look at the glaciers. After filming the textures of the land and ice from multiple angles for 90 minutes, my batteries are spent, my hands are cold and my stomach is growling. I land the drone, fold it up into my bright yellow Pelican case, and pull out an expired granola bar to keep my hunger pangs at bay.

Tweaking navigation labelling

I’ve always liked the idea that your website can be your API. Like, you’ve already got URLs to identify resources, so why not make that URL structure predictable and those resources parsable?

That’s why the (read-only) API for The Session doesn’t live at a separate subdomain. It uses the same URL structure as the regular site, but you can request the resources in an alternative format: JSON, XML, RSS.
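
So the same URL that serves a human-readable page can also serve machine-readable data. Something along these lines—though the query parameter and the response fields in this sketch are assumptions on my part, not a definitive description of the API:

// Same resource, different format (the parameter name is an assumption):
const response = await fetch('https://thesession.org/tunes/27?format=json');
const tune = await response.json();
console.log(tune.name); // the property name is an assumption too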

This works out pretty well, mostly because I put a lot of thought into the URL structure of the site. I’m something of a URL fetishist, but I think that taking a URL-first approach to information architecture can be a good exercise.

Most of the resources on The Session involve nouns like tunes, events, discussions, and so on. There’s a consistent and predictable structure to the URLs for those sections:

  • /things
  • /things/new
  • /things/search

And then an individual item can be found at:

  • /things/ID

That’s all nice and predictable and the naming of the URLs matches what you’d expect to find:

Tunes, events, discussions, sessions. Those are all fine. But there’s one section of the site that has this root URL:

/recordings

When I was coming up with the URL structure twenty years ago, it was clear what you’d find there: track listings for albums of music. No one would’ve expected to find actual recordings of music available to listen to on-demand. The bandwidth constraints and technical limitations of the time made that clear.

Two decades on, the situation has changed. Now someone new to the site might well expect to hit a link called “recordings” and expect to hear actual recordings of music.

So I should probably change the label on the link. I don’t think “albums” is quite right—what even is an album any more? The word “discography” is probably the most appropriate label.

Here’s my dilemma: if I update the label, should I also update the URL structure?

Right now, the section of the site with /tunes URLs is labelled “tunes”. The section of the site with /events URLs is labelled “events”. Currently the section of the site with /recordings URLs is labelled “recordings”, but may soon be labelled “discography”.

If you click on “tunes”, you end up at /tunes. But if you click on “discography”, you end up at /recordings.

Is that okay? Am I the only one that would be bothered by that?

I could update the URLs to match the labelling (with redirects for the old URLs, of course), but I’m not so keen on this URL structure:

  • /discography
  • /discography/new
  • /discography/search
  • /discography/ID

It doesn’t seem as tidy as:

  • /recordings
  • /recordings/new
  • /recordings/search
  • /recordings/ID

But if I don’t update the URLs to match the label, then I’m just going to have to live with the mismatch.
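
For what it’s worth, the redirects themselves would be the easy part. Assuming an Apache-style set-up—and that’s an assumption about the server, not a statement of fact—it could be as simple as:

# Permanently redirect the old section to the new one, preserving the rest of the path
RedirectMatch permanent "^/recordings(/.*)?$" "/discography$1"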

I’m just thinking out loud here. I think I should definitely update the label. I just won’t make any decision on changing URLs for a while yet.

Tweaking navigation sizing

Gerry talks about “top tasks” a lot. He literally wrote the book on it:

Top tasks are what matter most to your customers.

Seems pretty obvious, right? But it’s actually pretty rare to see top tasks presented any differently than other options.

Look at the global navigation on most websites. Typically all the options are given equal prominence. Even the semantics under the hood often reflect this egalitarian ideal, with each item in an unordered list. All the navigation options are equal, but I bet that the reality for most websites is that some navigation options are more equal than others.

I’ve been guilty of this on The Session. The site-wide navigation shows a number of options: tunes, events, discussions, etc. Each one is given equal prominence, but I can tell you without even looking at my server logs that 90% of the traffic goes to the tunes section—that’s the beating heart of The Session. That’s why the home page has a search form that defaults to searching for tunes.

I wanted the navigation to reflect the reality of what people are coming to the site for. I decided to make the link to the tunes section more prominent by bumping up the font size a bit.
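
In CSS terms it’s a tiny change. Something like this—though the selectors here are placeholders for illustration, not the actual ones used on The Session:

nav a {
  font-size: 1em; /* every other navigation option stays the same */
}
nav a[href="/tunes"] {
  font-size: 1.25em; /* give the top task a little more visual weight */
}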

I was worried about how weird this might look; we’re so used to seeing all navigation items presented equally. But I think it worked out okay (though it might take a bit of getting used to if you’re accustomed to the previous styling). It helps that “tunes” is a nice short word, so bumping up the font size on that word doesn’t jostle everything else around.

I think this adjustment is working well for this situation where there’s one very clear tippy-top task. I wouldn’t want to apply it across the board, making every item in the navigation proportionally bigger or smaller depending on how often it’s used. That would end up looking like a ransom note.

But giving one single item prominence like this tweaks the visual hierarchy just enough to favour the option that’s most likely to be what a visitor wants.

That last bit is crucial. The visual adjustment reflects what visitors want, not what I want. You could adjust the size of a navigation option that you want to drive traffic to, but in the long run, all you’re going to do is train people to trust your design less.

You don’t get to decide what your top task is. The visitors to your website do. Trying to foist an arbitrary option on them would be the tail wagging the dog.

Anyway, I’m feeling a lot better about the site-wide navigation on The Session now that it reflects reality a little bit more. Heck, I may even bump that font size up a little more.

The audio from dConstruct 2022

dConstruct 2022 was great fun. It was also the last ever dConstruct.

If you were there, and you’d like to re-live the magic, the audio from the talks is now available on the dConstruct Archive. Here they are:

Thanks to some service worker magic, you can select any of those talks for offline listening later.
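
If you’re curious how that kind of thing works, here’s a simplified sketch of the general approach—it’s not the actual dConstruct code, and the cache name and function are placeholders of my own:

// In the page: when a talk is selected for offline listening, stash the audio.
const cacheName = 'dconstruct-audio-v1'; // placeholder name
async function saveForOffline(audioURL) {
  const cache = await caches.open(cacheName);
  await cache.add(audioURL); // fetches the file and stores the response
}

// In the service worker: serve from the cache when possible, fall back to the network.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});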

The audio is also available on Huffduffer, on the dConstruct account. Here’s the RSS feed that you can pop into your podcast software of choice.

If you’re more of a visual person, you can watch videos of the slides synced with the audio. They’ve all got captions too (good ones, not just automatically generated).

So have a listen in whichever way you prefer.

Now that I’ve added the audio from the last dConstruct to the dConstruct archive, it feels like the closing scene of Raiders Of The Lost Ark. Roll credits.

Directory enquiries

I was talking to someone recently about a forgotten battle in the history of the early web. It was a battle between search engines and directories.

These days, when the history of the web is told, a whole bunch of services get lumped into the category of “competitors who lost to Google search”: Altavista, Lycos, Ask Jeeves, Yahoo.

But Yahoo wasn’t a search engine, at least not in the same way that Google was. Yahoo was a directory with a search interface on top. You could find what you were looking for by typing or you could zero in on what you were looking for by drilling down through a directory structure.

Yahoo wasn’t the only directory. DMOZ was an open-source competitor. You can still experience it at DMOZlive.com:

The official DMOZ.com site was closed by AOL on February 17th 2017. DMOZ Live is committed to continuing to make the DMOZ Internet Directory available on the Internet.

Search engines put their money on computation, or to use today’s parlance, algorithms (or if you’re really shameless, AI). Directories put their money on humans. Good ol’ information architecture.

It turned out that computation scaled faster than humans. Search won out over directories.

Now an entire generation has been raised in the aftermath of this battle. Monica Chin wrote about how this generation views the world of information:

Catherine Garland, an astrophysicist, started seeing the problem in 2017. She was teaching an engineering course, and her students were using simulation software to model turbines for jet engines. She’d laid out the assignment clearly, but student after student was calling her over for help. They were all getting the same error message: The program couldn’t find their files.

Garland thought it would be an easy fix. She asked each student where they’d saved their project. Could they be on the desktop? Perhaps in the shared drive? But over and over, she was met with confusion. “What are you talking about?” multiple students inquired. Not only did they not know where their files were saved — they didn’t understand the question.

Gradually, Garland came to the same realization that many of her fellow educators have reached in the past four years: the concept of file folders and directories, essential to previous generations’ understanding of computers, is gibberish to many modern students.

Dr. Saavik Ford confirms:

We are finding a persistent issue with getting (undergrad, new to research) students to understand that a file/directory structure exists, and how it works. After a debrief meeting today we realized it’s at least partly generational.

We live in a world ordered only by search:

While some are quite adept at using labels, tags, and folders to manage their emails, others will claim that there’s no need to do so because you can easily search for whatever you happen to need. Save it all and search for what you want to find. This is, roughly speaking, the hot mess approach to information management. And it appears to arise both because search makes it a good-enough approach to take and because the scale of information we’re trying to manage makes it feel impossible to do otherwise. Who’s got the time or patience?

There are still hold-outs. You can prise files from Scott Jenson’s cold dead hands.

More recently, Linus Lee points out what we’ve lost by giving up on directory structures:

Humans are much better at choosing between a few options than conjuring an answer from scratch. We’re also much better at incrementally approaching the right answer by pointing towards the right direction than nailing the right search term from the beginning. When it’s possible to take a “type in a query” kind of interface and make it more incrementally explorable, I think it’s almost always going to produce a more intuitive and powerful interface.

Directory structures still make sense to me (because I’m old) but I don’t have a problem with search. I do have a problem with systems that try to force me to search when I want to drill down into folders.

I have no idea what Google Drive and Dropbox are doing but I don’t like it. They make me feel like the opposite of a power user. Trying to find a file using their interfaces makes me feel like I’m trying to get a printer to work. Randomly press things until something happens.

Anyway. Enough fist-shaking from me. I’m going to ponder Linus’s closing words. Maybe defaulting to a search interface is a cop-out:

Text search boxes are easy to design and easy to add to apps. But I think their ease on developers may be leading us to ignore potential interface ideas that could let us discover better ideas, faster.

Even more UX London speaker updates

I’ve added five more faces to the UX London line-up.

Irina Rusakova will be giving a talk on day one, the day that focuses on research. Her talk on designing with the autistic community is one I’m really looking forward to.

Also on day one, my friend and former Clearleftie Cennydd Bowles will be giving a workshop called “What could go wrong?” He literally wrote the book on ethical design.

Day two is all about creation. My co-worker Chris How will be speaking. “Nepotism!” you cry! But no, Chris is speaking because I had the chance to see his talk—called “Unexpectedly obvious”—and I thought “that’s perfect for UX London!”:

Let him take you on a journey through time and across the globe sharing stories of designs that solve problems in elegant if unusual ways.

Also on day two, you’ve got two additional workshops. Lou Downe will be running a workshop on designing good services, and Giles Turnbull will be running a workshop called “Writing for people who hate writing.”

I love that title! Usually when I contact speakers I don’t necessarily have a specific talk or workshop in mind, but I knew that I wanted that particular workshop from Giles.

When I wrote to Giles to ask him to come and speak, I began by telling him how much I enjoy his blog—I’m a long-time subscriber to his RSS feed. He responded and said that he also reads my blog—we’re blog buddies! (That’s a terrible term but there should be a word for people who “know” each other only through reading each other’s websites.)

Anyway, that’s another little treasure trove of speakers added to the UX London roster:

That’s nineteen speakers already and we’re not done yet—expect further speaker announcements soon. But don’t wait on those announcements before getting your ticket. Get yours now!

More UX London speaker updates

It wasn’t that long ago that I told you about some of the speakers that have been added to the line-up for UX London in June: Steph Troeth, Heldiney Pereira, Lauren Pope, Laura Yarrow, and Inayaili León. Well, now I’ve got another five speakers to tell you about!

Aleks Melnikova will be giving a workshop on day one, June 28th—that’s the day with a focus on research.

Stephanie Marsh—who literally wrote the book on user research—will also be giving a workshop that day.

Before those workshops though, you’ll get to hear a talk from the one and only Kat Zhou, the creator of Design Ethically. By the way, you can hear Kat talking about deceptive design in a BBC radio documentary.

Day two has a focus on content design so who better to deliver a workshop than Sarah Winters, author of the Content Design book.

Finally, on day three—with its focus on design systems—I’m thrilled to announce that Adekunle Oduye will be giving a talk. He too is an author. He co-wrote the Design Engineering Handbook. I also had the pleasure of talking to Adekunle for an episode of the Clearleft podcast on design engineering.

So that’s another five excellent speakers added to the line-up:

That’s a total of fifteen speakers so far with more on the way. And I’ll be updating the site with more in-depth descriptions of the talks and workshops soon.

If you haven’t yet got your ticket for UX London, grab one now. You can buy tickets for individual days, or to get the full experience and the most value, get a ticket for all three days.

UX London speaker updates

If you’ve signed up to the UX London newsletter then this won’t be news to you, but more speakers have been added to the line-up.

Steph Troeth will be giving a workshop on day one. That’s the day with a strong focus on research, and when it comes to design research, Steph is unbeatable. You can hear some of her words of wisdom in an episode of the Clearleft podcast all about design research.

Heldiney Pereira will be speaking on day two. That’s the day with a focus on content design. Heldiney previously spoke at our Content By Design event and it was terrific—his perspective on content design as a product designer is invaluable.

Lauren Pope will also be on day two. She’ll be giving a workshop. She recently launched a really useful content audit toolkit and she’ll be bringing that expertise to her UX London workshop.

Day three is going to have a focus on design systems (and associated disciplines like design engineering and design ops). Both Laura Yarrow and Inayaili León will be giving talks on that day. You can expect some exciting war stories from the design system trenches of HM Land Registry and GitHub.

I’ve got some more speakers confirmed but I’m going to be a tease and make you wait a little longer for those names. But check out the line-up so far! This is going to be such an excellent event (I know I’m biased, but really, look at that line-up!).

June 28th to 30th. Tobacco Dock, London. Get your ticket if you haven’t already.

Curating UX London 2022

The first speakers are live on the UX London 2022 site! There are only five people announced for now—just enough to give you a flavour of what to expect. There will be many, many more.

Putting together the line-up of a three-day event is quite challenging, but kind of fun too. On the one hand, each day should be able to stand alone. After all, there are one-day tickets available. On the other hand, it should feel like one cohesive conference, not three separate events.

I’ve decided to structure the three days to somewhat mimic the design process…

The first day is all about planning and preparation. This is like the first diamond in the double-diamond process: building the right thing. That means plenty of emphasis on research.

The second day is about creation and execution. It’s like that second diamond: building the thing right. This could cover potentially everything but this year the focus will be on content design.

The third day is like the third diamond in the double dia— no, wait. The third day is about growing, scaling, and maintaining design. That means there’ll be quite an emphasis on topics like design systems and design engineering, maybe design ops.

But none of the days will be exclusively about a single topic. There are evergreen topics that apply throughout the process: product design, design ethics, inclusive design.

It’s a lot to juggle! But I’m managing to overcome choice paralysis and assemble a very exciting line-up indeed. Trust me—you won’t want to miss this!

Early bird tickets are available until February 28th. That’s just a few days away. I recommend getting your tickets now—you won’t regret it!

Quite a few people are bringing their entire teams, which is perfect. UX London can be both an educational experience and a team-bonding exercise. Let’s face it, it’s been too long since any of us have had a good off-site.

If you’re one of those lucky people who’s coming along (or if you’re planning to), I’m curious: given the themes mentioned above, are there specific topics that you’d hope to see covered? Drop me a line and let me know.

Also, if you read the description of the event and think “Oh, I know the perfect speaker!” then I’d love to hear from you. Maybe that speaker is you. (Although, cards on the table; if you look like me—another middle-aged white man—I may take some convincing.)

Right. Time to get back to my crazy wall of conference curation.

02022-02-22

Eleven years ago, I made a prediction:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

One year later, Matt called me on it and the prediction officially became a bet:

We’re playing for $1000. If I win, that money goes to the Bletchley Park Trust. If Matt wins, it goes to The Internet Archive.

I’m very happy to lose this bet.

When I made the original prediction eleven years ago that a URL on the longbets.org site would no longer be available, I did so in a spirit of mischief—it was a deliberately meta move. But it was also informed by a genuine feeling of pessimism around the longevity of links on the web. While that pessimism was misplaced in this case, it was informed by data.

The lifetime of a URL on the web remains shockingly short. What I think has changed in the intervening years is that people may have become more accustomed to the situation. People used to say “once something is online it’s there forever!”, which infuriated me because the real problem is the exact opposite: if you put something online, you have to put in real effort to keep it online. After all, we don’t really buy domain names; we just rent them. And if you publish on somebody else’s domain, you’re at their mercy: Geocities, MySpace, Facebook, Medium, Twitter.

These days my view towards the longevity of online content has landed somewhere in the middle of the two dangers. There’s a kind of Murphy’s Law around data online: anything that you hope will stick around will probably disappear and anything that you hope will disappear will probably stick around.

One huge change in the last eleven years that I didn’t anticipate is the migration of websites to HTTPS. The original URL of the prediction used HTTP. I’m glad to see that original URL now redirects to a more secure protocol. Just like most of the World Wide Web. I think we can thank Let’s Encrypt for that. But I think we can also thank Edward Snowden. We are no longer as innocent as we were eleven years ago.

I think if I could tell my past self that most of the web would be using HTTPS by 2022, my past self would be very surprised …’though not as surprised at discovering that time travel had also apparently been invented.

The Internet Archive has also been a game-changer for digital preservation. While it’s less than ideal that something isn’t reachable at its original URL, knowing that there’s probably a copy of the content at archive.org lessens the sting considerably. I couldn’t be happier that this fine institution is the recipient of the stakes of this bet.

Priority of design inputs

As you may already know, I’m a nerd for design principles. I collect them. I did a podcast episode on them. I even have a favourite design principle. It’s from the HTML design principles. The priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

It’s all about priorities, see?

Prioritisation isn’t easy, and it gets harder the more factors come into play: user needs, business needs, technical constraints. But it’s worth investing the time to get agreement on the priority of your constituencies. And then formulate that agreement into design principles.

Jason is also a fan of the priority of constituencies. He recently wrote about applying it to design systems and came up with this:

User needs come before the needs of component consumers, which come before the needs of component developers, which come before the needs of the design system team, which come before theoretical purity.

That got me thinking about how this framing could be applied to other areas, like design.

Designers are used to juggling different needs (or constituencies): user needs, business needs, and so on. But what I’m interested in is how designers weigh up different inputs into the design process.

The obvious inputs are the insights you get from research. But even that can be divided into different categories. There’s qualitative research (talking to people) and quantitative research (sifting through numbers). Which gets higher priority?

There are other inputs too. Take best practices. If there’s a tried and tested solution to a problem, should that take priority over something new and untested? Maybe another way of phrasing it is to call it experience (whether that’s the designer’s own experience or the collective experience of the industry).

And though we might not like to acknowledge it because it doesn’t sound very scientific, gut instinct is another input into the design process. Although maybe that’s also related to experience.

Finally, how do you prioritise stakeholder wishes? What do you do if the client or the boss wants something that conflicts with user needs?

I could imagine a priority of design inputs that looks like this:

Qualitative research over quantitative research over stakeholder wishes over best practices over gut instinct.

But that could change over time. Maybe an experienced designer can put their gut instinct higher in the list, overruling best practices and stakeholder wishes …and maybe even some research insights? I don’t know.

I’ve talked before about how design principles should be reversible in a different context. The original priority of constituencies, for example, applies to HTML. But if you were to invert it, it would work for XML. Different projects have different priorities.

I could certainly imagine company cultures where stakeholder wishes take top billing. There are definitely companies that value quantitative research (data and analytics) above qualitative research (user interviews), and vice-versa.

Is a priority of design inputs something that should change from project to project? If so, maybe it would be good to hammer it out in the discovery phase so everyone’s on the same page.

Anyway, I’m just thinking out loud here. This is something I should chat more about with my colleagues to get their take.

Design research on the Clearleft podcast

We’re halfway through the third season of the Clearleft podcast already!

Episode three is all about design research. I like the narrative structure of this. It’s a bit like a whodunnit, but it’s more like a whydunnit. The “why” question is “why aren’t companies hiring more researchers?”

The scene of the crime is this year’s UX Fest, where talks by both Teresa Torres and Gregg Bernstein uncovered the shocking lack of researchers. From there, I take up the investigation with Maite Otondo and Stephanie Troeth.

I won’t spoil it but by the end there’s an answer to the mystery.

I learned a lot along the way too. I realised how many axes of research there are. There’s qualitative research (stories, emotion, and context) and then there’s quantitative research (volume and data). But there’s also evaluative research (testing a hypothesis) and generative research (exploring a problem space before creating a solution). By my count that gives four possible combos: qualitative evaluative research, quantitative evaluative research, qualitative generative research, and quantitative generative research. Phew!

Steph was a terrific guest. Only a fraction of our conversation made it into the episode, but we chatted for ages.

And Maite kind of blew my mind too, especially when she was talking about the relationship between research and design and she said:

Research is about the present and design is about the future.

🤯

I’m going to use that quote again in a future episode. In fact, this episode on design research leads directly into the next two episodes. You won’t want to miss them. So if you’re not already subscribed to the Clearleft podcast, you should get on that, whether it’s via the RSS feed, Apple, Google, Spotify, Overcast, or wherever you get your podcasts from.

Have a listen to this episode on design research and if you’re a researcher yourself, remember that unlike most companies we value research at Clearleft and that’s why we’re hiring another researcher right now. Come and work with us!

A Few Notes on A Few Notes on The Culture

When I post a link, I do it for two reasons.

First of all, it’s me pointing at something and saying “Check this out!”

Secondly, it’s a way for me to stash something away that I might want to return to. I tag all my links so when I need to find one again, I just need to think “Now what would past me have tagged it with?” Then I type the appropriate URL: adactio.com/links/tags/whatever

There are some links that I return to again and again.

Back in 2008, I linked to a document called A Few Notes on The Culture. It’s a copy of a post by Iain M Banks to a newsgroup back in 1994.

Alas, that link is dead. Linkrot, innit?

But in 2013 I linked to the same document on a different domain. That link still works even though I believe it was first published around twenty(!) years ago (view source for some pre-CSS markup nostalgia).

Anyway, A Few Notes On The Culture is a fascinating look at the world-building of Iain M Banks’s Culture novels. He talks about the in-world engineering, education, biology, and belief system of his imagined utopia. The part that sticks in my mind is when he talks about economics:

Let me state here a personal conviction that appears, right now, to be profoundly unfashionable; which is that a planned economy can be more productive - and more morally desirable - than one left to market forces.

The market is a good example of evolution in action; the try-everything-and-see-what-works approach. This might provide a perfectly morally satisfactory resource-management system so long as there was absolutely no question of any sentient creature ever being treated purely as one of those resources. The market, for all its (profoundly inelegant) complexities, remains a crude and essentially blind system, and is — without the sort of drastic amendments liable to cripple the economic efficacy which is its greatest claimed asset — intrinsically incapable of distinguishing between simple non-use of matter resulting from processal superfluity and the acute, prolonged and wide-spread suffering of conscious beings.

It is, arguably, in the elevation of this profoundly mechanistic (and in that sense perversely innocent) system to a position above all other moral, philosophical and political values and considerations that humankind displays most convincingly both its present intellectual immaturity and — through grossly pursued selfishness rather than the applied hatred of others — a kind of synthetic evil.

Those three paragraphs might be the most succinct critique of unfettered capitalism I’ve come across. The invisible hand as a paperclip maximiser.

Like I said, it’s a fascinating document. In fact I realised that I should probably store a copy of it for myself.

I have a section of my site called “extras” where I dump miscellaneous stuff. Most of it is unlinked. It’s mostly for my own benefit. That’s where I’ve put my copy of A Few Notes On The Culture.

Here’s a funny thing …for all the times that I’ve revisited the link, I never knew anything about the site it was hosted on—vavatch.co.uk—so this most recent time, I did a bit of clicking around. Clearly it’s the personal website of a sci-fi-loving college student from the early 2000s. But what came as a revelation to me was that the site belonged to …Adrian Hon!

I’m impressed that he kept his old website up even after moving over to the domain mssv.net, founding Six To Start, and writing A History Of The Future In 100 Objects. That’s a great snackable book, by the way. Well worth a read.

Hope

My last long-distance trip before we were all grounded by The Situation was to San Francisco at the end of 2019. I attended Indie Web Camp while I was there, which gave me the opportunity to add a little something to my website: an “on this day” page.

I’m glad I did. While it’s probably of little interest to anyone else, I enjoy scrolling back to see how the same date unfolded over the years.

’Sfunny, when I look back at older journal entries they’re often written out of frustration, usually when something in the dev world is bugging me. But when I look back at all the links I’ve bookmarked the vibe is much more enthusiastic, like I’m excitedly pointing at something and saying “Check this out!” I feel like sentiment analyses of those two sections of my site would yield two different results.

But when I scroll down through my “on this day” page, it also feels like descending deeper into the dark waters of linkrot. For each year back in time, the probability of a link still working decreases until there’s nothing but decay.

Sadly this is nothing new. I’ve been lamenting the state of digital preservation for years now. More recently Jonathan Zittrain penned an article in The Atlantic on the topic:

Too much has been lost already. The glue that holds humanity’s knowledge together is coming undone.

In one sense, linkrot is the price we pay for the web’s particular system of hypertext. We don’t have two-way linking, which means there’s no centralised repository of links which would be prohibitively complex to maintain. So when you want to link to something on the web, you just do it. An a element with an href attribute. That’s it. You don’t need to check with the owner of the resource you’re linking to. You don’t need to check with anyone. You have complete freedom to link to any URL you want to.

But it’s that same simple system that makes the act of linking a gamble. If the URL you’ve linked to goes away, you’ll have no way of knowing.

As I scroll down my “on this day” page, I come across more and more dead links that have been snapped off from the fabric of the web.

If I stop and think about it, it can get quite dispiriting. Why bother making hyperlinks at all? It’s only a matter of time until those links break.

And yet I still keep linking. I still keep pointing to things and saying “Check this out!” even though I know that over a long enough timescale, there’s little chance that the link will hold.

In a sense, every hyperlink on the World Wide Web is a little act of hope. Even though I know that when I link to something, it probably won’t last, I still harbour that hope.

If hyperlinks are built on hope, and the web is made of hyperlinks, then in a way, the World Wide Web is quite literally made out of hope.

I like that.

Ampvisory

I was very inspired by something Terence Eden wrote on his blog last year. A report from the AMP Advisory Committee Meeting:

I don’t like AMP. I think that Google’s Accelerated Mobile Pages are a bad idea, poorly executed, and almost-certainly anti-competitive.

So, I decided to join the AC (Advisory Committee) for AMP.

Like Terence, I’m not a fan of Google AMP—my initially positive reaction to it soured over time as it became clear that Google were blackmailing publishers by privileging AMP pages in Google Search. But all I ever did was bitch and moan about it on my website. Terence actually did something.

So this year I put myself forward as a candidate for the AMP advisory committee. I have no idea how the election process works (or who does the voting) but thanks to whoever voted for me. I’m now a member of the AMP advisory committee. If you look at that blog post announcing the election results, you’ll see the brief blurb from everyone who was voted in. Most of them are positively bullish on AMP. Mine is not:

Jeremy Keith is a writer and web developer dedicated to an open web. He is concerned that AMP is being unfairly privileged by Google’s search engine instead of competing on its own merits.

The good news is that my main beef with AMP is already being dealt with. I wanted exactly what Terence said:

My recommendation is that Google stop requiring that organisations use Google’s proprietary mark-up in order to benefit from Google’s promotion.

That’s happening as of May of this year. Just as well—the AMP advisory committee have absolutely zero influence on Google search. I’m not sure how much influence we have at all really.

This is an interesting time for AMP …whatever AMP is.

See, that’s been a problem with Google AMP from the start. There are multiple definitions of what AMP is. At the outset, it seemed pretty straightforward. AMP is a format. It has a doctype and rules that you have to meet in order to be “valid” AMP. Part of that ruleset involved eschewing HTML elements like img and video in favour of web components like amp-img and amp-video.

That messaging changed over time. We were told that AMP is the collection of web components. If that’s the case, then I have no problem at all with AMP. People are free to use the components or not. And if the project produces performant accessible web components, then that’s great!

But right now it’s not at all clear which AMP people are talking about, even in the advisory committee. When we discuss improving AMP, do we mean the individual components or the set of rules that qualify an AMP page being “valid”?

The use-case for AMP-the-format (as opposed to AMP-the-library-of-components) was pretty clear. If you were a publisher and you wanted to appear in the top stories carousel in Google search, you had to publish using AMP. Just using the components wasn’t enough. Your pages had to be validated as AMP-the-format.

That’s no longer the case. From May, pages that are fast enough will qualify for the top stories carousel. What will publishers do then? Will they still maintain separate AMP-the-format pages? Time will tell.

I suspect publishers will ditch AMP-the-format, although it probably won’t happen overnight. I don’t think anyone likes being blackmailed by a search engine:

An engineer at a major news publication who asked not to be named because the publisher had not authorized an interview said Google’s size is what led publishers to use AMP.

The pre-rendering (along with the lightning bolt) that happens for AMP pages in Google search might be a reason for publishers to maintain their separate AMP-the-format pages. But I suspect publishers don’t actually think the benefits of pre-rendering outweigh the costs: pre-rendered AMP-the-format pages are served from Google’s servers with a Google URL. If anything, I think that publishers will look forward to having the best of both worlds—having their pages appear in the top stories carousel, but not having their pages hijacked by Google’s so-called-cache.

Does AMP-the-format even have a future without Google search propping it up? I hope not. I think it would make everything much clearer if AMP-the-format went away, leaving AMP-the-collection-of-components. We’d finally see these components being evaluated on their own merits—usefulness, performance, accessibility—without unfair interference.

So my role on the advisory committee so far has been to push for clarification on what we’re supposed to be advising on.

I think it’s good that I’m on the advisory committee, although I imagine my opinions could easily be dismissed given my public record of dissent. I may well be fooling myself though, like those people who go to work at Facebook and try to justify it by saying they can accomplish more from inside than outside (or whatever else they tell themselves to sleep at night).

The topic I’ve volunteered to help with is somewhat existential in nature: what even is AMP? I’m happy to spend some time on that. I think it’ll be good for everyone to try to get that sorted, regardless about how you feel about the AMP project.

I have no intention of giving any of my unpaid labour towards the actual components themselves. I know AMP is theoretically open source now, but let’s face it, it’ll always be perceived as a Google-led project so Google can pay people to work on it.

That said, I’ve also recently joined a web components community group that Lea instigated. Remember she wrote that great blog post recently about the failed promise of web components? I’m not sure how much I can contribute to the group (maybe some meta-advice on the nature of good design principles?) but at the very least I can serve as a bridge between the community group and the AMP advisory committee.

After all, AMP is a collection of web components. Maybe.