Journal tags: links

Accessibility audits for all

It’s often said that it’s easier to make a fast website than it is to keep a website fast. Things slip through. If you’re not vigilant, performance can erode without you noticing.

It’s a similar story for other invisible but important facets of your website: privacy, security, accessibility. Because they’re hidden from view, you won’t be able to see if there’s a regression.

That’s why it’s a good idea to have regular audits for performance, privacy, security, and accessibility.

I wrote about accessibility testing a while back, and how there’s quite a bit that you can do for yourself before calling in an expert to look at the really gnarly stuff:

When you commission an accessibility audit, you want to make sure you’re getting the most out of it. Don’t squander it on issues that you can catch and fix yourself. Make sure that the bulk of the audit is being spent on the specific issues that are unique to your site.

I recently did an internal audit of the Clearleft website. After writing up the report, I also did a lunch’n’learn to share my methodology. I wanted to show that there’s some low-hanging fruit that pretty much anyone can catch.

To start with, there’s keyboard navigation. Put your mouse and trackpad to one side and use the tab key to navigate around.

Caveat: depending on what browser you’re using, you might need to update some preferences for keyboard navigation to work on links. If you’re using Safari, go to “Preferences”, then “Advanced”, and tick “Press Tab to highlight each item on a web page.”

Tab around and find out. You should see some nice chunky :focus-visible styles on links and form fields.
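
For the curious, here’s a minimal sketch of what “chunky” focus styles might look like in CSS (the values are my own illustration, not the actual styles from any particular site):

<style>
/* Clearly visible focus indicators for keyboard users.
   :focus-visible applies to keyboard focus, not mouse clicks. */
a:focus-visible,
button:focus-visible,
input:focus-visible {
  outline: 0.25em solid currentColor;
  outline-offset: 0.2em;
}
</style>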

Here’s something else that anyone can do: zoom in. Increase the magnification to 200%. Everything should scale proportionally. How about 500%? You’ll probably see a mobile-friendly layout. That’s fine. As long as nothing is broken or overlapping, you’re good.

At this point, I reach for some tools. I’ve got some bookmarklets that do similar things: tota11y and ANDI. They both examine the source HTML and CSS to generate reports on structure, headings, images, forms, and so on.

These tools are really useful, but you need to be able to interpret the results. For example, a tool can tell you if an image has no alt text. But it can’t tell you if an image has good or bad alt text.

Likewise, these tools are great for catching colour-contrast issues. But there’s a big difference between a colour-contrast issue on the body copy compared to a colour-contrast issue on one unimportant page element.

I think that demonstrates the most important aspect of any audit: prioritisation.

Finding out that you have accessibility issues isn’t that useful if they’re all presented as an undifferentiated list. What you really need to know are which issues are the most important to fix.

By the way, I really like the way that the Gov.uk team prioritises accessibility concerns:

The team puts accessibility concerns in 2 categories:

  1. Theoretical: A question or statement regarding the accessibility of an implementation within the Design System without evidence of real-world impact.
  2. Evidenced: Sharing new research, data or evidence showing that an implementation within the Design System could cause barriers for disabled people.

The team will usually prioritise evidenced issues and queries over theoretical ones.

When I wrote up my audit for the Clearleft website, I structured it in order of priority. The most important things to fix are at the start of the audit. I also used a simple scale for classifying the severity of issues: low, medium, and high priority.

Thankfully there were no high-priority issues. There were a couple of medium-priority issues. There were plenty of low-priority issues. That’s okay. That’s a pretty good distribution.

If you’re interested, here’s the report I delivered…

Accessibility audit on clearleft.com

Colour contrast

There are a few issues with the pink colour. When it’s used on a grey background, or when it’s used as a background colour for white text, the colour contrast isn’t high enough.

The SVG arrow icon could be improved too.

Recommendations

Medium priority
  • Change the pink colour universally to be darker. The custom property --red is currently rgb(234, 33, 90). Change it to rgb(210, 20, 73) (thanks, James!)
Low priority
  • The SVG arrow icon currently uses currentColor. Consider hardcoding solid black (or a very, very dark grey) instead.
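
In CSS terms, the medium-priority fix amounts to a one-line change to the custom property. Here’s a sketch; the :root and .arrow selectors are assumptions for illustration:

<style>
/* Medium priority: darken the pink universally. */
:root {
  --red: rgb(210, 20, 73); /* was rgb(234, 33, 90) */
}
/* Low priority: a very dark grey on the SVG arrow icon
   instead of currentColor (.arrow is a hypothetical class name). */
.arrow {
  color: rgb(20, 20, 20);
}
</style>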

Images

Alt text is improving on the site. There’s reasonable alt text on the top-level pages and the first screen’s worth of case studies and blog posts. I made a sweep through these pages a while back to improve the alt text, but I haven’t done older blog posts and case studies.

Recommendations

Medium priority
  • Make a sweep of older blog posts and case studies and fix alt text.
Low priority
  • Images on the contact page have alt text that starts with “A photo of…” — this is redundant and can be removed.

Headings

The site is using headings sensibly. Sometimes the nesting of headings isn’t perfect, but this is a low priority issue. For example, on the contact page there’s an h1 followed by two h3s. In theory this isn’t correct. In practice (for screen reader users) it’s not an issue.

Recommendations

Low priority
  • On the home page, “UX London 2023” should probably be h3 instead of h1.
  • On the case studies index page we’re currently using h3 headings for the industry sector (“Charities”, “Education” etc.) but these should probably not be headings at all. On the blog index page we use a class “Tags” for a similar purpose. Consider reusing that pattern on the case studies index page.
  • On the about index page, “We’re driven to be” is an h3 and the subsequent three headings are h2s. Ideally this would be reversed: a single h2 followed by three h3s.
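
That last recommendation would look something like this (the three headings beneath are placeholders):

<!-- One h2 for the lead-in… -->
<h2>We’re driven to be</h2>
<!-- …followed by h3s for the three items beneath it. -->
<h3>…</h3>
<h3>…</h3>
<h3>…</h3>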

Link text

Sometimes the same text is used for different links.

Recommendations

Low priority
  • On the home page the text “Read the case study” is re-used for multiple links. It would be better if each link were different e.g. “Read about The Natural History Museum.”

Forms

The only form on the site is the newsletter sign-up form. It’s marked up pretty well: the input has an associated label, although a visible (clickable) label would be better.
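
The recommended pattern looks something like this (a sketch, with assumed values for the action, name, and id attributes):

<form method="post" action="/subscribe">
<label for="email">Email address</label>
<input type="email" id="email" name="email">
<button>Sign up</button>
</form>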

Tabbing order

The site doesn’t use JavaScript to mess with tabbing order for keyboard users. The source order of elements in the markup generally makes sense so all is good.

The focus styles are nice and clear too!

Structure

The site is using HTML landmark elements sensibly (header, nav, main, footer, etc.).

Tweaking navigation labelling

I’ve always liked the idea that your website can be your API. Like, you’ve already got URLs to identify resources, so why not make that URL structure predictable and those resources parsable?

That’s why the (read-only) API for The Session doesn’t live at a separate subdomain. It uses the same URL structure as the regular site, but you can request the resources in an alternative format: JSON, XML, RSS.

This works out pretty well, mostly because I put a lot of thought into the URL structure of the site. I’m something of a URL fetishist, but I think that taking a URL-first approach to information architecture can be a good exercise.

Most of the resources on The Session involve nouns like tunes, events, discussions, and so on. There’s a consistent and predictable structure to the URLs for those sections:

  • /things
  • /things/new
  • /things/search

And then an individual item can be found at:

  • /things/ID

That’s all nice and predictable and the naming of the URLs matches what you’d expect to find:

Tunes, events, discussions, sessions. Those are all fine. But there’s one section of the site that has this root URL:

/recordings

When I was coming up with the URL structure twenty years ago, it was clear what you’d find there: track listings for albums of music. No one would’ve expected to find actual recordings of music available to listen to on-demand. The bandwidth constraints and technical limitations of the time made that clear.

Two decades on, the situation has changed. Now someone new to the site might well expect to hit a link called “recordings” and expect to hear actual recordings of music.

So I should probably change the label on the link. I don’t think “albums” is quite right—what even is an album any more? The word “discography” is probably the most appropriate label.

Here’s my dilemma: if I update the label, should I also update the URL structure?

Right now, the section of the site with /tunes URLs is labelled “tunes”. The section of the site with /events URLs is labelled “events”. Currently the section of the site with /recordings URLs is labelled “recordings”, but may soon be labelled “discography”.

If you click on “tunes”, you end up at /tunes. But if you click on “discography”, you end up at /recordings.

Is that okay? Am I the only one that would be bothered by that?

I could update the URLs to match the labelling (with redirects for the old URLs, of course), but I’m not so keen on this URL structure:

  • /discography
  • /discography/new
  • /discography/search
  • /discography/ID

It doesn’t seem as tidy as:

  • /recordings
  • /recordings/new
  • /recordings/search
  • /recordings/ID

But if I don’t update the URLs to match the label, then I’m just going to have to live with the mismatch.

I’m just thinking out loud here. I think I should definitely update the label. I just won’t make any decision on changing URLs for a while yet.

Links for Declarative Design

At the end of next week, I will sally forth to California. I’m going to wend my way to San Francisco where I will be speaking at An Event Apart.

I am very much looking forward to speaking at my first in-person AEAs in exactly three years. That was also in San Francisco, right before The Situation.

I hope to see you there. There are still tickets available.

I’ve put together a brand new talk that I’m very excited about. I’ve already written about the prep for this talk:

So while I’ve been feeling somewhat under the gun as I’ve been preparing this new talk for An Event Apart, I’ve also been feeling that the talk is just the culmination; a way of tying together some stuff I’ve been writing about here for the past year or two.

The talk is called Declarative Design. Here’s the blurb:

Different browsers, different devices, different network speeds…designing for the web can feel like a never-ending battle for control. But what if the solution is to relinquish control? Instead of battling the unknowns, we can lean into them. In the world of programming, there’s the idea of declarative languages: describing what you want to achieve without specifying the exact steps to get there. In this talk, we’ll take this concept of declarative programming and apply it to designing for the web. Instead of focusing on controlling the outputs of the design process, we’ll look at creating the right inputs instead. Leave the final calculations for the outputs to the browser—that’s what computers are good at. We’ll look at CSS features, design systems, design principles, and more. Then you’ll be ready to embrace the fluid, ever-changing, glorious messiness of the World Wide Web!

If you’d like a glimpse into the inside of my head while I’ve been preparing this talk, here’s a linkdump of various resources that are either mentioned in the talk or influenced it…

Declarative Design

HTML

CSS

Design Tools

Design systems

History

People

Alternative stylesheets

My website has different themes you can choose from. I don’t just mean a dark mode. These themes all look very different from one another.

I assume that 99.99% of people just see the default theme, but I keep the others around anyway. Offering different themes was originally intended as a way of showcasing the power of CSS, and specifically the separation of concerns between structure and presentation. I started doing this before the CSS Zen Garden was created. Dave really took it to the next level by showing how the same HTML document could be styled in an infinite number of ways.

Each theme has its own stylesheet. I’ve got a very simple little style switcher on every page of my site. Selecting a different theme triggers a page refresh with the new styles applied and sets a cookie to remember your preference.

I also list out the available stylesheets in the head of every page using link elements that have rel values of alternate and stylesheet together. Each link element also has a title attribute with the name of the theme. That’s the standard way to specify alternative stylesheets.
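
The markup in the head looks something like this (the file names and theme titles here are placeholders):

<link rel="stylesheet" href="/css/default.css" title="Default">
<link rel="alternate stylesheet" href="/css/theme-one.css" title="Theme One">
<link rel="alternate stylesheet" href="/css/theme-two.css" title="Theme Two">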

In Firefox you can switch between the specified stylesheets from the View menu by selecting Page Style (notice that there’s also a No style option—very handy for checking your document structure).

Other browsers like Chrome and Safari don’t do anything with the alternative stylesheets. But they don’t ignore them.

Every browser makes a network request for each alternative stylesheet. The request is non-blocking and seems to be low priority, which is good, but I’m somewhat perplexed by the network request being made at all.

I get why Firefox is requesting those stylesheets. It’s similar to requesting a print stylesheet. Even if the network were to drop, you still want those styles available to the user.

But I can’t think of any reason why Chrome or Safari would download the alternative stylesheets.

Speaking at CSS Day 2022

I’m very excited about speaking at CSS Day this year. My talk is called In And Out Of Style:

It’s an exciting time for CSS! It feels like new features are being added every day. And yet, through it all, CSS has managed to remain an accessible language for anyone making websites. Is this an inevitable part of the design of CSS? Or has CSS been formed by chance? Let’s take a look at the history—and some alternative histories—of the World Wide Web to better understand where we are today. And then, let’s cast our gaze to the future!

Technically, CSS Day won’t be the first outing for this talk but it will be the in-person debut. I had the chance to give the talk online last week at An Event Apart. Giving a talk online isn’t quite the same as speaking on stage, but I got enough feedback from the attendees that I’m feeling confident about giving the talk in Amsterdam. It went down well with the audience at An Event Apart.

If the description has you intrigued, come along to CSS Day to hear the talk in person. And if you like the subject matter, I’ve put together these links to go with the talk…

Blog posts

Presentations

Proposals (email)

Papers (PDF)

People (Wikipedia)

A bug with progressive web apps on iOS

Dave recently wrote some good advice about what to do—and what not to do—when it comes to complaining about web browsers. I wrote something on this topic a little while back:

If there’s something about a web browser that you’re not happy with (or, indeed, if there’s something you’re really happy with), take the time to write it down and publish it

To summarise Dave’s advice, avoid conspiracy theories and snark; stick to specifics instead.

It’s very good advice that I should heed (especially the bit about avoiding snark). In that spirit, I’d like to document what I think is a bug on iOS.

I don’t need to name the specific browser, because there is basically only one browser allowed on iOS. That’s not snark; that’s a statement of fact.

This bug involves navigating from a progressive web app that has been installed on your home screen to an external web view.

To illustrate the bug, I’ll use the example of The Session. If you want to recreate the bug, you’ll need to have an account on The Session. Let me know if you want to set up a temporary account—I can take care of deleting it afterwards.

Here are the steps:

  1. Navigate to thesession.org in Safari on an iOS device.
  2. Add the site to your home screen.
  3. Open the installed site from your home screen—it will launch in standalone mode.
  4. Log in with your username and password.
  5. Using the site menu, navigate to the links section of the site.
  6. Click on any external link.
  7. After the external link opens in a web view, tap on “Done” to close the web view.

Expected behaviour: you are returned to the page you were on with no change of state.

Actual behaviour: you are returned to the page you were on but you are logged out.

So the act of visiting an external link in a web view while in a progressive web app in standalone mode seems to cause a loss of cookie-based authentication.

This isn’t permanent. Clicking on any internal link restores the logged-in state.

It is surprising though. My mental model for opening an external link in a web view is that it sits “above” the progressive web app, which remains in stasis “behind” it. But the page must actually be reloading, either when the web view is opened or when the web view is closed. And that reload is behaving like a fetch event without credentials.

Anyway, that’s my bug report. It may already be listed somewhere on the WebKit Bugzilla but I lack the deductive skills to find it. I’m not even sure if that’s the right place for this kind of bug. It might be specific to the operating system rather than the rendering engine.

This isn’t a high priority bug, but it is one of those cumulatively annoying software paper cuts.

Hope this helps!

Today, the distant future

It’s a bit of a cliché to talk about living in the future. It’s also a bit pointless. After all, any moment after the big bang is a future when viewed from any point in time before it.

Still, it’s kind of fun when a sci-fi date rolls around. Like in 2015 when we reached the time depicted in Back To The Future 2, or in 2019 when we reached the time of Blade Runner.

In 2022 we are living in the future of web standards. Again, technically, we’re always living in the future of any past discussion of web standards, but this year is significant …in a very insignificant way.

It all goes back to 2008 and an interview with Hixie, editor of the HTML5 spec at the WHATWG at the time. In it, he mentioned the date 2022 as the milestone for having two completely interoperable implementations.

The far more important—and ambitious—date was 2012, when HTML5 was supposed to become a Candidate Recommendation, which is standards-speak for done’n’dusted.

But the mere mention of the year 2022 back in the year 2008 was too much for some people. Jeff Croft, for example, completely lost his shit (Jeff had a habit of posting angry rants and then denying that he was angry or ranty, but merely having a bit of fun).

The whole thing was a big misunderstanding and soon irrelevant: talk of 2022 was dropped from HTML5 discussions. But for a while there, it was fascinating to see web designers and developers contemplate a year that seemed ludicrously futuristic. Jeff wrote:

God knows where I’ll be in 13 years. Quite frankly, I’ll be pretty fucking disappointed in myself (and our entire industry) if I’m writing HTML in 13 years.

That always struck me as odd. If I thought like that, I’d wonder what the point would be in making anything on the web to begin with (bear in mind that both my own personal website and The Session are now entering their third decade of life).

I had a different reaction to Jeff, as I wrote in 2010:

Many web developers were disgusted that such a seemingly far-off date was even being mentioned. My reaction was the opposite. I began to pay attention to HTML5.

But Jeff was far from alone. Scott Gilbertson wrote an angry article on Webmonkey:

If you’re thinking that planning how the web will look and work 13 years from now is a little bit ridiculous, you’re not alone.

Even if your 2022 ronc-o-matic web-enabled toaster (It slices! It dices! It browses! It arouses!) does ship with Firefox v22.3, will HTML still be the dominant language of web? Given that no one can really answer that question, does it make sense to propose a standard so far in the future?

(I’m re-reading that article in the current version of Firefox: 95.0.2.)

Brian Veloso wrote on his site:

Two-thousand-twenty-two. That’s 14 years from now. Can any of us think that far? Wouldn’t our robot overlords, whether you welcome them or not, have taken over by then? Will the internet even matter then?

From the comments on Jeff’s post, there’s Corey Dutson:

2022: God knows what the Internet will look like at that point. Will we even have websites?

Dan Rubin, who has indeed successfully moved from web work to photography, wrote:

I certainly don’t intend to be doing “web work” by that time. I’m very curious to see where the web actually is in 14 years, though I can’t imagine that HTML5 will even get that far; it’ll all be obsolete before 2022.

Joshua Works made a prediction that’s worryingly close to reality:

I’ll be surprised if website-as-HTML is still the preferred method for moving around the tons of data we create, especially in the manner that could have been predicted in 2003 or even today. Hell, iPods will be over 20 years old by then and if everything’s not run as an iPhone App, then something went wrong.

Someone with the moniker Grand Caveman wrote:

In 2022 I’ll be 34, and hopefully the internet will be obsolete by then.

Perhaps the most level-headed observation came from Jonny Axelsson:

The world in 2022 will be pretty much like the world in 2009.

The world in 2009 is pretty much like 1996 which was pretty much like the world in 1983 which was pretty much like the world in 1970. Some changes are fairly sudden, others are slow, some are dramatic, others subtle, but as a whole “pretty much the same” covers it.

The Web in 2022 will not be dramatically different from the Web in 2009. It will be less hot and it will be less cool. The Web is a project, and as it succeeds it will fade out of our attention and into the background. We don’t care about things when they work.

Now that’s a sensible perspective!

So who else is looking forward to seeing what the World Wide Web is like in 2036?

I must remember to write a blog post then and link back to this one. I have no intention of trying to predict the future, but I’m willing to bet that hyperlinks will still be around in 14 years.

Speaking of long bets…

Writing the Clearleft newsletter

The Clearleft newsletter goes out every two weeks on a Thursday. You can peruse the archive to see past editions.

I think it’s a really good newsletter, but then again, I’m the one who writes it. It just kind of worked out that way. In theory, anyone at Clearleft could write an edition of the newsletter.

To make that prospect less intimidating, I put together a document for my colleagues describing how I go about creating a new edition of the newsletter. Then I thought it might be interesting for other people outside of Clearleft to get a peek at how the sausage is made.

So here’s what I wrote…

Topics

The description of the newsletter is:

A round-up of handpicked hyperlinks from Clearleft, covering design, technology, and culture.

It usually has three links (maybe four, tops) on a single topic.

The topic can be anything that’s interesting, especially if it’s related to design or technology. Every now and then the topic can be something that incorporates an item that’s specifically Clearleft-related (a case study, an event, a job opening). In general it’s not very salesy at all so people will tolerate the occasional plug.

You can use the “iiiinteresting” Slack channel to find potential topics of interest. I’ve gotten in the habit of popping potential newsletter fodder in there, and then adding related links in a thread.

Tone

Imagine you’re telling a friend about something cool you’ve just discovered. You can sound excited. Don’t worry about this looking unprofessional—it’s better to come across as enthusiastic than too robotic. You can put real feelings on display: anger, disappointment, happiness.

That said, you can also just stick to the facts and describe each link in turn, letting the content speak for itself.

If you’re expressing a feeling or an opinion, use the personal pronoun “I”. Don’t use “we” unless you’re specifically referring to Clearleft.

But most of the time, you won’t be using any pronouns at all:

So-and-so has written an article in such-and-such magazine on this-particular-topic.

You might find it useful to have connecting phrases as you move from link to link:

Speaking of some-specific-thing, this-other-person has a different viewpoint.

or

On the subject of this-particular-topic, so-and-so wrote something about this a while back.

Structure

The format of the newsletter is:

  1. An introductory sentence or short paragraph.
  2. A sentence describing the first link, ending with the title of the item in bold.
  3. A link to the item on its own separate line.
  4. An excerpt from the link, usually a sentence or two, styled as a quote.
  5. Repeat steps 2 to 4 another two times.


Take a look through the archive of previous newsletters to get a feel for it.

Subject line

Currently the newsletter is called dConstruct from Clearleft. The subject line of every edition is in the format:

dConstruct from Clearleft — Title of the edition

(Note that’s an em dash with a space on either side of it separating the name of the newsletter and the title of the edition)

I often try to come up with a pun-based title (often a punny portmanteau) but that’s not necessary. It should be nice and short though: just one or two words.

A Few Notes on A Few Notes on The Culture

When I post a link, I do it for two reasons.

First of all, it’s me pointing at something and saying “Check this out!”

Secondly, it’s a way for me to stash something away that I might want to return to. I tag all my links so when I need to find one again, I just need to think “Now what would past me have tagged it with?” Then I type the appropriate URL: adactio.com/links/tags/whatever

There are some links that I return to again and again.

Back in 2008, I linked to a document called A Few Notes on The Culture. It’s a copy of a post by Iain M Banks to a newsgroup back in 1994.

Alas, that link is dead. Linkrot, innit?

But in 2013 I linked to the same document on a different domain. That link still works even though I believe it was first published around twenty(!) years ago (view source for some pre-CSS markup nostalgia).

Anyway, A Few Notes On The Culture is a fascinating look at the world-building of Iain M Banks’s Culture novels. He talks about the in-world engineering, education, biology, and belief system of his imagined utopia. The part that sticks in my mind is when he talks about economics:

Let me state here a personal conviction that appears, right now, to be profoundly unfashionable; which is that a planned economy can be more productive - and more morally desirable - than one left to market forces.

The market is a good example of evolution in action; the try-everything-and-see-what-works approach. This might provide a perfectly morally satisfactory resource-management system so long as there was absolutely no question of any sentient creature ever being treated purely as one of those resources. The market, for all its (profoundly inelegant) complexities, remains a crude and essentially blind system, and is — without the sort of drastic amendments liable to cripple the economic efficacy which is its greatest claimed asset — intrinsically incapable of distinguishing between simple non-use of matter resulting from processal superfluity and the acute, prolonged and wide-spread suffering of conscious beings.

It is, arguably, in the elevation of this profoundly mechanistic (and in that sense perversely innocent) system to a position above all other moral, philosophical and political values and considerations that humankind displays most convincingly both its present intellectual immaturity and — through grossly pursued selfishness rather than the applied hatred of others — a kind of synthetic evil.

Those three paragraphs might be the most succinct critique of unfettered capitalism I’ve come across. The invisible hand as a paperclip maximiser.

Like I said, it’s a fascinating document. In fact I realised that I should probably store a copy of it for myself.

I have a section of my site called “extras” where I dump miscellaneous stuff. Most of it is unlinked. It’s mostly for my own benefit. That’s where I’ve put my copy of A Few Notes On The Culture.

Here’s a funny thing …for all the times that I’ve revisited the link, I never knew anything about the site it was hosted on—vavatch.co.uk—so this most recent time, I did a bit of clicking around. Clearly it’s the personal website of a sci-fi-loving college student from the early 2000s. But what came as a revelation to me was that the site belonged to …Adrian Hon!

I’m impressed that he kept his old website up even after moving over to the domain mssv.net, founding Six To Start, and writing A History Of The Future In 100 Objects. That’s a great snackable book, by the way. Well worth a read.

Hope

My last long-distance trip before we were all grounded by The Situation was to San Francisco at the end of 2019. I attended Indie Web Camp while I was there, which gave me the opportunity to add a little something to my website: an “on this day” page.

I’m glad I did. While it’s probably of little interest to anyone else, I enjoy scrolling back to see how the same date unfolded over the years.

’Sfunny, when I look back at older journal entries they’re often written out of frustration, usually when something in the dev world is bugging me. But when I look back at all the links I’ve bookmarked the vibe is much more enthusiastic, like I’m excitedly pointing at something and saying “Check this out!” I feel like sentiment analyses of those two sections of my site would yield two different results.

But when I scroll down through my “on this day” page, it also feels like descending deeper into the dark waters of linkrot. For each year back in time, the probability of a link still working decreases until there’s nothing but decay.

Sadly this is nothing new. I’ve been lamenting the state of digital preservation for years now. More recently Jonathan Zittrain penned an article in The Atlantic on the topic:

Too much has been lost already. The glue that holds humanity’s knowledge together is coming undone.

In one sense, linkrot is the price we pay for the web’s particular system of hypertext. We don’t have two-way linking, which means there’s no centralised repository of links, something that would be prohibitively complex to maintain. So when you want to link to something on the web, you just do it. An a element with an href attribute. That’s it. You don’t need to check with the owner of the resource you’re linking to. You don’t need to check with anyone. You have complete freedom to link to any URL you want to.

But it’s that same simple system that makes the act of linking a gamble. If the URL you’ve linked to goes away, you’ll have no way of knowing.

As I scroll down my “on this day” page, I come across more and more dead links that have been snapped off from the fabric of the web.

If I stop and think about it, it can get quite dispiriting. Why bother making hyperlinks at all? It’s only a matter of time until those links break.

And yet I still keep linking. I still keep pointing to things and saying “Check this out!” even though I know that over a long enough timescale, there’s little chance that the link will hold.

In a sense, every hyperlink on the World Wide Web is a little act of hope. Even though I know that when I link to something, it probably won’t last, I still harbour that hope.

If hyperlinks are built on hope, and the web is made of hyperlinks, then in a way, the World Wide Web is quite literally made out of hope.

I like that.

The State of the Web — the links

An Event Apart Spring Summit is happening right now. I opened the show yesterday with a talk called The State Of The Web:

The World Wide Web has come a long way in its three decades of existence. There’s so much we can do now with HTML, CSS, and JavaScript: animation, layout, powerful APIs… we can even make websites that work offline! And yet the web isn’t exactly looking rosy right now. The problems we face aren’t technical in nature. We’re facing a crisis of expectations: we’ve convinced people that the web is slow, buggy, and inaccessible. But it doesn’t have to be this way. There is no fate but what we make. In this perspective-setting talk, we’ll go on a journey to the past, present, and future of web design and development. You’ll laugh, you’ll cry, and by the end, you’ll be ready to make the web better.

I wrote about preparing this talk and you can see the outline on Kinopio. I thought it turned out well, but I never actually know until people see it. So I’m very gratified and relieved that it went down very well indeed. Phew!

Eric and the gang at An Event Apart asked for a round-up of links related to this talk and I was more than happy to oblige. I’ve separated them into some of the same categories that the talk covers.

I know that these look like a completely disconnected grab-bag of concepts—you’d have to see the talk to get the connections. But even without context, these are some rabbit holes you can dive down…

Apollo 8

Hypertext

The World Wide Web

NASA

This (somewhat epic) slidedeck is done.

Associative trails

Matt wrote recently about how different writers keep notes:

I’m also reminded of how writers I love and respect maintain their own reservoirs of knowledge, complete with migratory paths down from the mountains.

I have a section of my site called “notes” but the truth is that every single thing I post on here—whether it’s a link, a blog post, or anything else—is really a “note to self.”

When it comes to retrieving information from this online memex of mine, I use tags. I’ve got search forms on my site, but usually I’ll go to the address bar in my browser instead and think “now, what would past me have tagged that with…” as I type adactio.com/tags/... (or, if I want to be more specific, adactio.com/links/tags/... or adactio.com/journal/tags/...).

It’s very satisfying to use my website as a back-up brain like this. I can get stuff out of my head and squirreled away, but still have it available for quick recall when I want it. It’s especially satisfying when I’m talking to someone else and something they say reminds me of something relevant, and I can go “Oh, let me send you this link…” as I retrieve the tagged item in question.

But I don’t think about other people when I’m adding something to my website. My audience is myself.

I know there’s lots of advice out there about considering your audience when you write, but when it comes to my personal site, I’d find that crippling. It would be one more admonishment from the inner critic whispering “no one’s interested in that”, “you have nothing new to add to this topic”, and “you’re not qualified to write about this.” If I’m writing for myself, then it’s easier to have fewer inhibitions. By treating everything as a scrappy note-to-self, I can avoid agonising about quality control …although I still spend far too long trying to come up with titles for posts.

I’ve noticed—and other bloggers have corroborated this—there’s no correlation whatsoever between the amount of time you put into something and how much it’s going to resonate with people. You might spend days putting together a thoroughly-researched article only to have it met with tumbleweeds when you finally publish it. Or you might bash something out late at night after a few beers only to find it on the front page of various aggregators the next morning.

If someone else gets some value from a quick blog post that I dash off here, that’s always a pleasant surprise. It’s a bonus. But it’s not my reason for writing. My website is primarily a tool and a library for myself. It just happens to also be public.

I’m pretty sure that nobody but me uses the tags I add to my links and blog posts, and that’s fine with me. It’s very much a folksonomy.

Likewise, there’s a feature I added to my blog posts recently that is probably only of interest to me. Under each blog post, there’s a heading saying “Previously on this day” followed by links to any blog posts published on the same date in previous years. I find it absolutely fascinating to spelunk down those hyperlink potholes, but I’m sure for anyone else it’s about as interesting as a slideshow of holiday photos.

Matt took this further by adding an “on this day” URL to his site. What a great idea! I’ve now done the same here:

adactio.com/archive/onthisday

That URL is almost certainly only of interest to me. And that’s fine.

Get safe

The verbs of the web are GET and POST. In theory there’s also PUT, DELETE, and PATCH but in practice POST often does those jobs.

I’m always surprised when front-end developers don’t think about these verbs (or request methods, to use the technical term). Knowing when to use GET and when to use POST is crucial to having a solid foundation for whatever you’re building on the web.

Luckily it’s not hard to know when to use each one. If the user is requesting something, use GET. If the user is changing something, use POST.

That’s why links are GET requests by default. A link “gets” a resource and delivers it to the user.

<a href="/items/id">

Most forms use the POST method because they’re changing something—creating, editing, deleting, updating.

<form method="post" action="/items/id/edit">

But not all forms should use POST. A search form should use GET.

<form method="get" action="/search">
<input type="search" name="term">

When a user performs a search, they’re still requesting a resource (a page of search results). It’s just that they need to provide some specific details for the GET request. Those details get translated into a query string appended to the URL specified in the action attribute.

/search?term=value

I sometimes see the GET method used incorrectly:

  • “Log out” links that should be forms with a “log out” button—you can always style it to look like a link if you want.
  • “Unsubscribe” links in emails that immediately trigger the action of unsubscribing instead of going to a form where the POST method does the unsubscribing. I realise that this turns unsubscribing into a two-step process, which is a bit annoying from a usability point of view, but a destructive action should never be baked into a GET request.
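
That first fix is straightforward. Here’s a sketch; the action URL and class name are assumptions:

<form method="post" action="/logout">
<button class="looks-like-a-link">Log out</button>
</form>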

When it was first created, the World Wide Web was stateless by design. If you requested one web page, and then subsequently requested another web page, the server had no way of knowing that the same user was making both requests. After serving up a page in response to a GET request, the server promptly forgot all about it.

That’s how web browsing should still work. In fact, it’s one of the Web Platform Design Principles: It should be safe to visit a web page:

The Web is named for its hyperlinked structure. In order for the web to remain vibrant, users need to be able to expect that merely visiting any given link won’t have implications for the security of their computer, or for any essential aspects of their privacy.

The expectation of safe stateless browsing has been eroded over time. Every time you click on a search result in Google, or you tap on a recommended video in YouTube, or—heaven help us—you actually click on an advertisement, you just know that you’re adding to a dossier of your online profile. That’s not how the web is supposed to work.

Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies? With POST actions—fave, rate, save—the user is in charge. With GET requests, no one is supposed to be in charge—it’s meant to be a neutral transaction. Alas, the reality of today’s web is that many GET requests give more power to the dossier-building servers at the expense of the user’s agency.

The very first of the Web Platform Design Principles is Put user needs first:

If a trade-off needs to be made, always put user needs above all.

The current abuse of GET requests is damage that the web needs to route around.

Browsers are helping to a certain extent. Most browsers have the concept of private browsing, allowing you some level of statelessness, or at least time-limited statefulness. But it’s kind of messed up that private browsing is the exception, while surveillance is the default. It should be the other way around.

Firefox and Safari are taking steps to reduce tracking and fingerprinting. Rejecting third-party cookies by default is a good move. I’d love it if third-party JavaScript were also rejected by default:

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

I imagine if JavaScript were being specced today, it would almost certainly be restricted to the same origin by default.

Chrome has different priorities, which is understandable given that it comes from a company with a business model that is currently tied to tracking and surveillance (though it needn’t remain that way). With anti-trust proceedings rumbling in the background, there’s talk of breaking up Google to avoid monopolistic abuses of power. I honestly think it would be the best thing that could happen to Chrome if it were an independent browser that could fully focus on user needs without having to consider the surveillance needs of an advertising broker.

But we needn’t wait for the browsers to make the web a safer place for users.

Developers write the code that updates those dossiers. Developers add those oh-so-harmless-looking third-party scripts to page templates.

What if we refused?

Front-end developers in particular should be the last line of defence for users. The entire field of front-end development is supposed to be predicated on the prioritisation of user needs.

And if the moral argument isn’t enough, perhaps the technical argument can get through. Tracking users based on their GET requests violates the very bedrock of the web’s architecture. Stop doing that.

Bookshop

Back at the start of the (first) lockdown, I wrote about using my website as an outlet:

While you’re stuck inside, your website is not just a place you can go to, it’s a place you can control, a place you can maintain, a place you can tidy up, a place you can expand. Most of all, it’s a place you can lose yourself in, even if it’s just for a little while.

Last week was eventful and stressful. For everyone. I found myself once again taking refuge in my website, tinkering with its inner workings in the way that someone else would potter about in their shed or take to their garage to strip down the engine of some automotive device.

Colly drew my attention to Bookshop.org, newly launched in the UK. It’s an umbrella website for independent bookshops to sell through. It’s also got an affiliate scheme, much like Amazon. I set up a Bookshop page for myself.

I’ve been tracking the books I’m reading for the past three years here on my own website. I set about reproducing that list on Bookshop.

It was exactly the kind of not-exactly-mindless but definitely-not-challenging task that was perfect for the state of my brain last week. Search for a book; find the ISBN number; paste that number into a form. It’s the kind of task that a real programmer would immediately set about automating but one that I embraced as a kind of menial task to keep me occupied.

I wasn’t able to get a one-to-one match between the list on my site and my reading list on Bookshop. Some titles aren’t available in the online catalogue. For example, the book I’m reading right now—A Paradise Built in Hell by Rebecca Solnit—is nowhere to be found, which seems like an odd omission.

But most of the books I’ve read are there on Bookshop.org, complete with pretty book covers. Then I decided to reverse the process of my menial task. I took all of the ISBN numbers from Bookshop and added them as machine tags to my reading notes here on my own website. Book cover images on Bookshop have predictable URLs that use the ISBN number (well, technically the EAN number, or ISBN-13, but let’s not go down a 927 rabbit hole here). So now I’m using that metadata to pull in images from Bookshop.org to illustrate my reading notes here on adactio.com.

I’m linking to the corresponding book on Bookshop.org using this URL structure:

https://uk.bookshop.org/a/{{ affiliate code }}/{{ ISBN number }}

I realised that I could also link to the corresponding entry on Open Library using this URL structure:

https://openlibrary.org/isbn/{{ ISBN number }}

Here, for example, is my note for The Raven Tower by Ann Leckie. That entry has a tag:

book:ean=9780356506999

With that information I can illustrate my note with this image:

https://images-eu.bookshop.org/product-images/images/9780356506999.jpg

I’m linking off to this URL on Bookshop.org:

https://uk.bookshop.org/a/980/9780356506999

And this URL on Open Library:

https://openlibrary.org/isbn/9780356506999
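
Deriving all three URLs from the machine tag is a simple bit of string work. Here’s a sketch in JavaScript; the affiliate code 980 is taken from the example above:

<script>
// Derive the image and link URLs from a "book:ean=…" machine tag.
const tag = 'book:ean=9780356506999';
const isbn = tag.split('=')[1];
const cover = `https://images-eu.bookshop.org/product-images/images/${isbn}.jpg`;
const bookshop = `https://uk.bookshop.org/a/980/${isbn}`;
const openLibrary = `https://openlibrary.org/isbn/${isbn}`;
</script>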

The end result is that my reading list now has more links and pretty pictures.

Oh, I also set up a couple of shorter lists on Bookshop.org:

The books listed in those are drawn from my end of the year round-ups when I try to pick one favourite non-fiction book and one favourite work of fiction (almost always speculative fiction). The books in those two lists are the ones that get two hearty thumbs up from me. If you click through to buy one of them, the price might not be as cheap as on Amazon, but you’ll be supporting an independent bookshop.

Accessible interactions

Accessibility on the web is easy. Accessibility on the web is also hard.

I think it’s one of those 80/20 situations. The most common accessibility problems turn out to be very low-hanging fruit. Take, for example, Holly Tuke’s list of the 5 most annoying website features she faces as a blind person every single day:

  • Unlabelled links and buttons
  • No image descriptions
  • Poor use of headings
  • Inaccessible web forms
  • Auto-playing audio and video

None of those problems are hard to fix. That’s what I mean when I say that accessibility on the web is easy. As long as you’re providing a logical page structure with sensible headings, associating form fields with labels, and providing alt text for images, you’re at least 80% of the way there (you’re also doing way better than the majority of websites, sadly).

Ah, but that last 20% or so—that’s where things get tricky. Instead of easy-to-follow rules (“Always provide alt text”, “Always label form fields”, “Use sensible heading levels”), you enter an area of uncertainty and doubt where there are no clear answers. Different combinations of screen readers, browsers, and operating systems might yield very different results.

This is the domain of interaction design. Here be dragons. ARIA can help you …but if you overuse its power, it may cause more harm than good.

When I start to feel overwhelmed by this, I find it’s helpful to take a step back. Instead of trying to imagine all the possible permutations of screen readers and browsers, I start with a more straightforward use case: keyboard users. Keyboard users are (usually) a superset of screen reader users.

The pattern that comes up the most is to do with toggling content. I suppose you could categorise this as progressive disclosure, but I’m talking about quite a wide range of patterns:

  • accordions,
  • menus (including mega menu monstrosities),
  • modal dialogs,
  • tabs.

In each case, there’s some kind of “trigger” that toggles the appearance of a “target”—some chunk of content.

The first question I ask myself is whether the trigger should be a button or a link (at the very least you can narrow it down to that shortlist—you can discount divs, spans, and most other elements immediately; use a trigger that’s focusable and interactive by default).

As is so often the case, the answer is “it depends”, but generally you can’t go wrong with a button. It’s an element designed for general-purpose interactivity. It carries the expectation that when it’s activated, something somewhere happens. That’s certainly true in all the examples I’ve listed above.

That said, I think that links can also make sense in certain situations. It’s related to the second question I ask myself: should the target automatically receive focus?

Again, the answer is “it depends”, but here’s the litmus test I give myself: how far away from each other are the trigger and the target?

If the target content is right after the trigger in the DOM, then a button is almost certainly the right element to use for the trigger. And you probably don’t need to automatically focus the target when the trigger is activated: the content already flows nicely.

<button>Trigger Text</button>
<div id="target">
<p>Target content.</p>
</div>

But if the target is far away from the trigger in the DOM, I often find myself using a good old-fashioned hyperlink with a fragment identifier.

<a href="#target">Trigger Text</a>
…
<div id="target">
<p>Target content.</p>
</div>

Let’s say I’ve got a “log in” link in the main navigation. But it doesn’t go to a separate page. The design shows it popping open a modal window. In this case, the markup for the log-in form might be right at the bottom of the page. This is when I think there’s a reasonable argument for using a link. If, for any reason, the JavaScript fails, the link still works. But if the JavaScript executes, then I can hijack that link and show the form in a modal window. I’ll almost certainly want to automatically focus the form when it appears.

The expectation with links (as opposed to buttons) is that you will be taken somewhere. Let’s face it, modal dialogs are like fake web pages so following through on that expectation makes sense in this context.

So I can answer my first two questions:

  • “Should the trigger be a link or button?” and
  • “Should the target be automatically focused?”

…by answering a different question:

  • “How far away from each other are the trigger and the target?”

It’s not a hard and fast rule, but it helps me out when I’m unsure.

At this point I can write some JavaScript to make sure that both keyboard and mouse users can interact with the interactive component. There’ll certainly be an addEventListener(), some tabindex action, and maybe a focus() method.

Now I can start to think about making sure screen reader users aren’t getting left out. At the very least, I can toggle an aria-expanded attribute on the trigger that corresponds to whether the target is being shown or not. I can also toggle an aria-hidden attribute on the target.

When the target isn’t being shown:

  • the trigger has aria-expanded="false",
  • the target has aria-hidden="true".

When the target is shown:

  • the trigger has aria-expanded="true",
  • the target has aria-hidden="false".

There’s also an aria-controls attribute that allows me to explicitly associate the trigger and the target:

<button aria-controls="target">Trigger Text</button>
<div id="target">
<p>Target content.</p>
</div>

But don’t assume that’s going to help you. As Heydon put it, aria-controls is poop. Still, Léonie points out that you can still go ahead and use it. Personally, I find it a useful “hook” to use in my JavaScript so I know which target is controlled by which trigger.
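
A minimal sketch of that hook in action, wiring up the attribute toggling described above:

<script>
// For each trigger, find its target via aria-controls, then keep
// aria-expanded (on the trigger) and aria-hidden (on the target) in sync.
document.querySelectorAll('[aria-controls]').forEach(function (trigger) {
  const target = document.getElementById(trigger.getAttribute('aria-controls'));
  if (!target) return;
  trigger.setAttribute('aria-expanded', 'false');
  target.setAttribute('aria-hidden', 'true');
  trigger.addEventListener('click', function () {
    const expanded = trigger.getAttribute('aria-expanded') === 'true';
    trigger.setAttribute('aria-expanded', String(!expanded));
    target.setAttribute('aria-hidden', String(expanded));
  });
});
</script>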

Here’s some example code I wrote a while back. And here are some old Codepens I made that use this pattern: one with a button and one with a link. See the difference? In the example with a link, the target automatically receives focus. But in this situation, I’d choose the example with a button because the trigger and target are close to each other in the DOM.

At this point, I’ve probably reached the limits of what can be abstracted into a single trigger/target pattern. Depending on the specific component, there might be much more work to do. If it’s a modal dialog, for example, you’ve got to figure out where to put the focus, how to trap the focus, and figure out where the focus should return to when the modal dialog is closed.

I’ve mostly been talking about websites that have some interactive components. If you’re building a single page app, then pretty much every single interaction needs to be made accessible. Good luck with that. (Pro tip: consider not building a single page app—let the browser do what it has been designed to do.)

Anyway, I hope this little stroll through my thought process is useful. If nothing else, it shows how I attempt to cope with an accessibility landscape that looks daunting and ever-changing. Remember though, the fact that you’re even considering this stuff means you care more than most web developers. And you are not alone. There are smart people out there sharing what they learn. The A11y Project is a great hub for finding resources.

And when it comes to interactive patterns like the trigger/target examples I’ve been talking about, there’s one more question I ask myself: what would Heydon do?

Design Principles For The Web—the links

I’m speaking today at an online edition of An Event Apart called Front-End Focus. I’ll be opening up the show with a talk called Design Principles For The Web, which ironically doesn’t have much of a front-end focus:

Designing and developing on the web can feel like a never-ending crusade against the unknown. Design principles are one way of unifying your team to better fight this battle. But as well as the design principles specific to your product or service, there are core principles underpinning the very fabric of the World Wide Web itself. Together, we’ll dive into applying these design principles to build websites that are resilient, performant, accessible, and beautiful.

That explains why I’ve been writing so much about design principles …well, that and the fact that I’m mildly obsessed with them.

To avoid technical difficulties, I’ve pre-recorded the talk. So while that’s playing, I’ll be spamming the accompanying chat window with related links. Then I’ll do a live Q&A.

Should you be interested in the links that I’ll be bombarding the attendees with, I’ve gathered them here in one place (and they’re also on the website of An Event Apart). The narrative structure of the talk might not be clear from scanning down a list of links, but there’s some good stuff here that you can dive into if you want to know what the inside of my head is like.

References

adactio.com

Wikipedia

Connections

Fourteen years ago, I gave a talk at the Reboot conference in Copenhagen. It was called In Praise of the Hyperlink. For the most part, it was a gushing love letter to hypertext, but it also included this observation:

For a conspiracy theorist, there can be no better tool than a piece of technology that allows you to arbitrarily connect information. That tool now exists. It’s called the World Wide Web and it was invented by Sir Tim Berners-Lee.

You know those “crazy walls” that are such a common trope in TV shows and movies? The detectives enter the lair of the unhinged villain and discover an overwhelming wall that’s like looking at the inside of that person’s head. It’s not the stuff itself that’s unnerving; it’s the red thread that connects the stuff.

Red thread. Blue hyperlinks.

When I spoke about the World Wide Web, hypertext, apophenia, and conspiracy theorists back in 2006, conspiracy theories could still be considered mostly harmless. It was the domain of Dan Brown potboilers and UFO enthusiasts with posters on their walls declaring “I Want To Believe”. But even back then, 9/11 truthers were demonstrating a darker side to the fun and games.

There’s always been a gamification angle to conspiracy theories. Players are rewarded with the same dopamine hits for “doing the research” and connecting unrelated topics. Now that’s been weaponised into QAnon.

In his newsletter, Dan Hon wrote QAnon looks like an alternate reality game. You remember ARGs? The kind of designed experience where people had to cooperate in order to solve the puzzle.

Being a part of QAnon involves doing a lot of independent research. You can imagine the onboarding experience in terms of being exposed to some new phrases, Googling those phrases (which are specifically coded enough to lead to certain websites, and certain information). Finding something out, doing that independent research will give you a dopamine hit. You’ve discovered something, all by yourself. You’ve achieved something. You get to tell your friends about what you’ve discovered because now you know a secret that other people don’t. You’ve done something smart.

We saw this in the games we designed. Players love to be the first person to do something. They love even more to tell everyone else about it. It’s like Crossfit. 

Dan’s brother Adrian also wrote about this connection: What ARGs Can Teach Us About QAnon:

There is a vast amount of information online, and sometimes it is possible to solve “mysteries”, which makes it hard to criticise people for trying, especially when it comes to stopping perceived injustices. But it’s the sheer volume of information online that makes it so easy and so tempting and so fun to draw spurious connections.

This is something that Molly Sauter has been studying for years now, like in her essay The Apophenic Machine:

Humans are storytellers, pattern-spotters, metaphor-makers. When these instincts run away with us, when we impose patterns or relationships on otherwise unrelated things, we call it apophenia. When we create these connections online, we call it the internet, the web circling back to itself again and again. The internet is an apophenic machine.

I remember interviewing Lauren Beukes back in 2012 about her forthcoming book about a time-travelling serial killer:

Me: And you’ve written a time-travel book that’s set entirely in the past.

Lauren: Yes. The book ends in 1993 and that’s because I did not want to have to deal with Kirby the heroine getting some access to CCTV cameras and uploading the footage to 4chan and having them solve the mystery in four minutes flat.

By the way, I noticed something interesting about the methodology behind conspiracy theories—particularly the open-ended, never-ending miasma of something like QAnon. It’s no surprise that the methodology is basically an inversion of the scientific method. It’s the Texas sharpshooter fallacy writ large. Well, you know the way that I’m always going on about design principles and the way that good design principles should be reversible? Conspiracy theories take universal principles and invert them. Take Occam’s razor:

Do not multiply entities without necessity.

That’s what they want you to think! Wake up, sheeple! The success of something like QAnon—or a well-designed ARG—depends on a mindset that rigorously inverts Occam’s razor:

Multiply entities without necessity!

That’s always been the logic of conspiracy theories, from faked moon landings to crop circles. I remember well when the circlemakers came clean and showed exactly how they had been making their beautiful art. Conspiracy theorists—just like cultists—don’t pack up and go home in the face of evidence. They double down. There was something almost pitiable about the way the crop circle UFO crowd bent over backwards to reject the proof, applying the inversion of Occam’s razor to come up with even more outlandish explanations to encompass the circlemakers’ confession.

Anyway, I recommend reading what Dan and Adrian have written about the shared psychology of QAnon and Alternate Reality Games, not least because they also suggest some potential course corrections.

I think the best way to fight QAnon, at its roots, is with a robust social safety net program. This not-a-game is being played out of fear, out of a lack of safety, and it’s meeting people’s needs in a collectively, societally destructive way.

I want to add one more red thread to this crazy wall. There’s a book about conspiracy theories that has become more and more relevant over time. It’s also wonderfully entertaining. Here’s my recommendation from that Reboot presentation in 2006:

For a real hot-tub of conspiracy theory pleasure, nothing beats Foucault’s Pendulum by Umberto Eco.

…luck rewarded us, because, wanting connections, we found connections — always, everywhere, and between everything. The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…

On this day

I’m in San Francisco to speak at An Event Apart, which kicks off tomorrow. But I arrived a few days early so that I could attend Indie Web Camp SF.

Yesterday was the discussion day. Most of the attendees were seasoned indie web campers, so quite a few of the discussions went deep on some of the building blocks. It was a good opportunity to step back and reappraise technology decisions.

Today is the day for making, tinkering, fiddling, and hacking. I had a few different ideas of what to do, mostly around showing additional context on my blog posts. I could, for instance, show related posts—other blog posts (or links) that have similar tags attached to them.

But I decided that a nice straightforward addition would be to show a kind of “on this day” context. After all, I’ve been writing blog posts here for eighteen years now; chances are that if I write a blog post on any given day, there will be something in the archives from that same day in previous years.
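The logic is delightfully simple: find every post whose month and day match today’s date, from any year but this one. Here’s a rough sketch of that in JavaScript (the shape of the posts array is made up for illustration; my actual implementation differs, but the idea is the same):

```javascript
// A sketch of the “on this day” logic, not my actual code.
// Assume `posts` is an array of objects, each with a `published` timestamp.
function previouslyOnThisDay(posts, today = new Date()) {
  return posts.filter(post => {
    const published = new Date(post.published);
    return published.getMonth() === today.getMonth() &&
           published.getDate() === today.getDate() &&
           published.getFullYear() < today.getFullYear();
  });
}
```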

So that’s what I’ve done. I’ll be demoing it shortly here at Indie Web Camp, but you can see it in action now. If you look at the page for this blog post, you should see a section at the end with the heading “Previously on this day”. There you’ll see links to other posts I’ve written on December 8th in years gone by.

It’s quite a mixed bag. There’s a post about when I used to have a webcam from sixteen years ago. There’s a report from the Flash On The Beach conference from thirteen years ago (I wrote that post while I was in Berlin). And five years ago, I was writing about markup patterns for web components.

I don’t know if anyone other than me will find this feature interesting (but as it’s my website, I don’t really care). Personally, I find it fascinating to see how my writing has changed, both in terms of subject matter and tone.

Needless to say, the further back in time you go, the more chance there is that the links in my blog posts will no longer work. That’s a real shame. But then it’s a pleasant surprise when I find something that I linked to that is still online after all this time. And I can take comfort from the fact that if anyone has ever linked to anything I’ve written on my website, then those links still work.

Building links

In just over a week, I’ll be giving the opening talk at the New Adventures conference in Nottingham. I’ll be giving a workshop the day before too. There are still tickets available for both.

I have to admit, I’m kind of nervous about this talk. It’s been quite a while since the last New Adventures, but it’s always had quite the cachet. I think I went to most of them. It’s quite strange—and quite an honour—to shift gears from attendee to speaker.

The talk I’ll be giving is called Building. That might be a noun. That might be a verb. You decide:

Every new medium looks to what has come before for guidance. Web design has taken cues from centuries of typography and graphic design. Web development has borrowed metaphors and ideas from the world of architecture. Let’s take a tour of some of the most influential ideas from architecture that have crossed over into the web, from pattern languages to responsive design. Together we’ll uncover how to build resilient, performant, accessible and beautiful structures that work with the grain of the materials of the web.

This talk builds upon the talk I gave at last year’s An Event Apart called The Way Of The Web. It also reflects many of the ideas in Resilient Web Design. When I gave a run-through of the talk at Clearleft last week, Andy called it a “greatest hits.” For a while there, I was feeling guilty about retreading some ground I’ve covered in previous talks and writings. Then I realised it was pretty arrogant of me to think that anyone in the audience would be familiar with any of it.

Besides, I’ve got a whole new avenue of exploration in this talk. It’s about language and metaphor—how we talk about what we do on the web. I’ve just finished giving another run-through at the Clearleft studio and I’m feeling pretty good about it. That’s good, because I find that giving a talk in a small room to a handful of colleagues is way more stressful than giving a talk to hundreds of people at a conference.

Just as I put together links related to last year’s talk, I figured I’d provide some hyperlinks for anyone interested in the topics raised in this new talk…

Books

Articles

Audio

Links, tags, and feeds

A little while back, I switched from using Chrome as my day-to-day browser to using Firefox. I could feel myself getting a bit too comfortable with one particular browser, and that’s not good. I reckon it’s good to shake things up a little every now and then. Besides, there really isn’t that much difference once you’ve transferred over bookmarks and cookies.

Unfortunately I’m being bitten by this little bug in Firefox. It causes some of my bookmarklets to fail on sites with strict Content Security Policies, even though CSPs aren’t supposed to affect bookmarklets. I might have to switch back to Chrome because of it.

I use bookmarklets throughout the day. There’s the Huffduffer bookmarklet, of course, for whenever I come across a podcast episode or other piece of audio that I want to listen to later. But there’s also my own home-rolled bookmarklet for posting links to my site. It doesn’t do anything clever—it grabs the title and URL of the currently open page and pre-populates a form in a new window, leaving me to add a short description and some tags.
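If you’ve never peeked inside a bookmarklet, it’s just a smidgen of JavaScript squeezed into a link. Here’s a simplified sketch of mine (the posting URL and parameter names are placeholders, not the actual endpoint on my site):

```javascript
javascript:(function () {
  // Grab the title and URL of the page currently being viewed…
  var title = encodeURIComponent(document.title);
  var url = encodeURIComponent(document.location.href);
  // …then open the posting form, pre-populated, in a new window.
  // example.com is a placeholder for the real form address.
  window.open(
    'https://example.com/links/new?title=' + title + '&url=' + url,
    'postlink',
    'width=600,height=500'
  );
})();
```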

If you’re reading this, then you’re familiar with the “journal” section of adactio.com, but the “links” section is where I post the most. Here, for example, are all the links I posted yesterday. It varies from day to day, but there’s generally a handful.

Should you wish to keep track of everything I’m linking to, there’s a twitterbot you can follow called @adactioLinks. It uses a simple IFTTT recipe to poll my RSS feed of links and send out a tweet whenever there’s a new entry.
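If you’d rather roll your own than rely on IFTTT, the recipe boils down to something like this sketch. The postTweet function is a stand-in for whatever posting API you’d wire up, and the feed URL is whichever one you want to watch:

```javascript
// Poll a feed and announce new items: a sketch of the IFTTT recipe.
const seen = new Set(); // IFTTT persists this state for you; a real script should too

async function checkFeed(feedUrl) {
  const xml = await (await fetch(feedUrl)).text();
  // Naive link extraction: fine for a sketch, not for production parsing.
  const links = [...xml.matchAll(/<link>([^<]+)<\/link>/g)].map(match => match[1]);
  for (const link of links) {
    if (!seen.has(link)) {
      seen.add(link);
      await postTweet('Linked: ' + link);
    }
  }
}

// Stand-in: swap in a real API call here.
async function postTweet(text) {
  console.log('Would tweet:', text);
}

// Check every fifteen minutes.
setInterval(() => checkFeed('https://adactio.com/links/rss'), 15 * 60 * 1000);
```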

Or you can drink straight from the source and subscribe to the RSS feed itself, if you’re still rocking it old-school. But if RSS is your bag, then you might appreciate a way to filter those links…

All my links are tagged. Heavily. That’s because all my links are “notes to future self”: when I’m trying to find something I previously linked to, all future me has to do is ask “what would past me have tagged that link with?” I end up using my site’s URLs as an interface.

At the front-end gatherings at Clearleft, I usually wrap up with a quick tour of whatever I’ve added that week to adactio.com/links/tags/frontend.

Well, each one of those tags also has a corresponding RSS feed: adactio.com/links/tags/frontend/rss

…and so on.

That means you can subscribe to just the links tagged with something you care about. Here’s the full list of tags if you’re interested in seeing the inside of my head.

This also works for my journal entries. If you’re only interested in my blog posts about frontend development, you might want to subscribe to adactio.com/journal/tags/frontend/rss.

Here are all the tags from my journal.

You can even mix them up. For everything I’ve tagged with “typography”—whether it’s links, journal entries, or articles—the URL is adactio.com/tags/typography.

The corresponding RSS feed is adactio.com/tags/typography/rss.

You get the idea. Basically, if something on my site is a list of items, chances are there’s a corresponding RSS feed. Sometimes there might even be a JSON feed. Hack some URLs to see.
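And if you’d rather automate the URL-hacking, a little script can do the probing for you. This sketch tries the /rss and /json suffixes against any listing URL (whether a given server answers HEAD requests politely is another matter):

```javascript
// Probe a listing URL for feed variants by trying likely suffixes.
async function findFeeds(listUrl) {
  const candidates = [listUrl + '/rss', listUrl + '/json'];
  const found = [];
  for (const url of candidates) {
    try {
      const response = await fetch(url, { method: 'HEAD' });
      if (response.ok) found.push(url);
    } catch (error) {
      // Network error or CORS refusal, so skip this candidate.
    }
  }
  return found;
}

// For example:
// findFeeds('https://adactio.com/links/tags/frontend').then(console.log);
```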

Meanwhile, I’ll be linking, linking, linking…